111

A reutilização de modelos de requisitos de sistemas por analogia : experimentação e conclusões / Systems requirements reuse by analogy: examination and conclusions

Zirbes, Sergio Felipe January 1995 (has links)
A exemplo de qualquer outra atividade que se destine a produzir um produto, a engenharia de software necessariamente passa por um fase inicial, onde necessário definir o que será produzido. A análise de requisitos é esta fase inicial, e o produto dela resultante é a especificação do sistema a ser construído. As duas atividades básicas durante a analise de requisitos são a eliciação (busca ou descoberta das características do sistema) e a modelagem. Uma especificação completa e consistente é condição indispensável para o adequado desenvolvimento de um sistema. Muitos tem sido, entretanto, os problemas enfrentados pelos analistas na execução desta tarefa. A variedade e complexidade dos requisitos, as limitações humanas e a dificuldade de comunicação entre usuários e analistas são as principais causas destas dificuldades. Ao considerarmos o ciclo de vida de um sistema de informação, verificamos que a atividade principal dos profissionais em computação é a transformação de uma determinada porção do ambiente do usuário, em um conjunto de modelos. Inicialmente, através de um modelo descritivo representamos a realidade. A partir dele derivamos um modelo das necessidades (especificação dos requisitos), transformando-o a seguir num modelo conceitual. Finalizando o ciclo de transformações, derivamos o modelo programado (software), que ira se constituir no sistema automatizado requerido. Apesar da reconhecida importância da analise dos requisitos e da conseqüente representação destes requisitos em modelos, muito pouco se havia inovado nesta área ate o final dos anos 80. Com a evolução do conceito de reutilização de software para reutilização de especificações ou reutilização de modelos de requisitos, finalmente surge não apenas um novo método, mas um novo paradigma: a reutilização sistemática (sempre que possível) de modelos integrantes de especificações de sistemas semelhantes ao que se pretende desenvolver. Muito se tem dito sobre esta nova forma de modelagem e um grande número de pesquisadores tem se dedicado a tornar mais simples e eficientes várias etapas do novo processo. Entretanto, para que a reutilização de modelos assuma seu papel como uma metodologia de use geral e de plena aceitação, resta comprovar se, de fato, ele produz software de melhor quantidade e confiabilidade, de forma mais produtiva. A pesquisa descrita neste trabalho tem por objetivo investigar um dos aspectos envolvido nesta comprovação. A experimentação viabilizou a comparação entre modelos de problemas construídos com reutilização, a partir dos modelos de problemas similares previamente construídos e postos a disposição dos analistas, e os modelos dos mesmos problemas elaborados sem nenhuma reutilização. A comparação entre os dois conjuntos de modelos permitiu concluir, nas condições propostas na pesquisa, serem os modelos construídos com reutilização mais completos e corretos do que os que foram construídos sem reutilização. A apropriação dos tempos gastos pelos analistas durante as diversas etapas da modelagem, permitiu considerações sobre o esforço necessário em cada um dos dois tipos de modelagem. 0 protocolo experimental e a estratégia definida para a pesquisa possibilitaram também que medidas pudessem ser realizadas com duas series de modelos, onde a principal diferença era o grau de similaridade entre os modelos do problema reutilizado e os modelos do problema alvo. 
A variação da qualidade e completude dos dois conjuntos de modelos, bem como do esforço necessário para produzi-los, evidenciou uma questão fundamental do processo: a reutilização só terá efeitos realmente produtivos se realizada apenas com aplicações integrantes de domínios específicos e bem definidos, compartilhando, em alto grau, dados e procedimentos. De acordo com as diretrizes da pesquisa, o processo de reutilização de modelos de requisitos foi investigado em duas metodologias de desenvolvimento: na metodologia estruturada a modelagem foi realizada com Diagramas de Fluxo de Dados (DFD's) e na metodologia orientada a objeto com Diagramas de Objetos. A pesquisa contou com a participação de 114 alunos/analistas, tendo sido construídos 175 conjuntos de modelos com diagramas de fluxo de dados e 23 modelos com diagramas de objeto. Sobre estas amostras foram realizadas as análises estatísticas pertinentes, buscando-se responder a um considerável número de questões existentes sobre o assunto. Os resultados finais mostram a existência de uma série de benefícios na análise de requisitos com modelagem baseada na reutilização de modelos análogos. Mas, a pesquisa em seu todo mostra, também, as restrições e cuidados necessários para que estes benefícios de fato ocorram. / System engineering, like any other product-oriented activity, starts with a clear definition of the product to be obtained. This initial activity is called requirements analysis, and its resulting product is the system specification. Requirements analysis is divided into two separate phases: elicitation and modeling. Adequate system development relies on a complete and consistent system specification. However, many problems have been faced by system analysts in performing this task, as a result of the complexity and diversity of requirements, human limitations, and the communication gap between users and developers. Looking at a system's life cycle, the main activity performed by software engineers consists in generating models of a specific portion of the users' environment. This modeling activity starts with a descriptive model of that portion of reality, from which the requirements model is derived and then transformed into the system's conceptual model. The last phase of this chain of transformations is the programmed model: the software that constitutes the required automated system. In spite of the importance of requirements analysis and modeling, very little research effort went into these activities, and no significant improvement in the available methodologies appeared until the late 1980s. When the concepts of software reuse were extended to system specification and requirements modeling, a new paradigm emerged: specifying new systems by systematically reusing models from specifications of similar existing systems. Considerable research effort has been devoted to making this new modeling technique simpler and more efficient. However, it will only gain wide acceptance in the scientific and technical community once it is shown to produce better, more reliable software in a more productive way. The present work investigates one of the aspects involved in that validation: models of problems built with reuse, starting from previously available models of similar problems, were compared with models of the same problems built without any reuse, all developed by analysts with similar skills.
The resulting models were compared in terms of correctness, time consumed in each modeling phase, effort, and so on. An experimental protocol and a dedicated strategy were defined in order to compare and measure the results obtained from two different groups of models, whose main difference was the level of similarity between the model available for reuse and the model to be developed. The variation in quality and completeness of the resulting models, as well as in the modeling effort, corroborated the hypothesis that the effectiveness of reuse is tied to the similarity between the domains, data and procedures of the pre-existing models and of the application being developed. In this work, the reuse of requirements models is investigated in two methodologies: in the first, modeling is based on Data Flow Diagrams, as in the structured methodology; in the second, based on object orientation, Object Diagrams are used. The research was carried out with the cooperation of 114 students/analysts, resulting in 175 series of Data Flow Diagrams and 23 series of Object Diagrams. Appropriate statistical analyses were conducted on these samples in order to answer a considerable number of open questions on the subject. According to the final results, modeling based on the reuse of analogous models improves requirements analysis, although the restrictions arising from differences in domain, data and procedures must not be disregarded.
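As a purely illustrative aside (the scores, group sizes and choice of test below are invented, not taken from the thesis), the kind of between-group comparison described in this abstract could be sketched as follows:

```python
# Hypothetical sketch: compare completeness scores of models built with and
# without reuse. The data and the use of Welch's t-test are assumptions made
# for illustration only; they are not the thesis's actual data or analysis.
from statistics import mean
from scipy import stats  # requires SciPy

with_reuse = [82, 88, 79, 91, 85, 87, 80, 90]      # invented scores (0-100)
without_reuse = [70, 74, 68, 77, 72, 75, 69, 73]   # invented scores (0-100)

t_stat, p_value = stats.ttest_ind(with_reuse, without_reuse, equal_var=False)
print(f"mean with reuse:    {mean(with_reuse):.1f}")
print(f"mean without reuse: {mean(without_reuse):.1f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```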
112

A runtime system for data-flow task programming on multicore architectures with accelerators / Uma ferramenta para programação com dependência de dados em arquiteturas multicore com aceleradores / Vers un support exécutif avec dépendance de données pour les architectures multicoeur avec des accélérateurs

Lima, João Vicente Ferreira January 2014 (has links)
Dans cette thèse , nous proposons d’étudier des questions sur le parallélism de tâche avec dépendance de données dans le cadre de machines multicoeur avec des accélérateurs. La solution proposée a été développée en utilisant l’interface de programmation haute niveau XKaapi du projet MOAIS de l’INRIA Rhône-Alpes. D’abord nous avons étudié des questions liés à une approche d’exécution totalement asyncrone et l’ordonnancement par vol de travail sur des architectures multi-GPU. Le vol de travail avec localité de données a montré des résultats significatifs, mais il ne prend pas en compte des différents ressources de calcul. Ensuite nous avons conçu une interface et une modèle de coût qui permettent d’écrire des politiques d’ordonnancement sur XKaapi. Finalement on a évalué XKaapi sur un coprocesseur Intel Xeon Phi en mode natif. Notre conclusion est double. D’abord nous avons montré que le modèle de programmation data-flow peut être efficace sur des accélérateurs tels que des GPUs ou des coprocesseurs Intel Xeon Phi. Ensuite, le support à des différents politiques d’ordonnancement est indispensable. Les modèles de coût permettent d’obtenir de performance significatifs sur des calculs très réguliers, tandis que le vol de travail permet de redistribuer la charge en cours d’exécution. / Esta tese investiga os desafios no uso de paralelismo de tarefas com dependências de dados em arquiteturas multi-CPU com aceleradores. Para tanto, o XKaapi, desenvolvido no grupo de pesquisa MOAIS (INRIA Rhône-Alpes), é a ferramenta de programação base deste trabalho. Em um primeiro momento, este trabalho propôs extensões ao XKaapi a fim de sobrepor transferência de dados com execução através de operações concorrentes em GPU, em conjunto com escalonamento por roubo de tarefas em multi-GPU. Os resultados experimentais sugerem que o suporte a asincronismo é importante à escalabilidade e desempenho em multi-GPU. Apesar da localidade de dados, o roubo de tarefas não pondera a capacidade de processamento das unidades de processamento disponíveis. Nós estudamos estratégias de escalonamento com predição de desempenho em tempo de execução através de modelos de custo de execução. Desenvolveu-se um framework sobre o XKaapi de escalonamento que proporciona a implementação de diferentes algoritmos de escalonamento. Esta tese também avaliou o XKaapi em coprocessodores Intel Xeon Phi para execução nativa. A conclusão desta tese é dupla. Primeiramente, nós concluímos que um modelo de programação com dependências de dados pode ser eficiente em aceleradores, tais como GPUs e coprocessadores Intel Xeon Phi. Não obstante, uma ferramenta de programação com suporte a diferentes estratégias de escalonamento é essencial. Modelos de custo podem ser usados no contexto de algoritmos paralelos regulares, enquanto que o roubo de tarefas poder reagir a desbalanceamentos em tempo de execução. / In this thesis, we propose to study the issues of task parallelism with data dependencies on multicore architectures with accelerators. We target those architectures with the XKaapi runtime system developed by the MOAIS team (INRIA Rhône-Alpes). We first studied the issues on multi-GPU architectures for asynchronous execution and scheduling. Work stealing with heuristics showed significant performance results, but did not consider the computing power of different resources. Next, we designed a scheduling framework and a performance model to support scheduling strategies over XKaapi runtime. 
Finally, we performed experimental evaluations over the Intel Xeon Phi coprocessor in native execution. Our conclusion is twofold. First we concluded that data-flow task programming can be efficient on accelerators, which may be GPUs or Intel Xeon Phi coprocessors. Second, the runtime support of different scheduling strategies is essential. Cost models provide significant performance results over very regular computations, while work stealing can react to imbalances at runtime.
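To make the notion of data-flow task programming concrete, the following minimal sketch (plain Python, not the XKaapi API; the task names and access-mode convention are assumptions) shows how dependencies can be inferred from the data each task reads and writes:

```python
# Hypothetical illustration of data-flow task dependencies (not the XKaapi API):
# tasks declare which data they read and write, and the runtime orders them so
# that a task runs only after the last writer of each of its inputs.
from collections import defaultdict

class Task:
    def __init__(self, name, reads=(), writes=(), fn=None):
        self.name, self.reads, self.writes, self.fn = name, set(reads), set(writes), fn

def run_dataflow(tasks):
    last_writer = {}          # data item -> task that last produced it
    deps = defaultdict(set)   # task -> set of tasks it must wait for
    for t in tasks:
        for d in t.reads | t.writes:
            if d in last_writer:
                deps[t].add(last_writer[d])
        for d in t.writes:
            last_writer[d] = t
    done, pending = set(), list(tasks)
    while pending:            # simple sequential execution in dependency order
        for t in list(pending):
            if deps[t] <= done:
                if t.fn:
                    t.fn()
                print(f"executed {t.name}")
                done.add(t)
                pending.remove(t)

tasks = [
    Task("init_A", writes={"A"}),
    Task("init_B", writes={"B"}),
    Task("C=A+B",  reads={"A", "B"}, writes={"C"}),
    Task("use_C",  reads={"C"}),
]
run_dataflow(tasks)
```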
113

Architecture matérielle et flot de programmation associé pour la conception de systèmes numériques tolérants aux fautes / Hardware architecture and associated programming flow for the design of digital fault-tolerant systems

Peyret, Thomas 02 December 2014 (has links)
Que ce soit dans l’automobile avec des contraintes thermiques ou dans l’aérospatial et le nucléaire soumis à des rayonnements ionisants, l’environnement entraîne l’apparition de fautes dans les systèmes électroniques. Ces fautes peuvent être transitoires ou permanentes et vont induire des résultats erronés inacceptables dans certains contextes applicatifs. L’utilisation de composants dits « rad-hard » est parfois compromise par leurs coûts élevés ou les difficultés d’approvisionnement liées aux règles d’exportation. Cette thèse propose une approche conjointe matérielle et logicielle, indépendante de la technologie d’intégration, permettant d’utiliser des composants numériques programmables dans des environnements susceptibles de générer des fautes. Notre proposition comporte la définition d’une Architecture Reconfigurable à Gros Grains (CGRA) capable d’exécuter des codes applicatifs complets, mais aussi l’ensemble des mécanismes matériels et logiciels permettant de rendre cette architecture tolérante aux fautes. Ce résultat est obtenu par l’association de redondance et de reconfiguration dynamique du CGRA, en s’appuyant sur une banque de configurations générée par une chaîne de programmation complète. Cette chaîne outillée repose sur un flot permettant de porter un code sous forme de Control and Data Flow Graph (CDFG) sur l’architecture, en obtenant un grand nombre de configurations différentes, ce qui permet d’exploiter au mieux le potentiel de l’architecture. Les travaux, qui ont été validés au travers d’expériences sur des applications du domaine du traitement du signal et de l’image, ont fait l’objet de publications en conférences internationales et de dépôts de brevets. / Whether in automotive systems subject to thermal stress or in the aerospace and nuclear fields exposed to cosmic, neutron and gamma radiation, the environment can lead to the appearance of faults in electronic systems. These faults, which can be transient or permanent, lead to erroneous results that are unacceptable in some application contexts. The use of so-called rad-hard components is sometimes compromised by their high cost and by supply problems associated with export rules. This thesis proposes a joint hardware and software approach, independent of the integration technology, for using digital programmable devices in environments that can generate faults. Our approach includes the definition of a Coarse-Grained Reconfigurable Architecture (CGRA) able to execute entire application codes, together with all the hardware and software mechanisms needed to make the architecture tolerant to transient and permanent faults. This is achieved by combining redundancy with dynamic reconfiguration of the CGRA, relying on a bank of configurations generated by a complete design flow. This tool flow maps a code represented as a Control and Data Flow Graph (CDFG) onto the CGRA, directly producing a large number of different configurations and thereby exploiting the full potential of the architecture. The work, validated through experiments on signal- and image-processing applications, has been the subject of two publications in international conferences and of two patent filings.
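The recovery principle described above, switching to an alternative mapping from a pre-generated bank of configurations when a resource becomes faulty, can be illustrated with the sketch below; the configuration format, PE names and selection policy are assumptions for illustration, not the actual tool flow.

```python
# Illustrative sketch (assumed data layout, not the actual tool flow): each
# configuration maps CDFG operations onto processing elements (PEs) of a CGRA;
# when a PE is diagnosed as permanently faulty, a configuration avoiding it is
# selected from a pre-generated bank.
CONFIG_BANK = [
    {"name": "cfg0", "mapping": {"add1": "PE0", "mul1": "PE1", "add2": "PE2"}},
    {"name": "cfg1", "mapping": {"add1": "PE3", "mul1": "PE1", "add2": "PE2"}},
    {"name": "cfg2", "mapping": {"add1": "PE3", "mul1": "PE4", "add2": "PE5"}},
]

def select_configuration(bank, faulty_pes):
    """Return the first configuration that uses no faulty PE, or None."""
    for cfg in bank:
        if not set(cfg["mapping"].values()) & faulty_pes:
            return cfg
    return None

faulty = {"PE0"}                      # e.g. reported by a built-in self-test
cfg = select_configuration(CONFIG_BANK, faulty)
print("reconfiguring with", cfg["name"] if cfg else "no valid configuration")
```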
115

Deployment of mixed criticality and data driven systems on multi-cores architectures / Déploiement de systèmes à flots de données en criticité mixte pour architectures multi-coeurs

Medina, Roberto 30 January 2019 (has links)
De nos jours, la conception de systèmes critiques va de plus en plus vers l’intégration de différents composants système sur une unique plate-forme de calcul. Les systèmes à criticité mixte permettent aux composants critiques ayant un degré élevé de confiance (c.-à-d. une faible probabilité de défaillance) de partager des ressources de calcul avec des composants moins critiques sans nécessiter des mécanismes d’isolation logicielle.Traditionnellement, les systèmes critiques sont conçus à l’aide de modèles de calcul comme les graphes data-flow et l’ordonnancement temps-réel pour fournir un comportement logique et temporel correct. Néanmoins, les ressources allouées aux data-flows et aux ordonnanceurs temps-réel sont fondées sur l’analyse du pire cas, ce qui conduit souvent à une sous-utilisation des processeurs. Les ressources allouées ne sont ainsi pas toujours entièrement utilisées. Cette sous-utilisation devient plus remarquable sur les architectures multi-cœurs où la différence entre le meilleur et le pire cas est encore plus significative.Le modèle d’exécution à criticité mixte propose une solution au problème susmentionné. Afin d’allouer efficacement les ressources tout en assurant une exécution correcte des composants critiques, les ressources sont allouées en fonction du mode opérationnel du système. Tant que des capacités de calcul suffisantes sont disponibles pour respecter toutes les échéances, le système est dans un mode opérationnel de « basse criticité ». Cependant, si la charge du système augmente, les composants critiques sont priorisés pour respecter leurs échéances, leurs ressources de calcul augmentent et les composants moins/non critiques sont pénalisés. Le système passe alors à un mode opérationnel de « haute criticité ».L’ intégration des aspects de criticité mixte dans le modèle data-flow est néanmoins un problème difficile à résoudre. Des nouvelles méthodes d’ordonnancement capables de gérer des contraintes de précédences et des variations sur les budgets de temps doivent être définies.Bien que plusieurs contributions sur l’ordonnancement à criticité mixte aient été proposées, l’ordonnancement avec contraintes de précédences sur multi-processeurs a rarement été étudié. Les méthodes existantes conduisent à une sous-utilisation des ressources, ce qui contredit l’objectif principal de la criticité mixte. Pour cette raison, nous définissons des nouvelles méthodes d’ordonnancement efficaces basées sur une méta-heuristique produisant des tables d’ordonnancement pour chaque mode opérationnel du système. Ces tables sont correctes : lorsque la charge du système augmente, les composants critiques ne manqueront jamais leurs échéances. Deux implémentations basées sur des algorithmes globaux préemptifs démontrent un gain significatif en ordonnançabilité et en utilisation des ressources : plus de 60 % de systèmes ordonnançables sur une architecture donnée par rapport aux méthodes existantes.Alors que le modèle de criticité mixte prétend que les composants critiques et non critiques peuvent partager la même plate-forme de calcul, l'interruption des composants non critiques réduit considérablement leur disponibilité. Ceci est un problème car les composants non critiques doivent offrir une degré minimum de service. C’est pourquoi nous définissons des méthodes pour évaluer la disponibilité de ces composants. A notre connaissance, nos évaluations sont les premières capables de quantifier la disponibilité. 
Nous proposons également des améliorations qui limitent l’impact des composants critiques sur les composants non critiques. Ces améliorations sont évaluées grâce à des automates probabilistes et démontrent une amélioration considérable de la disponibilité : plus de 2 % dans un contexte où des augmentations de l’ordre de 10⁻⁹ sont significatives. Nos contributions ont été intégrées dans un framework open-source. Cet outil fournit également un générateur utilisé pour l’évaluation de nos méthodes d’ordonnancement. / Nowadays, the design of modern safety-critical systems is pushing towards the integration of multiple system components onto a single shared computation platform. Mixed-criticality systems in particular allow critical components with a high degree of confidence (i.e. a low probability of failure) to share computation resources with less/non-critical components without requiring software isolation mechanisms (as opposed to partitioned systems). Traditionally, safety-critical systems have been conceived using models of computation such as data-flow graphs and real-time scheduling to obtain logical and temporal correctness. Nonetheless, the resources given to data-flow representations and real-time scheduling techniques are based on worst-case analysis, which often leads to an under-utilization of the computation capacity: the allocated resources are not always completely used. This under-utilization becomes more pronounced on multi-core architectures, where the difference between best-case and worst-case performance is more significant. The mixed-criticality execution model proposes a solution to this problem. To allocate resources efficiently while ensuring safe execution of the most critical components, resources are allocated according to the operational mode the system is in. As long as sufficient processing capacity is available to respect all deadlines, the system remains in a ‘low-criticality’ operational mode. If the system demand increases, however, critical components are prioritized to meet their deadlines, their computation resources are increased, and less/non-critical components are potentially penalized; the system is said to transition to a ‘high-criticality’ operational mode. Yet incorporating mixed-criticality aspects into the data-flow model of computation is a difficult problem, as it requires defining new scheduling methods capable of handling precedence constraints and variations in timing budgets. Although mixed-criticality scheduling has been well studied for single- and multi-core platforms, the problem of data dependencies on multi-core platforms has rarely been considered, and existing methods lead to poor resource usage, which contradicts the main purpose of mixed criticality. For this reason, our first objective is to design new, efficient scheduling methods for data-driven mixed-criticality systems. We define a meta-heuristic producing scheduling tables for all operational modes of the system. These tables are proven correct, i.e. when the system demand increases, critical components will never miss a deadline. Two implementations based on existing preemptive global algorithms were developed to improve schedulability and resource usage.
In some cases these implementations schedule more than 60% of systems compared to existing approaches. While the mixed-criticality model claims that critical and non-critical components can share the same computation platform, the interruption of non-critical components degrades their availability significantly. This is a problem, since non-critical components need to deliver a minimum service guarantee; recent works in mixed criticality have recognized this limitation. For this reason, we define methods to evaluate the availability of non-critical components. To our knowledge, our evaluations are the first capable of quantifying availability. We also propose enhancements, compatible with our scheduling methods, that limit the impact critical components have on non-critical ones. These enhancements are evaluated with probabilistic automata and show a considerable improvement in availability, e.g. improvements of over 2% in a context where increases of the order of 10⁻⁹ are significant. Our contributions have been integrated into an open-source framework. This tool also provides an unbiased generator used to evaluate scheduling methods for data-driven mixed-criticality systems.
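As an illustration of the low-to-high criticality mode switch described in this abstract, the sketch below uses classic per-task LO/HI execution budgets; the task set and budgets are invented, and the thesis itself relies on off-line scheduling tables produced by a meta-heuristic rather than this simplified logic.

```python
# Minimal illustration of a mixed-criticality mode switch (invented parameters;
# the thesis computes per-mode scheduling tables off-line with a meta-heuristic).
tasks = [
    # name, criticality level, budget in LO mode, budget in HI mode (time units)
    {"name": "flight_ctrl", "crit": "HI", "C_LO": 2, "C_HI": 4},
    {"name": "telemetry",   "crit": "HI", "C_LO": 1, "C_HI": 2},
    {"name": "logging",     "crit": "LO", "C_LO": 3, "C_HI": 0},
]

mode = "LO"

def on_job_completion(task, observed_time):
    """Switch to HI mode if a HI-criticality job overran its LO budget."""
    global mode
    if mode == "LO" and task["crit"] == "HI" and observed_time > task["C_LO"]:
        mode = "HI"
        print(f"{task['name']} overran its LO budget -> switching to HI mode;"
              " LO-criticality tasks are suspended or degraded")

def runnable(task):
    # In HI mode only HI-criticality tasks keep their (larger) budgets.
    return mode == "LO" or task["crit"] == "HI"

on_job_completion(tasks[0], observed_time=3)
print([t["name"] for t in tasks if runnable(t)])
```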
116

Tools and Techniques for Efficient Transactions

Poudel, Pavan 07 September 2021 (has links)
No description available.
117

Návrh a implementace síťového kolektoru / Design and implementation of network collector

Bošeľa, Jaroslav January 2020 (has links)
This master’s thesis deals with the description of network flow information protocols, mainly the definition of Cisco NetFlow version 9. It describes its features, message format and the attributes of the transmitted data. The thesis focuses primarily on the NetFlow v9 template, which defines the fields and data carried in the subsequent data flow. The core of the thesis is the implementation of a simple NetFlow v9 parser, programmed in Python, its tests on captured UDP data read from a file, and port-capture testing on a development server in the lab. The implementation can also save the captured and parsed data into a prepared database as the output of the capture.
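As a rough illustration of what such a parser has to do (field layout per RFC 3954; this fragment is a sketch, not the parser implemented in the thesis), the NetFlow v9 packet header and FlowSet framing can be decoded as follows:

```python
# Minimal sketch of NetFlow v9 packet parsing (field order per RFC 3954).
# Illustrative fragment only; template and data-record decoding are omitted.
import struct

HEADER_FORMAT = "!HHIIII"   # version, count, sys_uptime, unix_secs, sequence, source_id
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)          # 20 bytes

def parse_header(packet: bytes) -> dict:
    version, count, uptime, secs, seq, source_id = struct.unpack(
        HEADER_FORMAT, packet[:HEADER_SIZE])
    if version != 9:
        raise ValueError(f"not a NetFlow v9 packet (version={version})")
    return {"count": count, "sys_uptime": uptime, "unix_secs": secs,
            "sequence": seq, "source_id": source_id}

def iter_flowsets(packet: bytes):
    """Yield (flowset_id, payload) pairs; flowset_id 0 carries templates."""
    offset = HEADER_SIZE
    while offset + 4 <= len(packet):
        flowset_id, length = struct.unpack("!HH", packet[offset:offset + 4])
        if length < 4:
            break               # malformed FlowSet, stop parsing
        yield flowset_id, packet[offset + 4:offset + length]
        offset += length
```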
118

Performance Optimizations and Operator Semantics for Streaming Data Flow Programs

Sax, Matthias J. 01 July 2020 (has links)
Unternehmen sammeln mehr Daten als je zuvor und müssen auf diese Informationen zeitnah reagieren. Relationale Datenbanken eignen sich nicht für die latenzfreie Verarbeitung dieser oft unstrukturierten Daten. Um diesen Anforderungen zu begegnen, haben sich in der Datenbankforschung seit dem Anfang der 2000er Jahre zwei neue Forschungsrichtungen etabliert: skalierbare Verarbeitung unstrukturierter Daten und latenzfreie Datenstromverarbeitung. Skalierbare Verarbeitung unstrukturierter Daten, auch bekannt unter dem Begriff "Big Data"-Verarbeitung, hat in der Industrie schnell Einzug gehalten. Gleichzeitig wurden in der Forschung Systeme zur latenzfreien Datenstromverarbeitung entwickelt, die auf eine verteilte Architektur, Skalierbarkeit und datenparallele Verarbeitung setzen. Obwohl diese Systeme in der Industrie vermehrt zum Einsatz kommen, gibt es immer noch große Herausforderungen im praktischen Einsatz. Diese Dissertation verfolgt zwei Hauptziele: Zuerst wird das Laufzeitverhalten von hochskalierbaren datenparallelen Datenstromverarbeitungssystemen untersucht. Im zweiten Hauptteil wird das "Dual Streaming Model" eingeführt, das eine Semantik zur gleichzeitigen Verarbeitung von Datenströmen und Tabellen beschreibt. Das Ziel unserer Untersuchung ist, ein besseres Verständnis des Laufzeitverhaltens dieser Systeme zu erhalten und dieses Wissen zu nutzen, um Anfragen automatisch ausreichende Rechenkapazität zuzuweisen. Dazu werden ein Kostenmodell und darauf aufbauende Optimierungsalgorithmen für Datenstromanfragen eingeführt, die Datengruppierung und Datenparallelität einbeziehen. Das vorgestellte Datenstromverarbeitungsmodell beschreibt das Ergebnis eines Operators als kontinuierlichen Strom von Veränderungen auf einer Ergebnistabelle. Dabei behandelt unser Modell die Diskrepanz der physikalischen und logischen Ordnung von Datenelementen inhärent und erreicht damit eine deterministische Semantik und eine minimale Verarbeitungslatenz. / Modern companies are able to collect more data, and require insights from it faster, than ever before. Relational databases do not meet the requirements for processing these often unstructured data sets with reasonable performance. The database research community started to address these trends in the early 2000s, and two new research directions have attracted major interest since: large-scale non-relational data processing and low-latency data stream processing. Large-scale non-relational data processing, commonly known as "Big Data" processing, was quickly adopted in industry. In parallel, low-latency data stream processing was mainly driven by the research community, which developed new systems that embrace a distributed architecture and scalability and exploit data parallelism. While these systems have gained more and more attention in industry, there are still major challenges in operating them at large scale. The goal of this dissertation is twofold: first, to investigate the runtime characteristics of large-scale data-parallel distributed streaming systems; and second, to propose the "Dual Streaming Model" to express the semantics of continuous queries over data streams and tables. Our goal is to improve the understanding of system and query runtime behavior, with the aim of provisioning queries automatically. We introduce a cost model for streaming data flow programs that takes into account the two techniques of record batching and data parallelization. Additionally, we introduce optimization algorithms that leverage our model for cost-based query provisioning.
The proposed Dual Streaming Model expresses the result of a streaming operator as a stream of successive updates to a result table, inducing a duality between streams and tables. Our model handles the inconsistency of the logical and the physical order of records within a data stream natively, which allows for deterministic semantics as well as low latency query execution.
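The stream-table duality at the heart of the Dual Streaming Model can be illustrated with a toy changelog example (a deliberate simplification with invented records, not the model's formal definition):

```python
# Toy illustration of stream-table duality: a stream of keyed updates can be
# folded into a result table, and event timestamps let the logical order be
# restored even when records arrive out of (physical) order. Invented data.
updates = [("page_a", 1), ("page_b", 1), ("page_a", 2), ("page_a", 3)]

def materialize(stream):
    """Replay a changelog stream into its result table (last update per key wins)."""
    table = {}
    for key, value in stream:
        table[key] = value
    return table

print(materialize(updates))                   # {'page_a': 3, 'page_b': 1}

# Records carrying event timestamps: arrival order differs from event-time order.
timestamped = [("page_a", 10, 1), ("page_a", 30, 3), ("page_a", 20, 2)]  # (key, ts, value)

def materialize_by_event_time(stream):
    latest = {}
    for key, ts, value in stream:
        if key not in latest or ts >= latest[key][0]:
            latest[key] = (ts, value)         # keep the update with the newest timestamp
    return {key: value for key, (_, value) in latest.items()}

print(materialize_by_event_time(timestamped))  # {'page_a': 3}
```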
119

Analýza řídicí roviny mobilních sítí 4. generace / Control plane analysis in 4th generation mobile networks

Hajn, Pavel January 2014 (has links)
The thesis focuses on describing the LTE system in terms of signaling on the interfaces of the LTE and EPC subsystems, for example during the initial network attachment of a UE. It then describes the types of diagnostic methods used for mobile networks: OSS-based monitoring, drive testing and flow analysis. The thesis also describes key performance indicators (RSRP, RSSI, etc.) and proposes measurements of the LTE network physical layer and of the data transmission speed.
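For context on the key performance indicators named above, a commonly used rule of thumb (assuming a fully loaded carrier with equal power per resource element; this approximation is not taken from the thesis) relates RSRP and RSSI as follows:

```python
# Approximate relation between RSRP and RSSI on a fully loaded LTE carrier:
# RSSI ≈ RSRP + 10*log10(12 * N_RB), with 12 subcarriers per resource block.
# This is a common rule of thumb, not a measurement procedure from the thesis.
import math

def estimated_rsrp(rssi_dbm: float, n_rb: int) -> float:
    """Estimate RSRP [dBm] from RSSI [dBm] for a carrier with n_rb resource blocks."""
    return rssi_dbm - 10 * math.log10(12 * n_rb)

print(f"{estimated_rsrp(-65.0, 50):.1f} dBm")   # 10 MHz carrier (50 RBs) -> about -92.8 dBm
```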
120

Návrh projektu informačního systému malé firmy / Project information system for little firm

Švábenský, David January 2008 (has links)
The objective of this dissertation is to propose the structure of a new information system (IS) for the company that fits the existing data flows and, at the same time, satisfies the needs of its users. Based on the suggestions and concepts presented in the thesis, it should be possible to set up a completely functional IS.
