31

Parallel algorithms and data structures for interactive applications / Algoritmos Paralelos e Estruturas de Dados para Aplicações Interativas / Algorithmes et Structures de Données Parallèles pour Applications Interactives

Toss, Julio January 2017 (has links)
The quest for performance has been a constant throughout the history of computing systems. More than a decade ago, the sequential processing model showed its first signs of exhaustion in sustaining performance improvements. The walls hit by sequential computation pushed a paradigm shift and established parallel processing as the standard in modern computing systems. With the widespread adoption of parallel computers, many algorithms and applications have been ported to fit these new architectures. However, in unconventional applications with interactivity and real-time requirements, achieving efficient parallelizations is still a major challenge. The real-time performance requirement shows up, for instance, in user-interactive simulations, where the system must react to the user's input within one computation time-step of the simulation loop. The same kind of constraint appears in streaming-data monitoring applications: when an external source, such as traffic sensors or social media posts, provides a continuous flow of information to an online analysis system, the consumer must keep a controlled memory budget while delivering fast, processed information about the stream. Common optimizations relying on pre-computed models or static indexes of the data are not possible in these highly dynamic scenarios. The dynamic nature of the data brings up several performance issues, originating from the problem decomposition for parallel processing and from the data-locality maintenance needed for efficient cache utilization. In this thesis we address data-dependent problems in two different applications: one in physically based simulation and another in streaming-data analysis. For the simulation problem, we present a parallel GPU algorithm for computing multiple shortest paths and Voronoi diagrams on a grid-like graph. Our contribution to the streaming-data analysis problem is a parallelizable data structure, based on Packed Memory Arrays, for indexing dynamic geo-located data while keeping good memory locality.
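Below is a minimal, illustrative sketch of the Packed Memory Array idea named in the abstract: a sorted array that keeps deliberate gaps so an insertion only shifts elements up to the nearest gap, preserving memory locality. It is a toy under stated assumptions (a fixed density threshold and a global respread instead of the windowed rebalancing and parallelism a production PMA would use), not the thesis's implementation.

```python
class PackedMemoryArray:
    """Sorted array with deliberate gaps: an insert shifts elements only up
    to the nearest gap, so each update touches a short contiguous region of
    memory (good cache locality)."""

    def __init__(self, capacity=8, max_density=0.7):
        self.slots = [None] * capacity
        self.n = 0
        self.max_density = max_density

    def _respread(self, new_capacity):
        # Redistribute the keys evenly over a (possibly larger) array.
        keys = [k for k in self.slots if k is not None]
        self.slots = [None] * new_capacity
        step = new_capacity / max(len(keys), 1)
        for i, k in enumerate(keys):
            self.slots[int(i * step)] = k

    def insert(self, key):
        if self.n + 1 > self.max_density * len(self.slots):
            self._respread(2 * len(self.slots))
        # Insertion point: just after the last occupied slot with a smaller
        # key (linear scan for clarity; a real PMA binary-searches).
        i = 0
        for j, k in enumerate(self.slots):
            if k is not None and k < key:
                i = j + 1
        # Find the nearest gap at or to the right of i.
        j = i
        while j < len(self.slots) and self.slots[j] is not None:
            j += 1
        if j == len(self.slots):           # no gap on the right: respread
            self._respread(2 * len(self.slots))
            return self.insert(key)
        # Shift the run [i, j) one slot right into the gap, place the key.
        self.slots[i + 1:j + 1] = self.slots[i:j]
        self.slots[i] = key
        self.n += 1

pma = PackedMemoryArray()
for k in [5, 1, 9, 3, 7, 2, 8]:
    pma.insert(k)
print([k for k in pma.slots if k is not None])   # [1, 2, 3, 5, 7, 8, 9]
```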
32

Méthodes et algorithmes de dématriçage et de filtrage du bruit pour la photographie numérique / Demosaicing and denoising methods and algorithms for digital photography

Phelippeau, Harold 03 April 2009 (has links)
Digital cameras are now present everywhere, commonly embedded in portable devices such as mobile phones and personal digital assistants. In spite of constant improvements in computing power and complexity, the quality of the digital imaging chain, comprising the sensor and lens system, is still limited by space and cost constraints. This chain introduces a large number of degradations that significantly decrease overall image quality: blurring, geometric distortions, color artifacts, moiré effects, and static and dynamic noise. Correcting these defects algorithmically, using the ever-increasing power of the embedded processing architectures present in mobile phones and PDAs, is an attractive solution. In this thesis we are especially interested in reducing two major defects of the sensor acquisition chain: Bayer-matrix demosaicing artifacts and photon noise. In the first part, we describe the imaging chain commonly used in digital still and video cameras, showing the function, the inner workings, and the defects introduced by each of its elements, and we outline how these defects can be corrected algorithmically. In the second part, we introduce the principle of Bayer demosaicing, present the state of the art, and propose a new method based on directional interpolation; it yields good, artifact-free image quality while retaining low computational complexity. We then enumerate the noise sources present in digital imaging sensors and their dependence on shooting conditions. After presenting the state of the art in denoising algorithms, we focus on local-neighborhood methods, and more specifically on the bilateral filter, and propose a new bilateral filter for the Bayer color mosaic that adapts to the noise power in the image. In the third part, we present the implementation, optimization, and execution simulation of the proposed demosaicing and denoising algorithms. The implementation target is the TM3270 TriMedia processor from NXP Semiconductors. We show that it is possible to process 5-megapixel images in less than 0.5 seconds and VGA-resolution images at more than 25 frames per second. Finally, for standardization, execution speed, and power-consumption reasons, we describe a dedicated hardware architecture for the proposed demosaicing algorithm; it improves execution speed by a factor of 10 over the TriMedia TM3270 processor.
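As context for the denoising part, here is a plain grayscale bilateral filter in NumPy. It is an illustrative sketch with assumed, fixed sigma parameters; the thesis's filter operates on the Bayer mosaic and adapts its range parameter to the measured noise power, which is not reproduced here.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """img: 2-D float array in [0, 1]. Each output pixel is a weighted mean
    of its neighbours; weights fall off with spatial distance (sigma_s) and
    with intensity difference (sigma_r), so edges are preserved."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # fixed kernel
    for y in range(h):                 # loops kept explicit for clarity
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-(patch - img[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```

In an adaptive variant, sigma_r would be tied to an estimate of the noise standard deviation rather than fixed.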
33

Real Time Vehicle Detection for Intelligent Transportation Systems

Shurdhaj, Elda, Christián, Ulehla January 2023 (has links)
This thesis analyzes how object detectors perform under winter weather conditions, specifically in areas with varying degrees of snow cover. It evaluates the effectiveness of commonly used object detection methods (YOLOv8, YOLOv5, and Faster R-CNN) in identifying vehicles in snowy environments. Additionally, the study explores how to label vehicle objects within a set of image frames so that the annotations are correct, detailed, and consistent: training data is the cornerstone on which machine learning is built, and inaccurate or inconsistent annotations can mislead a model into learning incorrect patterns and features. Data augmentation techniques such as rotation, scaling, and color alteration were applied to improve robustness to such variations. The study aims to contribute to the field of deep learning by providing insights into the challenges of detecting vehicles in snowy conditions and by offering suggestions for improving the accuracy and reliability of object detection systems. It also examines the real-time tracking and detection capabilities of edge devices applied to aerial images under these weather conditions. This research is driven by the gap concerning vehicle detection from drones, especially in adverse weather: substantial datasets were scarce before Mokayed et al. published the Nordic Vehicle Dataset. Using unmanned aerial vehicles (UAVs) to capture real images in different settings and under various snow-cover conditions in the Nordic region expands the existing datasets, which had previously been restricted to non-snowy weather. In recent years, the use of drones to capture real-time data for optimizing intelligent transport systems has surged; their aerial perspective allows data to be collected efficiently over large areas for precise and timely monitoring of vehicular movement. Snowy weather, however, limits visibility and significantly complicates data interpretation and object detection. The emphasis is on the real-time tracking and detection capabilities of edge devices, and the study integrates edge computing into drone platforms to explore the speed and efficiency of data processing in such systems.
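An illustrative sketch of the augmentation types named above (rotation, scaling, color alteration), using Pillow. The parameter ranges are invented for demonstration and are not the study's settings; for detection training, the bounding boxes would also have to be transformed consistently with the image, which is omitted here.

```python
import random
from PIL import Image, ImageEnhance

def augment(img: Image.Image) -> Image.Image:
    """Apply random rotation, scaling, and color/brightness jitter."""
    img = img.rotate(random.uniform(-15, 15), expand=True)    # rotation
    s = random.uniform(0.8, 1.2)                              # scaling
    img = img.resize((int(img.width * s), int(img.height * s)))
    img = ImageEnhance.Color(img).enhance(random.uniform(0.7, 1.3))
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.8, 1.2))
    return img

# aug = augment(Image.open("frame_0001.png"))   # hypothetical file name
```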
34

[en] A METHOD FOR REAL-TIME GENERATION OF VIDEOKE FROM VIDEO STREAMING / [pt] UM MÉTODO PARA GERAÇÃO EM TEMPO REAL DE VIDEOKÊ A PARTIR DE STREAMING DE VÍDEO

MATHEUS ADLER SOARES PINTO 21 March 2024 (has links)
[en] Traditional karaoke systems typically use pre-edited videos, which limits the creation of videoke experiences. In this dissertation, we propose a new method for generating videoke in real time from video streaming sources, called the Videoke Generator. The method combines video and audio processing techniques to generate videoke automatically and is designed to run in real time or near real time. The main objectives of this study are to formulate a methodology for processing continuously streamed video and generating videoke in real time while preserving its essential features: suppressing the music's lead vocals and automatically generating subtitles with word-level highlighting. The results represent a significant contribution to the field of real-time multimedia generation. The method was implemented in a client/server architecture for testing. These contributions advance the field of entertainment and multimedia systems by introducing a new methodology for creating videoke experiences. Based on our literature review, this is, to our knowledge, the first work to develop a real-time videoke generator with automatic synchronization and word-level highlighting.
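For illustration only, the classic center-channel cancellation trick for vocal suppression is sketched below. The dissertation does not state that this is its method (modern systems often use learned source separation), so treat it purely as a baseline idea.

```python
import numpy as np

def suppress_center(stereo: np.ndarray) -> np.ndarray:
    """stereo: (n_samples, 2) float array. Lead vocals are usually mixed
    equally into both channels; the L-R difference cancels that center
    content while keeping side-panned instruments."""
    side = 0.5 * (stereo[:, 0] - stereo[:, 1])
    return np.stack([side, side], axis=1)   # pseudo-stereo instrumental
```

Word-level subtitle highlighting would additionally require forced alignment of the lyrics to the audio, which is outside the scope of this sketch.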
35

Méthodes de reconstruction et de quantification pour la microscopie de super-résolution par localisation de molécules individuelles / Reconstruction and quantification methods for single-molecule based super-resolution microscopy

Kechkar, Mohamed Adel 20 December 2013 (has links)
The field of fluorescence microscopy has witnessed a real revolution over the last few years, reaching nanometric spatial resolutions well below the diffraction limit predicted by Abbe more than a century ago. Single-molecule-based super-resolution techniques such as PALM (Photoactivated Localization Microscopy) or (d)STORM (direct Stochastic Optical Reconstruction Microscopy) allow the reconstruction of images of biological samples in 2 and 3 dimensions with close-to-molecular resolution. However, while they require fairly straightforward instrumentation, they need heavy computation, limiting their routine use: a few tens of thousands of raw images, containing more than a million molecules, must be acquired and analyzed to reconstruct a single super-resolution image. Most of the available tools require post-acquisition processing, making the acquisition protocol much heavier. In addition, quantifying the organization, the dynamics, and the stoichiometry of biomolecular complexes at nanometer scales can be a key determinant in elucidating the origin of certain diseases. Localization microscopy offers such capabilities, but the dedicated analysis methods still have to be developed. In order to democratize this new generation of localization microscopy and make it usable in routine by non-experts, it is essential to develop efficient, simple-to-use, and quantitative localization and analysis methods. In this thesis, we first developed a new technique for real-time localization and reconstruction, based on wavelet decomposition and GPU computing, for super-resolution microscopy in 2 and 3 dimensions. Second, we proposed a quantitative method, based on the visualization and photophysics of organic fluorophores, for measuring the stoichiometry of AMPA receptors in synapses at the nanometer scale.
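The wavelet-based detection mentioned above can be sketched roughly as follows: an "a trous" B3-spline decomposition whose second wavelet plane isolates spot-sized structures and is thresholded against the noise level. The threshold factor and the centroid step are assumptions; the thesis's GPU implementation and sub-pixel fitting are not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve1d, label, center_of_mass

def detect_spots(img, k=3.0):
    """Return rough (y, x) positions of bright spots in a 2-D image."""
    b3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0      # B3-spline
    s0 = img.astype(float)
    s1 = convolve1d(convolve1d(s0, b3, axis=0), b3, axis=1)
    # Second level: same taps with holes ("a trous" = with holes).
    b3_2 = np.array([1, 0, 4, 0, 6, 0, 4, 0, 1], dtype=float) / 16.0
    s2 = convolve1d(convolve1d(s1, b3_2, axis=0), b3_2, axis=1)
    w2 = s1 - s2                        # wavelet plane at the spot scale
    mask = w2 > k * np.std(w2)          # noise-scaled threshold (assumed)
    labels, n = label(mask)
    return center_of_mass(img, labels, range(1, n + 1))
```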
36

Dynamic pupillometry: a proposal for real-time tracking of the position and size of the human pupil / Pupilometria dinâmica : uma proposta de rastreamento da posição e tamanho da pupila humana em tempo real

Dias, Alessandro Gontijo da Costa 17 January 2014 (has links)
Fundação de Amparo a Pesquisa do Estado de Minas Gerais / The study of the pupil's movements (contraction and dilation) is of significant clinical interest because it is used to evaluate both the human visual system and the nervous system. This work presents a technique for tracking the human pupil in real time, both its position and its diameter. Such tracking is crucial for extracting parameters related to pupillary movements, which in turn help in the diagnosis of various diseases, among other applications. The proposed technique feeds the pupil's size and position found in the previous frame into an algorithm that reduces the time spent tracking in the next frame. With this procedure it is possible to track the human pupil in real time, in some cases at an average of up to 140 frames per second. Comparisons with other studies were performed both for the processing time of each frame and for the accuracy of the pupil's location and diameter found in each image. Unlike earlier work, the present method tracks not only the diameter but also the position of the pupil in real time. / Mestre em Ciências (Master of Science)
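The core idea, restricting the search to a window around the previous frame's detection, can be sketched as follows. This is an illustration with assumed threshold and margin values, not the dissertation's code.

```python
import numpy as np

def track_pupil(frame, prev_cx, prev_cy, prev_r, margin=1.5, thresh=40):
    """frame: 2-D uint8 grayscale image. Returns (cx, cy, r) of the pupil,
    estimated inside a window around the previous detection."""
    half = int(prev_r * (1 + margin))
    y0, y1 = max(0, prev_cy - half), min(frame.shape[0], prev_cy + half)
    x0, x1 = max(0, prev_cx - half), min(frame.shape[1], prev_cx + half)
    roi = frame[y0:y1, x0:x1]
    ys, xs = np.nonzero(roi < thresh)       # pupil pixels are dark
    if len(xs) == 0:
        return None                          # fall back to a full-frame search
    cx, cy = x0 + int(xs.mean()), y0 + int(ys.mean())
    r = int(np.sqrt(len(xs) / np.pi))        # equivalent-circle radius
    return cx, cy, r
```

Tracking r over successive frames yields the pupillogram (diameter versus time) from which clinical parameters are extracted.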
37

Méthodes et systèmes pour la détection adaptative et temps réel d’activité dans les signaux biologiques / Systems and methods for adaptive and real-time detection of biological activity

Quotb, Adam 12 October 2012 (has links)
The interaction between biology and electronics is a rapidly expanding discipline. Many electronic systems attempt to interface with tissues or living cells in order to decode biological information. The action potential (AP) is at the heart of biological coding, and it is therefore necessary to be able to locate APs in any type of biological signal. In this manuscript we study the design of an electronic circuit, coupled to a microelectrode array, capable of acquiring biological signals, detecting APs, and recording them. Whether the environment is noisy or not, we treat the AP detection rate and the real-time computing constraint as hard specifications, and silicon area as a price to pay. Initially developed for the study of neural and pancreatic signals, these systems are well suited to other cell types.
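A common adaptive AP detector is sketched below for illustration, using the widely used robust noise estimate sigma = median(|x|)/0.6745. The manuscript's hardware algorithm is not reproduced; the threshold factor and refractory period are assumptions.

```python
import numpy as np

def detect_aps(signal, fs, k=4.0, refractory_ms=1.0):
    """Return sample indices where |signal| crosses an adaptive threshold,
    enforcing a refractory period so each AP is counted once.
    signal: 1-D float array; fs: sampling rate in Hz."""
    sigma = np.median(np.abs(signal)) / 0.6745   # robust noise estimate
    thresh = k * sigma
    dead = int(refractory_ms * 1e-3 * fs)
    spikes, last = [], -dead
    for i, v in enumerate(np.abs(signal)):
        if v > thresh and i - last >= dead:
            spikes.append(i)
            last = i
    return spikes
```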
38

Commit Processing In Distributed On-Line And Real-Time Transaction Processing Systems

Gupta, Ramesh Kumar 03 1900 (has links) (PDF)
No description available.
39

The Multiprocessor Scheduling Of Periodic And Sporadic Hard Realtime Systems

Reddy, Vikrama 02 1900 (has links) (PDF)
Real-time systems have been a major area of study for many years, and advances in electronics, computers, information technology, and digital networks are fueling major changes in the field. In this thesis we look at some of the most commonly modeled real-time task systems, such as the periodic task model, as well as more complex models such as sporadic task systems. The primary focus of research in this field is how to guarantee the hard real-time requirements of a task specification with minimal use of the available hardware resources. Advances in technology have brought multi-core shared-memory architectures and massively parallel computing devices within the reach of ordinary computer users, so it makes sense to study existing and newer task models on a wide variety of hardware platforms. The periodic task model and systems built on it are well understood; newer models such as the sporadic task model have been proposed to capture a larger variety of the real-time systems being designed and used. We focus on designing more efficient scheduling algorithms for the sporadic LL task model and propose simpler proofs for some algorithms in the current literature. The thesis also addresses scheduling sporadic task systems under both multiprocessor full-migration and multiprocessor partitioned schemes, and provides approximation algorithms to efficiently determine the feasibility of such task systems.
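As one concrete example of a partitioned scheme, here is a first-fit-decreasing partitioning test using the per-processor EDF density bound, a sufficient (pessimistic) feasibility condition for sporadic tasks with constrained deadlines. It is a sketch in the spirit of the thesis's topic, not the thesis's algorithm.

```python
def first_fit_partition(tasks, m):
    """tasks: list of (C, D, T) tuples: worst-case execution time, relative
    deadline, minimum inter-arrival time. Returns a task->processor map,
    or None if first-fit fails to place every task."""
    load = [0.0] * m
    assignment = {}
    # Place denser tasks first (first-fit-decreasing heuristic).
    order = sorted(range(len(tasks)),
                   key=lambda i: tasks[i][0] / min(tasks[i][1], tasks[i][2]),
                   reverse=True)
    for i in order:
        c, d, t = tasks[i]
        dens = c / min(d, t)
        for p in range(m):
            if load[p] + dens <= 1.0:    # EDF density bound per processor
                load[p] += dens
                assignment[i] = p
                break
        else:
            return None                  # no processor can host this task
    return assignment

# Example: three sporadic tasks on two processors (numbers are invented).
print(first_fit_partition([(2, 5, 10), (3, 6, 6), (1, 4, 8)], m=2))
```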
40

FORMÁLNÍ MODEL ROZHODOVACÍHO PROCESU PRO ZPRACOVÁNÍ VYSOKOFREKVENČNÍCH DAT / FORMAL MODEL OF DECISION MAKING PROCESS FOR HIGH-FREQUENCY DATA PROCESSING

Zámečníková, Eva Unknown Date (has links)
This dissertation deals with the processing of high-frequency time series and focuses on the design of algorithms and methods to support the prediction of such data. The result is a model for supporting the control of the decision-making process, implemented in a platform for complex data processing. The model proposes a way to formalize the set of business rules that describe the decision-making process. The proposed model must satisfy requirements of robustness, extensibility, real-time processing, and econometrics. The thesis summarizes current knowledge and methodologies for processing high-frequency financial data, whose most common source is stock exchanges. The first part describes the basic principles and approaches currently used for processing high-frequency time data. The next part describes business rules, the decision-making process, and a complex platform for processing high-frequency data, together with the data processing itself on the chosen platform. Emphasis is placed on selecting and adapting the set of rules that drives the decision-making process. The proposed model describes the rule set by means of a matrix grammar. This grammar belongs to the family of grammars with controlled rewriting and, through the defined matrices, makes it possible to influence the data processing.
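A toy illustration of the matrix-grammar formalism named in the abstract: rules are grouped into matrices that must be applied together, which synchronizes otherwise independent rewrites. The symbols and rules below are invented for demonstration only.

```python
matrices = [
    [("S", "ABC")],                            # start
    [("A", "aA"), ("B", "bB"), ("C", "cC")],   # grow all three together
    [("A", "a"), ("B", "b"), ("C", "c")],      # terminate all three
]

def apply_matrix(s, matrix):
    """Apply every rule of the matrix once, in order; fail if any rule
    has no occurrence of its left-hand side."""
    for lhs, rhs in matrix:
        if lhs not in s:
            return None
        s = s.replace(lhs, rhs, 1)
    return s

s = "S"
for m in (matrices[0], matrices[1], matrices[2]):   # one growth round
    s = apply_matrix(s, m)
print(s)   # aabbcc, i.e. a^2 b^2 c^2
```

Here the second matrix forces the counts of a, b, and c to grow in lockstep, yielding a^n b^n c^n, a language beyond context-free grammars; the same coupling is what lets one business rule's applicability control another's.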
