271

Automatiserat plocksystem för dagligvaruhandel / Automated Picking System for Grocery Stores

Wennerbo, Theodor, Bildhjerd, Martin C. January 2024
This is a bachelor's thesis in the areas of electrical engineering and automation. It investigates motor and sensor choices and the methodology of automating a grocery-store picking-system prototype. The system consists of two shelves with a plane between them that moves in the vertical direction; a conveyor belt is mounted on the movable plane. We evaluated the motor choice for the application and the safety aspects of the system to build a general understanding of the risk factors involved in realizing a system that can operate autonomously, covering both electrical and mechanical safety, and we considered precautions that prevent malfunctions leading to these risks. To realize the prototype, a pre-study of motor alternatives was conducted to determine the best motor for the application. The chosen motor was a DC stepper motor with a worm gearbox, which allows simple regulation of the motor while handling the weights the system needs to manage. The worm gearbox also eliminates some mechanical risk factors thanks to its self-locking mechanism: if an electrical malfunction such as a power failure occurs, the system will not collapse and drop its payload. In addition, a model of the full-size prototype was designed in 3D CAD software, 3D-printed, and set up to mimic the large prototype. The goal of the miniature model is to test functionality and to evaluate whether the algorithms described in this thesis can be preserved and reused for the full-size system.
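As a rough illustration of how such a stepper-driven vertical plane could be positioned, the sketch below converts a target shelf height into step pulses. The lead-screw pitch, microstepping factor, and driver callback are hypothetical and not taken from the thesis.

```python
# Minimal sketch (not from the thesis): position a stepper-driven lift plane.
# Assumed parameters: 200 steps/rev with 16x microstepping, 8 mm lead-screw pitch.
STEPS_PER_REV = 200 * 16
LEADSCREW_PITCH_MM = 8.0

def steps_for_height(current_mm: float, target_mm: float) -> int:
    """Signed number of step pulses needed to move the plane to the target height."""
    revolutions = (target_mm - current_mm) / LEADSCREW_PITCH_MM
    return round(revolutions * STEPS_PER_REV)

def move_plane(current_mm: float, target_mm: float, pulse) -> float:
    """Issue step pulses one by one; `pulse(direction)` stands in for the driver call."""
    n = steps_for_height(current_mm, target_mm)
    direction = 1 if n >= 0 else -1
    for _ in range(abs(n)):
        pulse(direction)        # e.g. toggle the STEP pin with DIR set accordingly
    return target_mm            # a worm gear self-locks, so the plane holds position unpowered

# Example: move the plane from a shelf at 250 mm to a shelf at 730 mm.
new_height = move_plane(250.0, 730.0, pulse=lambda d: None)
```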
272

JPEG 2000 and parity bit replenishment for remote video browsing

Devaux, François-Olivier 19 September 2008
This thesis is devoted to the study of a compression and transmission framework for video. It exploits the JPEG 2000 standard and coding-with-side-information principles to enable efficient interactive browsing of video sequences. During the last decade, we have witnessed an explosion of digital visual information as well as a significant diversification of visualization devices. In terms of viewing experience, many applications now enable users to interact with content stored on a remote server. Pausing video sequences to observe details by zooming and panning or, conversely, browsing low resolutions of high-quality HD videos are becoming common tasks. The video distribution framework envisioned in this thesis targets such devices and applications. Based on the conditional replenishment framework, the proposed system combines two complementary coding methods. The first is JPEG 2000, a scalable and very efficient compression algorithm. The second is based on the coding-with-side-information paradigm. This technique is relatively novel in a video context and has been adapted to the particular scalable image representation adopted in this work. It has also been improved by integrating an image source model and by exploiting the temporal correlation inherent to the sequence. A particularity of this work is its emphasis on system scalability and on server complexity. The proposed browsing architecture can scale to handle large volumes of content and serve a possibly very large number of heterogeneous users. This is achieved by defining a scheduler that adapts its decisions to the channel conditions and to user requirements expressed in terms of computational capabilities and spatio-temporal interest. This scheduling is carried out in real time at low computational cost and in a post-compression way, without re-encoding the sequences.
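The sketch below illustrates the general conditional-replenishment idea in a hedged way: for each image region, the server chooses between reusing the reference, sending parity bits computed with side information, or sending fresh JPEG 2000 data, under a rate budget. It is not the author's exact scheduler; all rates, distortions, and the Lagrangian weight are assumed inputs.

```python
# Hedged sketch of a conditional-replenishment decision (not the thesis's scheduler):
# for each precinct, pick "skip" (reuse reference), "parity" (side information), or
# "jpeg2000" (fresh data) so that distortion and rate are balanced under a budget.
from dataclasses import dataclass

@dataclass
class Option:
    name: str          # "skip", "parity", or "jpeg2000"
    rate: float        # bits this option would consume
    distortion: float  # residual distortion if this option is chosen

def replenish(precincts, budget_bits, lam=0.01):
    """precincts: list of option lists, one per precinct; returns the chosen option names."""
    decisions, spent = [], 0.0
    for options in precincts:
        skip = next(o for o in options if o.name == "skip")
        # Lagrangian cost D + lambda * R favours cheap options that still cut distortion.
        best = min(options, key=lambda o: o.distortion + lam * o.rate)
        if spent + best.rate > budget_bits:
            best = skip                  # budget exhausted: fall back to the reference
        spent += best.rate
        decisions.append(best.name)
    return decisions

# Example: parity bits give most of the quality at a fraction of the JPEG 2000 rate.
opts = [Option("skip", 0, 40.0), Option("parity", 800, 12.0), Option("jpeg2000", 3000, 5.0)]
print(replenish([opts], budget_bits=2000))   # -> ['parity']
```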
273

Analysis of network management protocols in optical networks

Lim, Kok Seng 03 1900
Approved for public release, distribution is unlimited / In this thesis, the scalability issues of the Simple Network Management Protocol (SNMP) in optical network management are explored. It is important to understand the effect of varying the number of nodes, the request inter-arrival times, and the polling interval on the performance of SNMP, and hence the number of nodes that can be effectively managed. The current study explored the effect of varying these parameters in a controlled test environment using the OPNET simulation package. In addition, traffic analysis was performed on measured SNMP traffic, and statistics were derived from that analysis. With this understanding of SNMP traffic, an SNMPv1 model was defined and integrated into an OPNET network model to study the performance of SNMP. The simulation results provided needed insight into the number of nodes an optical network management system can effectively manage. / Civilian, Singapore Ministry of Defense
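As a back-of-the-envelope illustration of the trade-off the simulations explore, the sketch below bounds how many nodes a single sequential poller can cover for a given polling interval, per-request round-trip time, and number of requests per node. The figures are assumptions, not results from the thesis.

```python
# Rough sketch (not from the thesis): upper bound on manageable nodes per polling cycle.
def max_manageable_nodes(polling_interval_s: float,
                         mean_response_time_s: float,
                         requests_per_node: int) -> int:
    """Nodes a single sequential poller can cover within one polling interval."""
    time_per_node = requests_per_node * mean_response_time_s
    return int(polling_interval_s // time_per_node)

# Example: 60 s polling interval, 50 ms per GetRequest/GetResponse pair, 20 OIDs per node.
print(max_manageable_nodes(60.0, 0.050, 20))   # -> 60 nodes for a purely sequential poller
```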
274

Passage à l’échelle des méthodes de recherche sémantique dans les grandes bases d’images / Scalable search engines for content-based image retrieval task in huge image database

Gorisse, David 17 December 2010
With the digital revolution of the last decade, the quantity of digital photos available to everyone has grown faster than the processing capacity of computers. Current search tools were designed to handle small volumes of data; their complexity generally does not allow searching large corpora within computation times acceptable to users. In this thesis, we propose solutions for scaling up content-based image search engines. First, we considered automatic search engines operating on images indexed as global histograms. Scalability of these systems is achieved by introducing a new index structure, adapted to this context, that lets us perform approximate but more efficient nearest-neighbour searches. Second, we turned to more sophisticated engines that improve search quality by working with local indexes such as interest points. Finally, we proposed a strategy for reducing the computational complexity of interactive search engines, which improve results by using annotations that users provide to the system during search sessions. Our strategy quickly selects the most relevant images to annotate by optimizing an active-learning method.
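To illustrate the active-learning selection step in a hedged way, the sketch below ranks unlabeled images by how close their scores are to a linear decision boundary and picks the most uncertain ones for annotation. It is a generic uncertainty-sampling example, not the author's optimized method; the feature vectors stand in for image histograms.

```python
# Hedged illustration (not the thesis's algorithm) of uncertainty-based selection:
# images whose relevance scores are closest to the decision boundary are the most
# informative ones to ask the user to annotate next.
import numpy as np

def select_for_annotation(features: np.ndarray, w: np.ndarray, b: float, k: int):
    """Return the indices of the k most uncertain images under score = w.x + b."""
    scores = features @ w + b
    uncertainty = np.abs(scores)            # small |score| -> close to the boundary
    return np.argsort(uncertainty)[:k]

rng = np.random.default_rng(0)
X = rng.random((1000, 64))                  # 1000 unlabeled images, 64-bin histograms
w, b = rng.standard_normal(64), 0.0
print(select_for_annotation(X, w, b, k=5))  # 5 images to show the user next
```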
275

Berufsbezogene Handlungs- vs. Lageorientierung: Skalierbarkeit und Beziehung zu beruflicher Arbeitsleistung / Occupational action state orientation: Scalability and its relation to job performance

Stadelmaier, Ulrich W. 12 December 2016
This thesis combines personality systems interaction theory (Kuhl, 2000, 2001) with the model of job performance by Tett and Burnett (2003). Using established stress models from work psychology, it is hypothesized that there is a relation between occupational action-state orientation, scalable by means of item response theory, and job performance, moderated by the subjective stress level of job characteristics. Three surveys among samples of N = 415, N = 331, and N = 49 professionals yielded cross-sectional data for investigating the hypotheses. Occupational action-state orientation proves to be a valid subconstruct of general action-state orientation that is compatible with Samejima's (1969, 1997) Graded Response Model using a 14-item scale. In multiple hierarchical regression analyses, the hesitation dimension of specifically occupational, in contrast to general, action-state orientation predicts both contextual and task performance, incrementally to conscientiousness, extraversion, and neuroticism. Contrary to expectations, this relation is only marginally moderated by stress-relevant job characteristics. Even when controlling for a present common method bias by means of path analysis, the predictor role of the occupational hesitation dimension persists. The dispositional ability to subconsciously counter-regulate positive affect inhibited by occupational obstacles therefore appears to be an important predictor of job performance, especially for leaders. Hence, professional aptitude assessment benefits from assessing action-state orientation in a contextualized manner, and the use of scales constructed with item response theory further enhances the efficiency of the assessment process.
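For readers unfamiliar with the measurement model named above, the standard form of Samejima's graded response model is sketched below; the notation is chosen here and not quoted from the thesis.

```latex
% Standard form of Samejima's graded response model (notation chosen here): for an
% item i with discrimination a_i and ordered thresholds b_{i2} < ... < b_{im}, the
% probability of responding in category k or higher given the latent trait \theta is
% a 2PL boundary curve, and category probabilities are differences of adjacent boundaries.
\[
P^{*}_{ik}(\theta) \;=\; \frac{1}{1 + \exp\!\bigl(-a_i(\theta - b_{ik})\bigr)},
\qquad
P_{ik}(\theta) \;=\; P^{*}_{ik}(\theta) - P^{*}_{i,k+1}(\theta),
\]
with the conventions $P^{*}_{i1}(\theta) = 1$ (lowest category) and
$P^{*}_{i,m+1}(\theta) = 0$ (above the highest category).
```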
276

Eliot: uma arquitetura para internet das coisas: explorando a elasticidade da computação em nuvem com alto desempenho / Eliot: an architecture for the Internet of Things: exploring cloud computing elasticity with high performance

Gomes, Márcio Miguel 26 February 2015
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The digital universe has been growing at significant rates in recent years. One of the main drivers of this increase in data volume is the Internet of Things, which, in a simplistic definition, consists of uniquely identifying objects electronically, tracking them, and storing their information for later use. To deal with such a data load, solutions are needed at the software, hardware, and architecture levels. Studies conducted in this work show that the currently adopted architecture has limitations, especially regarding scalability. Since scalability is a key feature for meeting the growing demand for data collection, processing, and storage, this work presents an architecture named Eliot, with proposals to address scalability and provide elasticity to the system. To this end, it proposes the use of distributed databases, parallel processing, and cloud computing, as well as a restructuring of the current architecture. The results obtained after implementing and evaluating Eliot in a cloud computing environment demonstrate the feasibility, efficiency, and reliability of the proposed architecture: performance improved through reduced response times and a larger volume of requests processed and carried over the network, along with fewer connection and data-communication failures.
277

Passage à l'échelle pour les contraintes d'ordonnancement multi-ressources / Scalable multi-dimensional resources scheduling constraints

Letort, Arnaud 28 October 2013
Constraint programming is an approach often used to solve combinatorial problems in different application areas. In this thesis we focus on cumulative scheduling problems. A scheduling problem consists of determining the start dates of a set of tasks while respecting capacity and precedence constraints. Capacity constraints cover both conventional cumulative constraints, where the sum of the heights of the tasks intersecting a given time point is limited, and colored cumulative constraints, where the number of distinct colors assigned to the tasks intersecting a given time point is limited. A recently identified challenge for constraint programming is to deal with large problems, usually solved by dedicated algorithms and metaheuristics; for example, the increasing use of virtualized data centers leads to multi-dimensional placement problems involving thousands of jobs. Scalability is achieved by using a sweep algorithm synchronized over the different cumulative and precedence constraints, which speeds up convergence to the fix point. In addition, from these filtering algorithms we derive greedy procedures that can be called at each node of the search tree to find a solution more quickly. This approach can handle scheduling problems involving more than one million jobs and 64 cumulative resources. The algorithms have been implemented within the Choco and SICStus solvers and evaluated on a variety of placement and scheduling problems.
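To make the cumulative constraint concrete, the hedged sketch below shows the basic sweep idea: scan start and end events in time order, maintain the total height of tasks overlapping the sweep line, and detect any time point where the capacity is exceeded. It illustrates only the feasibility check, not the thesis's synchronized filtering algorithm.

```python
# Hedged sketch of a sweep over a cumulative resource (not the thesis's filtering algorithm).
def cumulative_ok(tasks, capacity):
    """tasks: list of (start, duration, height). True iff resource usage never exceeds capacity."""
    events = []
    for start, duration, height in tasks:
        if duration > 0:
            events.append((start, height))               # height enters the profile
            events.append((start + duration, -height))   # height leaves the profile
    events.sort()                                        # ties: ends (-h) sort before starts (+h)
    load = 0
    for _, delta in events:
        load += delta
        if load > capacity:
            return False
    return True

# Example: three tasks of height 2 on a resource of capacity 4.
print(cumulative_ok([(0, 3, 2), (2, 4, 2), (5, 3, 2)], capacity=4))   # True
print(cumulative_ok([(0, 3, 2), (2, 4, 2), (2, 3, 2)], capacity=4))   # False (load 6 on [2, 3))
```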
278

Some visualization models applied to the analysis of parallel applications / Alguns modelos de visualização aplicados para a análise de aplicações paralelas / Quelques modèles de visualisation pour l’analyse des applications parallèles

Schnorr, Lucas Mello January 2009
Highly distributed systems such as grids are used today for the execution of large-scale parallel applications. Characteristics of these systems include the complex resource interconnection that may be present and their scalability. The interconnection complexity comes from the varying number of hops required for communication among application processes and from differences in network latencies and bandwidth. Scalability means that resources can be added indefinitely simply by connecting them to the existing infrastructure. These characteristics directly influence the way the performance of parallel applications must be analyzed. Traditional visualization schemes for this analysis are usually based on Gantt charts, with one dimension listing the monitored entities and the other dedicated to time. Such visualizations are generally not suited to parallel applications executed in grids. The first reason is that they were not built to offer the developer an analysis that also shows the network topology of the resources. The second reason is that traditional visualization techniques do not scale well when thousands of monitored entities must be analyzed together. This thesis tries to overcome the issues encountered with traditional visualization techniques for parallel applications. The main idea is to explore techniques from the information visualization research area and apply them to the analysis of parallel applications. Based on this idea, the thesis proposes two visualization models: a three-dimensional model and a visual aggregation model. The former can be used to analyze parallel applications while taking into account the network topology of the resources; the visualization is composed of three dimensions, two of which render the topology while the third represents time. The latter model can be used to analyze parallel applications composed of several thousand processes; it uses a hierarchical organization of monitoring data and an information visualization technique called Treemap to represent that hierarchy. Both models represent a novel way to visualize the behavior of parallel applications, since they are conceived for large-scale and complex distributed systems such as grids. The implications of this thesis are directly related to the analysis and understanding of parallel applications executed in distributed systems. It enhances the comprehension of communication patterns among processes and improves the possibility of matching these patterns with the real network topology of grids. Although we extensively use the network-topology example, the approach could be adapted with almost no changes to the logical interconnection provided by a middleware. With the scalable visualization technique, developers are able to look for patterns and observe the behavior of large-scale applications.
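To give a concrete feel for the aggregation model, the hedged sketch below lays out a hierarchy of monitored entities with the classic slice-and-dice Treemap algorithm: each node's rectangle area is proportional to its value (for example, accumulated CPU time), and children split the parent rectangle alternately along the horizontal and vertical axes. The hierarchy and values are illustrative, not data from the thesis.

```python
# Hedged sketch of a slice-and-dice Treemap layout (not the thesis's implementation).
def treemap(node, x, y, w, h, depth=0, out=None):
    """node = (name, value, children); returns a list of (name, x, y, w, h) rectangles."""
    if out is None:
        out = []
    name, value, children = node
    out.append((name, x, y, w, h))
    if children:
        total = sum(c[1] for c in children)
        offset = 0.0
        for child in children:
            frac = child[1] / total
            if depth % 2 == 0:       # slice horizontally at even depths
                treemap(child, x + offset * w, y, frac * w, h, depth + 1, out)
            else:                    # dice vertically at odd depths
                treemap(child, x, y + offset * h, w, frac * h, depth + 1, out)
            offset += frac
    return out

# Example: a small grid with two sites, each running processes with different CPU times.
hierarchy = ("grid", 10, [("site-A", 6, [("p0", 4, []), ("p1", 2, [])]),
                          ("site-B", 4, [("p2", 4, [])])])
for rect in treemap(hierarchy, 0, 0, 100, 100):
    print(rect)
```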
279

Codeur vidéo scalable haute-fidélité SHVC modulable et parallèle / Modular and parallel scalable high efficiency SHVC video encoder

Parois, Ronan 27 February 2018
After entering the digital era, video consumption evolved and defined new trends. Video content can now be accessed from many platforms (television, computer, tablet, smartphone, etc.) and through many media, such as mobile networks, satellite networks, terrestrial networks, the Internet, or local storage on Blu-ray discs. In the meantime, user experience has improved thanks to new video formats such as Ultra High Definition (UHD), High Dynamic Range (HDR), and High Frame Rate (HFR), which respectively enhance quality through resolution, dynamic range, and frequency. New consumption trends and new video formats define new constraints that current and future video encoders must satisfy. In this context, we propose a video coding solution able to meet constraints such as multi-format coding, multi-destination coding, coding speed, and coding efficiency in terms of video compression. This solution relies on the scalable extension of the High Efficiency Video Coding (HEVC) standard, defined in 2014 and called SHVC. The extension enables scalable video coding by producing a single bitstream over several layers built from a common video at different scales of resolution, frequency, quality, bit depth per pixel, or colour gamut. SHVC coding improves on HEVC coding thanks to an inter-layer prediction that uses coding information from lower layers. In this PhD thesis, the proposed solution is based on a professional video encoder, developed by the Ateme company, that exploits parallelism at several levels (inter-frame, intra-frame, inter-block, inter-operation) thanks to a pipelined architecture. Two instances of this encoder run in parallel and are synchronized at the pipeline level to enable inter-layer prediction. Trade-offs between complexity and coding efficiency are proposed for the inter-layer prediction at the slice and prediction-tool levels. For instance, in a broadcast configuration, inter-layer prediction is performed on reconstructed pictures for only half the frames of the bitstream; at constant quality, this saves 18.5% of the coding bitrate for only a 2% loss in coding speed compared to an equivalent HEVC coding. The proposed architecture can also perform all kinds of scalability supported by the SHVC extension. Moreover, for spatial scalability, we propose a down-sampling filter, applied to the base layer, that optimizes the overall coding bitrate. We propose several quality modes integrating parallelism at several levels and low-level optimizations that enable real-time coding of UHD sequences. The proposed solution was integrated into a real-time video broadcast chain and demonstrated at several trade shows, conferences, and ATSC 3.0 meetings.
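The hedged, highly simplified sketch below illustrates the two-instance scheduling idea described above: a base-layer and an enhancement-layer encoder are chained so that inter-layer texture prediction is enabled for only every other frame, trading compression efficiency against speed. The encoder functions are stubs and do not reflect Ateme's actual encoder or the SHVC reference software.

```python
# Hedged sketch (not Ateme's encoder): two-layer encode loop with periodic
# inter-layer prediction. Frame encoding and upsampling are represented by stubs.
def encode_base(frame):
    return {"bitstream": f"BL({frame})", "recon": f"recon({frame})"}

def encode_enhancement(frame, inter_layer_ref=None):
    mode = "inter-layer" if inter_layer_ref else "intra/temporal"
    return {"bitstream": f"EL({frame},{mode})"}

def encode_shvc(frames, ilp_period=2):
    """Encode each frame on both layers; use inter-layer prediction every `ilp_period` frames."""
    output = []
    for i, frame in enumerate(frames):
        bl = encode_base(frame)                      # base layer runs first in the pipeline
        use_ilp = (i % ilp_period == 0)
        el = encode_enhancement(frame,
                                inter_layer_ref=bl["recon"] if use_ilp else None)
        output.append((bl["bitstream"], el["bitstream"]))
    return output

for pair in encode_shvc(["f0", "f1", "f2", "f3"]):
    print(pair)    # f0 and f2 use inter-layer prediction, f1 and f3 do not
```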
280

Skalierbare Ausführung von Prozessanwendungen in dienstorientierten Umgebungen / Scalable Execution of Process Applications in Service-Oriented Environments

Preißler, Steffen 19 November 2012
The structuring and use of company-internal IT infrastructures based on service-oriented architectures (SOA) and established XML technologies has grown steadily in recent years. While early SOA deployments focused on the flexible execution of classic, business-relevant processes, near-real-time data analysis and the monitoring of business-relevant events have since become further important application classes, used both to identify short-term problems in business operations and to detect medium- and long-term market changes so that the company's business processes can be adapted to them flexibly. Because these three application classes developed historically independently of one another, their application processes are currently modeled and executed in separate systems, which leads to a number of disadvantages that this thesis identifies and discusses in detail. Against this background, this thesis derives a consolidated execution platform that makes it possible to model processes of all three application classes together and to execute them efficiently in an SOA-based infrastructure. The thesis addresses the problems of such a consolidated execution platform on three levels: service communication, process execution, and the optimal distribution of SOA components in an infrastructure.
