About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Människor, skjortor och siffror : reducera komplexitet och en order blir till / Human beings, shirts and numbers : reduce complexity and an order will emerge

Carlson Ingdahl, Tina January 2012 (has links)
More than 35 years ago, calls were made for research on the constitutive role of accounting. Since then, many statements have been made to specify what accounting is or is not. This study describes what accounting does, in order to amalgamate a fragmented picture of accounting in practice, instead of seeking the answer to the existential question of what accounting really is. The purpose of this study was to investigate and describe what accounting does, and how this is done, on the basis of business meetings, in order to contribute to a better understanding of the role of accounting in practice. This study is based on the actor-network theory approach. Particular attention has been paid to accounting as named numbers, when becoming a performative participant in framed situations. The framed situations of business meetings contained three elements: 1) pure calculation, 2) qualculation, which includes both calculation and judgments, and 3) calqulation as a collective social process. An ethnographically inspired field study was carried out at Eton Fashion AB, a Swedish shirt-making company. Data was collected through participant observation of business meetings, supported by interviews. Photography, sound recording, and field notes were used as documentation techniques. Diagnoses of five business meetings revealed that: 1) accounting restricted time, place and content, 2) accounting brought past and future into the present, 3) accounting summarized and obscured discontinuities, 4) accounting defined people and things, and 5) accounting called for the filling of content. Accounting became an actor in these five ways as it was allied with the people and things that appeared in the meetings. Accounting was in a context where people made sense of situations by making both estimates and judgments. During the meetings, an ongoing reduction of complexity was taking place. Step by step, diversity and complexity were reduced until an order filled with numbers was the only thing remaining. At the same time, something was gained, as we step by step achieved greater legibility, transportability and universality. In this way the situation could subsist. It might move to new situations and it might allow for new summaries and new situations to take place. The situation of a meeting contained elements of pure calculation, representing the cold, anonymous and empty part. Often, though, calculation, because of its emptiness, opened the way for qualculation and calqulation to begin. Accounting as an idea is a taken-for-granted phenomenon, with influence often far beyond what we can see when we find ourselves in a given situation. I conclude that it could have been some other way. It is not accounting in itself, its own excellence or ability to represent the truth, that makes it successful. The success story of accounting is simply about “the others” with whom accounting is an ally. / For the award of the degree of Doctor of Economics in Business Administration, to be presented for public examination, with the permission of the Faculty Board of the School of Business, Economics and Law at the University of Gothenburg, on Friday 30 March at 13.15 in CG-salen at the Department of Business Administration (Företagsekonomiska institutionen), Vasagatan 1, Gothenburg.
2

Machine learning mode decision for complexity reduction and scaling in video applications

Grellert, Mateus January 2018 (has links)
The recent innovations in Machine Learning techniques have led to widespread use of intelligent models to solve complex problems that are especially hard to compute with traditional data structures and algorithms. In particular, current research on Image and Video Processing shows that it is possible to design Machine Learning models that perform object recognition and even action recognition with high confidence levels. In addition, the latest progress on training algorithms for Deep Learning Neural Networks was an important milestone in Machine Learning, leading to prominent discoveries in Computer Vision and other applications. Recent studies have also shown that it is possible to design intelligent models capable of drastically reducing the optimization space of mode decision in video encoders with minor losses in coding efficiency. All these facts indicate that Machine Learning for complexity reduction in video applications is a very promising field of study. The goal of this thesis is to investigate learning-based techniques to reduce the complexity of HEVC encoding decisions, focusing on fast video encoding and transcoding applications. A complexity profiling of HEVC is first presented to identify the tasks that must be prioritized to accomplish this objective. Several variables and metrics are then extracted during the encoding and decoding processes to assess their correlation with the encoding decisions associated with these tasks. Next, Machine Learning techniques are employed to construct classifiers that use this information to accurately predict the outcome of these decisions, eliminating the time-consuming operations required to compute them. The fast encoding and transcoding solutions were developed separately, as the source of information differs in each case, but the same methodology was followed in both. In addition, mechanisms for complexity scalability were developed to provide the best rate-distortion performance for a given target complexity reduction. Experimental results demonstrate that the designed fast encoding solutions achieve time savings of 37% to 78% on average, with Bjontegaard Delta Bitrate (BD-BR) increments between 0.04% and 4.8%. For transcoding, a complexity reduction ranging from 43% to 67% was observed, with average BD-BR increments from 0.34% to 1.7%. Comparisons with the state of the art confirm the efficacy of the designed methods, as they outperform the results achieved by related solutions.
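As a concrete illustration of this kind of learned mode decision — a minimal sketch only, not the classifiers developed in the thesis — the following Python snippet trains a shallow decision tree to predict whether a coding unit (CU) should be split. The feature set, the synthetic labels and the choice of scikit-learn are assumptions made purely for illustration.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical per-CU features that could be logged during encoding:
# [block variance, RD cost of the unsplit mode, neighbour split ratio, QP]
rng = np.random.default_rng(0)
X = rng.random((5000, 4))
# Hypothetical ground truth: 1 if the full RD search ultimately split the CU.
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = DecisionTreeClassifier(max_depth=5)  # a shallow tree keeps the prediction cheap
clf.fit(X_tr, y_tr)
print("split-decision accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# At encoding time the prediction replaces the exhaustive RD search:
# when the classifier predicts "no split", the encoder skips testing the split modes.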
3

Complexity Reduced Behavioral Models for Radio Frequency Power Amplifiers’ Modeling and Linearization

Fares, Marie-Claude January 2009 (has links)
Radio frequency (RF) communications are limited to a number of frequency bands scattered over the radio spectrum. Applications over such bands increasingly require more versatile, data-intensive wireless communications, which leads to the need for highly bandwidth-efficient interfaces operating over wideband frequency ranges. Whether for a base station or a mobile device, the regulations and the adequate transmission of such schemes place stringent requirements on the design of transmitter front-ends. Increasingly strenuous and challenging hardware design criteria must be met, especially in the design of power amplifiers (PA), the bottleneck of the transmitter’s design tradeoff between linearity and power efficiency. The power amplifier exhibits non-ideal behavior, characterized by both nonlinearity and memory effects, heavily affecting that tradeoff and therefore requiring an effective linearization technique, namely Digital Predistortion (DPD). The effectiveness of the DPD is highly dependent on the modeling scheme used to compensate for the PA’s non-ideal behavior. In fact, its viability is determined by the scheme’s accuracy and implementation complexity. Generic behavioral models for nonlinear systems with memory have been used, considering the PA as a black box and requiring RF designers to perform extensive testing to determine the minimal-complexity structure that achieves satisfactory results. This thesis first proposes a direct, systematic approach based on the parallel Hammerstein structure to determine the exact number of coefficients needed in a DPD. Then a physical explanation of memory effects is detailed, which leads to a closed-form expression for the characteristic behavior of the PA based entirely on circuit properties. The physical expression is implemented and tested as a modeling scheme. Moreover, a link between this formulation and proven behavioral models is explored, namely the Volterra series and the Memory Polynomial. The formulation shows the correlation between parameters of generic behavioral modeling schemes when applied to RF PAs and demonstrates redundancy based on the physical existence or absence of modeling terms, detailed for the proven Memory Polynomial modeling and linearization scheme.
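For readers unfamiliar with the Memory Polynomial mentioned above, the sketch below shows how such a behavioral model can be identified from input/output baseband samples by least squares. It is a generic illustration under assumed settings (nonlinearity order, memory depth, and a toy PA used to generate data), not the parallel-Hammerstein procedure developed in the thesis.

import numpy as np

def mp_basis(x, K=5, Q=3):
    """Memory-polynomial regressors x(n-q) * |x(n-q)|^(k-1) for k = 1..K, q = 0..Q."""
    N = len(x)
    cols = []
    for q in range(Q + 1):
        xq = np.concatenate([np.zeros(q, dtype=complex), x[:N - q]])  # delayed input
        for k in range(1, K + 1):
            cols.append(xq * np.abs(xq) ** (k - 1))
    return np.column_stack(cols)

# Synthetic baseband stimulus and a toy PA with mild nonlinearity and memory (assumed).
rng = np.random.default_rng(1)
x = (rng.standard_normal(4000) + 1j * rng.standard_normal(4000)) / np.sqrt(2)
y = x - 0.05 * x * np.abs(x) ** 2 + 0.02 * np.roll(x, 1)

# Least-squares identification of the behavioral-model coefficients.
Phi = mp_basis(x)
coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
nmse = np.mean(np.abs(y - Phi @ coeffs) ** 2) / np.mean(np.abs(y) ** 2)
print("model fit NMSE (dB):", 10 * np.log10(nmse))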
4

Etude de la classification dans un très grand nombre de catégories / Study of classification with a very large number of classes

Puget, Raphael 04 July 2016 (has links)
The growth in the volume of available data today creates new problems for which machine learning does not yet have suitable answers. The usual classification task, which consists of assigning one or more classes to an instance, is thus extended to problems with thousands or even millions of different classes. These problems bring new research directions, such as reducing the complexity of the classification process, which is usually linear in the number of classes and therefore becomes problematic when the number of classes is very large. Several families of solutions have emerged for this problem, such as building a hierarchy of classifiers or adapting ECOC-style ensemble methods. The work presented here proposes two new methods for this extreme classification problem. The first is a new asymmetric measure for partitioning classes when building a hierarchy of classifiers; the second explores a sequential active algorithm for aggregating the most interesting classifiers.
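To make the hierarchical idea concrete — this is a generic two-level sketch, not the asymmetric partitioning measure proposed in the thesis — the snippet below groups the classes, trains one coarse classifier over groups and one fine classifier per group, so prediction cost scales with the number of groups plus one group's size rather than with the total number of classes. The naive modulo partition and the use of logistic regression are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

# Toy problem with many classes; a real "extreme" setting has thousands more.
X, y = make_classification(n_samples=3000, n_features=40, n_informative=30,
                           n_classes=30, n_clusters_per_class=1, random_state=0)

n_groups = 5
group_of = {c: c % n_groups for c in np.unique(y)}   # naive partition of the classes
y_group = np.array([group_of[c] for c in y])

coarse = LogisticRegression(max_iter=1000).fit(X, y_group)      # routes an example to a group
fine = {g: LogisticRegression(max_iter=1000).fit(X[y_group == g], y[y_group == g])
        for g in range(n_groups)}                                # one model per group

def predict(x):
    g = coarse.predict(x.reshape(1, -1))[0]          # only one group's model is evaluated
    return fine[g].predict(x.reshape(1, -1))[0]

print("hierarchical prediction:", predict(X[0]), "true label:", y[0])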
5

Early Skip/DIS: uma heurística para redução de complexidade no codificador de mapas de profundidade do 3D-HEVC / Early Skip/DIS: A Complexity-Reduction Heuristic for 3D-HEVC Depth Coder

Conceição, Ruhan Avila da 26 February 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / 3D videos provide an enhanced visual experience with depth perception, through the use of special displays that project a three-dimensional scene from slightly different directions for the left and right eyes. Despite this improved visual experience, the coded-video data volume tends to increase linearly with the number of processed views, mainly for conventional 3D video formats. In this scenario the Multiview plus Depth (MVD) format emerges, which adds the distance between scene objects and the recording camera (depth maps), allowing an efficient view-synthesis process while reducing the number of views to be transmitted. Unlike previous multiview video coding standards, 3D-HEVC is able to manipulate depth maps efficiently thanks to newly defined tools that exploit depth-map characteristics. Although this improves 3D-HEVC compression efficiency, the addition of new coding tools also increases the complexity of the coding process. Solutions that reduce 3D-HEVC coding time without significantly affecting compression efficiency are therefore important in this scenario. This work presents a complexity-reduction heuristic for the 3D-HEVC depth map coder, called Early Skip/DIS. First, an analysis of the 3D-HEVC depth-map coder is presented. It shows that 2Nx2N is the most used partitioning mode, since some efficient coding tools, such as Skip and DIS, are applied exclusively in this partitioning mode. The analysis also shows that, beyond 2Nx2N being the most used mode, excluding the other partition modes has a negligible impact on coding efficiency and only a small effect on processing time. This led to the development of an early-decision heuristic, Early Skip/DIS, which prevents the encoder from checking unnecessary modes based on the RD cost produced by the Skip and DIS modes. The thresholds used in this solution are defined adaptively, by observing the occurrence rate of those modes as a function of their RD costs. Simulation results show that the proposed solution reduces depth-map coding time by up to 33.7% while affecting texture compression efficiency by only 0.047% (in terms of BD-rate). The proposed heuristic achieves the best depth-map complexity-reduction results among related works.
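A minimal sketch of the kind of early-termination logic described above follows. The function names, the fixed illustrative threshold and the toy cost table are assumptions; the thesis derives its thresholds adaptively from the observed RD-cost distribution of the Skip and DIS modes.

def encode_depth_cu(cu, evaluate_mode, other_modes, threshold):
    """Early Skip/DIS idea: test Skip and DIS first; if their best RD cost is
    low enough, commit to it and skip the remaining prediction modes.
    `evaluate_mode(cu, mode)` is a hypothetical callable returning an RD cost."""
    best_mode, best_cost = None, float("inf")
    for mode in ("SKIP", "DIS"):
        cost = evaluate_mode(cu, mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    if best_cost <= threshold:          # adaptive in the thesis; fixed here
        return best_mode, best_cost     # early termination: no further modes tested
    for mode in other_modes:            # otherwise fall back to the full mode decision
        cost = evaluate_mode(cu, mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode, best_cost

# Toy usage with a made-up cost function:
costs = {"SKIP": 120.0, "DIS": 95.0, "INTRA_2NxN": 90.0, "INTER_2Nx2N": 110.0}
mode, cost = encode_depth_cu(None, lambda cu, m: costs[m],
                             ["INTRA_2NxN", "INTER_2Nx2N"], threshold=100.0)
print(mode, cost)   # DIS is accepted early because 95.0 <= 100.0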
6

Etude de turbocodes non binaires pour les futurs systèmes de communication et de diffusion / Study of non-binary turbo codes for future communication and broadcasting systems

Klaimi, Rami 03 July 2019 (has links)
Today's communication standards have adopted various binary forward error correction codes: turbo codes were adopted for the Long Term Evolution (LTE) standard, while binary LDPC codes were standardized for the fifth generation of mobile communication (5G) alongside polar codes. Meanwhile, the focus of the communication community has shifted towards the requirements of beyond-5G standards. Networks for the year 2030 and beyond are expected to support novel forward-looking scenarios, such as holographic communications, autonomous vehicles, massive machine-type communications and the tactile Internet. To respond to the expected requirements of new communication systems, non-binary LDPC codes were defined, and they have been shown to achieve better error-correction performance than binary LDPC codes. This performance gain comes at the cost of a higher decoding complexity, which depends on the field order. Similar studies emerged in the context of turbo codes, where non-binary turbo codes were defined and have shown promising error-correction performance, while also imposing a high complexity. The aim of this thesis is to propose a new low-complexity structure for non-binary turbo codes. The constituent blocks of this structure are optimized in this work, and a new low-complexity decoding algorithm is proposed, targeting a future hardware implementation. The obtained results are promising: the proposed codes outperform existing binary and non-binary codes from the literature.
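To illustrate where the decoding complexity of non-binary codes comes from, and one common way of reducing it, the sketch below computes log-likelihoods for all q = 2^m candidate values of a GF(2^m) symbol from its bit LLRs and then keeps only the n_m most reliable candidates — the truncation idea used in reduced-complexity non-binary decoders such as EMS. This is a generic illustration under assumed inputs; the thesis's own simplified decoding algorithm may differ.

import numpy as np

def symbol_llrs_from_bit_llrs(bit_llrs, m):
    """Log-likelihoods (up to a constant) of the q = 2**m candidate values of one
    GF(2**m) symbol, assuming independent bit LLRs L = log P(b=0)/P(b=1)."""
    q = 2 ** m
    llrs = np.zeros(q)
    for s in range(q):
        for i in range(m):
            bit = (s >> i) & 1
            # a 1-bit contributes -L_i, a 0-bit contributes 0 (same ordering, shifted constant)
            llrs[s] += -bit_llrs[i] if bit else 0.0
    return llrs

def truncate_candidates(llrs, n_m):
    """Keep only the n_m most reliable symbol candidates."""
    order = np.argsort(llrs)[::-1]          # descending reliability
    return order[:n_m], llrs[order[:n_m]]

bit_llrs = np.array([2.1, -0.4, 3.0, 0.7, -1.2, 0.2])   # assumed channel output for GF(64)
llrs = symbol_llrs_from_bit_llrs(bit_llrs, m=6)
kept, kept_llrs = truncate_candidates(llrs, n_m=8)
print("kept", len(kept), "of", len(llrs), "symbol candidates:", kept)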
7

Modélisation comportementale de drivers de ligne de transmission pour des besoins d'intégrité du signal et de compatibilité électromagnétique / Behavioral modeling of transmission line drivers for signal integrity and electromagnetic compatibility assessments

Diouf, Cherif El Valid 11 June 2014 (has links)
Integrated circuit miniaturization, high operating frequencies, lower supply voltages and high integration densities make the digital signals propagating on interconnects highly vulnerable to degradation or even corruption. Assessing electromagnetic compatibility (EMC) and signal integrity in the early stages of the design flow requires accurate interconnect models that allow efficient time-domain simulations. In this context, our work addresses the behavioral modeling of transmission line buffers, and particularly of drivers. The main result is an original modeling approach partially based on Volterra-Laguerre series. The black-box models we developed have a fairly simple SPICE implementation, which gives them very good portability. They are easy to identify, and their parametric complexity allows a large gain in simulation time with respect to transistor-level driver models. In addition, the developed methods provide more accurate modeling of the nonlinear dynamics of the output port and a more general handling of the inputs; in particular, driver behavior under overclocking conditions is reproduced very well, which standard IBIS models, for example, do not achieve.
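The sketch below illustrates how a Volterra-Laguerre-type model can be identified from input/output waveforms: the input drives a bank of discrete Laguerre filters, and a second-order polynomial combination of the filter outputs is fitted by least squares. The Laguerre pole, the model order and the toy system generating the data are assumptions; the thesis's actual driver-model structure is richer than this.

import numpy as np
from scipy.signal import lfilter
from itertools import combinations_with_replacement

def laguerre_outputs(u, a=0.6, n_filters=4):
    """Outputs of a discrete Laguerre filter bank with pole `a`, driven by u."""
    outs = []
    x = lfilter([np.sqrt(1 - a**2)], [1.0, -a], u)   # L0(z) = sqrt(1-a^2) / (1 - a z^-1)
    outs.append(x)
    for _ in range(1, n_filters):
        x = lfilter([-a, 1.0], [1.0, -a], x)          # all-pass section (z^-1 - a)/(1 - a z^-1)
        outs.append(x)
    return np.column_stack(outs)

rng = np.random.default_rng(2)
u = rng.standard_normal(3000)                         # input waveform
y = np.tanh(lfilter([0.3, 0.5], [1.0, -0.4], u))      # assumed nonlinear dynamic system

L = laguerre_outputs(u)
# Regressors: constant, Laguerre states, and their pairwise products (2nd-order kernel).
quad = [L[:, i] * L[:, j]
        for i, j in combinations_with_replacement(range(L.shape[1]), 2)]
Phi = np.column_stack([np.ones(len(u)), L] + quad)
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ theta
print("model fit NMSE (dB):", 10 * np.log10(np.mean((y - y_hat)**2) / np.mean(y**2)))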
