  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

Content-aware video transmission in HEVC context : optimization of compression, of error resilience and concealment, and of visual quality / Transmission vidéo «contenu»-adaptée dans le contexte HEVC : optimisation de la compression, de la tolérance aux erreurs de la transmission, et de la qualité visuelle

Aldahdooh, Ahmed 25 August 2017 (has links)
In this work, global and local content characteristics are used to improve the delivery chain of video sequences. The work is divided into four main parts that take advantage of video content features. The first part introduces a joint content-complexity encoder parameter prediction model. The model uses the bitrate, distortion, and complexity of different parameter configurations to obtain recommended encoder parameter values. The links between content features and the recommended values are then identified, and the prediction model is built from these features and the recommended values. The second part presents the proposed multiple description coding (MDC) scheme, which is optimized for high-order MDC. The corresponding decoding and content-dependent error recovery procedures are also specified. The quality of the received videos is evaluated subjectively, and, by analyzing the subjective experiment results, an adaptive, i.e. content-aware, scheme is introduced. Finally, an application scenario is simulated to study realistic bitrate consumption. The third part uses the motion properties of the content to build a motion map, which serves as input to a modified state-of-the-art inpainting-based error concealment algorithm. A subjective experiment was conducted to evaluate the algorithm and to study how much the processed videos disturb content-aware observers. The fourth part has two sub-parts: the first concerns HRC selection algorithms for large-scale video databases, with improved performance evaluation measures for video quality assessment algorithms using training and validation sets; the second introduces global content-aware no-reference video quality assessment.
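The abstract does not give the form of the parameter prediction model. As a loose illustration only (not the author's actual method), the core idea — mapping content-complexity features to a recommended encoder parameter learned from rate-distortion-complexity trials — could be sketched as a nearest-neighbour lookup. All feature names and values below are hypothetical.

```python
# Hypothetical sketch: predict a recommended encoder parameter (e.g. a QP)
# from content-complexity features via nearest-neighbour lookup over
# previously profiled configurations. Not the thesis's actual model.

def predict_parameter(features, training_set):
    """Return the recommended parameter of the closest training sample.

    features: tuple of content descriptors (e.g. spatial/temporal complexity)
    training_set: list of (feature_tuple, recommended_parameter) pairs
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_set, key=lambda s: dist(s[0], features))[1]

# Toy training data: (spatial_complexity, temporal_complexity) -> QP
training = [((0.2, 0.1), 22), ((0.8, 0.7), 32), ((0.5, 0.4), 27)]
print(predict_parameter((0.75, 0.65), training))  # closest sample is (0.8, 0.7)
```

In practice such a model would be trained on many encodes per content class; the lookup here only illustrates the feature-to-parameter mapping.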
232

New contributions to source and channel coding

Van der Walt, Werner 12 September 2012 (has links)
D.Ing. / Due to continuous research, a large variety of new source and channel coding technologies are constantly being introduced and refined. This study presents a new approach that enables enumerative source coding to be used on variable-length codewords. The technique introduced is shown to provide an effective and fast decoder. The encoding process depends on the code used and its qualities. This new technique is illustrated by applying it to Huffman source coding; as a result, an efficient and fast Huffman decoder was constructed, which also yielded small codebook representations. An efficient source coding mechanism must be complemented by an equally efficient channel and error correction coding mechanism to ensure an optimal communication channel. We conclude this study by investigating channel and constrained coding. The implementation of error correction and detection codes, such as Reed-Solomon codes, is resource-intensive for longer codewords. This problem is circumvented by introducing an alternative channel architecture in which a channel code is applied to the source data before an error correction code is applied to the channel data. For long codewords in the channel code, this new approach is shown to be equal or superior to block and sliding-window codes. The new approach is block based but, unlike block codes, usable in most types of channels.
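For readers unfamiliar with the variable-length codes the thesis builds on, a standard Huffman construction and prefix decoder can be sketched as follows. This is the textbook algorithm, shown only to illustrate the kind of codebook the thesis's enumerative technique compresses; it is not the thesis's decoder.

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman code for a symbol->frequency mapping.

    Returns a dict symbol -> bitstring (standard greedy construction).
    """
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

def decode(bits, code):
    """Decode a bitstring by prefix lookup in the inverted codebook."""
    inv = {b: s for s, b in code.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inv:
            out.append(inv[buf])
            buf = ""
    return "".join(out)

code = huffman_code({"a": 5, "b": 2, "c": 1, "d": 1})
msg = "abacad"
encoded = "".join(code[s] for s in msg)
assert decode(encoded, code) == msg
```

Because the codewords have variable length (here lengths 1, 2, 3, 3), a naive decoder must walk bit by bit; the thesis's enumerative approach targets exactly this cost.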
233

Network Coding Performance Evaluation and an Application to Underwater Networks

Ding, Xiake January 2015 (has links)
Network coding is a promising technology that many researchers have advocated due to its potentially significant benefits in improving the efficiency of data transmission. In this thesis, we use simulations to evaluate the performance of different network topologies using network coding. By comparing the results with networks without network coding, we confirm that network coding can improve network throughput. It also has the potential to decrease the end-to-end delay and improve reliability. However, there is a tradeoff between delay and reliability when network coding is used, as well as some limitations, which we summarize. Finally, we have also applied network coding to a three-dimensional underwater network using parameters that truly reflect the underwater channel. Our performance evaluations show better throughput and end-to-end delay, but not a better PDR (Packet Delivery Rate), in the underwater topology we used.
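The throughput benefit the abstract refers to is usually motivated by the classic butterfly-network example, in which a relay XORs two packets so that both receivers recover both messages across a bottleneck link of capacity one. A minimal sketch of that coding step (illustrative, not from the thesis's simulations):

```python
# Sketch of the butterfly-network motivation for network coding:
# the relay forwards one XOR-coded packet instead of two plain ones,
# and each receiver cancels out the packet it already heard directly.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

msg1 = b"AAAA"  # packet from source 1, wanted by both receivers
msg2 = b"BBBB"  # packet from source 2, wanted by both receivers

coded = xor_bytes(msg1, msg2)  # the single packet sent over the bottleneck

# Receiver 1 heard msg1 directly, so it recovers msg2 from the coded packet:
recovered2 = xor_bytes(coded, msg1)
# Receiver 2 heard msg2 directly, so it recovers msg1:
recovered1 = xor_bytes(coded, msg2)

assert recovered1 == msg1 and recovered2 == msg2
```

Without coding, the bottleneck link would need two transmissions to serve both receivers; with coding, one suffices.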
234

IMPROVING CODING BEHAVIORS AMONG PHYSICIANS IN A RURAL FAMILY MEDICINE RESIDENCY PROGRAM

Allred, Delayne, Helmly, Laura, Stoltz, Amanda 05 April 2018 (has links)
Appropriate coding is a daunting task for new physicians just entering the world of medical practice, mostly due to the ever-changing standards for reimbursement and the constant demand on provider time from an ever-growing number of patients to serve from a primary care perspective. It has been shown that family physicians lose up to 10–20 percent of reimbursement each year because of incorrect coding. Physicians are responsible for appropriately coding their work and documentation so that the clinic can be fairly reimbursed. In the East Tennessee State University Family Physicians of Bristol residency program, there is a strong tendency for most physicians to under-code the majority of office visits as a 99213, despite the fact that their documentation of these visits reflects coding at much higher levels. The goal of this project is to provide more intensive education to resident physicians on the requirements for coding, and thus change the behaviors that lead to continued under-coding. Researchers in this project used aggregate data collected in the usual course of business to establish the present state of coding behaviors broken down by resident, and then re-assessed these numbers after more intensive education on appropriate coding was presented. Education was provided in a variety of formats, including four short lectures at didactic sessions over the course of several months, as well as handouts with coding guidelines for residents to keep at nurses' stations. Data analysis is currently underway. It is expected that the implementation of the educational program will lead to a statistically significant increase in appropriate coding within the clinic.
This result has important implications regarding education to improve appropriate coding and reimbursement, particularly for small clinics operating in rural regions that are at the highest risk of harm from under-reimbursement due to inaccurate coding.
235

Source-channel coding for wireless networks

Wernersson, Niklas January 2006 (has links)
The aim of source coding is to represent information as accurately as possible using as few bits as possible; to do so, redundancy needs to be removed from the source. The aim of channel coding is in some sense the contrary, namely to introduce redundancy that can be exploited to protect the information when it is transmitted over a nonideal channel. Combining these two techniques leads to the area of joint source–channel coding, which in general makes it possible to achieve better performance in a communication system than when source and channel codes are designed separately. In this thesis, two particular areas of joint source–channel coding are studied: multiple description coding (MDC) and soft decoding. Two new MDC schemes are proposed and investigated. The first is based on sorting a frame of samples and transmitting, as side information/redundancy, an index that describes the resulting permutation. If some of the transmitted descriptors are lost during transmission, this side information (if received) can be used to estimate the lost descriptors from the received ones. The second scheme uses permutation codes to produce different descriptions of a block of source data; these descriptions can be used jointly to estimate the original source data. Finally, the MDC method of multiple description coding using pairwise correlating transforms, as introduced by Wang et al., is also studied; a modification of the quantization in this method is proposed which yields a performance gain. A well-known result in joint source–channel coding is that the performance of a communication system can be improved by soft decoding of the channel output, at the cost of higher decoding complexity. An alternative is to quantize the soft information and store the pre-calculated soft decision values in a lookup table.
In this thesis we propose new methods for quantizing soft channel information, to be used in conjunction with soft-decision source decoding. The question of how best to construct finite-bandwidth representations of soft information is also studied. / QC 20101124
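The first MDC scheme described above — sort a frame, send the permutation as side information, and interpolate lost samples from their known rank — can be sketched roughly as follows. This is a hedged illustration of the idea only; the thesis's quantization and index coding details are omitted, and all values are illustrative.

```python
# Hedged sketch of sorting-based MDC: the permutation (sorted order) is
# transmitted as side information. A lost sample's rank is known from the
# permutation, so the decoder interpolates it from the received samples
# adjacent to it in sorted order.

def encode_frame(frame):
    order = sorted(range(len(frame)), key=lambda i: frame[i])
    return frame, order  # descriptors + permutation side information

def conceal(received, order):
    """received: list with None at lost positions; order: permutation."""
    ranked = [received[i] for i in order]  # values arranged in sorted order
    for r, v in enumerate(ranked):
        if v is None:
            left = next((ranked[j] for j in range(r - 1, -1, -1)
                         if ranked[j] is not None), None)
            right = next((ranked[j] for j in range(r + 1, len(ranked))
                          if ranked[j] is not None), None)
            if left is not None and right is not None:
                ranked[r] = (left + right) / 2  # interpolate between rank neighbours
            else:
                ranked[r] = left if left is not None else right
    out = list(received)
    for r, i in enumerate(order):
        out[i] = ranked[r]
    return out

frame, order = encode_frame([3.0, 1.0, 4.0, 1.5, 5.0])
lost = [3.0, 1.0, None, 1.5, 5.0]  # the sample at index 2 was lost in transit
print(conceal(lost, order))        # index 2 estimated between 3.0 and 5.0
```

The permutation costs extra rate, but it turns a lost descriptor into a bounded interpolation problem rather than a blind guess.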
236

Parameter Estimation by Conditional Coding

Duersch, Taylor 01 May 1995 (has links)
Conditional coding is an application of Markov Chain Monte Carlo methods for sampling from conditional distributions. It is applied here to the problem of estimating the parameters of a computer-simulated pattern of fractures in an isotropic, homogeneous material under plane strain. We investigate the theory and procedures of conditional coding and show the viability of the technique by its application.
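The MCMC machinery the abstract refers to can be illustrated with a minimal Metropolis chain that samples from a distribution restricted to a conditioning event. The target below (a standard normal conditioned on x > 1) is purely illustrative and is not the fracture model of the thesis.

```python
import math
import random

# Hedged sketch of MCMC sampling from a conditional distribution:
# a Metropolis chain whose target density is zero outside the
# conditioning set, so every accepted state satisfies the condition.

def metropolis_conditional(n_steps, start=1.5, step=0.5, seed=0):
    rng = random.Random(seed)

    def density(x):
        # Unnormalized standard-normal density, conditioned on x > 1.
        return math.exp(-x * x / 2) if x > 1 else 0.0

    x, samples = start, []
    for _ in range(n_steps):
        prop = x + rng.uniform(-step, step)
        # Accept with probability min(1, density(prop) / density(x));
        # proposals outside the condition have density 0 and are rejected.
        if rng.random() < density(prop) / density(x):
            x = prop
        samples.append(x)
    return samples

samples = metropolis_conditional(5000)
assert min(samples) > 1  # every draw respects the conditioning event
```

Only the ratio of densities is needed, so the (intractable) normalizing constant of the conditional distribution never has to be computed — the property that makes MCMC attractive for conditional simulation.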
237

Adaptive Transform Coding of Images Using a Mixture of Principal Components

Dony, Douglas Robert 07 1900 (has links)
<p>The optimal linear block transform for coding images is well known to be the Karhunen-Loève transform (KLT). However, the assumption of stationarity in the optimality condition is far from valid for images: images are composed of regions whose local statistics may vary widely across the image. A new approach to data representation, a mixture of principal components (MPC), is developed in this thesis. It combines advantages of both principal components analysis and vector quantization and is therefore well suited to the problem of compressing images. The author proposes a number of new transform coding methods, based on neural network methods using the MPC representation, that optimally adapt to such local differences. The new networks are modular, consisting of a number of modules corresponding to different classes of the input data. Each module consists of a linear transformation whose bases are calculated during an initial training period; the appropriate class for a given input vector is determined by an optimal classifier. The performance of the resulting adaptive networks is shown to be superior to that of the optimal nonadaptive linear transformation, both in terms of rate-distortion and computational complexity. When applied to the problem of compressing digital chest radiographs, compression ratios of between 30:1 and 40:1 are possible without any significant loss in image quality. In addition, the quality of the images was consistently judged to be as good as or better than the KLT at equivalent compression ratios.</p> <p>The new networks can also be used as segmentors, with the resulting segmentation being independent of variations in illumination. In addition, the organization of the resulting class representations is analogous to the arrangement of the directionally sensitive columns in the visual cortex.</p> / Thesis / Doctor of Philosophy (PhD)
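The modular structure described above — per-class linear bases plus a classifier that routes each input block to the basis that represents it best — can be sketched in miniature. The 2-D "blocks" and hand-picked unit bases below are toy stand-ins for the trained per-class bases of the thesis; this is not the author's network.

```python
# Hedged sketch of the mixture-of-principal-components (MPC) idea:
# each class holds its own linear basis, and an input vector is coded
# with whichever class basis reconstructs it with least error.

def project(v, basis):
    """Project v onto a single unit basis vector (keep 1 coefficient)."""
    c = sum(a * b for a, b in zip(v, basis))
    return c, [c * b for b in basis]

def mpc_encode(v, bases):
    """Pick the class whose basis gives the smallest reconstruction error."""
    best = None
    for k, basis in enumerate(bases):
        coeff, recon = project(v, basis)
        err = sum((a - b) ** 2 for a, b in zip(v, recon))
        if best is None or err < best[0]:
            best = (err, k, coeff)
    return best[1], best[2]  # (class index, transform coefficient)

bases = [(1.0, 0.0), (0.0, 1.0)]      # class 0: horizontal; class 1: vertical
print(mpc_encode((3.0, 0.2), bases))  # mostly horizontal -> class 0
print(mpc_encode((0.1, 2.5), bases))  # mostly vertical  -> class 1
```

The output is a (class, coefficients) pair, which is why the scheme doubles as a segmentor: the class index alone labels the local structure of the block.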
238

Scalable Multimedia Communication using Network Coding

Shao, Mingkai 01 1900 (has links)
This dissertation is devoted to algorithmic approaches to the problem of scalable multicast with network coding. Its original contributions can be summarized as follows. We have proved that the scalable multicast problem is NP-hard, even with the ability to perform network coding at the network nodes. Several approximations are derived based on different heuristics, and systematic approaches are devised to solve those problems. We show that traditional routing methods reduce to a special case in the new network coding context. Two important frameworks usually found in traditional scalable multicast solutions, layered multicast and rainbow multicast, are studied and extended to the network coding scenario; solutions based on these two frameworks are presented and compared. Surprisingly, these two distinctive approaches in the traditional sense become connected and share a similar essence of data mixing in the light of network coding, and cases are presented where they become equivalent and achieve the same performance. We have made significant advances in constructing good solutions to the scalable multicast problem by solving various optimization problems formulated in our approaches. In the layered multicast framework, we started with a straightforward extension of traditional layered multicast to the network coding context. The proposed method features an intra-layer network coding technique applied on different optimized multicast graphs. We later improved this method by introducing inter-layer network coding: by allowing network coding among data from different layers, more leverage is gained when optimizing the network flow, and higher performance is achieved. In the rainbow multicast framework, we choose the unequal erasure protection (UEP) technique as the practical way of constructing balanced MDC, and optimize this MDC design using the max-flow information of the receivers.
After the MDC design is finalized, a single linear network broadcast code is employed to deliver the MDC-encoded data to receivers while satisfying the individual max-flow of every receiver. Although this rainbow-multicast-based solution may sacrifice performance in some cases, it greatly simplifies the rate allocation problem raised in the layered multicast framework, and the use of a single network code also makes the code construction process much clearer. An extensive amount of simulation was performed, and the results show that network coding based scalable multicast solutions can significantly outperform traditional routing based solutions. In addition to the synthetic linear objective function used in the simulation, a practical convex objective function and real video data are used to verify the effectiveness of the proposed solutions. The role of the different parameters in the proposed approaches is analyzed, which gives more guidelines on how to fine-tune the system. / Thesis / Doctor of Philosophy (PhD)
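The "single linear network code" delivery step rests on a standard fact: a receiver that collects enough linearly independent coded combinations can recover all source packets by Gaussian elimination over a finite field. A minimal GF(2) sketch of that decoding step (illustrative, not the thesis's code construction):

```python
# Hedged sketch of linear network coding over GF(2): coded packets carry
# a coefficient vector saying which source packets were XORed together;
# a receiver with enough independent combinations solves for the sources
# by Gaussian elimination.

def encode(packets, coeffs):
    """XOR together the packets selected by a GF(2) coefficient vector."""
    out = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            out = bytes(a ^ b for a, b in zip(out, p))
    return out

def decode(received):
    """received: list of (coeff_vector, payload) pairs, assumed independent."""
    rows = [(list(c), bytearray(p)) for c, p in received]
    n = len(rows[0][0])
    for col in range(n):
        pivot = next(r for r in range(col, len(rows)) if rows[r][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytearray(x ^ y
                                     for x, y in zip(rows[r][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(n)]

packets = [b"hi", b"yo"]
combos = [((1, 0), encode(packets, (1, 0))),
          ((1, 1), encode(packets, (1, 1)))]  # two independent combinations
assert decode(combos) == [b"hi", b"yo"]
```

Practical systems use larger fields (e.g. GF(2^8)) so that randomly chosen coefficients are independent with high probability, but the decoding principle is the same.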
239

Coding theorems for systems of channels /

Gemma, James Leo January 1970 (has links)
No description available.
240

DEVELOPMENT OF DNA CONSTRUCTS, BACTERIAL STRAINS AND METHODOLOGIES TO CHARACTERIZE THE IBS/SIB FAMILY OF TYPE I TOXIN-ANTITOXINS IN ESCHERICHIA COLI

Jahanshahi, Shahrzad January 2019 (has links)
Almost all bacteria contain genes that may lead to their growth stasis and death. Normally, these toxins are believed to be neutralized by their cognate antitoxins from a toxin-antitoxin (TA) operon. These modules are also abundant in pathogenic bacteria, suggesting a role for them both in normal bacterial physiology and in pathogenicity; their functions have been the subject of intense debate. Because of the cell-killing capability of the toxin and the gene-silencing capability of the antitoxin, they have been utilized in basic research, biotechnology, and medical applications. However, further advancement of these applications has been impeded by our limited knowledge of the biology of TAs. Among these TA systems is the Ibs/Sib (A-E) family. Here, we discuss our efforts in characterizing these systems, with a focus on the IbsC/SibC member. Studying them has proven not to be straightforward, owing to the complexity of their underlying mechanisms and to current approaches being laborious and lacking the sensitivity required for these low-abundance molecules. We have developed fluorescence-based platforms that take advantage of sensitive, high-throughput, high-resolution techniques such as fluorescence-activated cell sorting (FACS) to study these molecules, instead of relying on traditional culturing methods. While developing these platforms, we gained insights into the biology and regulation of these molecules. To expand this knowledge, we actively investigated their regulation at the transcriptional and post-transcriptional levels, both in their native context and in artificial systems. The rest of this thesis summarizes our efforts in solving one of the biggest pieces of the Ibs/Sib puzzle, namely their physiological expression.
With the strategies we have optimized for specific detection of these low-abundance molecules, and the knowledge of their biology and regulation presented here, we are now at an exciting point from which to end the long pause in the study of these molecules' functions and to advance TA-based applications. / Thesis / Doctor of Philosophy (PhD) / Almost all bacteria contain genes that may lead to their growth stasis or death. Normally, these toxins are believed to be neutralized by their cognate antitoxins. In spite of efforts to understand these toxin-antitoxin (TA) systems, their physiological roles remain the subject of intense debate. These systems are hard to study mainly because 1) they are only activated under specific conditions and 2) they are low in abundance, and current approaches are neither high-throughput nor sensitive enough. In this thesis, we developed DNA constructs, bacterial strains, and methodologies to facilitate the study of these molecules, particularly the Ibs/Sib family. We then employed these tools to gain fundamental knowledge of their expression under different conditions, which revealed surprising information about the function of these molecules. We believe that future studies can benefit greatly from the tools offered here to enhance our understanding of these systems and lead to useful applications.