  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Codebook design for distributed relay beamforming system

Zheng, Min 01 April 2012 (has links)
In FDD amplify-and-forward distributed relay networks, codebook techniques are used to feed back quantized CSI at limited cost. First, this thesis focuses on phase-only and with-power-control codebook design methods under individual relay power constraints. Phase-only codebooks can be generated off-line with the Grassmannian beamforming criterion. Because the optimal beamforming vector is non-uniformly distributed in the vector space, the Lloyd algorithm is proposed for with-power-control codebook design. To reduce search complexity, a suboptimal method for the codebook update stage of the Lloyd algorithm is proposed; its performance is compared to that of the global search method, which provides the optimal solution but incurs high computational complexity. Second, this thesis investigates the performance difference between phase-only and with-power-control codebooks. The power-control gain is found to be tightly related to the relay locations: when the relays are close to the source node, the gain from power control is negligible, and phase-only codebooks become a viable choice for feedback thanks to their simple implementation and off-line computation. Finally, the codebook design problem is extended to the total relay power constraint case, and the Lloyd algorithm with a primary-eigenvector method is proposed to design a suboptimal codebook. / UOIT
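The limited-feedback step described above can be sketched as follows: the receiver searches a phase-only codebook for the codeword that maximizes beamforming gain and feeds back only its index. This is a minimal illustration, not the thesis's design; the random-phase codebook stands in for a Grassmannian-criterion construction, and all names and sizes here are assumptions.

```python
import numpy as np

def phase_only_codebook(num_relays, num_codewords, seed=0):
    """Illustrative phase-only codebook: unit-modulus entries with random
    phases, a stand-in for an off-line Grassmannian-criterion design."""
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0, 2 * np.pi, size=(num_codewords, num_relays))
    return np.exp(1j * phases) / np.sqrt(num_relays)

def select_codeword(h, codebook):
    """Quantized feedback: return the index of the codeword that
    maximizes the beamforming gain |h^H w|^2."""
    gains = np.abs(codebook.conj() @ h) ** 2
    return int(np.argmax(gains))

h = np.array([1 + 1j, 0.5 - 0.2j, -0.3 + 0.8j])   # example channel vector
cb = phase_only_codebook(num_relays=3, num_codewords=8)
idx = select_codeword(h, cb)
```

In a limited-feedback system only `idx` (3 bits for 8 codewords) travels over the feedback link instead of the full complex channel vector.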
2

Improved multipath channel estimation and data transmission through beamforming training using hierarchical codebook

Sun, Yi-Ming 04 January 2022 (has links)
Multiple-input multiple-output (MIMO) technology with antenna arrays is a vital enabler of the features promised for next-generation wireless communication. Multiple antennas at the transmitter and receiver can provide diversity as well as multiplexing gain during data transmission. To exploit the multiplexing gain of MIMO systems, two or more channel paths are required to carry multiple signal streams simultaneously. Beamforming (BF) training using low-resolution and high-resolution array beams is already part of the IEEE 802.11ad standard, making hierarchical codebook design an attractive approach. In this thesis, our goal is to improve multipath channel estimation and data transmission through BF training using a hierarchical codebook design. Kaiser-window sector array design and restricted orthogonal projection are applied during the beam training phase. The pre-defined hybrid-implemented codewords selected after BF training are used directly for data transmission. With these combined efforts, a 30% higher spectral efficiency is achieved compared to the reference design [1]. / Graduate
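The hierarchical beam training the thesis builds on can be illustrated as a binary descent over angular sectors. This sketch is not the Kaiser-window design from the thesis: each sector is scored by sampling ordinary steering vectors, a simplified stand-in for the wide-beam codewords of a real hierarchical codebook, and the array size and level count are assumptions.

```python
import numpy as np

def steering_vector(n, angle):
    """ULA steering vector at a normalized spatial angle in [-1, 1)."""
    return np.exp(1j * np.pi * np.arange(n) * angle) / np.sqrt(n)

def sector_gain(h, n, lo, hi, samples=32):
    """Best gain over steering vectors sampled across a sector, a
    simplified stand-in for probing the sector with one wide codeword."""
    step = (hi - lo) / samples
    centers = lo + step / 2 + step * np.arange(samples)
    return max(abs(steering_vector(n, a).conj() @ h) for a in centers)

def hierarchical_beam_search(h, n=32, levels=6):
    """Binary descent through the codebook hierarchy: split the current
    angular sector in two and keep the stronger half at each level."""
    lo, hi = -1.0, 1.0
    for _ in range(levels):
        mid = (lo + hi) / 2
        if sector_gain(h, n, lo, mid) >= sector_gain(h, n, mid, hi):
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2    # center of the final narrow sector

true_angle = 0.37
h = steering_vector(32, true_angle)   # single-path channel along one angle
est = hierarchical_beam_search(h)
```

Each level halves the sector, so the dominant path is localized with a logarithmic number of beam measurements rather than an exhaustive sweep.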
3

Moving Object Detection Based on Ordered Dithering Codebook Model

Guo, Jing-Ming, Thinh, Nguyen Van, Lee, Hua October 2014 (has links)
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA / This paper presents an effective multi-layer background modeling method that detects moving objects by exploiting novel distinctive features and the hierarchical structure of the Codebook (CB) model. In the block-based structure, the mean-color feature within a block often does not contain sufficient texture information, causing incorrect classification, especially in layers with large block sizes. The Binary Ordered Dithering (BOD) feature therefore becomes an important supplement to the mean RGB feature. In summary, the uniqueness of this approach is the incorporation of the halftoning scheme into the codebook model, yielding superior performance over existing methods.
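The role of a dithering pattern as a texture supplement to a mean-color feature can be sketched as below. The 4x4 Bayer matrix scaling and both thresholds are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

# classic 4x4 Bayer matrix, scaled to the 0-255 intensity range (assumed)
BAYER_4 = (np.array([[ 0,  8,  2, 10],
                     [12,  4, 14,  6],
                     [ 3, 11,  1,  9],
                     [15,  7, 13,  5]]) + 0.5) * 16.0

def bod_feature(block):
    """Binary ordered-dithering pattern of a 4x4 grayscale block."""
    return (block > BAYER_4).astype(np.uint8)

def block_matches(block, cw_mean, cw_bod, mean_thresh=15.0, ham_thresh=3):
    """A block matches a background codeword only if its mean intensity
    is close AND its dither pattern differs in few positions, so flat
    and textured blocks with the same mean are told apart."""
    if abs(block.mean() - cw_mean) > mean_thresh:
        return False
    return int(np.sum(bod_feature(block) != cw_bod)) <= ham_thresh

flat = np.full((4, 4), 100.0)                        # flat background block
striped = np.zeros((4, 4)); striped[:, :2] = 200.0   # same mean, new texture
```

Both blocks have mean 100, so a mean-only test would accept the striped block as background; the BOD pattern rejects it.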
4

Analyse wissenschaftlicher Konferenz-Tweets mittels Codebook und der Software Tweet Classifier / Analysis of scholarly conference tweets using a codebook and the Tweet Classifier software

Lemke, Steffen, Mazarakis, Athanasios 26 March 2018 (has links) (PDF)
With its focused mode of operation, the microblogging service Twitter has gained a remarkable presence over the past decade as a communication medium in many areas of life. One way in which Twitter's increased visibility in everyday communication frequently manifests itself is the deliberate use of hashtags: companies use hashtags to bundle the Twitter discussions about their products, while organizers of major events and television broadcasts announce their own official hashtags to encourage viewers to use the service as a discussion platform alongside the actual event. [... from the introduction]
5

A High-Performance Vector Quantizer Based on Fuzzy Pattern Reduction

Lin, Chung-fu 17 February 2011 (has links)
Recent years have witnessed increasing interest in using metaheuristics to solve the codebook generation problem (CGP) of vector quantization, as well as in reducing the computation time of metaheuristics. One recently proposed method for reducing this computation time is based on the notion of pattern reduction (PR). The problem with PR is that it may compress and remove patterns that should not be compressed and removed, degrading the quality of the solution. In this thesis, we propose a fuzzy version of PR, called fuzzy pattern reduction (FPR), to reduce the chance of compressing and removing patterns that should be kept. To evaluate the performance of the proposed algorithm, we apply it to four metaheuristics (the generalized Lloyd algorithm, code displacement, the genetic k-means algorithm, and particle swarm optimization) and use them to solve the CGP. Our experimental results show that the proposed algorithm not only significantly reduces the computation time but also improves the solution quality of all the metaheuristics evaluated.
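For reference, the generalized Lloyd (LBG) algorithm named above can be sketched in a minimal form; the farthest-point seeding is an assumption added for determinism, and none of the thesis's fuzzy pattern reduction machinery is shown.

```python
import numpy as np

def generalized_lloyd(data, k, iters=25):
    """Generalized Lloyd (LBG) algorithm for VQ codebook generation:
    alternate nearest-codeword partitioning and centroid updates."""
    # deterministic farthest-point seeding: one seed per distant region
    idx = [0]
    for _ in range(1, k):
        d = np.min(np.linalg.norm(data[:, None] - data[idx][None], axis=2),
                   axis=1)
        idx.append(int(d.argmax()))
    codebook = data[idx].astype(float).copy()
    for _ in range(iters):
        dist = np.linalg.norm(data[:, None] - codebook[None], axis=2)
        assign = dist.argmin(axis=1)                 # partition step
        for j in range(k):
            cell = data[assign == j]
            if len(cell):
                codebook[j] = cell.mean(axis=0)      # centroid step
    return codebook

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (100, 2)),      # cluster near (0, 0)
                  rng.normal(5, 0.1, (100, 2))])     # cluster near (5, 5)
cb = generalized_lloyd(data, k=2)
```

A PR/FPR-style acceleration would sit inside the iteration loop, compressing training vectors whose assignment has stabilized so that later iterations touch fewer patterns.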
6

Analyse wissenschaftlicher Konferenz-Tweets mittels Codebook und der Software Tweet Classifier / Analysis of scholarly conference tweets using a codebook and the Tweet Classifier software

Lemke, Steffen, Mazarakis, Athanasios January 2017 (has links)
With its focused mode of operation, the microblogging service Twitter has gained a remarkable presence over the past decade as a communication medium in many areas of life. One way in which Twitter's increased visibility in everyday communication frequently manifests itself is the deliberate use of hashtags: companies use hashtags to bundle the Twitter discussions about their products, while organizers of major events and television broadcasts announce their own official hashtags to encourage viewers to use the service as a discussion platform alongside the actual event. [... from the introduction]
7

Segmentation d'objets mobiles par fusion RGB-D et invariance colorimétrique / Moving object segmentation by RGB-D fusion and color constancy

Murgia, Julian 24 May 2016 (has links)
This PhD thesis falls within the scope of video surveillance, and more precisely focuses on the detection of moving objects in image sequences. In many applications, good detection of moving objects is an indispensable prerequisite to any processing applied to these objects, such as car or people tracking, passenger counting, detection of dangerous situations in specific environments (level crossings, pedestrian crossings, intersections, etc.), or the control of autonomous vehicles. The reliability of computer-vision-based systems requires robustness against difficult conditions, often caused by lighting conditions (day/night, cast shadows), weather conditions (rain, wind, snow) and the topology of the observed scene (occlusions). The work detailed in this thesis aims to reduce the impact of illumination conditions by improving the quality of moving-object detection in indoor or outdoor environments, at any time of day. To this end, three combinable strategies are proposed: i) using colorimetric invariants and/or color representation spaces with invariant properties; ii) using a passive stereoscopic camera and a Microsoft Kinect active camera in addition to the color camera in order to partially reconstruct the 3D environment of the scene, providing the moving-object detection algorithm with an additional dimension, namely depth information, for characterizing pixels; iii) a new fusion algorithm based on fuzzy logic to combine color and depth information while allowing a margin of uncertainty in classifying a pixel as background or moving object.
8

Reconnaissance d'activités humaines à partir de séquences multi-caméras : application à la détection de chute de personne / Recognition of human activities based on multi-camera sequences : application to people fall detection

Mousse, Ange Mikaël 10 December 2016 (has links)
Artificial vision is a rapidly evolving research field. New strategies make it possible to build networks of smart cameras, which has led to the development of many automatic camera-based surveillance applications. The work developed in this thesis concerns the design of an intelligent video-surveillance system for real-time fall detection. The first part of our work consists in robustly estimating the surface area of a person from two cameras with complementary views, based on the detection performed by each camera. To obtain robust detection, we propose two approaches. The first combines a motion-detection algorithm based on background modeling with an edge-detection algorithm; a fusion approach is proposed to make the detection result much more reliable. The second approach is based on the homogeneous regions of the image: a first segmentation determines the homogeneous regions, and the background is then modeled from these regions. Once the foreground pixels are obtained, they are approximated by a polygon to reduce the amount of information to process. To estimate the person's surface area, we propose a fusion strategy that aggregates the detections of the cameras by computing the intersection of the projections of the polygons onto the ground plane, the projection being based on planar homography. From this estimate, we propose a strategy to detect falls; our approach also provides precise information about the person's posture. The proposed algorithms were implemented and tested on public datasets in order to compare their effectiveness with existing state-of-the-art approaches, and the results detailed in this manuscript show the contribution of our algorithms.
9

Exploiting spatial and temporal redundancies for vector quantization of speech and images

Meh Chu, Chu 07 January 2016 (has links)
The objective of the proposed research is to compress data such as speech, audio, and images using a new re-ordering vector quantization approach that exploits the transition probability between consecutive code vectors in a signal. Vector quantization encodes blocks of samples from a data sequence by replacing every input vector with a reproduction vector from a dictionary. Shannon's rate-distortion theory states that signals encoded as blocks of samples have better rate-distortion performance than signals encoded on a sample-by-sample basis; consequently, vector quantization achieves a lower coding rate than scalar quantization for a given distortion. Standard vector quantization, however, does not take advantage of the correlation between successive input vectors, and it has been demonstrated that real signals have significant inter-vector correlation. This correlation has led to vector quantization approaches that encode input vectors based on previously encoded vectors. Several methods in the literature exploit the dependence between successive code vectors: predictive vector quantization, dynamic codebook re-ordering, and finite-state vector quantization are examples of schemes that use inter-vector correlation. Predictive vector quantization and finite-state vector quantization predict the reproduction vector for a given input vector from past input vectors. Dynamic codebook re-ordering vector quantization has the same reproduction vectors as standard vector quantization; its algorithm is based on re-ordering indices, whereby existing reproduction vectors are assigned new channel indices according to a structure that orders them by increasing dissimilarity.
Hence, an input vector encoded by the standard vector quantization method is transmitted through the channel with new indices, such that 0 is assigned to the reproduction vector closest to the previous reproduction vector, and larger index values are assigned to reproduction vectors at larger distances from it. Dynamic codebook re-ordering assumes that the reproduction vectors of two successive vectors of a real signal are typically close to each other under a distance metric; occasionally, however, two successively encoded vectors have relatively large distances from each other. Our likelihood codebook re-ordering vector quantization algorithm exploits the structure within a signal through the non-uniformity of the reproduction-vector transition probabilities in a data sequence. Input vectors with a higher probability of transition from the prior reproduction vector are assigned indices of smaller value: the code vectors most likely to follow a given vector receive indices close to 0, while the less likely ones receive indices of higher value. This re-ordering gives the reproduction dictionary a structure suitable for entropy coding, such as Huffman and arithmetic coding. Since such transitions are common in real signals, our proposed algorithm, combined with entropy coders such as binary arithmetic and Huffman coding, is expected to yield lower bit rates than standard vector quantization at the same distortion. The re-ordering approach on quantized indices can be useful in speech, image, and audio transmission: by applying it to these data types, we expect to achieve lower coding rates for a given distortion or perceptual quality, which makes the proposed algorithm useful for the transmission and storage of larger image and speech streams over their respective communication channels.
Applying truncation to the likelihood codebook re-ordering scheme yields much lower bit rates without significantly degrading the perceptual quality of the signals. Text and other multimedia signals may benefit from this additional layer of likelihood re-ordering compression.
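The index re-ordering described above can be sketched as follows: transmit the rank of the chosen codeword when the dictionary is sorted by distance from the previously sent reproduction vector, so likely transitions map to indices near 0 that an entropy coder can exploit. The toy codebook values are assumptions.

```python
import numpy as np

def reorder_index(codebook, prev_idx, chosen_idx):
    """Dynamic re-ordering sketch: rank of the chosen codeword when the
    dictionary is sorted by distance from the previously transmitted
    reproduction vector; likely successors get ranks near 0."""
    d = np.linalg.norm(codebook - codebook[prev_idx], axis=1)
    order = np.argsort(d, kind="stable")       # closest codeword first
    return int(np.where(order == chosen_idx)[0][0])

# toy 2-D codebook: two tight pairs of reproduction vectors (assumed values)
codebook = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
rank = reorder_index(codebook, prev_idx=0, chosen_idx=1)   # near neighbor
```

Over a real signal the rank stream is heavily skewed toward 0, so Huffman or arithmetic coding of ranks spends fewer bits than coding the raw indices.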
10

Foreground Segmentation of Moving Objects

Molin, Joel January 2010 (has links)
Foreground segmentation is a common first step in tracking and surveillance applications. Its purpose is to provide later stages of image processing with an indication of where interesting data can be found. This thesis investigates how foreground segmentation can be performed in two contexts: as a pre-step to trajectory tracking and as a pre-step in indoor surveillance applications.

Three methods are selected and detailed: a single-Gaussian method, a Gaussian mixture model method, and a codebook method. Experiments are then performed on typical input video using these methods. It is concluded that the Gaussian mixture model produces the output yielding the best trajectories when used as input to the trajectory tracker. An extension to the Gaussian mixture model is proposed that reduces shadows, improving foreground segmentation performance in the surveillance context.
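As a point of reference for the methods compared, the simplest of them, a per-pixel single-Gaussian background model, can be sketched as below; the learning rate `alpha` and the `k`-sigma band are assumed values, not taken from the thesis.

```python
import math

def update_gaussian(mean, var, pixel, alpha=0.05, k=2.5):
    """Per-pixel single-Gaussian background model (sketch): classify the
    pixel against a k-sigma band, then update the running mean and
    variance so the model tracks slow illumination changes."""
    is_fg = abs(pixel - mean) > k * math.sqrt(var)
    if not is_fg:            # only background observations update the model
        mean = (1 - alpha) * mean + alpha * pixel
        var = (1 - alpha) * var + alpha * (pixel - mean) ** 2
    return mean, var, is_fg

mean, var = 100.0, 16.0                # background near intensity 100, sigma 4
m, v, fg = update_gaussian(mean, var, 102.0)    # inside the band
_, _, fg2 = update_gaussian(mean, var, 180.0)   # far outside the band
```

The Gaussian mixture model generalizes this by keeping several (mean, variance, weight) triples per pixel, which is what lets it absorb dynamic backgrounds.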
