21

Algorithmic Trading : Analyse von computergesteuerten Prozessen im Wertpapierhandel unter Verwendung der Multifaktorenregression / Algorithmic Trading : analysis of computer driven processes in securities trading using a multifactor regression model

Gomolka, Johannes January 2011 (has links)
Over the last decade, electronic trading on stock exchanges has advanced rapidly; today virtually every exchange runs an electronic trading system. In this context, the term algorithmic trading describes a phenomenon in which computer programs replace human traders, either by making investment decisions or by executing transactions. Algorithmic trading is only one of many innovations that have shaped the technological development of exchange trading, alongside, for example, telegraphy, the telephone, fax, and electronic securities settlement. Today the question is no longer whether computer programs are used in exchange trading, but where the border runs between fully automated (computer-driven) and manual (human) trading. Research on algorithmic trading is confronted with the problem that almost no information about these computer programs is accessible. The idea of this dissertation is to circumvent this problem and to extract information about algorithmic trading indirectly from the analysis of time series of (fund) returns. The research question is: can conclusions about computer-driven securities trading (algorithmic trading) be drawn from the analysis of (fund) return data?
To answer this question, the author formulates a new definition of algorithmic trading and distinguishes two fundamental functions of the computer programs, buy-side and sell-side algorithmic trading (decision support and transaction support, respectively). For the empirical investigation, the author applies the multifactor style-analysis model introduced by Fung and Hsieh (1997). This model makes it possible to decompose time series of fund returns into interpretable components and to attach an economic meaning to the individual regression factors. The results of this dissertation show that style analysis does allow conclusions about algorithmic trading to be drawn from the analysis of (fund) returns. These conclusions are not technical in nature, however; they are limited to the analysis of trading strategies (investment styles).
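As a rough illustration of the style-analysis idea, the sketch below regresses a series of fund returns on a set of factor returns via ordinary least squares, so that each estimated loading can be read as the fund's exposure to one investment style. This is a generic multifactor regression on synthetic data, assumed for illustration; it is not the author's actual Fung and Hsieh (1997) factor set.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic example: 120 monthly observations, 3 style factors
    T, K = 120, 3
    factors = rng.normal(0.0, 0.02, size=(T, K))   # hypothetical factor returns
    true_beta = np.array([0.6, -0.2, 0.3])         # hypothetical style exposures
    fund = 0.001 + factors @ true_beta + rng.normal(0.0, 0.005, T)

    # Style analysis as OLS: fund_t = alpha + sum_k beta_k * factor_kt + eps_t
    X = np.column_stack([np.ones(T), factors])
    coef, *_ = np.linalg.lstsq(X, fund, rcond=None)
    alpha, betas = coef[0], coef[1:]
    print("alpha:", alpha, "style exposures:", betas)

On real data, the recovered exposures would then be interpreted against known algorithmic trading styles, which is the step the dissertation's empirical analysis performs.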
22

GCC-NMF : séparation et rehaussement de la parole en temps-réel à faible latence / GCC-NMF: low latency real-time speech separation and enhancement

Wood, Sean January 2017 (has links)
The cocktail party phenomenon refers to our remarkable ability to focus on a single voice in noisy environments. In this thesis, we design, implement, and evaluate a computational approach to solving this problem named GCC-NMF. GCC-NMF combines unsupervised machine learning via non-negative matrix factorization (NMF) with the generalized cross-correlation (GCC) spatial localization method. Individual NMF dictionary atoms are attributed to the target speaker or background interference at each point in time based on their estimated spatial locations. We begin by studying GCC-NMF in the offline context, where entire 10-second mixtures are treated at once. We then develop an online, instantaneous variant of GCC-NMF and subsequently reduce its inherent algorithmic latency from 64 ms to 2 ms with an asymmetric short-time Fourier transform (STFT) windowing method. We show that latencies as low as 6 ms, within the range of tolerable delays for hearing aids, are possible on current hardware platforms. We evaluate the performance of GCC-NMF on publicly available data from the Signal Separation Evaluation Campaign (SiSEC), where objective separation quality is quantified using the signal-to-noise ratio (SNR)-based BSS Eval and perceptually motivated PEASS toolboxes. Though offline GCC-NMF underperformed other methods from the SiSEC challenge in terms of the SNR-based metrics, its PEASS scores were comparable with the best results. In the case of online GCC-NMF, while SNR-based metrics again favoured other methods, GCC-NMF outperformed all but one of the previous approaches in terms of overall PEASS scores, achieving results comparable to the ideal binary mask (IBM) baseline. Furthermore, we show that GCC-NMF increases objective speech quality and the STOI and ESTOI speech intelligibility metrics over a wide range of input SNRs from -30 dB to 20 dB, with only minor reductions for input SNRs greater than 20 dB. GCC-NMF exhibits a number of desirable characteristics when compared with existing approaches. Unlike computational auditory scene analysis (CASA) methods, GCC-NMF requires no prior knowledge about the nature of the input signals, and may thus be suitable for source separation and denoising applications in a wide range of fields. In the case of online GCC-NMF, only a small amount of unlabeled data is required to pre-train the NMF dictionary. This results in much greater flexibility and significantly faster training when compared to supervised approaches, including NMF and deep neural network-based solutions that rely on large, supervised datasets. Finally, in contrast with blind source separation (BSS) methods that rely on accumulated signal statistics, GCC-NMF operates independently for each time frame, allowing for low latency, real-time applications.
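The GCC component of the approach localizes sources from the time delay between two microphone channels. The sketch below is a textbook GCC with phase transform (GCC-PHAT) delay estimator on simulated signals; it illustrates the localization cue the method relies on in general, not the thesis's specific per-atom attribution code.

    import numpy as np

    def gcc_phat(x, y, fs):
        """Estimate the delay of y relative to x via GCC with PHAT weighting."""
        n = len(x) + len(y)
        X, Y = np.fft.rfft(x, n=n), np.fft.rfft(y, n=n)
        R = X * np.conj(Y)
        R /= np.abs(R) + 1e-12            # PHAT: keep phase, discard magnitude
        cc = np.fft.irfft(R, n=n)
        max_shift = n // 2
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        shift = np.argmax(np.abs(cc)) - max_shift
        return shift / fs                 # delay in seconds

    fs = 16_000
    rng = np.random.default_rng(0)
    x = rng.normal(size=1024)
    y = np.roll(x, 8)                     # simulate an 8-sample inter-mic delay
    print(gcc_phat(x, y, fs) * fs)        # recovers a delay of about 8 samples

In GCC-NMF, delays estimated this way yield per-atom direction estimates, and atoms whose directions match the target are kept while the rest are masked out.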
23

"Processamento distribuído de áudio em tempo real" / "Distributed Real-Time Audio Processing"

Nelson Posse Lago 04 June 2004 (has links)
Computer systems for real-time multimedia processing require high processing power. Problems that demand high processing power are usually solved with parallel or distributed computing techniques; however, the combined difficulties of real-time and of parallel and distributed programming have led the development of real-time multimedia processing for general-purpose computer systems to be based on centralized, single-processor machines. In several multimedia systems, low latency is needed during interaction with the user, which further reinforces this tendency towards single-node processing. In this work, we implemented a mechanism for synchronous, distributed audio processing with low latency on a local area network, making it possible to use a low-cost distributed system for this kind of processing. The main goal is to enable the use of distributed computer systems for the recording and editing of musical material in home and small studios, bypassing the need for high-cost dedicated hardware. The system we implemented consists of two parts: the first, generic, is a middleware for synchronous, distributed processing of continuous media with low latency; the second, built on the first, is geared towards audio processing and is compatible with legacy applications through the standard LADSPA interface. We expect that future research and applications with similar needs will be able to use the middleware described here, both for other kinds of audio processing and for the processing of other media, such as video.
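As a back-of-the-envelope illustration of why block-based audio processing fights latency, the sketch below computes the buffering delay of a block-based pipeline and adds a hypothetical network round trip. All figures are assumptions for illustration, not measurements from this thesis.

    # Illustrative latency budget for block-based distributed audio processing.
    sample_rate = 48_000              # samples per second
    block_size = 64                   # samples per processing block

    block_latency_ms = 1000.0 * block_size / sample_rate
    network_rtt_ms = 0.5              # hypothetical LAN round-trip time
    blocks_in_flight = 2              # e.g. double buffering at the sound card

    total_ms = blocks_in_flight * block_latency_ms + network_rtt_ms
    print(f"per-block: {block_latency_ms:.2f} ms, end-to-end: {total_ms:.2f} ms")

The buffering term dominates at typical LAN round-trip times, which is why distributing the processing need not ruin the latency budget.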
24

The Liebherr Intelligent Hydraulic Cylinder as building block for innovative hydraulic concepts

Leutenegger, Paolo, Braun, Sebastian, Dropmann, Markus, Kipp, Michael, Scheidt, Michael, Zinner, Tobias, Lavergne, Hans-Peter, Stucke, Michael January 2016 (has links)
We present the development of the Liebherr Intelligent Hydraulic Cylinder, in which the hydraulic component is used as a smart sensing element providing useful information for the system in which the cylinder is operated. The piston position and velocity are the most important signals derived from this new measuring approach. The performance under various load and temperature conditions, measured both on dedicated test facilities and in the field on a real machine, is presented. Integrated control electronics, which perform the cylinder state processing, additionally allow the synchronized acquisition of external sensors. By providing comprehensive state information, such as temperature and system pressure, advanced control techniques or monitoring functions can be realized with a monolithic device. Further developments, trends, and benefits for the system architecture are briefly analyzed and discussed.
25

Efficient speaker diarization and low-latency speaker spotting / Segmentation et regroupement efficaces en locuteurs et détection des locuteurs à faible latence

Patino Villar, José María 24 October 2019 (has links)
Speaker diarization (SD) involves the detection of the speakers within an audio stream and of the intervals during which each speaker is active, i.e. the determination of 'who spoke when'. The first part of the work presented in this thesis exploits an approach to speaker modelling involving binary keys (BKs) as a solution to SD. BK modelling is efficient and operates without external training data, using test data alone. The presented contributions include the extraction of BKs based on multi-resolution spectral analysis, the explicit detection of speaker changes using BKs, and SD fusion techniques that combine the benefits of BK and deep learning based solutions. The SD task is closely linked to that of speaker recognition or detection, which involves the comparison of two speech segments and the determination of whether or not they were uttered by the same speaker. Even though many practical applications require their combination, the two tasks are traditionally tackled independently of each other. The second part of this thesis considers an application where SD and speaker recognition solutions are brought together. The new task, coined low-latency speaker spotting (LLSS), involves the rapid detection of known speakers within multi-speaker audio streams. It calls for a rethinking of online diarization and of the manner in which diarization and detection sub-systems are best combined.
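The appeal of binary keys is that each speech segment reduces to a fixed-length binary vector, so comparing segments becomes a cheap bitwise operation. The sketch below scores two hypothetical binary keys with a Jaccard-style similarity; the key length, activation rate, and decision threshold are illustrative assumptions, not values from the thesis.

    import numpy as np

    def bk_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Jaccard-style similarity between two boolean binary-key vectors."""
        intersection = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return float(intersection) / union if union else 1.0

    rng = np.random.default_rng(1)
    key_len = 512                        # illustrative key size
    seg1 = rng.random(key_len) < 0.3     # hypothetical keys from two segments
    seg2 = rng.random(key_len) < 0.3

    score = bk_similarity(seg1, seg2)
    print("same speaker" if score > 0.5 else "different speakers", score)

Because such comparisons are trivially fast, a diarization system can cluster segments or detect speaker changes frame by frame without heavy model evaluation.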
26

Algorithm Design for Low Latency Communication in Wireless Networks

ElAzzouni, Sherif 11 September 2020 (has links)
No description available.
27

Secure Virtual Mobile Small Cells: A Stepping Stone Towards 6G

Rodriguez, J., Koudouridis, X., Gelabert, M., Tayyab, M., Bassoli, R., Fitzek, F.H.P., Torre, R., Abd-Alhameed, Raed, Sajedin, M., Elfergani, Issa T., Irum, S., Schulte, G., Diogo, P., Marzouk, F., de Ree, M., Mantas, G., Politis, I. 08 May 2021 (has links)
As 5th Generation (5G) research reaches its twilight, the research community must look beyond 5G towards the 2030 connectivity landscape, namely 6G. In this context, this work takes a step towards the 6G vision by proposing a next-generation communication platform that aims to extend the rigid coverage area of fixed-deployment networks with virtual mobile small cells (MSCs) created on demand. Relying on emerging computing paradigms such as NFV (Network Function Virtualization) and SDN (Software Defined Networking), these cells can harness radio and networking capability locally, reducing protocol signalling latency and overhead. The MSCs constitute an intelligent pool of networking resources that can collaborate to form a wireless network of MSCs, providing a communication platform for localized, ubiquitous, and reliable connectivity. The technology enablers for implementing the MSC concept are also addressed in terms of virtualization, lightweight wireless security, and energy-efficient RF. The benefits of the MSC architecture for reliable and efficient cell-offloading are demonstrated in a use case. / This project has received funding from the European Union's H2020 research and innovation programme under grant agreement H2020-MSCA-ITN-2016-SECRET 722424.
28

Performance Analysis and Improvement of 5G based Mission Critical Motion Control Applications

Bhimavarapu, Koushik January 2022 (has links)
Industrial needs in the production of goods and the control of processes within the factory keep growing daily, driven by the necessity of serving an ever-growing population. In recent times, industries have been looking towards Industry 4.0 to improve their overall productivity and scalability. One of the significant requirements of Industry 4.0 is the communication network among industrial applications. Nowadays, industries across markets are looking to replace their existing wired networks with wireless networks, which brings many use cases and new business models into existence. To make these options possible, wireless networks need to meet the stringent requirements of industrial applications in terms of reliability, latency, and service availability. This thesis focuses on a systematic methodology for integrating wireless networks such as 5G and Wi-Fi 6 into real-life automation devices. It also describes a methodology for evaluating their communication and control performance while varying control parameters such as topology, cycle time, and network type, and it devises techniques that improve the overall performance, i.e., both the control and the communication performance of the control applications. The method used to implement this work is a case study: the industrial applications are integrated and tested in a real-life scenario, in a best effort to bring the perspectives of communication engineers and control engineers together on the performance of industrial applications. The work verifies the suitability of wireless networks in mission-critical control application scenarios with respect to their communication and control performance. Software for data analysis and visualization, together with its methodology for analyzing the traffic flow of control applications via different wireless networks, is demonstrated while varying control parameters. It is shown that short cycle times are challenging for 5G to support, and that performance becomes better and more stable as the cycle time of the control application increases. It is also found that 1-hop wireless topologies have comparatively better control performance than 2-hop wireless topologies. Finally, it is found that the communication and control performance of the motion control application can be improved by using a hybrid topology, a mixture of 5G and Wi-Fi 6, by modifying some key aspects. The thesis introduces a novel systematic methodology for measuring and analyzing communication and control applications via different wireless networks. It also gives control engineers a better idea of which cycle times the different wireless networks and their topologies support when integrated with industrial automation devices, and of which wireless networks best support industrial applications. It ends with a novel methodology that could improve the performance of mission-critical motion applications using existing wireless technologies.
29

L4S in 5G networks / L4S i 5G-nätverk

Brunello, Davide January 2020 (has links)
Low Latency Low Loss Scalable Throughput (L4S) is a technology that aims to provide high throughput and low latency for IP traffic while also lowering the probability of packet loss. To reach this goal, it relies on Explicit Congestion Notification (ECN), a mechanism that signals congestion in the network without dropping packets. The congestion signals are then handled at the sender and receiver sides by scalable congestion control algorithms. In this work, the challenges of implementing L4S in a 5G network are first analyzed. Using a proprietary state-of-the-art network simulator, L4S was implemented at the Packet Data Convergence Protocol (PDCP) layer of a 5G network. In the simulated 5G scenario, the physical layer has a carrier frequency of 600 MHz and a transmission bandwidth of 9 MHz, and the protocol stack follows the New Radio (NR) specifications. L4S was used to support Augmented Reality (AR) video gaming traffic, with the IETF experimental standard Self-Clocked Rate Adaptation for Multimedia (SCReAM) for congestion control. The results show that, when supported by L4S, the video gaming traffic experiences lower delay than without L4S. The improvement in latency comes with an intrinsic trade-off between throughput and latency. In all the cases analyzed, L4S yields an average application-layer throughput above the minimum requirements of high-rate, latency-critical applications, even at high system load. Furthermore, the packet loss rate is significantly reduced by the introduction of L4S, and when L4S is used in combination with a delay-based scheduler (DBS), a packet loss rate very close to zero is reached.
30

Robust Wireless Communications with Applications to Reconfigurable Intelligent Surfaces

Buvarp, Anders Martin 12 January 2024 (has links)
The concepts of a digital twin and extended reality have recently emerged, which require a massive amount of sensor data to be transmitted with low latency and high reliability. For low-latency communications, joint source-channel coding (JSCC) is an attractive method of error-correction coding compared to the highly complex digital systems currently in use. I propose the use of complex-valued and quaternionic neural networks (QNNs) to decode JSCC codes, where the complex-valued neural networks show a significant improvement over real-valued networks and the QNNs have exceptionally high performance. Furthermore, I propose mapping encoded JSCC code words to the baseband of the frequency domain in order to enable time/frequency synchronization as well as to mitigate fading using robust estimation theory. Additionally, I perform robust statistical signal processing on the high-dimensional JSCC code, showing significant noise immunity with drastic performance improvements at low signal-to-noise ratio (SNR) levels. The performance of the proposed JSCC codes is within 5 dB of the optimal performance theoretically achievable and outperforms the maximum likelihood decoder at low SNR while exhibiting the smallest possible latency. I designed a Bayesian minimum mean square error estimator for decoding high-dimensional JSCC codes, achieving 99.96% accuracy. With the recent introduction of electromagnetic reconfigurable intelligent surfaces (RIS), a paradigm shift is currently taking place in the world of wireless communications. These new technologies have enabled the inclusion of the wireless channel as part of the optimization process. In order to decode polarization-space modulated RIS reflections, robust polarization state decoders are proposed using the Weiszfeld algorithm and a generalized Huber M-estimator. Additionally, QNNs are trained and evaluated for the recovery of the polarization state. Furthermore, I propose a novel 64-ary signal constellation based on scaled and shifted Eisenstein integers and generated using media-based modulation with a RIS. The waveform is received using an antenna array and decoded with complex-valued convolutional neural networks. I employ the circular cross-correlation function and a priori knowledge of the phase angle distribution of the constellation to blindly resolve phase offsets between the transmitter and the receiver without the need for pilots or reference signals. Furthermore, the channel attenuation is determined using statistical methods, exploiting the fact that the constellation has a particular distribution of magnitudes. After resolving the phase and magnitude ambiguities, the noise power of the channel can also be estimated. Finally, I tune an Sq-estimator to robustly decode the Eisenstein waveform. / Doctor of Philosophy / This dissertation covers three novel wireless communications methods: analog coding, communications using the electromagnetic polarization, and communications with a novel signal constellation. The concepts of a digital twin and extended reality have recently emerged, which require a massive amount of sensor data to be transmitted with low latency and high reliability. Contemporary digital communication systems are highly complex, achieving high reliability at the expense of high latency. In order to reduce the complexity, and hence the latency, I propose an analog coding scheme that directly maps the sensor data to the wireless channel.
Furthermore, I propose the use of neural networks for decoding at the receiver, hence the name neural receiver. I employ various data types in the neural receivers, leveraging the mathematical structure of the data to achieve exceptionally high performance. Another key contribution is the mapping of the analog codes to the frequency domain, enabling time and frequency synchronization. I also utilize robust estimation theory to significantly improve the performance and reliability of the coding scheme. With the recent introduction of electromagnetic reconfigurable intelligent surfaces (RIS), a paradigm shift is currently taking place in the world of wireless communications. These new technologies have enabled the inclusion of the wireless channel as part of the optimization process. Therefore, I propose to use the polarization state of the electromagnetic wave to convey information over the channel, where the polarization is set using a RIS. As with the analog codes, I extensively employ robust estimation methods to improve the recovery of the polarization at the receiver. Finally, I propose a novel communications signal constellation generated by a RIS that allows for equal probability of error at the receiver. Traditional communication systems use reference symbols for synchronization; in this work, I instead use statistical methods and the known distributions of the properties of the transmitted signal to synchronize without reference symbols, which is referred to as blind channel estimation. The reliability of this third communications method is further enhanced using a state-of-the-art robust estimation method.
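The Weiszfeld algorithm named above computes the geometric median, a robust alternative to the mean that tolerates outlier samples such as corrupted polarization measurements. The sketch below is a textbook Weiszfeld iteration on assumed 2-D sample points; it illustrates the estimator in general, not the exact decoder configuration used in the dissertation.

    import numpy as np

    def weiszfeld(points, iters=100, eps=1e-9):
        """Geometric median of an (n, d) array of points via Weiszfeld iteration."""
        y = points.mean(axis=0)                      # initialize at the centroid
        for _ in range(iters):
            d = np.maximum(np.linalg.norm(points - y, axis=1), eps)
            w = 1.0 / d                              # inverse-distance weights
            y_new = (points * w[:, None]).sum(axis=0) / w.sum()
            if np.linalg.norm(y_new - y) < eps:      # converged
                break
            y = y_new
        return y

    pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])  # one outlier
    print("mean:  ", pts.mean(axis=0))
    print("median:", weiszfeld(pts))   # pulled far less toward the outlier

Applied to repeated noisy measurements of a received polarization state, the geometric median yields a decoding decision that single corrupted samples cannot drag away, which is the robustness property the dissertation exploits.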
