11

Novel Approaches to Overloaded Array Processing

Hicks, James E. Jr. 22 August 2003 (has links)
An antenna array is overloaded when the number of cochannel signals in its operating environment exceeds the number of array elements. Conventional space-time array processing for narrowband signals fails in overloaded environments. Overloaded array processing (OLAP) is most difficult when the signals impinging on the array are of near-equal power, have tight excess bandwidth, and are of identical signal type. Despite the failure of conventional beamforming in such environments, OLAP becomes possible when a receiver exploits additional signal properties such as the finite-alphabet property and excess bandwidth. This thesis proposes three approaches to signal extraction in overloaded environments, each providing a different tradeoff between performance and complexity. The first receiver architecture extracts signals from an overloaded environment through MMSE interference rejection filtering embedded in a successive interference cancellation (SIC) architecture. The second receiver architecture enhances signal extraction performance by embedding a stronger interference rejection receiver, the reduced-state maximum a posteriori probability (RS-MAP) algorithm, in a similar SIC architecture. The third receiver fine-tunes the performance of spatially reduced search joint detection (SRSJD) with the application of an energy focusing transform (EFT), a complexity-reducing front-end linear pre-processor. A new type of EFT, the Energy Focusing Unitary Relaxed Transform (EFURT), is developed. This transform facilitates a continuous tradeoff between noise enhancement and error propagation in an SRSJD framework. EFURT is used to study the role of this tradeoff for SRSJD receivers in a variety of signal environments. For the environments studied in this thesis, SRSJD achieves an aggressive reduction in interference at the expense of possible noise enhancement. / Ph. D.
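Conceptually, the first architecture can be sketched in a few lines of numpy: an MMSE spatial filter extracts the strongest remaining signal, the finite-alphabet property is exploited by slicing to hard symbols, and the re-modulated signal is cancelled before the next pass. This is a minimal illustration assuming BPSK symbols and a known spatial channel matrix, not the thesis's actual receiver; all names are illustrative.

```python
import numpy as np

def mmse_sic(Y, H, noise_var):
    """MMSE extraction inside a SIC loop (sketch).

    Y : (elements, snapshots) received array data
    H : (elements, signals) known spatial channel; may be overloaded
        (more columns than rows)
    Returns hard BPSK decisions per signal, strongest first.
    """
    Y = Y.astype(complex).copy()
    H = H.astype(complex)
    remaining = list(range(H.shape[1]))
    decisions = {}
    while remaining:
        Hr = H[:, remaining]
        # MMSE weights for the signals still present in the data
        R = Hr @ Hr.conj().T + noise_var * np.eye(H.shape[0])
        W = np.linalg.solve(R, Hr)
        # pick the signal with the largest post-MMSE gain |w^H h|
        gains = np.abs(np.sum(W.conj() * Hr, axis=0))
        k = int(np.argmax(gains))
        s_soft = W[:, k].conj() @ Y        # soft symbol estimates
        s_hard = np.sign(s_soft.real)      # finite-alphabet (BPSK) slicing
        sig = remaining[k]
        decisions[sig] = s_hard
        # cancel the re-modulated signal before the next pass
        Y -= np.outer(H[:, sig], s_hard)
        remaining.pop(k)
    return decisions
```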
12

On joint source-channel decoding and interference cancellation in CDMA-based large-scale wireless sensor networks

Illangakoon, Chathura 26 May 2013 (has links)
Motivated by potential applications in wireless sensor networks, this thesis considers the problem of communicating a large number of correlated analog sources over a Gaussian multiple-access channel using non-orthogonal code-division multiple-access (CDMA). A joint source-channel decoder is presented which can exploit the inter-source correlation for interference reduction in the CDMA channel. This decoder uses a linear minimum mean square error (MMSE) multi-user detector (MUD) in tandem with an MMSE joint source decoder (JSD) for multiple sources to achieve a computational complexity that scales with the number of sources. The MUD and the JSD then iteratively exchange extrinsic information to improve the interference cancellation. Experimental results show that, compared to a non-iterative decoder, the proposed iterative decoder is more robust against potential performance degradation due to correlated channel interference and offers better near-far resistance.
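The iterative structure can be illustrated with a small numpy sketch in which both the MUD and the JSD are reduced to linear MMSE steps that pass their estimates back and forth. The correlation model, the simplified information exchange (the full extrinsic/intrinsic separation is omitted), and all names are assumptions for illustration, not the decoder from the thesis.

```python
import numpy as np

def iterative_mud_jsd(y, S, Cx, noise_var, iters=5):
    """Iterative MMSE multi-user detection + joint source decoding (sketch).

    y  : received CDMA chip vector, y = S @ x + n
    S  : (chips, users) non-orthogonal spreading matrix
    Cx : (users, users) inter-source correlation of the analog sources
    Returns the final estimate of the correlated source vector x.
    """
    n_users = S.shape[1]
    x_prior = np.zeros(n_users)
    C_prior = Cx.copy()
    for _ in range(iters):
        # --- MUD: linear MMSE detection given the current source prior ---
        G = C_prior @ S.T @ np.linalg.inv(S @ C_prior @ S.T
                                          + noise_var * np.eye(S.shape[0]))
        x_mud = x_prior + G @ (y - S @ x_prior)
        C_err = (np.eye(n_users) - G @ S) @ C_prior
        # --- JSD stand-in: fuse the MUD output with the correlation prior ---
        # (a proper JSD would pass only extrinsic information back)
        G2 = Cx @ np.linalg.inv(Cx + C_err)
        x_prior = G2 @ x_mud
        C_prior = Cx - G2 @ Cx
    return x_prior
```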
14

Beginning, persisting, and ceasing to play: a stage uses and gratifications approach to multiplayer video games

Shand, Matthew. January 2010 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 2010. / Typescript. Includes bibliographical references (leaves 25-26).
15

A Flexible Context Architecture for a Multi-User GUI

Xu, Yue 30 November 2010 (has links) (PDF)
This thesis focuses on the design and development of a flexible context architecture for a multi-user GUI that serves several users engaged in collaborative computer-aided applications (CAx). The objective is to extend previous research into multi-user GUIs by providing users with a flexible context interface for interacting with other users working on the same part from distributed locations. The investigation considers how distributed users interact through their interfaces to build models simultaneously, and how interaction context might be presented to regulate the way they wish to interact. The implementation integrates a Multi-User GUI (MUG) with NX Connect, a multi-user CAD prototype for Siemens NX. NX MUG uses agent software to render the interface outside of NX and NX Connect, which generalizes the interface so that it can be used with other engineering applications where several users wish to collaborate. The multi-user GUI enables users to view a collaborating user's workspace, send and receive messages, and translate text interactions into different languages while video-chatting with other users over Skype. By reducing barriers to effective communication, this research supports collaborative teams and enhances the existing NX Connect multi-user prototype with collaborative interaction support among multiple users.
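As a rough illustration of the kind of agent-based messaging layer such an architecture relies on, the following sketch broadcasts JSON events (chat messages, workspace updates) between connected clients. It is a generic relay, not the NX Connect or MUG protocol; all names and the message format are hypothetical.

```python
import asyncio
import json

# Connected collaborators; each entry is a client's stream writer.
clients = set()

async def handle(reader, writer):
    clients.add(writer)
    try:
        while line := await reader.readline():
            event = json.loads(line)        # e.g. {"user": "a", "chat": "hi"}
            payload = (json.dumps(event) + "\n").encode()
            for w in clients:
                if w is not writer:         # broadcast to the other users
                    w.write(payload)
                    await w.drain()
    finally:
        clients.discard(writer)

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8765)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```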
16

The management of multiple submissions in parallel systems: the fair scheduling approach / La gestion de plusieurs soumissions dans les systèmes parallèles: l'approche d'ordonnancement équitable

Pinheiro, Vinicius Gama 14 February 2014 (has links)
The High Performance Computing community constantly faces new challenges due to the ever-growing demand for processing power from scientific applications representing diverse areas of human knowledge. Parallel and distributed systems are the key to speeding up the execution of these applications, as many jobs can be executed concurrently. These systems are shared by many users who submit their jobs over time and expect fair treatment by the scheduler. The work in this thesis lies in this context: to analyze and develop fair and efficient algorithms for managing computing resources shared among multiple users. We analyze scenarios with many submissions issued by multiple users over time. Each submission contains several jobs, and the set of submissions is organized in successive campaigns. In what we define as the Campaign Scheduling model, the jobs of a campaign do not start until all the jobs from the previous campaign are completed. Each user is interested in minimizing the flow times of their own campaigns. This is motivated by user submission behavior, in that the execution of a new campaign can be tuned based on the results of the previous one. In the first part of this work, we define a theoretical model for Campaign Scheduling under restrictive assumptions and show that, in the general case, it is NP-hard. For the single-user case, we show that an approximation algorithm for the (classic) parallel job scheduling problem delivers the same approximation ratio for the Campaign Scheduling problem. For the general case with multiple users, we establish a fairness criterion inspired by time sharing, and we propose a scheduling algorithm called FairCamp which uses campaign deadlines to achieve fairness among users across consecutive campaigns. The second part of this work explores a more relaxed and realistic Campaign Scheduling model with dynamic features. To handle this setting, we propose a new algorithm called OStrich, whose principle is to maintain a virtual time-sharing schedule in which the same number of processors is assigned to each user. The completion times in the virtual schedule determine the execution order on the physical processors, so the campaigns are interleaved in a fair way. For independent sequential jobs, we show that OStrich guarantees a campaign's stretch to be proportional to the campaign's size and to the total number of users. The stretch measures by what factor a workload is slowed down relative to the time it would take on an unloaded system. Finally, the third part of this work extends OStrich to handle parallel jobs. This new version executes campaigns using a greedy approach and uses an event-based resizing mechanism to shape the virtual time-sharing schedule according to the system utilization ratio.
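The OStrich principle for sequential jobs can be sketched briefly: each active user owns an equal virtual share of the machine, each campaign runs on that share in the virtual schedule, and jobs are dispatched on the physical processors in order of virtual completion time. The sketch below is a static simplification (the real algorithm resizes the virtual schedule as users arrive and leave); all names are illustrative.

```python
import heapq

def ostrich_order(campaigns, m):
    """Dispatch order under a virtual fair-share schedule (OStrich sketch).

    campaigns : {user: [job_length, ...]} one campaign per user
    m         : number of physical processors
    Each of the k active users virtually owns m/k processors; a user's
    sequential jobs run back to back on that fluid share, and the physical
    machine executes jobs in order of their virtual completion times.
    """
    share = m / len(campaigns)            # virtual processors per user
    heap = []
    for user, jobs in campaigns.items():
        t = 0.0
        for length in jobs:
            t += length / share           # virtual completion time
            heapq.heappush(heap, (t, user, length))
    return [heapq.heappop(heap) for _ in range(len(heap))]

# Example: two users fairly sharing 4 processors
print(ostrich_order({"alice": [8, 4], "bob": [2, 2, 2]}, m=4))
```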
17

Performance enhancement of massive MIMO systems under channel correlation and pilot contamination

Alkhaled, Makram Hashim Mahmood January 2018 (has links)
The past decade has seen an enormous increase in the number of connected wireless devices, and currently there are billions of devices that are connected and managed by wireless networks. At the same time, the applications running on these devices have also developed significantly and become ever more data-hungry. Since the number of wireless devices and the demand for high data rates will keep increasing, and given the growing concern about the energy consumption of wireless communication systems, future wireless communication systems will have to meet three main requirements: i) achieving high throughput; ii) serving a large number of users simultaneously; and iii) being energy efficient. Massive multiple-input multiple-output (MIMO) technology can satisfy these requirements and is thus a promising candidate technology for the next generations of wireless communication systems. Massive MIMO refers to the idea of utilizing a large number of antennas at the base station (BS) to serve a large number of users simultaneously using the same time-frequency resources. The hypothesis behind using a massive number of antennas at the BS is that as the number of antennas increases, the channels become favourable. In other words, the channel vectors between the users and their serving BS become (nearly) pairwise orthogonal as the number of BS antennas increases. This in turn enables the use of linear processing at the BS to achieve near-optimal performance. Moreover, large throughput and energy-efficiency gains can be attained thanks to user multiplexing and array gain. In this thesis, we investigate the performance of massive MIMO systems under different scenarios. Firstly, we investigate the performance of a single-cell multi-user massive MIMO system in which the channel vectors of the different users are assumed to be correlated. In this setting, we propose two user-grouping algorithms that aim to improve system performance. Afterwards, the problem of pilot contamination in multi-cell massive MIMO systems is discussed. Based on this discussion, we propose a pilot allocation algorithm that maximizes the minimum achievable rate in a target cell. Following that, we consider two different scenarios for pilot sequence allocation in multi-cell massive MIMO systems. Lower bounds on the achievable rates are derived for two linear detectors, and the performance under different system settings is analysed and discussed for both scenarios. Finally, two algorithms for pilot sequence allocation are proposed. The first takes advantage of the multiplicity of pilot sequences over the number of users to improve the achievable rate of cell-edge users, while the second aims to mitigate the negative impact of pilot contamination by utilizing more system resources for the channel estimation process to reduce inter-cell interference.
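Pilot contamination itself is easy to reproduce numerically: when neighbouring cells reuse the same pilot sequence, the least-squares channel estimate at the BS is the sum of all co-pilot channels, so the estimation error does not vanish as the number of antennas grows. The following toy numpy demo illustrates this; it is not taken from the thesis, and all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
M, L = 128, 3                    # BS antennas; co-pilot users in other cells

# i.i.d. Rayleigh channels: target user and pilot-sharing interferers
h = rng.standard_normal(M) + 1j * rng.standard_normal(M)
h_int = rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))

snr_pilot = 10.0                 # linear pilot SNR
noise = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(snr_pilot)

# LS estimate after despreading a shared pilot: all co-pilot channels add up
h_hat = h + h_int.sum(axis=1) + noise

# The normalized error stays large no matter how big M gets
err = np.linalg.norm(h_hat - h) ** 2 / np.linalg.norm(h) ** 2
print(f"relative channel-estimation error with shared pilots: {err:.2f}")
```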
18

Decomposition of Manufacturing Processes for Multi-User Tool Path Planning

Priddis, Andrew Scherbel 01 March 2016 (has links)
Engineering activities are by nature collaborative endeavors. Single-user applications for CAD, CAE, and CAM force a strictly serial design process, which ultimately lengthens time to market. New multi-user applications such as NXConnect address this issue during the design stage of the product development process by enabling users to work in parallel. Multi-user collaborative tool path planning software addresses the same serial limitations in process planning, thereby decreasing cost and increasing the quality of manufacturing processes. As part complexity increases, lead times are magnified by serial workflows. Multi-user tool path planning can shorten process planning time, but to be effective it must be possible to intelligently decompose the manufacturing sequence and distribute path planning assignments among several users. A new method of process decomposition is developed and described in this research. A multi-user CAM (MUCAM) prototype was developed to test the method. The decomposition process and the MUCAM prototype together were used to manufacture a part to verify the method.
19

Optimisation of Iterative Multi-user Receivers using Analytical Tools

Shepherd, David Peter, RSISE [sic] January 2008 (has links)
The objective of this thesis is to develop tools for the analysis and optimization of an iterative receiver. These tools can be applied to most soft-in soft-out (SISO) receiver components. For illustration purposes we consider a multi-user DS-CDMA system with forward error correction that employs iterative multi-user detection based on soft interference cancellation and single-user decoding. Optimized power levels combined with adaptive scheduling allow efficient utilization of receiver resources for heavily loaded systems.

Metric transfer analysis has been shown to be an accurate method of predicting the convergence behavior of iterative receivers. EXtrinsic Information Transfer (EXIT), fidelity transfer (FT) and variance transfer (VT) analysis are well-known methods; however, the relationship between the different approaches has not been explored in detail. We compare the metrics numerically and analytically and derive functions to closely approximate the relationship between them. The result allows for easy translation between the EXIT, FT and VT methods. Furthermore, we extend the J function, which describes mutual information as a function of variance, to fidelity and symbol error variance, the Rayleigh fading channel model and a channel estimate. These J functions allow the a priori inputs to the channel estimator, interference canceller and decoder to be accurately modeled. We also derive effective EXIT charts which can be used for convergence analysis and performance prediction of unequal-power CDMA systems.

The optimization of the coded DS-CDMA system is done in two parts: firstly, the received power levels are optimized to minimize the power used in the terminal transmitters; then the decoder activation schedule is optimized such that the multi-user receiver complexity is minimized. The uplink received power levels are optimized for the system load using a constrained nonlinear optimization approach. EXIT charts are used to optimize the power allocation in a multi-user turbo-coded DS-CDMA system. We show through simulation that the optimized power levels allow for successful decoding of heavily loaded systems with a large reduction in the convergence SNR.

We utilize EXIT chart analysis and a Viterbi search algorithm to derive the optimal decoding schedule for a multi-component receiver/decoder. We show through simulations that decoding delay and complexity can be significantly reduced while maintaining BER performance through optimization of the decoding schedule.
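For readers unfamiliar with the J function, a widely used closed-form approximation from the EXIT-chart literature (due to Brannstrom et al.) and a bisection-based inverse are sketched below, together with a toy two-component EXIT iteration. The transfer curves f_det and f_dec are made up for illustration and do not correspond to the receivers analysed in the thesis.

```python
import numpy as np

# Closed-form approximation of J(sigma): the mutual information of a
# consistent Gaussian LLR with standard deviation sigma.
H1, H2, H3 = 0.3073, 0.8935, 1.1064

def J(sigma):
    return (1.0 - 2.0 ** (-H1 * sigma ** (2.0 * H2))) ** H3

def J_inv(I, lo=1e-6, hi=60.0, tol=1e-9):
    """Invert J by bisection (J is monotonically increasing in sigma)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if J(mid) < I:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy EXIT-style iteration between a detector and a decoder, with
# illustrative (made-up) transfer curves expressed via J and J_inv.
f_det = lambda I: J(1.2 * J_inv(I) + 1.0)
f_dec = lambda I: J(0.9 * J_inv(I) + 0.8)
I = 0.0
for it in range(20):
    I = f_dec(f_det(I))
print(f"predicted extrinsic mutual information after 20 iterations: {I:.3f}")
```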
20

An Improved PDA Multi-User Detector for DS-CDMA UWB Systems

Li, Tzung-Cheng 28 August 2005 (has links)
Ultra-wideband (UWB) technology has attracted interest from researchers and commercial groups due to its advantages of high data rate, low complexity and low power consumption. The direct-sequence code division multiple access ultra-wideband (DS-CDMA UWB) system is one of the proposals for the IEEE 802.15.3a standard. By combining the strengths of both UWB and DS-CDMA techniques, the system can support a multiple-access architecture using the direct-sequence method. In a multi-user environment, the major problem in receiver design for a conventional DS-CDMA system is multiple access interference (MAI). In a DS-CDMA UWB system, the transmitted signal additionally suffers from inter-symbol interference (ISI) and neighboring-symbol interference because of the multipath channel. In this thesis, we use a training method to estimate the spreading waveform as distorted by the multipath channel. Based on this spreading waveform, we use a block method to reformulate the received signal, which lets us separate the interference into multiple access interference and neighboring-symbol interference. By combining interference cancellation, the probabilistic data association (PDA) filter and sliding-window techniques, we can eliminate the interference. In the simulation section, we compare the detection performance of the sliding-window PDA detector with conventional detectors; the results show that the improved PDA detector outperforms them.
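The core PDA update can be sketched for a synchronous BPSK model y = H b + n: for each user in turn, the residual interference from the other users is approximated as Gaussian and the posterior probability of that user's bit is refreshed. The UWB-specific training, block reformulation and sliding window from the thesis are omitted; all names are illustrative.

```python
import numpy as np

def pda_detect(y, H, noise_var, iters=10):
    """Probabilistic data association multi-user detection (sketch).

    Model: y = H @ b + n with b_k in {+1, -1} (BPSK), n ~ N(0, noise_var I).
    For each user, the residual interference from all other users is
    approximated as Gaussian and the bit's posterior is updated in turn.
    """
    K = H.shape[1]
    p = np.full(K, 0.5)                      # current P(b_k = +1)
    for _ in range(iters):
        for k in range(K):
            mean_b = 2.0 * p - 1.0           # soft symbols of all users
            var_b = 1.0 - mean_b ** 2        # per-symbol variances
            mean_b[k], var_b[k] = 0.0, 0.0   # exclude user k itself
            # Gaussian statistics of interference-plus-noise seen by user k
            r = y - H @ mean_b
            Cov = (H * var_b) @ H.T + noise_var * np.eye(H.shape[0])
            w = np.linalg.solve(Cov, H[:, k])
            llr = np.clip(2.0 * (w @ r), -30.0, 30.0)
            p[k] = 1.0 / (1.0 + np.exp(-llr))
    return np.where(p > 0.5, 1, -1)
```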
