1.
Multi-user Non-Cooperative and Cooperative Systems with HARQ. Rauf, Zahid. January 2013.
The performance and reliability of wireless communication links can be improved by employing multiple antennas at both ends, thereby creating multiple-input multiple-output (MIMO) channels. However, once multiple co-channel users are added to the system, it can be difficult to provide as many receive antennas as transmit antennas, resulting in a so-called overloaded (rank-deficient) system. Under overloaded conditions, maximum likelihood (ML) detection works well, but its exponential complexity prohibits its use, while suboptimal linear detectors perform poorly.
In this thesis, new signal processing techniques for multi-user overloaded systems using hybrid automatic repeat request (HARQ) protocols are investigated. The HARQ retransmissions are used to form virtual receive antennas, which can efficiently transform an overloaded system into a critically loaded system (i.e. a system with an equal number of transmit and receive antennas).
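As a rough illustration of the virtual-receive-antenna idea above (notation ours, not the thesis's): with N_t transmit antennas and N_r < N_t receive antennas, stacking the observation of the original transmission with that of one HARQ retransmission received over an independently faded channel gives

```latex
\begin{bmatrix} \mathbf{y}_1 \\ \mathbf{y}_2 \end{bmatrix}
=
\begin{bmatrix} \mathbf{H}_1 \\ \mathbf{H}_2 \end{bmatrix} \mathbf{x}
+
\begin{bmatrix} \mathbf{n}_1 \\ \mathbf{n}_2 \end{bmatrix},
\qquad
\begin{bmatrix} \mathbf{H}_1 \\ \mathbf{H}_2 \end{bmatrix} \in \mathbb{C}^{2 N_r \times N_t},
```

so each retransmission contributes N_r virtual receive rows; once the stacked channel has at least N_t rows the system is no longer overloaded and linear detection becomes viable again.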
In the first part of the thesis, a multi-user non-cooperative overloaded system is considered. First, it is demonstrated that the suboptimal linear minimum mean square error (MMSE) detector leads to significant performance degradation compared to an ML detector for such systems. To overcome this drawback, two multi-user transmission schemes are proposed that work well under overloaded conditions. The proposed schemes allow us to apply linear multi-user detection (MUD) algorithms without requiring additional antennas or hardware chains. Monte-Carlo simulations demonstrate that the proposed schemes can result in significant gains in terms of bit-error-rate (BER) and dropped-packet performance.
In the second part, the performance of multiple HARQ processes for a two-hop multi-source multi-relay decode-and-forward (DF) relaying network with no direct link is analyzed. To deal with multiple HARQ processes at each relay, a retransmission scheme is proposed that utilizes virtual antennas to achieve increased receive diversity and improved throughput compared to traditional orthogonal (time-division) retransmissions. A novel forwarding strategy on the relay(s)-to-destination link is proposed with the objective of further improving throughput. Finally, the end-to-end outage probability and throughput efficiency of the proposed retransmission and forwarding schemes are found analytically and confirmed with Monte-Carlo simulations.
2.
A multi-user process interface system for a process control computer. Sherlock, Barry Graham. 27 September 2023.
This thesis describes a system to implement a distributed multi-user process interface to allow the PDP-11/23 computer in the Electrical Engineering department at UCT to be used for process control. The use of this system is to be shared between postgraduate students for research and undergraduates for doing real-time control projects. The interface may be used concurrently by several users, and access is controlled in such a way as to prevent users' programs from interfering with one another. The process interface hardware used was a GEC Micro-Media system, which is a stand-alone process interface system communicating with a host (the PDP-11/23) via a serial line. Hardware to drive a 600-metre serial link at 9600 baud between the PDP-11/23 and the Media interface was designed and built. The software system on the host, written in RTL/2, holds all data from the interface in a resident common database and continually updates it. Access to the interface by applications programs is done indirectly by reading and writing to the database, for which purpose a library of user interface routines is provided. To allow future expansion and modification of the Media interface, software (also written in RTL/2) for an LSI-11 minicomputer interfaced to the Media bus was developed which emulates the operation of the GEC proprietary Micro-Media software. A program to download this software into the LSI-11 was written. A suite of diagnostic programs enables testing of the system hardware and software at various levels. To ease testing, teaching, and applications programming, a general-purpose simulation package for the simulation of analogue systems was developed, as well as graphics routines for use with a Tektronix 4010 plotting terminal. A real-time computing project for a class of undergraduates was run in 1983. This project made extensive use of the system and demonstrated its viability.
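The access discipline described above (user programs never touch the hardware, only a continually refreshed resident database, through a library of routines) can be sketched in modern pseudocode; the class and channel names below are illustrative and are not taken from the RTL/2 system:

```python
import threading

class ProcessDatabase:
    """Resident image of the process interface shared by all user programs.

    A scanner task refreshes the values; applications only read and write
    this image, never the hardware, and must claim an output channel before
    writing, so concurrent users cannot interfere with one another.
    (Illustrative sketch only, not the original RTL/2 implementation.)
    """

    def __init__(self, channels):
        self._lock = threading.Lock()
        self._values = {ch: 0 for ch in channels}   # last scanned value per channel
        self._owners = {}                           # channel -> user holding write access

    def read(self, channel):
        with self._lock:
            return self._values[channel]

    def claim(self, channel, user):
        with self._lock:
            owner = self._owners.get(channel, user)
            if owner != user:
                raise PermissionError(f"{channel} is already owned by {owner}")
            self._owners[channel] = user

    def write(self, channel, user, value):
        with self._lock:
            if self._owners.get(channel) != user:
                raise PermissionError("claim the channel before writing to it")
            self._values[channel] = value           # picked up by the next output scan

db = ProcessDatabase(channels=["ai_0", "ao_3"])
db.claim("ao_3", user="student_7")
db.write("ao_3", user="student_7", value=2048)      # any other user writing here is rejected
```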
3.
Designed for play: a case study of uses and gratifications as design elements in Massively Multiplayer Online Role-Playing Games. Gibson, Timothy Patrick. January 2008.
Thesis (M.A.)--Liberty University, 2008. / Includes bibliographical references.
4.
The management of multiple submissions in parallel systems: the fair scheduling approach / La gestion de plusieurs soumissions dans les systèmes parallèles : l'approche d'ordonnancement équitable. Vinicius Gama Pinheiro. 14 February 2014.
The High Performance Computing community is constantly facing new challenges due to the ever-growing demand for processing power from scientific applications that represent diverse areas of human knowledge. Parallel and distributed systems are the key to speeding up the execution of these applications, as many jobs can be executed concurrently. These systems are shared by many users who submit their jobs over time and expect fair treatment by the scheduler. The work done in this thesis lies in this context: to analyze and develop fair and efficient algorithms for managing computing resources shared among multiple users. We analyze scenarios with many submissions issued by multiple users over time. These submissions contain one or more jobs, and the set of submissions is organized in successive campaigns. In what we define as the Campaign Scheduling model, the jobs of a campaign do not start until all the jobs from the previous campaign are completed. Each user is interested in minimizing the sum of the flow times of their own campaigns. This is motivated by the user submission behavior, whereby the execution of a new campaign can be tuned using the results of the previous campaign. In the first part of this work, we define a theoretical model for Campaign Scheduling under restrictive assumptions and show that, in the general case, it is NP-hard. For the single-user case, we show that an approximation scheduling algorithm for the (classic) parallel job scheduling problem also delivers the same approximation ratio for the Campaign Scheduling problem. For the general case with multiple users, we establish a fairness criterion inspired by an idealized sharing of the resources. Then, we propose a scheduling algorithm called FairCamp which uses campaign deadlines to achieve fairness among users between consecutive campaigns. The second part of this work explores a more relaxed and realistic Campaign Scheduling model with dynamic features. To handle this setting, we propose a new algorithm called OStrich, whose principle is to maintain a virtual time-sharing schedule in which the same number of processors is assigned to each user. The completion times in the virtual schedule determine the execution order on the physical processors; the campaigns are thus interleaved in a fair way. For independent sequential jobs, we show that OStrich guarantees the stretch of a campaign to be proportional to the campaign's size and to the total number of users. The stretch measures by what factor a workload is slowed down relative to the time it would take on an unloaded (dedicated) system. Finally, the third part of this work extends the capabilities of OStrich to handle rigid parallel jobs. This new version executes campaigns using a greedy approach and uses an event-based resizing mechanism to shape the virtual time-sharing schedule according to the system utilization ratio.
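A deliberately simplified sketch of the OStrich principle described above, for sequential jobs and one campaign per user; the fluid model of the virtual schedule and all names are ours, not the published algorithm or its reference implementation:

```python
import heapq

def ostrich_order(campaigns, num_procs):
    """Dispatch order for the sequential jobs of several users.

    campaigns: dict user -> list of job lengths, all submitted at time 0.
    Each user is given an equal share of the machine in a *virtual*
    schedule; jobs are then executed on the real processors in order of
    their virtual completion times, which interleaves the campaigns fairly.
    Simplified sketch only.
    """
    share = num_procs / len(campaigns)        # equal virtual share per user
    virtual = []
    for user, jobs in campaigns.items():
        t = 0.0
        for j, length in enumerate(jobs):
            t += length / share               # campaign processed fluidly on its share
            virtual.append((t, user, j))      # virtual completion time of job j
    heapq.heapify(virtual)
    order = []
    while virtual:
        _, user, j = heapq.heappop(virtual)
        order.append((user, j))               # execute on the real processors in this order
    return order

# Two users with different campaign shapes on a 4-processor machine.
print(ostrich_order({"alice": [10, 10], "bob": [5, 5, 5, 5]}, num_procs=4))
```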
5.
Quantifying the multi-user account problem for collaborative filtering based recommender systems. Edwards, James Adrian. 15 September 2010.
Identification-based recommender systems make no distinction between users and accounts; all the data collected during account sessions are attributed to a single user. In reality this is not necessarily true for all accounts; several different users who have distinct, and possibly very different, preferences may access the same account. Such accounts are identified as multi-user accounts. Strangely, no serious study considering the existence of multi-user accounts in recommender systems has been undertaken. This report quantifies the effect multi-user accounts have on the predictive capabilities of recommender systems, focusing on two popular collaborative filtering algorithms, the kNN user-based and item-based models. The results indicate that while the item-based model is largely resistant to multi-user account corruption, the quality of predictions generated by the user-based model is significantly degraded.
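As a toy illustration of the degradation reported for the user-based model (hypothetical example code, not the evaluation pipeline used in the report): a single account row that blends two people's ratings looks like a close neighbour of users who actually resemble only one of them, which skews the weighted prediction.

```python
import numpy as np

def cosine(a, b):
    num = a @ b
    den = np.linalg.norm(a) * np.linalg.norm(b)
    return num / den if den else 0.0

def user_based_predict(R, user, item, k=2):
    """Predict R[user, item] from the k most similar *account* rows.

    Toy kNN user-based collaborative filtering; a row that mixes two
    people's ratings is a blend of both profiles, so it distorts the
    neighbourhood of users who match only one of those people.
    """
    sims = [(cosine(R[user], R[u]), u)
            for u in range(len(R)) if u != user and R[u, item] > 0]
    top = sorted(sims, reverse=True)[:k]
    if not top:
        return 0.0
    return sum(s * R[u, item] for s, u in top) / sum(s for s, _ in top)

# Rows 0-1: single-user accounts with opposite tastes.
# Row 2: a multi-user account whose ratings blend both of them.
# Row 3: the target user, whose real taste matches row 0 (dislikes item 2).
R = np.array([
    [5, 4, 0, 1],
    [1, 0, 5, 4],
    [5, 4, 5, 4],
    [4, 5, 1, 0],
])
print(user_based_predict(R, user=3, item=2))   # ~5.0, far from the true rating of 1
```

Here the merged account pulls the user-based prediction for item 2 up to about 5 even though the target user rated it 1; an item-based model, which compares item columns rather than account rows, is less sensitive to this kind of mixing, consistent with the report's findings.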
6.
An Open Software Architecture for UNIX Based Data Acquisition/Telemetry Systems. Dawson, Daniel. 10 1900.
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California / Veda Systems Incorporated has recently completed the development of a completely open architecture, UNIX-based software environment for standard telemetry and more generic data acquisition applications. The new software environment operates on many state-of-the-art high-end workstations and provides a workstation-independent, multi-user platform for front-end system configuration, database management, real-time graphic data display, and data logging.
7.
Integration of Massive Multiplayer Online Role-Playing Games Client-Server Architectures with Collaborative Multi-User Engineering CAx Tools. Winn, Joshua D. 28 February 2012.
This research presents a new method for integrating client-server architectures that are used for the development of Massive Multiplayer Online Role-Playing Games (MMORPG) into multi-user engineering software tools. The new method creates a new architecture named CAx Connect by changing the client-pull-server communication pipeline to a server-push-client communication pipeline, effectively reducing the amount of bandwidth consumed and allowing these tools to utilize multiple server processors for complex calculations. This method was used on the new NX Connect multi-user CAx prototype developed at BYU. The new method provides a road map to further implement this architecture and its services into additional multi-user CAx tools. To demonstrate the effectiveness of this technology, a prototype architecture was built to provide a front-end service, a message relay service, and a database insertion service, which were integrated into the current architecture. The front-end service provides load balancing of clients, while the feature administration service passes messages throughout the architecture. The database insertion service inserts features passed from the NX Connect client into the database. The results show that this architecture is more efficient and scalable, successfully demonstrating the integration of this architecture with multi-user CAx tools.
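A minimal sketch of the pull-versus-push distinction described above, in the form of a generic publish/subscribe relay; the class and method names are illustrative and are not taken from the CAx Connect code base:

```python
class FeatureRelay:
    """Server-side relay: clients register once and the server then *pushes*
    every committed feature to the other clients, instead of each client
    periodically polling ('pulling') the server for changes.
    Illustrative sketch only, not the CAx Connect services.
    """

    def __init__(self):
        self.clients = []                    # callbacks of connected clients
        self.log = []                        # committed features, in commit order

    def register(self, client_callback):
        self.clients.append(client_callback)
        for feature in self.log:             # bring a late joiner up to date
            client_callback(feature)

    def commit(self, feature, sender):
        self.log.append(feature)             # could also be handed to a database-insertion service
        for cb in self.clients:
            if cb is not sender:
                cb(feature)                  # push: no client-side polling loop needed

# Two clients simply print whatever the server pushes to them.
relay = FeatureRelay()
client_1 = lambda f: print("client 1 received:", f)
client_2 = lambda f: print("client 2 received:", f)
relay.register(client_1)
relay.register(client_2)
relay.commit({"type": "extrude", "depth": 10.0}, sender=client_1)
```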
8.
Neutral Parametric Database, Server, Logic Layers, and Clients to Facilitate Multi-Engineer Synchronous Heterogeneous CAD. Bowman, Kelly Eric. 01 March 2016.
Engineering companies are sociotechnical systems in which engineers, designers, analysts, etc. use an array of software tools to follow prescribed product-development processes. The purpose of these amalgamated systems is to develop new products as quickly as possible while maintaining quality and meeting customer and market demands. Researchers at Brigham Young University have shortened engineering design cycle times through the development and use of multi-engineer synchronous (MES) CAD tools. Other research teams have shortened design cycle times by extending seamless interoperability across heterogeneous design tools and domains. Seamless multi-engineer synchronous heterogeneous (MESH) CAD environments are the focus of this dissertation. An architecture that supports both MES collaboration and interoperability is defined, tested for robustness, and proposed as the start of a new standard for interoperability. An N-tiered architecture with four layers is used. These layers are data storage, server communication, business logic, and client. Perhaps the most critical part of the architecture is the new neutral parametric database (NPDB) standard, which can generically store associative CAD geometry from heterogeneous CAD systems. A practical application has been developed using the architecture which demonstrates design and modeling interoperability between Siemens NX, PTC's Creo, and Dassault Systemes CATIA CAD applications; interoperability between Siemens NX and Dassault Systemes CATIA is specifically outlined in this dissertation. The 2D point, 2D line, 2D arc, 2D circle, 2D spline, 3D point, extrude, and revolve features have been developed. Complex models have successfully been modeled and exchanged in real time across heterogeneous CAD clients, validating this approach for MESH CAD collaboration.
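The neutral-record idea can be pictured as storing each feature in a CAD-system-agnostic form from which any client regenerates native geometry; the schema below is only a guess at the flavour of such a record and is not the published NPDB standard:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NeutralFeature:
    """One record of a neutral parametric database: enough information to
    rebuild the feature in NX, Creo, or CATIA, with no system-specific data.
    Hypothetical schema for illustration only.
    """
    feature_id: str                 # stable id so later features can reference this one
    feature_type: str               # "point2d", "line2d", "arc2d", "spline2d", "extrude", "revolve", ...
    parameters: dict                # neutral parameters, e.g. {"depth": 25.0}
    references: List[str] = field(default_factory=list)   # parent feature ids (associativity)

# A sketch-and-extrude pair expressed neutrally; each CAD client maps these
# records onto its own modelling API when they are pushed to it.
profile = NeutralFeature("f1", "circle2d", {"center": [0.0, 0.0], "radius": 5.0})
boss = NeutralFeature("f2", "extrude", {"depth": 25.0, "direction": [0, 0, 1]}, references=["f1"])
print(boss)
```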
9.
Multi-User Methods for FEA Pre-Processing. Weerakoon, Prasad. 13 June 2012.
Collaboration in engineering product development leads to shorter product development times and better products. In product development, considerable time is spent preparing the CAD model or assembly for Finite Element Analysis (FEA). In general, Computer-Aided (CAx) applications such as FEA deter collaboration because they allow only a single user to check out and make changes to the model at a given time. Though most of these software applications come with some collaborative tools, they are limited to simple tasks such as screen sharing and instant messaging. This thesis discusses methods to convert a current commercial FEA pre-processing program into a multi-user program, where multiple people are allowed to work on a single FEA model simultaneously. A method for creating a multi-user FEA pre-processor is presented, and a robust, stable multi-user FEA program with full functionality has been developed using CUBIT. A generalized method for creating a networking architecture for a multi-user FEA pre-processor is discussed, and the chosen client-server architecture is demonstrated. Furthermore, a method for decomposing a model/assembly using geometry identification tags is discussed. A working prototype which consists of workspace management Graphical User Interfaces (GUIs) is demonstrated. A method for handling time-consuming tasks in an asynchronous multi-user environment is presented, using Central Processing Unit (CPU) time as a time indicator; due to architectural limitations of CUBIT, this is not demonstrated. Moreover, a method for handling undo sequences in a multi-user environment is discussed. Since commercial FEA pre-processors do not allow mesh-related actions to be undone using an undo option, this undo-handling method is not demonstrated.
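The geometry-identification-tag decomposition mentioned above amounts to giving every entity a stable tag and letting the server grant each user exclusive access to a set of tags; a rough sketch follows (names are ours, not CUBIT's or the prototype's):

```python
class WorkspaceManager:
    """Server-side bookkeeping for a multi-user pre-processor: geometry
    entities are addressed by identification tags, and each tag can be
    reserved by at most one user at a time. Illustrative sketch only.
    """

    def __init__(self, geometry_tags):
        self.free = set(geometry_tags)          # e.g. {"vol_1", "surf_12", "curve_7"}
        self.reserved = {}                      # tag -> user currently working on it

    def reserve(self, user, tags):
        tags = set(tags)
        if not tags <= self.free:
            taken = {t: self.reserved.get(t, "unknown tag") for t in tags - self.free}
            raise RuntimeError(f"cannot reserve, already taken or unknown: {taken}")
        for t in tags:
            self.free.remove(t)
            self.reserved[t] = user             # user may now mesh or modify these entities

    def release(self, user, tags):
        for t in tags:
            if self.reserved.get(t) == user:
                del self.reserved[t]
                self.free.add(t)

mgr = WorkspaceManager(["vol_1", "vol_2", "surf_12"])
mgr.reserve("ann", ["vol_1", "surf_12"])
mgr.reserve("ben", ["vol_2"])                   # reserving "vol_1" here would be refused
```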
10.
DESIGNING AGORA: A SHARED MULTI-USER PROGRAMMING ENVIRONMENT. Thomas Allen Kennell (13806892). 19 September 2022.
Shared programming systems typically fall into one of two categories: systems to distribute code between users, and systems to allow shared access to editing or debugging facilities. Version-control systems allow distribution of code and are often more than adequate for large-scale software development occurring over a long period of time, but they can become unwieldy for fast iterative or exploratory development in which multiple users wish to participate. In these situations, shared editors or pair programming tools may suffice, with the caveat that any user of the system can typically modify any of the code at will. Rather than connecting several users to the same editor session, it would be more effective to allow users to maintain separate sessions while quickly sharing selected chunks of code at will.
To enable this paradigm, we have designed a new interpreter to allow distributed users to selectively share code and data at run-time. Our solution consists of a bytecode virtual machine back-end with access to a shared environment and a management mechanism to control creation and usage of these resources. By providing access to interpreter sessions over a network connection, we do not tie our interpreter to executing code from any one particular programming language, allowing any conforming front-end compiler and user interface to be used. This solution allows the development burden of shared programs to be distributed dynamically between users at run-time through the shared environment while still affording control over what and when to share, thereby facilitating more effective incremental or experimental multi-user programming.
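The selective-sharing mechanism described above can be pictured as separate interpreter sessions bound to one shared environment into which a user explicitly publishes chosen bindings; the sketch below is our own simplification, not the Agora virtual machine or its wire protocol:

```python
class SharedEnvironment:
    """Bindings that any session may read once their owner publishes them.
    Simplified model of 'share selected chunks of code at will';
    not the actual Agora implementation.
    """
    def __init__(self):
        self._published = {}                      # name -> (owner, value)

    def publish(self, owner, name, value):
        self._published[name] = (owner, value)    # the owner chooses what to share

    def fetch(self, name):
        return self._published[name][1]

class Session:
    """A per-user session: a private namespace plus a handle on the shared environment."""
    def __init__(self, user, shared):
        self.user, self.shared, self.private = user, shared, {}

    def define(self, name, value):
        self.private[name] = value                # stays local until explicitly shared

    def share(self, name):
        self.shared.publish(self.user, name, self.private[name])

    def use(self, name):
        if name in self.private:                  # private bindings shadow shared ones
            return self.private[name]
        return self.shared.fetch(name)

env = SharedEnvironment()
alice, bob = Session("alice", env), Session("bob", env)
alice.define("square", lambda x: x * x)
alice.share("square")                             # only this binding becomes visible to Bob
print(bob.use("square")(7))                       # -> 49
```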