91

Developing an Architecture Framework for Cloud-Based, Multi-User, Finite Element Pre-Processing

Briggs, Jared Calvin 14 November 2013
This research proposes an architecture for a cloud-based, multi-user FEA pre-processing system in which multiple engineers can access and operate on the same model in a parallel environment. A prototype is discussed and tested; the results show that a multi-user pre-processor, where all computing is done on a central server hosted on a high-performance system, provides significant benefits to the analysis team, including shortened pre-processing time and potentially higher-quality models.
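The abstract describes the prototype only at the architectural level. As a rough, hypothetical sketch of the central-server idea (none of the class or method names below come from the thesis), a single server-side model object can serialize edit operations arriving from several engineers:

```python
import threading

# Hypothetical sketch of the central-server idea: one server-side model object
# serializes edit operations arriving from several engineers on the same FE model.
class SharedFEModel:
    def __init__(self):
        self._lock = threading.Lock()   # serialize concurrent edits
        self.nodes = {}                 # node_id -> (x, y, z)
        self.history = []               # applied operations, usable for client sync

    def apply_edit(self, client_id, op, node_id, coords=None):
        with self._lock:
            if op == "set_node":        # add or move a node
                self.nodes[node_id] = coords
            elif op == "delete_node":
                self.nodes.pop(node_id, None)
            self.history.append((client_id, op, node_id, coords))
            return len(self.history)    # model version returned to the client

# Two engineers editing the same model through the server-side object:
model = SharedFEModel()
model.apply_edit("engineer_a", "set_node", 1, (0.0, 0.0, 0.0))
model.apply_edit("engineer_b", "set_node", 2, (1.0, 0.0, 0.0))
```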
92

Collision Prediction and Prevention in a Simultaneous Multi-User Immersive Virtual Environment

Holm, Jeannette E. 07 August 2012
No description available.
93

Student-Directed Inquiry: Virtual vs. Physical

Moore, Tonia L. 17 July 2012
No description available.
94

Multi-Rate Control Architectures for Network-Based Multi-User Haptics Interaction

Ghiam, Mahyar Fotoohi 12 1900
Cooperative haptics enables multiple users to manipulate computer-simulated objects in a shared virtual environment and to feel the presence of other users. Prior research in the literature has mainly addressed single-user haptic interaction. This thesis is concerned with haptic simulation in multi-user virtual environments in which the users can interact in a shared virtual world from separate workstations over Ethernet-based Local Area Networks (LANs) or Metropolitan Area Networks (MANs). In practice, the achievable real-time communication rate using a typical implementation of network protocols such as UDP and TCP/IP can be well below the 1 kHz update rate suggested in the literature for high-fidelity haptic rendering. However, by adopting a multi-rate control strategy as proposed in this work, the local control loops can be executed at 1 kHz while the data packet transmission between the user workstations occurs at a lower rate. Within such a framework, two control architectures, namely centralized and distributed, are presented. In the centralized controller a central workstation simulates the virtual environment, whereas in the distributed controller each user workstation simulates its own copy of the virtual environment. Two different approaches have been proposed for mathematical modeling of the controllers and have been used in a comparative analysis of their stability and performance. The results of this analysis demonstrate that the distributed control architecture has greater stability margins and outperforms the centralized controller. They also reveal that the limited network transmission rate can degrade haptic fidelity by introducing viscous damping into the perceived impedance of the virtual object. This extra damping is compensated by active control based on the damping values obtained from the analytical results. Experimental results, conducted with a dual-user/dual-finger haptic platform, are presented for each of the proposed controllers under various scenarios in which the user workstations communicate over the UDP protocol subject to a limited transmission rate. The results demonstrate the effectiveness of the proposed distributed architecture in providing a stable and transparent haptic simulation in free motion and in contact with rigid environments. / Thesis / Master of Applied Science (MASc)
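As a toy illustration of the multi-rate strategy only (the thesis controllers and their stability analysis are not reproduced, and every parameter value below is an assumption), a local 1 kHz loop can keep computing coupling forces against the most recently received remote state while packets arrive at a lower rate:

```python
import numpy as np

# Multi-rate sketch: the local haptic loop runs at 1 kHz, while the remote user's
# state is only refreshed every `network_period` ticks, mimicking a slower packet rate.
local_rate_hz = 1000          # local control-loop rate suggested for haptic rendering
network_rate_hz = 100         # assumed slower packet rate over the LAN/MAN
network_period = local_rate_hz // network_rate_hz

k_couple = 50.0               # virtual coupling stiffness (assumed value)
x_local = 0.0                 # local device position
x_remote_latest = 0.0         # most recently received remote position
force_log = []

for tick in range(local_rate_hz):           # one simulated second
    if tick % network_period == 0:
        # stand-in for receiving a UDP packet carrying the remote position
        x_remote_latest = np.sin(2 * np.pi * tick / local_rate_hz)
    # the local loop always runs at 1 kHz against the last-known remote state
    force = k_couple * (x_remote_latest - x_local)
    x_local += 1e-4 * force                  # toy device response
    force_log.append(force)
```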
95

A state management and persistency architecture for peer-to-peer massively multi-user virtual environments

Gilmore, John Sebastian 03 1900
Thesis (PhD)--Stellenbosch University, 2013. / Recently, there has been significant research focus on Peer-to-Peer (P2P) Massively Multi-user Virtual Environments (MMVEs), and a number of architectures have been presented in the literature to implement the P2P approach. One aspect that has not received sufficient attention in these architectures is state management and state persistency in P2P MMVEs. This work presents and simulates a novel state management and persistency architecture, called Pithos. In order to design the architecture, an investigation is performed into state consistency architectures, into which the state management and persistency architecture should fit. A novel generic state consistency model is proposed that encapsulates all state consistency models reviewed. The requirements for state management and persistency architectures, identified during the review of state consistency models, are used to review state management and persistency architectures currently receiving research attention. After identifying deficiencies present in current designs, such as a lack of fairness, responsiveness and scalability, a novel state management and persistency architecture, called Pithos, is designed. Pithos is a reliable, responsive, secure, fair and scalable distributed storage system, ideally suited to P2P MMVEs. Pithos is implemented in Oversim, which runs on the Omnet++ network simulator. An evaluation of Pithos is performed to verify that it satisfies the identified requirements. It is found that the reliability of Pithos depends heavily on object lifetimes: if an object lives longer on average, retrieval requests are more reliable. An investigation is performed into the factors influencing object lifetime, and a novel Markov chain model is proposed which allows for the prediction of object lifetimes in any finite-sized network, for a given amount of redundancy, node lifetime characteristics and object repair rate.
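As a toy illustration of such a lifetime model (the thesis's actual Markov chain is not reproduced, and the failure, repair and redundancy values below are invented for the example), the number of live replicas of a stored object can be modelled as a birth-death chain with an absorbing "lost" state, from which the expected time to loss follows by solving a small linear system:

```python
import numpy as np

# Birth-death chain on the number of live replicas of a stored object.
# State 0 (all replicas lost) is absorbing; from state k, each replica fails at
# rate k*failure_rate, and one replica is re-created at rate repair_rate while k < r.
r = 4                  # replication factor (redundancy), assumed
failure_rate = 1/20.0  # per-replica failure rate (mean node lifetime of 20 time units)
repair_rate = 1/2.0    # object repair rate, assumed

# Generator restricted to the transient states 1..r.
A = np.zeros((r, r))
for k in range(1, r + 1):
    i = k - 1
    down = k * failure_rate
    up = repair_rate if k < r else 0.0
    A[i, i] = -(down + up)
    if k > 1:
        A[i, i - 1] = down      # lose a replica
    if k < r:
        A[i, i + 1] = up        # repair adds a replica
# Loss from state 1 leads to the absorbing state 0, so it only appears on the diagonal.

# Expected time to absorption (object lifetime) satisfies A @ t = -1.
t = np.linalg.solve(A, -np.ones(r))
print(f"expected object lifetime starting from {r} replicas: {t[-1]:.1f} time units")
```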
96

Multidiffusion et diffusion dans les systèmes OFDM sans fil / Multicast and Broadcast in wireless OFDM systems

Saavedra Navarrete, José Antonio 19 October 2012
The OFDM (Orthogonal Frequency Division Multiplexing) system uses multiple sub-carriers for data transmission. Compared to a single-carrier scheme, the OFDM technique allows optimal settings (in the sense of Shannon capacity) for high-data-rate transmission over a frequency-selective channel. In this way, we can ensure reliable communication and efficient use of energy. With OFDM, the sub-carriers see different channels that do not necessarily have the same attenuation. Allocating the same power to each sub-carrier therefore does not achieve optimal capacity on a point-to-point link; dynamic power allocation (i.e., assigning different power levels to the sub-carriers according to the channel) gives better results, assuming that channel state information is available at the transmitter. In a broadcast situation, however, the transmitter does not know the channels to all users, and the best strategy is to transmit with the same power on all sub-carriers. This thesis aims to explore the intermediate situations and to propose appropriate power allocation tools. This intermediate situation is called "multicast": the transmitter sends signals to a finite (not too large) number of users whose channel parameters it knows, and it can adapt its transmission to this knowledge. It is thus an intermediate situation between "point-to-point" and "broadcast". The goal of this work is to evaluate the gain brought by knowledge of the channel state information in the multicast situation compared with the same communication carried out as a broadcast. Obviously, when the number of receivers is very large, the gain will be negligible, because the signal encounters a very large number of channels and a uniform power allocation is nearly optimal. When the number of users is very small, the situation is close to point-to-point and the gains should be appreciable. We propose tools to quantify these improvements for systems with one antenna at the transmitter and one at the receiver, known as SISO (Single Input Single Output), and for systems with multiple antennas, known as MIMO (Multiple Input Multiple Output). The steps required to do this work are: 1) Assuming that the channel state information of the users is known at the base station, we implement information-theoretic tools to perform power allocation and evaluate the capacities of the systems under study. 2) For the multi-user SISO-OFDM system, we propose a power allocation algorithm over the sub-carriers in a multicast situation. 3) For the multi-user MIMO-OFDM system, we propose an algorithm that exploits the characteristics of "zero forcing" precoding; the objective is to share the available power among all sub-carriers and all antennas. 4) Finally, we focus on an efficient design of the broadcast situation, using tools from stochastic geometry to determine which area can be covered so that a given percentage of users receives a predetermined amount of information. This determines the coverage area without running intensive simulations. The combination of these tools allows an effective choice among the situations that fall under "broadcast", "multicast" and "point-to-point" transmission.
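For reference, the sketch below implements standard single-user water-filling, the per-sub-carrier dynamic power allocation that the abstract contrasts with uniform allocation; the thesis's multicast and MIMO-OFDM algorithms are not reproduced, and the channel gains and power budget are example values:

```python
import numpy as np

# Standard water-filling: given per-subcarrier channel gains, pour a total power
# budget so that p_k = max(0, mu - N0/|h_k|^2), with mu found by bisection.
def waterfill(channel_gains, total_power, noise_power=1.0):
    inv_snr = noise_power / np.abs(channel_gains) ** 2
    lo, hi = 0.0, np.max(inv_snr) + total_power
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - inv_snr)
        if p.sum() > total_power:
            hi = mu        # water level too high
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv_snr)

h = np.array([1.2, 0.8, 0.3, 1.5, 0.1])    # example subcarrier gains
p = waterfill(h, total_power=5.0)
rates = np.log2(1.0 + p * np.abs(h) ** 2)  # per-subcarrier rates (noise_power = 1)
print(p.round(3), rates.sum().round(3))
```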
97

Random Matrix Analysis of Future Multi Cell MU-MIMO Networks / Analyse des réseaux multi-cellulaires multi-utilisateurs futurs par la théorie des matrices aléatoires

Müller, Axel 13 November 2014
Future wireless communication systems will need to feature multi-cell heterogeneous architectures consisting of improved macro cells and very dense small cells in order to support the exponentially rising demand for physical-layer throughput. Such structures cause unprecedented levels of inter- and intra-cell interference, which needs to be mitigated or, ideally, exploited in order to improve the overall spectral efficiency of the communication network. Techniques like massive multiple-input multiple-output (MIMO), cooperation, etc., which also help with interference management, will increase the size of the already large heterogeneous architectures to truly enormous networks that defy theoretical analysis via traditional statistical methods. Accordingly, in this thesis we apply and improve the already known framework of large random matrix theory (RMT) to analyse the interference problem and propose solutions centred around new precoding schemes that rely on insights from large-system analysis. First, we propose and analyse a new family of precoding schemes that reduce the computational precoding complexity of base stations equipped with a large number of antennas, while maintaining most of the interference mitigation capabilities of conventional, close-to-optimal regularized zero forcing. Second, we propose an interference-aware linear precoder, based on an intuitive trade-off and recent results on multi-cell regularized zero forcing, that allows small cells to effectively mitigate induced interference with minimal cooperation. In order to facilitate utilization of the analytic RMT approach for future generations of interested researchers, we also provide a comprehensive tutorial on the practical application of RMT to communication problems.
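For context, the sketch below computes a conventional regularized zero-forcing precoder, the baseline scheme the abstract refers to; the reduced-complexity and interference-aware precoders proposed in the thesis are not reproduced, and the regularization value is an assumption:

```python
import numpy as np

# Conventional regularized zero-forcing (RZF) precoding for a single cell:
# W = H^H (H H^H + alpha I)^-1, scaled to meet a total transmit power constraint.
rng = np.random.default_rng(0)
M, K = 64, 8                              # base-station antennas, single-antenna users
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
alpha = K / 10.0                          # regularization parameter (assumed value)

W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
P_tx = 1.0
W *= np.sqrt(P_tx / np.trace(W @ W.conj().T).real)

# Effective channel after precoding: close to diagonal, i.e. little inter-user interference.
effective = H @ W
print(np.round(np.abs(effective), 2))
```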
98

Associative CAD References in the Neutral Parametric Canonical Form

Staves, Daniel Robert 01 March 2016
Due to the multiplicity of computer-aided engineering applications present in industry today, interoperability between programs has become increasingly important. A survey conducted among top engineering companies found that 82% of respondents reported using 3 or more CAD formats during the design process. A 1999 study by the National Institute of Standards and Technology (NIST) estimated that inadequate interoperability between the OEM and its suppliers cost the US automotive industry over $1 billion per year, with the majority spent fixing data after translations. The Neutral Parametric Canonical Form (NPCF) prototype standard developed by the NSF Center for e-Design, BYU Site offers a solution to the translation problem by storing feature data in a CAD-neutral format, providing higher-fidelity parametric transfer between CAD systems. This research has focused on expanding the definitions of the NPCF to enforce data integrity and to support associativity between features, preserving design intent through the neutralization process. The NPCF data structure schema was defined to support associativity while maintaining data integrity. Neutral definitions of new features were added, including multiple types of coordinate systems, planes and axes. Previously defined neutral features were expanded to support new functionality, and the software architecture was redefined to support new CAD systems. Complex models have successfully been created and exchanged by multiple people in real time to validate the approach of preserving associativity, and support for a new CAD system, PTC Creo, was added.
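As a purely hypothetical illustration of associative, CAD-neutral feature records (the actual NPCF schema is not reproduced and every field name below is an assumption), features can carry references to their parent features by identifier so that design intent can be replayed in order:

```python
from dataclasses import dataclass, field

# Hypothetical CAD-neutral feature record with associative references to parents.
@dataclass
class NeutralFeature:
    feature_id: str
    feature_type: str                                 # e.g. "datum_plane", "extrude", "axis"
    parameters: dict = field(default_factory=dict)
    references: list = field(default_factory=list)    # ids of parent features

# A datum plane referencing a coordinate system, and an extrude referencing the plane:
features = {
    "csys1": NeutralFeature("csys1", "coordinate_system", {"origin": (0, 0, 0)}),
    "plane1": NeutralFeature("plane1", "datum_plane", {"offset": 10.0}, ["csys1"]),
    "ext1": NeutralFeature("ext1", "extrude", {"depth": 25.0}, ["plane1"]),
}

def parents(fid):
    """Walk associative references so dependent features can be rebuilt in order."""
    for ref in features[fid].references:
        yield from parents(ref)
        yield ref

print(list(parents("ext1")))   # -> ['csys1', 'plane1']
```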
99

Une approche du patching audio collaboratif : enjeux et développement du collecticiel Kiwi. / An approach of collaborative audio patching : challenges and development of the Kiwi groupware

Paris, Eliott 05 December 2018
Traditional audio patching software, such as Max or Pure Data, provides environments for designing and executing real-time sound processing. These programs are single-user, yet in many cases users need to work closely together to create or perform the same sound processing; this is notably the case in a pedagogical context and in collective musical creation. Solutions exist, but they do not suit every use. We therefore confronted this problem in a concrete way by developing a new collaborative audio patching solution, named Kiwi, which allows a single sound-processing patch to be built by several hands in a distributed manner. Through a critical study of existing software solutions, we provide keys to understanding the design of a multi-user system of this type. We state the main obstacles we had to overcome to make this practice viable and present the software solution. We describe the possibilities offered by the application and the technical and ergonomic implementation choices we made to allow several people to coordinate their activities within a shared workspace. We then examine several uses of this groupware in pedagogical and music-creation contexts in order to evaluate the proposed solution. Finally, we present more recent developments and open onto the future perspectives that this application allows us to envisage.
100

Rate Distortion Theory for Causal Video Coding: Characterization, Computation Algorithm, Comparison, and Code Design

Zheng, Lin January 2012
Due to the sheer volume of data involved, video coding is an important application of lossy source coding, and has received wide industrial interest and support, as evidenced by the development and success of a series of video coding standards. All MPEG-series and H-series video coding standards proposed so far are based upon a video coding paradigm called predictive video coding, in which video source frames X_i, i = 1, 2, ..., N, are encoded in a frame-by-frame manner, and the encoder and decoder for each frame X_i enlist help only from all previously encoded frames S_j, j = 1, 2, ..., i-1. In this thesis, we look further beyond all existing and proposed video coding standards and introduce a new coding paradigm called causal video coding, in which the encoder for each frame X_i can use all previous original frames X_j, j = 1, 2, ..., i-1, and all previously encoded frames S_j, while the corresponding decoder can use only all previously encoded frames. We consider all studies, comparisons, and designs on causal video coding from an information-theoretic point of view. Let R*_c(D_1, ..., D_N) (respectively, R*_p(D_1, ..., D_N)) denote the minimum total rate required to achieve a given distortion level D_1, ..., D_N > 0 in causal video coding (respectively, predictive video coding). A novel computation approach is proposed to analytically characterize, numerically compute, and compare the minimum total rate of causal video coding R*_c(D_1, ..., D_N) required to achieve a given distortion (quality) level D_1, ..., D_N > 0. Specifically, we first show that for jointly stationary and ergodic sources X_1, ..., X_N, R*_c(D_1, ..., D_N) is equal to the infimum of the n-th order total rate distortion function R_{c,n}(D_1, ..., D_N) over all n, where R_{c,n}(D_1, ..., D_N) itself is given by the minimum of an information quantity over a set of auxiliary random variables. We then present an iterative algorithm for computing R_{c,n}(D_1, ..., D_N) and demonstrate the convergence of the algorithm to the global minimum. The global convergence of the algorithm further enables us not only to establish a single-letter characterization of R*_c(D_1, ..., D_N) in a novel way when the N sources are an independent and identically distributed (IID) vector source, but also to demonstrate a somewhat surprising result (dubbed the more and less coding theorem): under some conditions on source frames and distortion, the more frames that need to be encoded and transmitted, the less data actually has to be sent after encoding. With the help of the algorithm, it is also shown by example that R*_c(D_1, ..., D_N) is in general much smaller than the total rate offered by the traditional greedy coding method, by which each frame is encoded in a locally optimal manner based on all information available to the encoder of that frame. As a by-product, an extended Markov lemma is established for correlated ergodic sources. From an information-theoretic point of view, it is interesting to compare causal video coding and predictive video coding, upon which all existing video coding standards proposed so far are based. In this thesis, by fixing N = 3, we first derive a single-letter characterization of R*_p(D_1, D_2, D_3) for an IID vector source (X_1, X_2, X_3) where X_1 and X_2 are independent, and then demonstrate the existence of such X_1, X_2, X_3 for which R*_p(D_1, D_2, D_3) > R*_c(D_1, D_2, D_3) under some conditions on source frames and distortion. This result makes causal video coding an attractive framework for future video coding systems and standards.
The design of causal video coding is also considered in the thesis from an information-theoretic perspective by modeling each frame as a stationary information source. We first put forth a concept called causal scalar quantization, and then propose an algorithm for designing optimum fixed-rate causal scalar quantizers for causal video coding so as to minimize the total distortion among all sources. Simulation results show that, in comparison with fixed-rate predictive scalar quantization, fixed-rate causal scalar quantization offers up to a 16% quality improvement (distortion reduction).
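For orientation, the sketch below implements fixed-rate predictive scalar quantization (DPCM-style), the baseline that the causal scalar quantizers are compared against; the thesis's causal quantizer design and its 16% figure are not reproduced, and the source model and step size are example choices:

```python
import numpy as np

# DPCM-style predictive scalar quantization: each "frame" (here a scalar) is
# predicted from the previous *reconstruction*, and only the quantized residual is
# sent, so the decoder can form exactly the same prediction.
def uniform_quantize(x, step):
    return step * np.round(x / step)

rng = np.random.default_rng(1)
n, rho, step = 1000, 0.95, 0.25
frames = np.zeros(n)
for i in range(1, n):                     # correlated Gauss-Markov "frames"
    frames[i] = rho * frames[i - 1] + rng.standard_normal() * np.sqrt(1 - rho**2)

recon = np.zeros(n)
prev_recon = 0.0
for i in range(n):
    prediction = rho * prev_recon         # prediction from the previous reconstruction
    residual_hat = uniform_quantize(frames[i] - prediction, step)
    recon[i] = prediction + residual_hat  # decoder forms the same reconstruction
    prev_recon = recon[i]

print("predictive MSE:", np.mean((frames - recon) ** 2).round(4))
```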
