101 |
A Web Based Multi User Framework For The Design And Detailing Of Reinforced Concrete Frames - Beams. Anil, Engin Burak, 01 January 2009 (has links) (PDF)
Structural design of reinforced concrete structures requires the involvement of many engineers who contribute to a single project. During the design process these engineers have to exchange a wide variety of information, and unmanaged data exchange may result in a loss of resources. Developing a data model and setting up protocols for the management of data related to the various structural design tasks can help improve the quality of the structural design.
In this study, an object-oriented data model was developed for reinforced concrete beams. The geometry of the structure, the detailed shape and placement of the reinforcement, and design-specific information for beams are defined in the data model. Design-code-based computations are facilitated by a code library developed for this purpose.
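For illustration, a minimal sketch of what such an object-oriented beam record might look like is given below; the class and field names (and the per-beam design-code tag) are assumptions made for this sketch, not the classes actually defined in the thesis.

```python
# A minimal sketch of an object-oriented beam record; names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RebarLayer:
    """One layer of longitudinal reinforcement in a beam section."""
    bar_diameter_mm: float
    bar_count: int
    cover_mm: float

@dataclass
class BeamDesignData:
    """Geometry, reinforcement placement, and design-specific data for one beam."""
    beam_id: str
    span_mm: float
    width_mm: float
    depth_mm: float
    design_code: str = ""        # identifier consumed by the code library (assumed)
    top_layers: List[RebarLayer] = field(default_factory=list)
    bottom_layers: List[RebarLayer] = field(default_factory=list)

beam = BeamDesignData("B101", span_mm=6000, width_mm=300, depth_mm=500,
                      bottom_layers=[RebarLayer(bar_diameter_mm=16, bar_count=4, cover_mm=40)])
print(beam)
```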
Another focus of this study is the development of a web-based, platform-independent data management and multi-user framework for the structural design and detailing of reinforced concrete frames. The framework allows simultaneous design of a structure by multiple engineers. XML Web Services technology was utilized for the central management of design data, which is kept as XML files on the server. Information is exchanged between the server and the engineer on a per-request basis. To design a beam strip, the engineer connects to the server and chooses a series of connected beams. The selected strip is locked against modification by other engineers to prevent data loss and unnecessary duplicate effort. When the engineer finalizes the design of a beam strip, the data is updated on the server and the lock on the strip is released. Between these requests, no active connection is required between the engineer and the server.
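The check-out/check-in workflow described above can be pictured with a small client-side sketch; the service methods (lock_strip, download_strip, upload_strip, release_lock) are hypothetical names standing in for the framework's actual XML Web Service interface.

```python
# A minimal sketch of the check-out / check-in workflow, written against a
# hypothetical web-service client; method names are assumptions, not the
# actual service interface of the framework.
class BeamStripSession:
    def __init__(self, service, engineer_id):
        self.service = service            # proxy to the XML Web Service
        self.engineer_id = engineer_id

    def design_strip(self, beam_ids, design_fn):
        # 1. request a lock so other engineers cannot modify the same strip
        lock = self.service.lock_strip(beam_ids, owner=self.engineer_id)
        if lock is None:
            raise RuntimeError("strip already checked out by another engineer")
        try:
            # 2. work offline: no active connection is needed while designing
            strip_xml = self.service.download_strip(beam_ids)
            updated_xml = design_fn(strip_xml)
            # 3. push the finalized design back to the central XML store
            self.service.upload_strip(beam_ids, updated_xml)
        finally:
            # 4. release the lock so the strip becomes available again
            self.service.release_lock(lock)
```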
As a final task, the framework can produce structural CAD drawings in DXF format.
|
102 |
A Web Based Multi-user Framework For The Design And Detailing Of Reinforced Concrete Frames - Columns. Unal, Gokhan, 01 December 2009 (has links) (PDF)
In the design and detailing of a reinforced concrete frame project, many engineers contribute to a single project, and a wide variety of information is exchanged between them during the design and detailing stages. If coordination between the engineers is insufficient, data exchange may result in the loss of important information and, in turn, in inadequate design and detailing of the structure. A data model developed for the different stages of design and detailing of a reinforced concrete structure can therefore facilitate data exchange among engineers and help improve the quality of the structural design. In this study, an object-oriented data model was developed for the design and detailing of reinforced concrete columns and beam-column joints. The geometry of the structure and the amount, shape, and placement of reinforcement are defined in this data model. In addition, classes that facilitate the design and detailing of reinforced concrete columns and beam-column joints according to building codes were developed. Another focus of this study is the development of a web-based, platform-independent data management and multi-user framework for the structural design and detailing of reinforced concrete frames. The framework allows simultaneous design of a structure by multiple engineers. XML Web Services technology was utilized for the web-based environment, such that the design-related data is stored and managed centrally by the server in XML files. As a final step, CAD drawings of column reinforcement details are produced in DXF format.
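As a rough illustration of keeping design data as XML files on the server, the sketch below serializes one column record; the element and attribute names are invented for this example and do not reflect the thesis's actual schema.

```python
# A minimal sketch of serializing one column record to XML for the central store;
# element and attribute names are invented for this example, not the thesis's schema.
import xml.etree.ElementTree as ET

def column_to_xml(column_id, width_mm, depth_mm, bars):
    """bars is a list of (diameter_mm, count) tuples for longitudinal reinforcement."""
    col = ET.Element("Column", id=column_id)
    ET.SubElement(col, "Section", width=str(width_mm), depth=str(depth_mm))
    reinforcement = ET.SubElement(col, "Reinforcement")
    for diameter_mm, count in bars:
        ET.SubElement(reinforcement, "Bar", diameter=str(diameter_mm), count=str(count))
    return ET.tostring(col, encoding="unicode")

# usage: a client would upload this payload to the web service on check-in
print(column_to_xml("C12", 400, 400, [(20, 8)]))
```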
|
103 |
LDPC Coded OFDM-IDMA Systems. Lu, Kuo-sheng, 05 August 2009 (has links)
Recently, a novel technique for multi-user spread-spectrum mobile systems, called the interleave-division multiple-access (IDMA) scheme, was proposed by L. Ping et al. The advantage of IDMA is that it inherits many special features from code-division multiple access (CDMA), such as diversity against fading and mitigation of other-cell user interference. Moreover, it is capable of employing a very simple chip-by-chip iterative multi-user detection strategy. In this thesis, we investigate the performance of combining IDMA with the orthogonal frequency-division multiplexing (OFDM) scheme. In order to improve the bit-error-rate performance, we apply low-density parity-check (LDPC) coding to the proposed scheme, named the LDPC-coded OFDM-IDMA system. With the aid of the iterative multi-user detection algorithm, the multiple-access interference (MAI) and inter-symbol interference (ISI) can be cancelled efficiently. In short, the proposed scheme provides an efficient solution to high-rate multi-user communications over multipath fading channels.
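A minimal sketch of the chip-by-chip soft interference cancellation at the heart of IDMA detection is given below, assuming uncoded BPSK users on a flat real channel; the interleaving, OFDM subcarrier mapping, and LDPC decoding that the thesis combines with this estimator are deliberately omitted.

```python
# A minimal sketch of chip-by-chip soft interference cancellation for K BPSK
# users over an AWGN channel; interleaving, OFDM, and LDPC decoding are omitted.
import numpy as np

def ese_iterate(r, h, noise_var, n_iter=5):
    """Elementary signal estimation: returns extrinsic LLRs per user and chip."""
    K, L = h.shape[0], r.shape[0]
    x_soft = np.zeros((K, L))            # soft symbol estimates E[x_k]
    v = np.ones((K, L))                  # symbol variances Var[x_k]
    llr = np.zeros((K, L))
    for _ in range(n_iter):
        for k in range(K):
            # mean and variance of the interference-plus-noise seen by user k
            interf_mean = sum(h[j] * x_soft[j] for j in range(K) if j != k)
            interf_var = sum(h[j] ** 2 * v[j] for j in range(K) if j != k) + noise_var
            llr[k] = 2.0 * h[k] * (r - interf_mean) / interf_var
            x_soft[k] = np.tanh(llr[k] / 2.0)      # soft BPSK estimate
            v[k] = 1.0 - x_soft[k] ** 2
    return llr

# toy usage: two users transmitting random BPSK chips
rng = np.random.default_rng(0)
K, L, noise_var = 2, 64, 0.1
h = np.array([[1.0], [0.7]]) * np.ones((K, L))     # flat real gains per user
x = rng.choice([-1.0, 1.0], size=(K, L))
r = (h * x).sum(axis=0) + rng.normal(scale=noise_var ** 0.5, size=L)
bits_hat = ese_iterate(r, h, noise_var) > 0
print("chip error rate:", np.mean(bits_hat != (x > 0)))
```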
|
104 |
Rate Distortion Theory for Causal Video Coding: Characterization, Computation Algorithm, Comparison, and Code Design. Zheng, Lin, January 2012 (has links)
Due to the sheer volume of data involved, video coding is an important application of lossy source coding, and it has received wide industrial interest and support, as evidenced by the development and success of a series of video coding standards. All MPEG-series and H-series video coding standards proposed so far are based upon a video coding paradigm called predictive video coding, in which the video source frames Xᵢ, i=1,2,...,N, are encoded in a frame-by-frame manner, and the encoder and decoder for each frame Xᵢ enlist help only from all previously encoded frames Sⱼ, j=1,2,...,i-1.
In this thesis, we look further, beyond all existing and proposed video coding standards, and introduce a new coding paradigm called causal video coding, in which the encoder for each frame Xᵢ can use all previous original frames Xⱼ, j=1,2,...,i-1, and all previously encoded frames Sⱼ, while the corresponding decoder can use only all previously encoded frames. We consider studies, comparisons, and designs of causal video coding from an information-theoretic point of view.
Let R*c(D₁,...,D_N) (respectively, R*p(D₁,...,D_N)) denote the minimum total rate required to achieve a given distortion level D₁,...,D_N > 0 in causal video coding (respectively, predictive video coding). A novel computation approach is proposed to analytically characterize, numerically compute, and compare the minimum total rate of causal video coding R*c(D₁,...,D_N) required to achieve a given distortion (quality) level D₁,...,D_N > 0. Specifically, we first show that for jointly stationary and ergodic sources X₁, ..., X_N, R*c(D₁,...,D_N) is equal to the infimum of the n-th order total rate distortion function R_{c,n}(D₁,...,D_N) over all n, where R_{c,n}(D₁,...,D_N) itself is given by the minimum of an information quantity over a set of auxiliary random variables. We then present an iterative algorithm for computing R_{c,n}(D₁,...,D_N) and demonstrate the convergence of the algorithm to the global minimum. The global convergence of the algorithm further enables us not only to establish a single-letter characterization of R*c(D₁,...,D_N) in a novel way when the N sources form an independent and identically distributed (IID) vector source, but also to demonstrate a somewhat surprising result (dubbed the more-and-less coding theorem): under some conditions on the source frames and distortion, the more frames that need to be encoded and transmitted, the less data actually has to be sent after encoding. With the help of the algorithm, it is also shown by example that R*c(D₁,...,D_N) is in general much smaller than the total rate offered by the traditional greedy coding method, in which each frame is encoded in a locally optimal manner based on all information available to the encoder of that frame. As a by-product, an extended Markov lemma is established for correlated ergodic sources.
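For intuition about what such an iterative rate-distortion computation looks like, here is a sketch of the classical Blahut-Arimoto iteration for a single discrete source; the thesis's algorithm for R_{c,n}(D₁,...,D_N) is an alternating-minimization procedure in a similar spirit, but its exact form is not reproduced here.

```python
# Minimal Blahut-Arimoto sketch for one point on a single source's R(D) curve,
# shown only for intuition about iterative rate-distortion computation.
import numpy as np

def blahut_arimoto(p_x, dist, beta, n_iter=200):
    """Return (rate in nats, distortion) for Lagrange multiplier beta."""
    q_y = np.full(dist.shape[1], 1.0 / dist.shape[1])   # output marginal
    for _ in range(n_iter):
        # conditional p(y|x) minimizing I(X;Y) + beta * E[d(X,Y)]
        w = q_y[None, :] * np.exp(-beta * dist)
        p_y_given_x = w / w.sum(axis=1, keepdims=True)
        q_y = p_x @ p_y_given_x
    joint = p_x[:, None] * p_y_given_x
    distortion = float((joint * dist).sum())
    rate = float((joint * np.log(p_y_given_x / q_y[None, :])).sum())
    return rate, distortion

# toy usage: binary uniform source with Hamming distortion
p_x = np.array([0.5, 0.5])
dist = np.array([[0.0, 1.0], [1.0, 0.0]])
print(blahut_arimoto(p_x, dist, beta=3.0))
```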
From an information-theoretic point of view, it is also interesting to compare causal video coding with predictive video coding, upon which all existing video coding standards proposed so far are based. In this thesis, fixing N=3, we first derive a single-letter characterization of R*p(D₁,D₂,D₃) for an IID vector source (X₁,X₂,X₃) in which X₁ and X₂ are independent, and then demonstrate the existence of such X₁,X₂,X₃ for which R*p(D₁,D₂,D₃) > R*c(D₁,D₂,D₃) under some conditions on the source frames and distortion. This result makes causal video coding an attractive framework for future video coding systems and standards.
The design of causal video coding is also considered in the thesis from an information-theoretic perspective by modeling each frame as a stationary information source. We first put forth a concept called causal scalar quantization, and then propose an algorithm for designing optimum fixed-rate causal scalar quantizers for causal video coding that minimize the total distortion over all sources. Simulation results show that, in comparison with fixed-rate predictive scalar quantization, fixed-rate causal scalar quantization offers up to a 16% quality improvement (distortion reduction).
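As background on fixed-rate scalar quantizer design, a minimal Lloyd-style sketch on sample data is shown below; it is the ordinary (non-causal) design loop and is not the causal quantizer design algorithm proposed in the thesis.

```python
# A minimal Lloyd sketch for ordinary fixed-rate scalar quantizer design on
# sample data; the causal quantizers in the thesis extend this idea across
# frames, and their actual design algorithm is not reproduced here.
import numpy as np

def lloyd_quantizer(samples, rate_bits, n_iter=50):
    """Return the reconstruction levels of a fixed-rate scalar quantizer."""
    levels = 2 ** rate_bits
    codebook = np.quantile(samples, (np.arange(levels) + 0.5) / levels)
    for _ in range(n_iter):
        # nearest-neighbor partition, then centroid update
        idx = np.argmin(np.abs(samples[:, None] - codebook[None, :]), axis=1)
        for i in range(levels):
            if np.any(idx == i):
                codebook[i] = samples[idx == i].mean()
    return np.sort(codebook)

rng = np.random.default_rng(1)
frame = rng.normal(size=10_000)                  # stand-in for one source frame
cb = lloyd_quantizer(frame, rate_bits=2)
idx = np.argmin(np.abs(frame[:, None] - cb[None, :]), axis=1)
print("codebook:", cb, "MSE:", np.mean((frame - cb[idx]) ** 2))
```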
|
105 |
Acoustique longue portée pour transmission et localisation de signaux / Long-range acoustics for the transmission and localization of signals. Ollivier, Benjamin, 06 December 2016 (has links)
The positioning of underwater objects is of strategic interest for military, industrial, and scientific applications (off-shore, defence, biology). Positioning systems rely on SONAR (Sound Navigation and Ranging) signals: several synchronized transmitters with known positions and transmission times are considered, and a receiver localizes itself relative to this grid by detecting each signal and estimating its time of arrival (TOA). A range between a transmitter and the receiver is deduced from each TOA estimate, and once the receiver knows at least three ranges it can deduce its position by triangulation. This work addresses both signal detection at the receiver and the choice of the transmitted waveforms. The detection method, based on matched filtering, must be robust to the disturbances introduced by the propagation channel (transmission loss, multipath) and by the system itself (multi-transmitter, multi-user environment). Moreover, the detection structure, a combination of binary hypothesis tests, must run in real time. In this CDMA context, which requires each transmitter to be identified independently, frequency-hopped spread-spectrum (FHSS) modulation, allocating one code per user, is well suited; the construction parameters of the FHSS signals, in particular the number of frequency shifts N and the time-bandwidth product, are chosen to optimize the studied detection method and are analyzed from a detection-criterion point of view. Finally, the algorithms resulting from this work were implemented on embedded systems and validated first on recorded data and then in real conditions, including tests in a shallow-water environment. These trials were carried out with the company ALSEAMAR, within a CIFRE-DGA thesis supported by a DGA-MRIS scholarship.
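A minimal sketch of the matched-filter TOA estimation and range conversion described above follows, assuming a known waveform, a single dominant path, and illustrative sample-rate and sound-speed values.

```python
# A minimal sketch of matched-filter TOA estimation and range conversion,
# assuming a known transmitted waveform and a single dominant path.
import numpy as np

def estimate_toa(received, waveform, fs):
    """Return the estimated time of arrival (s) via the matched-filter peak."""
    corr = np.correlate(received, waveform, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(waveform) - 1)
    return lag / fs

fs, c = 48_000, 1500.0                      # sample rate (Hz), sound speed (m/s)
t = np.arange(0, 0.05, 1 / fs)
waveform = np.sin(2 * np.pi * 12_000 * t)   # stand-in for one FHSS code
true_delay = 0.120                          # seconds
rx = np.zeros(int(0.3 * fs))
start = int(true_delay * fs)
rx[start:start + len(waveform)] += waveform
rx += 0.5 * np.random.default_rng(2).normal(size=rx.size)   # channel noise
toa = estimate_toa(rx, waveform, fs)
print("estimated range (m):", toa * c)      # one range; three give a position fix
```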
|
106 |
Conception et réalisation d’un lien Light-Fidelity multi-utilisateur en intérieur / Conception and realization of an indoor multi-user Light-Fidelity link. Mohammedi Merah, Mounir, 08 October 2019 (has links)
Nowadays, the number of connected devices requiring access to mobile data is increasing considerably. The arrival of even more connected multimedia objects and the growing demand for information per device have highlighted the limits of the fourth generation of broadband cellular networks (4G). This has pushed the development of new methods, one of which is 5G. The goal is to be able to support the growth of wearables, sensors, and related Internet-of-Things (IoT) systems. The vision behind 5G is to enable a fully mobile and connected society with a consistent experience; consequently, there is a fundamental need to achieve a seamless and consistent user experience across time and space. Small cells are the basis of advanced communication standards such as 4G and now 5G. They exist as a result of using higher frequency bands for RF access in order to support new standards and the increasing demands in bandwidth. 5G uses millimeter waves and requires deployment across indoor and dense urban environments, which may prove to be a challenge. This is where 5G will need to include hybrid networking solutions and be able to coexist with other wireless access technologies. Visible light communication (VLC) fits into that mold, since visible light corresponds to the band between 400 and 800 THz. The available spectrum is several thousand times the size of the RF spectrum and does not interfere with it. The technique combines illumination with communication at potentially tens of gigabits per second. It has the potential to offer a synergistic pairing with 5G in a hybrid network, offering high speed, no interference, and more security at the cost of limited coverage and low technological maturity.
The goal of this thesis is thus to propose and evaluate an experimental implementation of an indoor multi-user VLC system in order to answer the objectives of a Li-Fi setup in the context of a small cell. The first step of this study is a detailed state of the art on VLC for indoor wireless communication and multi-user access, which allows the design of our work to be explained and compared with existing works. The second step is an analysis of the principles and hypotheses supporting the indoor multi-user VLC system under study, covering both the modulation technique and the multi-user access schemes; the conclusions drawn from the theoretical and numerical analyses serve as a basis for the rest of the work. The third step comprises experimental investigations, first on single-user broadcast performance optimization and then on the multi-user performance of the system using various access schemes. The total throughput using an off-the-shelf white LED reaches 163 Mb/s, with a bit-error rate decreased by a factor of 3.55 thanks to the performance optimization process. This technique has the advantage of increasing the flexibility of a multi-access scenario without increasing the complexity, as it only optimizes the modulation filter parameters. Multi-user access is obtained for a cell size of 4.56 m² at a distance of 2.15 m from the transmitter. The user capacity can reach up to 40 users, or 40.62 Mb/s in a 4-user scenario. It is thus demonstrated that the proposed system could function as a cell at a realistic range, with a high data rate and the ability to serve a large number of users while limiting the cost of implementation.
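Assuming the aggregate link rate is simply shared equally among the active users (an assumption not stated in the abstract), the 4-user figure is roughly consistent with

```latex
R_{\text{user}} \approx \frac{R_{\text{total}}}{N_{\text{users}}}
                = \frac{163\ \text{Mb/s}}{4} \approx 40.75\ \text{Mb/s},
```

which is close to the reported 40.62 Mb/s, the small gap plausibly being multiple-access overhead.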
|
107 |
An Asynchronous Simulation Framework for Multi-User Interactive Collaboration: Application to Robot-Assisted Surgery. Munawar, Adnan, 13 December 2019 (has links)
The field of surgery is continually evolving, as there is always room for improvement in the post-operative health of the patient as well as the comfort of the Operating Room (OR) team. While the success of surgery is contingent upon the skills of the surgeon and the OR team, the use of specialized robots has been shown to improve surgery-related outcomes in some cases. These outcomes are currently measured using a wide variety of metrics that include patient pain and recovery, the surgeon’s comfort, the duration of the operation, and the cost of the procedure. Additional research is needed to better understand the optimal criteria for benchmarking surgical performance. Presently, surgeons are trained to perform robot-assisted surgeries using interactive simulators. However, in the absence of well-defined performance standards, these simulators focus primarily on the simulation of the operative scene and not on the complexities associated with the multiple inputs to a real-world surgical procedure. Because interactive simulators are typically designed for specific robots that perform a small number of tasks controlled by a single user, they are inflexible in terms of their portability to different robots and the inclusion of multiple operators (e.g., nurses, medical assistants). Additionally, while most simulators provide high-quality visuals, simplification techniques are often employed to avoid stability issues in physics computation, contact dynamics, and multi-manual interaction. This study addresses the limitations of existing simulators by outlining the specifications required to develop techniques that mimic real-world interactions and collaboration. Moreover, it focuses on the inclusion of distributed control, shared task allocation, and assistive feedback (through machine learning and secondary and tertiary operators) alongside the primary human operator.
|
109 |
[en] ON HYBRID BEAMFORMING DESIGN FOR DOWNLINK MMWAVE MASSIVE MU-MIMO SYSTEMS / [pt] PROJETO HÍBRIDO DE FORMAÇÃO DE FEIXE PARA ENLACE DIRETO EM ONDAS MILIMÉTRICAS EM SISTEMAS MASSIVOS MU-MIMO. 12 November 2020 (has links)
Millimeter-wave (mmWave) communications have been regarded as a key technology for next-generation cellular systems, since the huge available bandwidth can potentially provide rates of multiple gigabits per second. Conventional precoding and combining techniques are impractical in mmWave scenarios due to manufacturing cost and power consumption. Hybrid alternatives have been considered a promising technology to provide a compromise between hardware complexity and system performance. A large number of hybrid precoder designs have been proposed with different approaches. One possible approach is to minimize the Euclidean distance between the hybrid precoder and the fully digital precoder. However, this approach turns the hybrid precoder design into a matrix factorization problem that is difficult to handle due to the hardware constraints of the analog components.
This doctoral thesis proposes several hybrid precoder and combiner designs based on a hierarchical strategy. The hybrid precoding/combining problem is divided into analog and digital parts. First, the analog precoder/combiner is designed; then, with the analog precoder/combiner fixed, the digital precoder/combiner is computed to improve the system performance. Furthermore, linear and nonlinear optimization methods are employed to design the analog part of the precoder/combiner. The viability of these proposals is evaluated using different data detection techniques and by analyzing the system performance in terms of bit error rate (BER), sum rate, and other metrics, in indoor mmWave scenarios considering the massive MU-MIMO downlink.
In addition, this work proposes a method to find fairly tight analytic approximations to the obtained BER performance. The proposed methodology would require knowledge of the probability density function (pdf) of the variables involved, which is unknown for mmWave scenarios; to solve this problem, Gamma pdf approximations are used. The analytic BER approximations resulted in differences no larger than 0.5 dB with respect to the simulation results at high SNR.
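The Euclidean-distance design mentioned above is usually posed as a constrained matrix factorization; a schematic formulation in the notation commonly used in the hybrid-precoding literature (not necessarily the thesis's own notation) is

```latex
\min_{\mathbf{F}_{\mathrm{RF}},\,\mathbf{F}_{\mathrm{BB}}}
  \left\| \mathbf{F}_{\mathrm{opt}} - \mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}} \right\|_F
\quad \text{subject to} \quad
  \left| [\mathbf{F}_{\mathrm{RF}}]_{m,n} \right| = \frac{1}{\sqrt{N_t}}, \qquad
  \left\| \mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}} \right\|_F^{2} = N_s,
```

where F_opt is the unconstrained fully digital precoder, F_RF the constant-modulus analog precoder implemented with phase shifters, F_BB the digital baseband precoder, N_t the number of transmit antennas, and N_s the number of data streams. The constant-modulus constraint on F_RF is precisely what makes the factorization hard to handle.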
|
110 |
Neutral Parametric Canonical Form for 2D and 3D Wireframe CAD Geometry. Freeman, Robert Steven, 01 August 2015 (has links) (PDF)
The challenge of interoperability is to retain model integrity when different software applications exchange and interpret model data. Transferring CAD data between heterogeneous CAD systems is a challenge because of differences in feature representation. A 1999 study by the National Institute of Standards and Technology (NIST) made a conservative estimate that inadequate interoperability costs the automotive industry $1 billion per year. One critical part of eliminating the high costs of poor interoperability is a neutral format between heterogeneous CAD systems. An effective neutral CAD format should include a current-state data store, be associative, include the union of CAD features across an arbitrary number of CAD systems, maintain design history, maintain referential integrity, and support multi-user collaboration. This research has focused on extending an existing synchronous collaborative CAD software tool to allow for a neutral, current-state data store. This has been accomplished by creating a Neutral Parametric Canonical Form (NPCF), which defines the neutral data structure for many basic CAD features to enable translation between heterogeneous CAD systems. The initial architecture developed begins to define a new standard for storing CAD features neutrally. The NPCFs for many features have been implemented in a multi-user interoperability program and work between the NX and CATIA CAD systems. The 2D point, 2D line, 2D arc, 2D circle, 2D spline, 3D point, extrude, and revolve NPCFs are specifically defined. Complex models have been successfully modeled and exchanged in real time, validating the NPCF approach. Multiple users can be in the same part at the same time in different CAD systems and create and update models in real time.
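As a rough picture of what a neutral record for simple wireframe features could look like, a hypothetical sketch follows; the field names are illustrative only and are not the NPCF schema defined in the thesis.

```python
# A minimal, hypothetical sketch of neutral records for two wireframe features;
# field names are illustrative, not the thesis's actual NPCF definitions.
from dataclasses import dataclass, field
import uuid

@dataclass
class NeutralPoint2D:
    """System-agnostic 2D point, referenced by id rather than CAD-native handles."""
    x: float
    y: float
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

@dataclass
class NeutralLine2D:
    """2D line defined by references to neutral points, preserving associativity."""
    start_point_id: str
    end_point_id: str
    construction: bool = False
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

# usage: a translator would map NX or CATIA sketch entities to and from these records
p1, p2 = NeutralPoint2D(0.0, 0.0), NeutralPoint2D(100.0, 50.0)
line = NeutralLine2D(start_point_id=p1.id, end_point_id=p2.id)
print(line)
```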
|