151 |
Integrated Self-Interference Cancellation for Full-Duplex and Frequency-Division Duplexing Wireless Communication Systems. Zhou, Jin. January 2017 (has links)
From wirelessly connected robots to car-to-car communications to smart cities, almost every aspect of our lives will benefit from future wireless communications. While they promise an exciting future, next-generation wireless communications impose requirements on data rate, spectral efficiency, and latency (among others) that exceed those of today's systems by several orders of magnitude.
Full-duplex wireless, an emergent wireless communications paradigm, breaks the long-held assumption that a wireless device cannot transmit and receive simultaneously at the same frequency. It has the potential to immediately double network capacity at the physical (PHY) layer and offers many other benefits (such as reduced latency) at higher layers. Recently, discrete-component-based demonstrations have established the feasibility of full-duplex wireless. However, the realization of integrated full-duplex radios, compact radios that can fit into smartphones, is fraught with fundamental challenges. In addition, to unleash the full potential of full-duplex communication, a careful redesign of the PHY layer and the medium access control (MAC) layer using a cross-layer approach is required.
The biggest challenge associated with full-duplex wireless is the tremendous amount of transmitter self-interference sitting right on top of the desired signal. In this dissertation, new self-interference-cancellation approaches at both the system and circuit levels are presented, contributing toward the realization of full-duplex radios in integrated circuit technology. Specifically, these approaches eliminate the noise and distortion of the cancellation circuitry, enhance the integrated cancellation bandwidth, and perform joint radio-frequency, analog, and digital cancellation to achieve nearly one part-per-billion cancellation accuracy.
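The digital tail end of such a joint cancellation scheme can be illustrated with a minimal adaptive-canceller sketch (all signal values, the tap count, and the step size below are invented for illustration; the dissertation's actual canceller spans the RF, analog, and digital domains):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: the known transmit signal leaks into the
# receiver through an unknown self-interference (SI) channel; an adaptive
# LMS filter estimates that channel and subtracts its reconstruction.
n, taps, mu = 20000, 4, 0.01
tx = rng.standard_normal(n)                     # known transmit samples
si_channel = np.array([0.9, -0.3, 0.1, 0.05])   # unknown leakage path
desired = 0.01 * rng.standard_normal(n)         # weak signal of interest
rx = np.convolve(tx, si_channel)[:n] + desired

w = np.zeros(taps)                              # canceller tap estimates
out = np.empty(n)
for i in range(taps, n):
    x = tx[i - taps + 1:i + 1][::-1]            # recent transmit samples
    e = rx[i] - w @ x                           # residual after cancellation
    w += mu * e * x                             # LMS tap update
    out[i] = e

residual_power = np.mean(out[n // 2:] ** 2)
si_power = np.mean((rx - desired) ** 2)
print(f"SI suppressed by {10 * np.log10(si_power / residual_power):.1f} dB")
```

After convergence the residual is dominated by the desired signal rather than self-interference; the circuit-level work above tackles what such a purely digital stage cannot, namely the canceller's own noise and distortion.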
In collaboration with researchers at higher layers of the stack, a cross-layer approach has been used in our full-duplex research and has allowed us to derive power allocation algorithms and to characterize rate-gain improvements for full-duplex wireless networks. To enable experimental characterization of full-duplex MAC-layer algorithms, a cross-layer software-defined full-duplex radio testbed has been developed. In collaboration with researchers from the field of micro-electro-mechanical systems, we demonstrate a multi-band frequency-division duplexing system using a cavity-filter-based tunable duplexer and our integrated widely-tunable self-interference-cancelling receiver.
|
152 |
Multi-cell coordinated beamforming and admission control in wireless cellular networks. January 2012 (has links)
Coordinated MultiPoint (CoMP) cooperative transmission has recently emerged as a promising technique for mitigating inter-cell interference in next-generation wireless communication systems. Several key techniques for CoMP have been developed over the past decades, for example MIMO cooperation and interference coordination. The present work studies a joint user scheduling and interference coordination problem in CoMP downlink systems. Conventionally, user scheduling and interference coordination are treated as separate problems; this may degrade system performance, as the two problems are actually intertwined. This thesis therefore considers a joint admission control and beamforming (JACoB) problem that employs a popular interference coordination technique called coordinated beamforming (CoBF). In particular, the JACoB problem is stated as a user-number maximization problem in which the CoBF design is adapted to the set of selected users.
There are two major contributions in this thesis. First, the JACoB problem is cast as an ℓ₀-norm minimization problem and then tackled by the now-popular ℓ₁ approximation technique. Second, a novel decentralized JACoB method is developed, based on the simple block coordinate descent method; this differs from conventional approaches that employ subgradient-based methods such as dual/primal decomposition.
The simulation results indicate that: i) the proposed centralized method yields a performance close to the optimum JACoB design at significantly reduced complexity; ii) the proposed JACoB methods (either centralized or decentralized) support significantly more users than a fixed-beamformer design. Moreover, the decentralized JACoB method achieves a performance close to its centralized counterpart while converging quickly.
Wai, Hoi To. Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. Includes bibliographical references (leaves 77-80). Abstracts also in Chinese.
Contents:
  Abstract (p.i)
  Acknowledgement (p.iv)
  1 Introduction (p.1)
    1.1 Overview of techniques for CoMP (p.2)
    1.2 Overview of user scheduling algorithms (p.4)
    1.3 Contributions (p.6)
  2 The JACoB problem and the related works (p.8)
    2.1 System model (p.8)
    2.2 Joint admission control and beamforming (JACoB) (p.10)
      2.2.1 Coordinated beamformers design (p.11)
      2.2.2 Semidefinite relaxation for the CoBF problem (p.13)
    2.3 Related works (p.14)
      2.3.1 Common trend in JACoB: deflation heuristic (p.18)
    2.4 Decentralized methods (p.19)
  3 Centralized JACoB method (p.21)
    3.1 Step 1: a new formulation of JACoB (p.21)
    3.2 Step 2: ℓ₁ approximation of JACoB (p.24)
      3.2.1 Properties of the ℓ₁ JACoB problem (p.26)
    3.3 Proposed JACoB method (p.28)
      3.3.1 Prescreening procedure (p.28)
  4 Decentralized JACoB method (p.31)
    4.1 Block coordinate descent method (p.32)
    4.2 Smooth approximation of ℓ₁ JACoB (p.34)
      4.2.1 Empirical iteration complexity of the BCD method (p.38)
    4.3 Proposed decentralized JACoB method (p.40)
  5 Simulation results (p.43)
    5.1 Performance of centralized JACoB methods (p.44)
    5.2 Performance of decentralized JACoB methods (p.48)
    5.3 Summary (p.52)
  6 Conclusions and future directions (p.53)
    6.1 Future directions (p.53)
      6.1.1 From a practical point of view (p.54)
      6.1.2 From a theoretical point of view (p.54)
  A A primal decomposition method for (3.4) (p.56)
  B A projected gradient method for (4.3) (p.60)
  C Proofs (p.67)
    C.1 KKT conditions for (2.6) and (3.5) (p.67)
    C.2 Proof of Proposition 2.1 (p.68)
    C.3 Proof of Proposition 3.3 (p.69)
    C.4 Proof of Proposition 3.2 (p.69)
    C.5 Proof of Proposition 3.5 (p.71)
    C.6 Proof of Fact 4.1 (p.75)
  Bibliography (p.77)
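The block coordinate descent engine behind the decentralized method can be sketched on a toy problem (the quadratic objective below is invented for illustration; the thesis's per-base-station variable blocks and JACoB objective are not reproduced here):

```python
import numpy as np

# Toy sketch of block coordinate descent (BCD): minimize a strongly convex
# quadratic f(x) = 0.5 x^T A x - b^T x by exactly minimizing over one
# coordinate block at a time while the other block is held fixed. The
# decentralized JACoB method applies the same idea with one base station's
# variables per block.
rng = np.random.default_rng(1)
m = rng.standard_normal((6, 6))
A = m @ m.T + 10 * np.eye(6)           # symmetric positive definite
b = rng.standard_normal(6)
blocks = [np.arange(0, 3), np.arange(3, 6)]

x = np.zeros(6)
for _ in range(200):                   # BCD sweeps over the blocks
    for idx in blocks:
        rest = np.setdiff1d(np.arange(6), idx)
        # Exact block update: A[idx,idx] x_idx = b[idx] - A[idx,rest] x_rest
        x[idx] = np.linalg.solve(A[np.ix_(idx, idx)],
                                 b[idx] - A[np.ix_(idx, rest)] @ x[rest])

x_star = np.linalg.solve(A, b)         # global minimizer for comparison
print("BCD gap:", np.linalg.norm(x - x_star))
```

Each block update needs only that block's own data plus the other blocks' current values, which is what makes the scheme decentralizable across base stations.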
|
153 |
A computational-based methodology for the rapid determination of initial AP location for WLAN deployment. Altamirano, Esteban. 18 March 2004 (has links)
The determination of the optimal location of transceivers is a critical design factor when deploying a wireless local area network (WLAN). When the transceivers' locations are adequately determined, the WLAN's performance improves in a variety of aspects, from overall cell coverage to the battery life of the client units. Currently, the most common method for determining appropriate transceiver locations is a site survey, which is normally a very time- and energy-consuming process.
The main objective of this research was to improve current methodologies for the optimal or near-optimal placement of APs in a WLAN installation. To achieve this objective, several improvements and additions were made to an existing computational tool to reflect the evolution of WLAN equipment in recent years. Major additions included the capability to handle multiple power levels for the transceivers, a more adequate and precise representation of passive interference sources in the path loss calculations, and a termination criterion that achieves reasonable computational times without compromising the quality of the solution.
An experiment was designed to assess whether the improvements made to the computational tool provided the desired balance between computational time and the quality of the solutions obtained. The controlled factors were the strictness of the termination criterion (high or low) and the number of runs performed (1, 5, 10, 15, and 20). The low strictness level reduced the running time required to obtain an acceptable solution by 65 to 70% compared with the high strictness level. The quality of the solutions found with a single run was considerably lower than that obtained with any other number of runs. On the other hand, solution quality stabilized at and after 10 runs, indicating that there is no added value when 15 or 20 runs are performed. In summary, having the computational tool developed in this research execute 5 runs at the low strictness level would generate high-quality solutions in a reasonable running time. / Graduation date: 2004
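The multi-run, termination-criterion workflow can be sketched on a toy placement problem (the clients, coverage radius, and stagnation rule below are invented illustrations, not the tool's actual propagation model):

```python
import random

# Toy sketch: place one access point on a 100x100 grid to maximize the
# number of covered clients, using random-restart local search with a
# stagnation-based termination criterion, mirroring the multi-run
# experiment described above.
random.seed(0)
clients = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(40)]
RADIUS2 = 30.0 ** 2

def coverage(ap):
    ax, ay = ap
    return sum((ax - x) ** 2 + (ay - y) ** 2 <= RADIUS2 for x, y in clients)

def one_run(stagnation_limit):
    """Hill-climb from a random start; stop after `stagnation_limit`
    consecutive non-improving moves (the termination criterion)."""
    ap = (random.uniform(0, 100), random.uniform(0, 100))
    best, stale = coverage(ap), 0
    while stale < stagnation_limit:
        cand = (ap[0] + random.gauss(0, 5), ap[1] + random.gauss(0, 5))
        c = coverage(cand)
        if c > best:
            ap, best, stale = cand, c, 0
        else:
            stale += 1
    return best

# Five independent runs, keeping the best, echoing the "5 runs at low
# strictness" recommendation; a lower stagnation limit trades solution
# quality for running time.
results = [one_run(stagnation_limit=200) for _ in range(5)]
print("best coverage over 5 runs:", max(results), "of", len(clients))
```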
|
154 |
Active matrix electroluminescent device power considerations. Beck, Douglas. 12 June 1997 (has links)
An active-matrix electroluminescent (AMEL) design tool has been developed for the simulation of AMEL display devices. The design tool is a software package that simulates AMEL device operation using a lumped-parameter circuit model, developed primarily to address AMEL power dissipation issues. The tool provides a user-friendly way to investigate AMEL display devices through this model and is programmed in C with a standard Microsoft Windows interface.
Three techniques for power reduction have been identified and investigated: increasing the high-voltage NDMOS transistor breakdown voltage, optimizing parasitic capacitance, and developing a low-voltage phosphor. / Graduation date: 1998
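A back-of-the-envelope lumped-parameter power estimate of the kind such a tool computes might look like the following (every component value here is an invented placeholder, not a figure from the thesis):

```python
# Toy lumped-parameter sketch: approximate panel power as C * V^2 * f
# dynamic dissipation of the lumped pixel capacitance plus parasitics,
# summed over the lit pixels. All values are invented for illustration.
PIXELS_ON   = 320 * 240 // 4   # assume a quarter of a QVGA panel is lit
C_PIXEL     = 1.0e-12          # lumped phosphor-stack capacitance, farads
C_PARASITIC = 0.3e-12          # lumped row/column parasitic, farads
V_DRIVE     = 60.0             # drive amplitude, volts
F_DRIVE     = 5.0e3            # AC drive frequency, hertz

def dynamic_power(n_pixels, c_pixel, c_par, v, f):
    """Approximate panel power as C_total * V^2 * f per driven pixel."""
    return n_pixels * (c_pixel + c_par) * v * v * f

p = dynamic_power(PIXELS_ON, C_PIXEL, C_PARASITIC, V_DRIVE, F_DRIVE)
print(f"estimated dynamic power: {p * 1e3:.0f} mW")
# Shrinking the parasitic capacitance or lowering the phosphor drive
# voltage reduces this figure directly, mirroring two of the three
# power-reduction techniques listed above.
```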
|
155 |
Design of high-speed adaptive parallel multi-level decision feedback equalizer. Xiang, Yihai. 26 February 1997 (has links)
Multi-level decision feedback equalization (MDFE) is an effective technique for removing inter-symbol interference (ISI) from disk readback signals while retaining the simple architecture of decision feedback equalization. Parallelism, which doubles the symbol rate, can be realized by setting the first tap of the feedback filter to zero.
A mixed-signal implementation has been chosen for the parallel MDFE, in which coefficients for the 9-tap feedback filter are adapted in the digital domain by 10-bit up/down counters; 6-bit current-mode D/A converters convert the digital coefficients to differential current signals, which are summed with the forward equalizer (FE) output, and a flash A/D makes decisions and generates error signals for adaptation.
In this thesis, a description of the parallel structure and the adaptation algorithm is presented with behavioral-level verification. The circuit design and layout were carried out in an HP 1.2 µm n-well CMOS process. The designs of the high-speed counter and the current-mode D/A are discussed. HSPICE simulations show that a symbol rate of 100 Mb/s for the feedback equalizer is readily achieved. / Graduation date: 1997
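The counter-based adaptation can be modelled, much simplified, as a sign-sign LMS update of the feedback taps (the channel coefficients, tap count, and counter step size below are assumptions for illustration, not the thesis's design values):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simplified sketch of counter-based tap adaptation in a DFE: each
# feedback tap is an up/down counter nudged by sign(error) * sign(past
# decision), i.e. sign-sign LMS, so no multiplier is needed in hardware.
n_taps, lsb, n = 4, 1 / 512, 20000       # small LSB step, as a 10-bit
channel = np.array([1.0, 0.4, -0.2, 0.1, -0.05])   # counter would give
symbols = rng.choice([-1.0, 1.0], size=n)
received = np.convolve(symbols, channel)[:n]        # main cursor + ISI

counters = np.zeros(n_taps)              # digital tap registers
decisions = np.zeros(n)
errors = 0
for i in range(n):
    past = decisions[max(0, i - n_taps):i][::-1]
    past = np.pad(past, (0, n_taps - len(past)))
    y = received[i] - counters @ past    # subtract fed-back ISI
    decisions[i] = 1.0 if y >= 0 else -1.0
    e = y - decisions[i]                 # decision-directed error
    counters += lsb * np.sign(e) * np.sign(past)   # up/down counter step
    if i > n // 2 and decisions[i] != symbols[i]:
        errors += 1

print("taps:", np.round(counters, 2), "late errors:", errors)
```

The counters converge toward the channel's postcursor coefficients, after which the fed-back ISI is cancelled and decisions are error-free.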
|
156 |
Design of high-speed low-power analog CMOS decision feedback equalizers. Su, Wenjun. 08 July 1996 (has links)
A decision feedback equalizer (DFE) is an effective method for removing inter-symbol interference (ISI) from a disk-drive read channel. Analog IC implementations of DFEs potentially offer higher speed, smaller die area, and lower power consumption than their digital counterparts.
Most available DFE equalizers have been realized using digital FIR filters preceded by a flash A/D converter; both the FIR filter and the flash A/D converter are major contributors to power dissipation. This project instead focuses on analog IC implementations of the DFE to achieve high speed and low power consumption. Specifically, it covers the design of a large-input, highly linear voltage-to-current converter, a high-speed low-power 6-bit comparator, and a high-speed low-power 6-bit current-steering D/A converter.
The design and layout of the proposed analog equalizer were carried out in a 1.2 µm n-well CMOS process. HSPICE simulations show that an analog DFE with a 100 MHz clock frequency and 6-bit accuracy can be readily achieved. The power consumption of all the analog circuits is only about 24 mW under a single 5 V power supply. / Graduation date: 1997
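The current-steering D/A converter block can be sketched behaviorally as binary-weighted current sources steered by the input code (the unit current below is an invented placeholder; the thesis's transistor-level design is not reproduced):

```python
# Toy behavioral model of a 6-bit binary-weighted current-steering DAC:
# each bit of the code steers one binary-weighted current source onto the
# output, and the output is their sum. I_LSB is an assumed value.
I_LSB = 10e-6  # unit current source, amperes (illustrative)

def dac_current(code: int) -> float:
    """Return the summed output current for a 6-bit input code."""
    if not 0 <= code < 64:
        raise ValueError("code must fit in 6 bits")
    return sum(I_LSB * (1 << bit) for bit in range(6) if code & (1 << bit))

print(f"full scale: {dac_current(63) * 1e6:.0f} uA")
```

In the ideal model the transfer curve is exactly linear and monotonic; the real design effort lies in matching the current sources well enough to keep 6-bit accuracy at speed.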
|
157 |
Exploiting Requirements Variability for Software Customization and Adaptation. Lapouchnian, Alexei. 09 June 2011 (has links)
The complexity of software systems is exploding, along with their use and application in new domains. Managing this complexity has become a focal point for research in Software Engineering. One direction for research in this area is developing techniques for designing adaptive software systems that self-optimize, self-repair, self-configure and self-protect, thereby reducing maintenance costs, while improving quality of service.
This thesis presents a requirements-driven approach for developing adaptive and customizable systems. Requirements goal models are used as a basis for capturing problem variability, leading to software designs that support a space of possible behaviours – all delivering the same functionality. This space can be exploited at system deployment time to customize the system on the basis of user preferences. It can also be used at runtime to support system adaptation if the current behaviour of the running system is deemed to be unsatisfactory.
The contributions of the thesis include a framework for systematically generating designs from high-variability goal models. Three complementary design views are generated: configurational view (feature model), behavioural view (statecharts) and an architectural view (parameterized architecture). The framework is also applied to the field of business process management for intuitive high-level process customization.
In addition, the thesis proposes a modeling framework for capturing domain variability through contexts and applies it to goal models. A single goal model is used to capture requirements variations in different contexts. Models for particular contexts can then be automatically generated from this global requirements model. As well, the thesis proposes a new class of requirements-about-requirements called awareness requirements. Awareness requirements are naturally operationalized through feedback controllers – the core mechanisms of every adaptive system. The thesis presents an approach for systematically designing monitoring, analysis/diagnosis, and compensation components of a feedback controller, given a set of awareness requirements. Situations requiring adaptation are explicitly captured using contexts.
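A minimal sketch of how an awareness requirement might be operationalized as a monitor-analyze-compensate loop (the requirement, class names, and compensation action below are invented for illustration, not taken from the thesis):

```python
from collections import deque

# Toy feedback controller for an invented awareness requirement:
# "the success rate of task T stays above 95%". Monitoring collects
# recent outcomes; analysis checks the requirement; compensation switches
# the system to another behaviour from its variability space.
class AwarenessController:
    def __init__(self, threshold=0.95, window=100):
        self.threshold = threshold
        self.outcomes = deque(maxlen=window)   # Monitor: recent outcomes
        self.variant = "high_quality"          # current behaviour variant

    def record(self, success: bool):
        self.outcomes.append(success)
        self.reconfigure_if_needed()

    def success_rate(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def reconfigure_if_needed(self):
        # Analysis/diagnosis + compensation in one step for brevity.
        if self.success_rate() < self.threshold and self.variant != "degraded":
            self.variant = "degraded"

ctrl = AwarenessController()
for ok in [True] * 80 + [False] * 20:
    ctrl.record(ok)
print(ctrl.variant, round(ctrl.success_rate(), 2))
```

Once the observed rate falls below the threshold, the controller abandons the current behaviour for an alternative, which is the runtime use of the behaviour space described above.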
|
159 |
Modeling Continuous Emotional Appraisals of Music Using System Identification. Korhonen, Mark. January 2004 (has links)
The goal of this project is to apply system identification techniques to model people's perception of emotion in music as a function of time. Emotional appraisals of six selections of classical music are measured from volunteers who continuously quantify emotion using the dimensions valence and arousal. Also, features that communicate emotion are extracted from the music as a function of time. By treating the features as inputs to a system and the emotional appraisals as outputs of that system, linear models of the emotional appraisals are created. The models are validated by predicting a listener's emotional appraisals of a musical selection (song) unfamiliar to the system. The results of this project show that system identification provides a means to improve previous models for individual songs by allowing them to generalize emotional appraisals for a genre of music. The average <i>R</i>² statistic of the best model structure in this project is 7.7% for valence and 75.1% for arousal, which is comparable to the <i>R</i>² statistics for models of individual songs.
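The identify-then-validate workflow can be sketched on synthetic data (the series below stand in for extracted music features and emotional appraisals; only the method, not the data or model orders, mirrors the project):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy system-identification sketch: fit a linear ARX model predicting an
# output (standing in for an arousal appraisal) from a lagged input
# (standing in for a music feature) by least squares, then score an
# unseen sequence with R^2.
def make_series(n):
    u = rng.standard_normal(n)                       # input "feature"
    y = np.zeros(n)
    for t in range(2, n):
        y[t] = 0.6 * y[t-1] + 0.5 * u[t] + 0.2 * u[t-1] \
               + 0.05 * rng.standard_normal()
    return u, y

def arx_regressors(u, y):
    # Regressor columns: y[t-1], u[t], u[t-1]; target: y[t]
    X = np.column_stack([y[1:-1], u[2:], u[1:-1]])
    return X, y[2:]

u_tr, y_tr = make_series(2000)
X, target = arx_regressors(u_tr, y_tr)
theta, *_ = np.linalg.lstsq(X, target, rcond=None)   # least-squares fit

u_te, y_te = make_series(500)                        # unseen "song"
X_te, t_te = arx_regressors(u_te, y_te)
pred = X_te @ theta
r2 = 1 - np.sum((t_te - pred) ** 2) / np.sum((t_te - t_te.mean()) ** 2)
print("held-out R^2:", round(float(r2), 3))
```

Validating on a sequence the model never saw is what distinguishes a generalizing model of a genre from a model fitted to one song.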
|
160 |
Relationships Between Motor Unit Anatomical Characteristics and Motor Unit Potential Statistics in Healthy Muscles. Emrani, Mahdieh Sadat. January 2005 (has links)
The main goal of this thesis was to discover the relationships between motor unit (MU) characteristics and motor unit potential (MUP) features. To reach this goal, several features describing the anatomical structure of the muscle were introduced, along with features representing specific properties of the EMG signal detected from that muscle. Since information regarding the underlying anatomy is not available from real data, a physiologically based muscle model was used to extract the required features. This muscle model stands out from others by providing acquisition schemes similar to those used by physicians in real clinical settings and by modelling the interactions among different volume conductor factors and the collection of MUs in the muscle in a realistic way. With these features available, several relationship-discovery techniques, including correlation analysis and a pattern discovery technique (PDT), were used to reveal relationships between MU features and MUP features, and several algorithms and new statistics were defined to interpret the results properly. The results obtained from correlation analysis and the PDT were similar to each other, and suggested that to maximize the inter-relationships between MUP features and MU features, MUPs could be filtered based on their slope values; specifically, MUPs with slopes lower than 0.6 V/s could be excluded. Additionally, PDT results showed that high-slope MUPs were not as informative about the underlying MU and could be excluded to maximize the relationships between MUP features and MU characteristics. Certain MUP features were determined to be highly related to certain MU characteristics: MUP <em>area</em> and <em>duration</em> were shown to be the best representative features for MU size and <em>average fiber density</em>, respectively.
For the distribution of fiber diameter in the MU, <em>duration</em> and <em>number of turns</em> were determined to best reflect <em>mean fiber diameter</em> and <em>stdv of fiber diameter</em>, respectively.
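The slope-based exclusion rule can be illustrated on synthetic data (everything below except the 0.6 V/s cutoff is invented; no simulated-muscle-model data are reproduced):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy illustration: simulated "MUP area" tracks "MU size" cleanly for
# high-slope MUPs but is swamped by noise for low-slope ones, so excluding
# MUPs below a slope cutoff raises the feature correlation.
n = 500
mu_size = rng.uniform(1, 10, n)                 # simulated MU sizes
slope = rng.uniform(0.1, 3.0, n)                # simulated MUP slopes, V/s
noise_sd = np.where(slope < 0.6, 6.0, 0.3)      # low slope => unreliable
area = 2.0 * mu_size + noise_sd * rng.standard_normal(n)

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

keep = slope >= 0.6                             # the exclusion rule
r_all = pearson(mu_size, area)
r_filtered = pearson(mu_size[keep], area[keep])
print(f"all MUPs: r={r_all:.3f}  slope>=0.6 only: r={r_filtered:.3f}")
```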
|