261 |
Subband Adaptive Filtering Algorithms And Applications
Sridharan, M K, 06 1900
In a system identification scenario, a linear approximation of the system, modelled by its impulse response, is estimated in real time by gradient-type Least Mean Square (LMS) or Recursive Least Squares (RLS) algorithms. In recent applications such as acoustic echo cancellation, the order of the impulse response to be estimated is very high; these traditional approaches become inefficient and real-time implementation difficult. Alternatively, the system is modelled by a set of shorter adaptive filters operating in parallel on subsampled signals. This approach, referred to as subband adaptive filtering, is expected not only to reduce the computational complexity but also to improve the convergence rate of the adaptive algorithm. In practice, however, different subband adaptive algorithms have to be used to enhance the performance with respect to complexity, convergence rate and processing delay. A single subband adaptive filtering algorithm which outperforms the full band scheme in all applications is yet to be realized.
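As a rough illustration of the idea described above (and not of the specific algorithms proposed in this thesis), the sketch below identifies an unknown impulse response with independent NLMS filters running on decimated subband signals. The two-band analysis filters, the filter lengths and the step size are all hypothetical choices made only for the example.

```python
import numpy as np
from scipy.signal import firwin, lfilter

rng = np.random.default_rng(0)
sig_len, order = 4000, 64
h_true = rng.standard_normal(order) * np.exp(-0.1 * np.arange(order))  # unknown system
x = rng.standard_normal(sig_len)
d = lfilter(h_true, 1.0, x)                      # desired (echo) signal

# Hypothetical 2-band analysis filters (half-band low/high pass), decimation by 2.
h_lp = firwin(32, 0.5)
h_hp = h_lp * (-1.0) ** np.arange(32)            # high-pass by modulation of the low-pass

def analyze(sig):
    return [lfilter(h, 1.0, sig)[::2] for h in (h_lp, h_hp)]

x_bands, d_bands = analyze(x), analyze(d)

# One shorter NLMS filter per subband (half the full-band order).
def nlms(xb, db, taps=order // 2, mu=0.5, eps=1e-6):
    w = np.zeros(taps)
    e = np.zeros(len(xb))
    for n in range(taps, len(xb)):
        u = xb[n - taps:n][::-1]                 # most recent samples first
        e[n] = db[n] - w @ u
        w += mu * e[n] * u / (u @ u + eps)       # normalized LMS update
    return e

errors = [nlms(xb, db) for xb, db in zip(x_bands, d_bands)]
for k, e in enumerate(errors):
    print(f"band {k}: residual power {np.mean(e[500:] ** 2):.2e}")
```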
This thesis studies subband adaptive filtering techniques and explores the possibilities of better algorithms for performance improvement. Three different subband adaptive algorithms have been proposed and their performance has been verified through simulations. These algorithms have been applied to acoustic echo cancellation and EEG artefact minimization problems.
Details of the work
To start with, the fast FIR filtering scheme introduced by Mou and Duhamel has been generalized. A Perfect Reconstruction Filter Bank (PRFB) is used to model the linear FIR system. The structure offers efficient implementation with reduced arithmetic complexity. By using a PRFB in which non-adjacent filters are non-overlapping, many channel filters can be eliminated from the structure. This reduces the complexity of the structure further, but introduces an approximation in the model. The modelling error depends on the stop-band attenuation of the filters of the PRFB. The error introduced by this approximation is tolerable for applications like acoustic echo cancellation.
The filtered output of the modified generalized fast filtering structure is given by
(formula)
where Pk(z) is the main channel output, Pk,k+1(z) is the output of the auxiliary channel filters at the reduced rate, Gk(z) is the k-th synthesis filter and M is the number of channels in the PRFB. An adaptation scheme is developed for adapting the main channel filters; the auxiliary channel filters are derived from the main channel filters.
Secondly, the aliasing problem of the classical structure is reduced without using cross filters. Aliasing components in the estimated signal result in very poor steady-state performance of the classical structure. Attempts to eliminate the aliasing have reduced the computational gain margin and the convergence rate. Any attempt to estimate the subband reference signals from the aliased subband input signals results in aliasing. An analysis filter Hk(z) having the following antialiasing property
(formula)
can avoid aliasing in the input subband signal. The asymmetry of the frequency response prevents the use of real analysis filters. In the investigation presented in this thesis, complex analysis filters and real synthesis filters are used in the classical structure to reduce the aliasing errors and to achieve a superior convergence rate.
A PRFB is traditionally used in implementing the Interpolated FIR (IFIR) structure. These filters may not be ideal for processing an input signal for an adaptive algorithm. As the third contribution, the IFIR structure is modified using discrete finite frames. The model of an FIR filter s is given by Fc, with c = Hs. The columns of the matrix F form a frame, with the rows of H as its dual frame. The matrix elements can be arbitrary except that the transformation should be implementable as a filter bank. This freedom is used to optimize the filter bank, with knowledge of the input statistics, for initial convergence rate enhancement.
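The frame relation used above can be checked numerically: if the rows of H form a dual frame of the columns of F, then FHs recovers s for any filter s. The snippet below is a minimal sketch using the canonical dual (pseudo-inverse); the dimensions and the random choice of F are illustrative assumptions, not the optimized filter bank of the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 32, 48                       # filter length N, K >= N expansion coefficients
F = rng.standard_normal((N, K))     # columns of F: a (redundant) frame for R^N
H = np.linalg.pinv(F)               # rows of H: the canonical dual frame (H = F^+)

s = rng.standard_normal(N)          # an arbitrary FIR filter
c = H @ s                           # analysis: c = H s
s_rec = F @ c                       # synthesis: s = F c

print(np.allclose(s, s_rec))        # True: F H = I_N, so the model is exact
```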
Next, the proposed subband adaptive algorithms are applied to the acoustic echo cancellation problem with realistic parameters. Speech input and a sufficiently long Room Impulse Response (RIR) are used in the simulations. The Echo Return Loss Enhancement (ERLE) and the steady-state error spectrum are used as performance measures to compare these algorithms with the full band scheme and other representative subband implementations.
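ERLE is conventionally computed as the ratio of echo power to residual-echo power in decibels. The helper below shows this standard block definition over a signal segment; it is a generic utility, not code from the thesis.

```python
import numpy as np

def erle_db(echo, residual, eps=1e-12):
    """Echo Return Loss Enhancement: 10*log10(P_echo / P_residual)."""
    p_echo = np.mean(np.square(echo))
    p_res = np.mean(np.square(residual))
    return 10.0 * np.log10((p_echo + eps) / (p_res + eps))

# e.g. erle_db(d, d - y_hat) for desired echo d and canceller output y_hat
```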
Finally, a subband adaptive algorithm is used to minimize EOG (electrooculogram) artefacts in the measured EEG (electroencephalogram) signal. An IIR filter bank providing sufficient isolation between the frequency bands is used in the modified IFIR structure, and this structure has been employed in the artefact minimization scheme. The estimation error in the high frequency range has been reduced and the output signal-to-noise ratio has been increased by a couple of dB over that of the fullband scheme.
Conclusions
Efforts to find elegant subband adaptive filtering algorithms will continue in the future. In this thesis, the generalized filtering algorithm offers a gain in filtering complexity of the order of M/2 and reduced misadjustment. The complex classical scheme offered an improved convergence rate, reduced misadjustment and computational gains of the order of M/4. The modifications of the IFIR structure using discrete finite frames made it possible to eliminate the processing delay and enhance the convergence rate. Typical performance of the complex classical scheme for speech input in a realistic scenario (8-channel case) offers an ERLE of more than 45 dB. The subband approach to EOG artefact minimization in the EEG signal was found to be superior to its fullband counterpart.
(Refer PDF file for Formulas)
|
262 |
開放經濟體下納入信用市場之匯率動態 / Exchange Rate Dynamics in a Small Open Economy with Credit Market
Lin, Yu-Sheng (林育聖), Unknown Date
In the literature, a considerable body of theoretical and empirical work has investigated the credit channel of the monetary transmission mechanism. This dissertation extends the Bernanke and Blinder (1988) model to an open-economy setting with a flexible exchange rate and perfect capital mobility. By means of this framework, we examine the exchange rate dynamics and the adjustment of real output. It turns out that, with a significant credit channel effect, the exchange rate puzzle may occur in the short run and in the long run. Moreover, in contrast to Dornbusch (1976), this dissertation shows that, depending upon the strength of the credit channel effect, overshooting, undershooting and counter-shooting impact effects may occur when international capital mobility is perfect.
|
263 |
Violence, sexualité et double : les représentations féminines dans Perfect Blue et Paprika de Kon Satoshi
Scott, Gabrielle, 04 1900
The present thesis consists of a thematic analysis of the feminine representations in Satoshi Kon's work, from Perfect Blue to Paprika. Our objective is to demonstrate that these female depictions reflect the status of women in contemporary Japanese society. To this end, we examined the director's films according to feminist film theory, and we divided our analysis into three themes: violence, sexuality and the double.
It appears that the feminine representations in Kon's feature films do indeed have parallels in present-day Japanese society. The director uses figures and narrative motifs common to Japan and to anime in order to produce and reproduce gender stereotypes. In addition, he uses filmic elements and the particularities of the anime medium to support these definitions of sexual roles.
This study is original in its feminist and psychoanalytic approach, which is rarely employed by anime theorists. Studies of this medium are fairly recent and usually focus on the anime aesthetic or on the establishment of a Japanese national identity rather than on the construction of gender in a popular-culture medium.
|
264 |
Optimal use of resources: classic foraging theory, satisficing and smart foraging – modelling foraging behaviors of elk
Weclaw, Piotr, Unknown Date
No description available.
|
265 |
Planar Lensing Lithography: Enhancing the Optical Near Field
Melville, David O. S., January 2006
In 2000, a controversial paper by John Pendry surmised that a slab of negative-index material could act as a perfect lens, projecting images with resolution detail beyond the limits of conventional lensing systems. A thin silver slab was his realistic suggestion for a practical near-field superlens, a 'poor man's perfect lens'. The superlens relied on plasmonic resonances rather than negative refraction to provide imaging. This silver superlens concept was experimentally verified by the author using a novel near-field lithographic technique called Planar Lensing Lithography (PLL), an extension of the previously developed Evanescent Near-Field Optical Lithography (ENFOL) technique. This thesis covers the computational and experimental efforts to test the performance of a silver superlens using PLL, and to compare it with the results produced by ENFOL.
The PLL process was developed by creating metal-patterned conformable photomasks on glass coverslips and adapting them for use with an available optical exposure system. After sub-diffraction-limited ENFOL results were achieved with this system, additional spacer and silver layers were deposited onto the masks to produce a near-field test platform for the silver superlens. Imaging through a silver superlens was achieved in a near-field lithography environment for sub-micron, sub-wavelength, and sub-diffraction-limited features. The performance of PLL masks with 120-, 85-, 60-, and 50-nm-thick silver layers was investigated. Features on periods down to 145 nm have been imaged through a 50-nm-thick silver layer into a thin photoresist using a broadband mercury arc lamp. The quality of the imaging was improved by using 365-nm narrowband exposures; however, resolution enhancement was not achieved.
Multiple-layer silver superlensing has also been experimentally investigated for the first time; it was proposed that a multi-layered superlens could achieve better resolution than a single-layer lens of the same total silver thickness. Using a PLL mask with two 30-nm-thick silver layers gave 170-nm-pitch sub-diffraction-limited resolution, while for a single-layer mask of the same total thickness (60 nm) resolution was limited to a 350-nm pitch. The proposed resolution enhancement was verified; however, pattern fidelity was reduced as a result of additional surface roughness.
Simulation and analytical techniques have been used to investigate and understand the enhancements and limitations of the PLL technique. A Finite-Difference Time-Domain (FDTD) tool was written to produce full-vector numerical simulations, providing both broad- and narrowband results and allowing image quality as a function of grating period to be investigated. An analytical T-matrix method was also derived to facilitate computationally efficient performance analysis of grating transmission through PLL stacks. Both methods showed that there is a performance advantage for PLL over conventional near-field optical lithography; however, the performance of the system varies greatly with grating period. The advantages of PLL are most prominent for multi-layer lenses. The work of this thesis indicates that the utilisation of plasmonic resonances in PLL and related techniques can enhance the performance of near-field lithography.
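For context on the plasmonic-resonance mechanism mentioned above, the relations below summarize the standard picture from Pendry's 2000 proposal; they are general background in the usual notation (transverse wavevector k_x, free-space wavenumber k_0, slab thickness d), not expressions taken from this thesis.

```latex
% An evanescent Fourier component decays in free space as
\kappa = \sqrt{k_x^2 - k_0^2}, \qquad E(z) \propto e^{-\kappa z}, \quad k_x > k_0 .
% Pendry's ideal slab (\varepsilon = \mu = -1, thickness d) transmits such components with
T_{\mathrm{slab}} = e^{+\kappa d},
% exactly compensating the decay accumulated over a distance d. A real silver film
% (\mu = 1) approximates this enhancement for TM (p-polarised) near fields in the
% electrostatic limit k_x \gg k_0, near the surface-plasmon condition
\varepsilon_{\mathrm{Ag}}(\omega) \approx -\varepsilon_{\mathrm{dielectric}} .
```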
|
266 |
聯合行為下寬恕政策的有效性分析 / The Effectiveness Analysis of Leniency Policy under Cartel
Chen, Tzu Ling (陳姿伶), Unknown Date
The leniency policy plays an indispensable role in thwarting cartel formation. To maintain the fairness of market competition, most countries have successively brought this policy into their antitrust legislation. After the enforcement of the policy, the involved firms may have an incentive to self-report and provide evidence to the antitrust authority, which can then gather enough evidence to convict those firms of being cartel members.
In this paper, we develop two game-theoretic models and use the concepts of subgame perfect equilibrium and sequential equilibrium to discuss the efficiency of the leniency policy under general conditions, and the effectiveness of the policy under information asymmetry. We show that the most efficient equilibrium for society and the authorities is reached when the cartel members self-report under the enforcement of the leniency policy. Moreover, by setting up an appropriate fine interval, self-reporting can serve as a signal that allows the authorities to separate the types of the involved firms.
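As a purely illustrative toy example of the kind of reasoning involved (the numbers and payoff structure below are hypothetical and are not taken from the dissertation's models), the snippet enumerates a symmetric two-firm "report vs. stay silent" game and checks when mutual self-reporting is a Nash equilibrium under a leniency discount.

```python
import itertools

# Hypothetical parameters (not from the dissertation)
profit = 10.0   # cartel profit per firm
F_full = 8.0    # full fine when convicted without cooperating
F_len = 1.0     # reduced fine for a firm that self-reports
p_detect = 0.3  # probability the authority detects the cartel on its own

def payoff(own, other):
    """Expected payoff of a firm playing `own` against `other` ('R' or 'S')."""
    if own == "R":
        return profit - F_len                    # leniency applies to reporters
    if other == "R":
        return profit - F_full                   # convicted via the partner's report
    return profit - p_detect * F_full            # only caught with probability p_detect

def is_nash(a, b):
    return (payoff(a, b) >= payoff("R" if a == "S" else "S", b) and
            payoff(b, a) >= payoff("R" if b == "S" else "S", a))

for a, b in itertools.product("RS", repeat=2):
    print(a, b, "Nash" if is_nash(a, b) else "")
```

With these numbers only (R, R), mutual self-reporting, survives as an equilibrium, which mirrors the intuition that a sufficiently generous leniency discount makes reporting dominant.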
|
267 |
Array Signal Processing for Beamforming and Blind Source Separation
Moazzen, Iman, 30 April 2013
A new broadband beamformer composed of nested arrays (NAs), multi-dimensional (MD) filters, and multirate techniques is proposed for both linear and planar arrays. It is shown that this combination results in a frequency-invariant response. For a given number of sensors, the advantage of using NAs is that the effective aperture for low temporal frequencies is larger than in the case of uniform arrays. This leads to high spatial selectivity at low frequencies. For a given aperture size, the proposed beamformer can be implemented with significantly fewer sensors and less computation than uniform arrays, with only a slight deterioration in performance. Taking advantage of the Noble identity and polyphase structures, the proposed method can be implemented efficiently. Simulation results demonstrate the good performance of the proposed beamformer in terms of frequency-invariant response and computational requirements.
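For readers unfamiliar with array beamforming, the snippet below computes the spatial response of a simple narrowband delay-and-sum beamformer for a uniform linear array. It is generic background only; the element count, spacing and steering angle are arbitrary, and it does not implement the nested-array broadband design proposed here.

```python
import numpy as np

# Narrowband delay-and-sum beamformer for a uniform linear array (ULA).
M, d_over_lambda, steer_deg = 8, 0.5, 20.0        # arbitrary illustrative values
m = np.arange(M)

def steering_vector(theta_deg):
    phase = 2j * np.pi * d_over_lambda * m * np.sin(np.deg2rad(theta_deg))
    return np.exp(phase)

w = steering_vector(steer_deg) / M                # weights steer the main lobe to 20 degrees

angles = np.linspace(-90, 90, 361)
response = np.array([abs(np.conj(w) @ steering_vector(t)) for t in angles])
response_db = 20 * np.log10(response / response.max())

print(f"peak at {angles[np.argmax(response_db)]:.1f} degrees")  # approximately 20.0
```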
The broadband beamformer requires a filter bank with a non-compatible set of sampling rates, which is challenging to design. To address this issue, a filter bank design approach is presented. The approach is based on formulating the design problem as an optimization problem with a performance index consisting of a term depending on perfect reconstruction (PR) and a term depending on the magnitude specifications of the analysis filters. The design objectives are to achieve almost perfect reconstruction and to have the analysis filters satisfy prescribed frequency specifications. Several design examples are considered to show the satisfactory performance of the proposed method.
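A common way to write such a composite performance index is shown below; this is only a plausible generic form with a weighting parameter α, a target delay D, an overall distortion transfer function T and analysis filters H_k, and it is not the exact functional used in the thesis.

```latex
J(\mathbf{h}) \;=\; \alpha \int_{0}^{\pi} \bigl| T(e^{j\omega}) - e^{-j\omega D} \bigr|^{2}\, d\omega
\;+\; (1-\alpha) \sum_{k} \int_{\Omega_k^{\mathrm{stop}}} \bigl| H_k(e^{j\omega}) \bigr|^{2}\, d\omega ,
\qquad 0 \le \alpha \le 1 .
```

Minimizing an index of this kind trades off near-perfect reconstruction (the first term) against the stop-band behaviour of the analysis filters (the second term).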
A new blind multi-stage space-time equalizer (STE) is proposed which can separate narrowband sources from a mixed signal. Neither the direction of arrival (DOA) nor a training sequence is assumed to be available at the receiver. The beamformer and equalizer are jointly updated to combat both co-channel interference (CCI) and inter-symbol interference (ISI) effectively. Using subarray beamformers, the possibly time-varying DOA of the captured signal is estimated and tracked. The estimated DOA is used by the beamformer to provide strong CCI cancellation. In order to significantly alleviate inter-stage error propagation, a mean-square-error sorting algorithm is used which assigns detected sources to different stages according to the reconstruction error at each stage. Further, to speed up convergence, a simple yet efficient DOA estimation algorithm is proposed which provides good initial DOAs for the multi-stage STE. Simulation results illustrate the good performance of the proposed STE and show that it can effectively deal with changing DOAs and time-variant channels.
|
268 |
Optimal use of resources: classic foraging theory, satisficing and smart foraging – modelling foraging behaviors of elk
Weclaw, Piotr, 06 1900
It is generally accepted that the Marginal Value Theorem (MVT) describes optimal foraging strategies. Some research findings, however, indicate that in natural conditions foragers do not always behave according to the MVT. To address this inconsistency, in a series of computer simulations I examined the behaviour of four types of foragers having specific foraging efficiencies and using the MVT and alternative strategies in 16 simulated landscapes in an ideal environment (no intra- and inter-species interactions). I used data on elk (Cervus elaphus) to construct the virtual forager. Contrary to the widely accepted understanding of the MVT, I found that in environments with the same average patch quality and varying average travel times between patches, the patch residence times of some foragers were not affected by travel times. I propose a mechanism responsible for this observation and formulate the perfect forager theorem (PFT). I also introduce the concepts of a foraging coefficient (F) and the forager's hub, and formulate a model to describe the relationship between the perfect forager and other forager types. I identify situations where a forager aiming to choose an optimal foraging strategy and maximize its cumulative consumption should not follow the MVT, and I describe these situations in the form of a mathematical model. I also demonstrate that the lack of biological realism and environmental noise are not required to explain the deviations from the MVT observed in field research, and explain the importance of scale in optimal foraging behaviour. I further demonstrate that smart foraging, a set of rules based on key ecological concepts (the functional response curve (FRC), satisficing and the MVT) that incorporates time limitations, should allow for fitness maximization; thus, it should be an optimal behaviour in the context of natural selection. Finally, I demonstrate the importance of the FRC as a driver of foraging behaviours and argue that animals should focus more on increasing the slope of their FRC than on choosing a specific foraging strategy. Natural selection should therefore favour foragers with steep FRCs. My findings introduce new concepts in behavioural ecology, have implications for animal ecology and inform wildlife management.
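For reference, the classic MVT condition referred to above says a forager should leave a patch when the marginal gain rate drops to the long-term average rate: with gain function g(t) and travel time τ, the optimal residence time t* satisfies g'(t*) = g(t*)/(t* + τ). The sketch below solves this numerically for a hypothetical saturating gain curve; the functional form and numbers are illustrative and are not the elk data used in the thesis.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical diminishing-returns gain curve g(t) = a*t / (b + t)
a, b, tau = 100.0, 2.0, 5.0          # asymptotic gain, half-saturation time, travel time

g = lambda t: a * t / (b + t)
g_prime = lambda t: a * b / (b + t) ** 2

# MVT: leave the patch when g'(t) equals the overall intake rate g(t) / (t + tau)
f = lambda t: g_prime(t) - g(t) / (t + tau)
t_star = brentq(f, 1e-6, 1e3)        # here t* = sqrt(b * tau) analytically

print(f"optimal patch residence time: {t_star:.2f}")
print(f"long-term intake rate: {g(t_star) / (t_star + tau):.2f}")
```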
|
269 |
Application of economic analysis to evaluate various infectious diseases in Vietnam
Phuong, Tran Thi Thanh, January 2017
This thesis is composed of two economic evaluations: one trial-based study and one model-based study. In a study published in Clinical Infectious Diseases in 2011, a team of OUCRU investigators found that immediate antiretroviral therapy (ART) was not associated with improved 9-month survival in HIV-associated TBM patients (HR, 1.12; 95% CI, 0.81 to 1.55; P = .50). An economic evaluation of this clinical trial was conducted to examine the cost-effectiveness of immediate ART (initiated within 1 week of study entry) versus deferred ART (initiated after 2 months of TB treatment) in HIV-associated TBM patients. Over 9 months, immediate ART did not differ from deferred ART in terms of costs and QALYs gained. Late initiation of ART during TB and HIV treatment for HIV-positive TBM patients proved to be the most cost-effective strategy.
Increasing resistance of Plasmodium falciparum malaria to artemisinin poses a major threat to the global effort to eliminate malaria. Artemisinin combination therapies (ACTs) are currently the most efficacious first-line therapies for uncomplicated malaria. However, resistance to both artemisinin and the partner drugs is developing, and this could result in increasing morbidity, mortality, and economic costs. One strategy advocated for delaying the development of resistance to the ACTs is the wide-scale deployment of multiple first-line therapies (MFT). A previous modelling study found that the use of multiple first-line therapies reduced long-term treatment failures compared with strategies in which a single first-line ACT was recommended. Motivated by the results of that modelling study, published in the Lancet, the cost-effectiveness of MFT versus single first-line therapies was assessed in settings with different transmission intensities, treatment coverages and fitness costs of resistance, using a previously developed model of malaria dynamics and a literature-based cost estimate of changing antimalarial drug policy at the national level. This study demonstrates that the MFT strategies outperform the single first-line strategies in terms of costs and benefits across the wide range of epidemiological and economic scenarios considered.
The second analysis of the thesis is thus not only internationally relevant but also focused on healthcare practice in Vietnam. Together, these two studies add significant new cost-effectiveness evidence for Vietnam. The thesis presents the first trial-based economic evaluation in Vietnam to consider patient health-outcome measures for participants with cognitive limitations (tuberculous meningitis), to address missing data through multiple imputation, and to deal with the issue of censored cost data. Identifying these issues should support decision makers and stakeholders, including the pharmaceutical industry, in devising a guideline for implementing well-designed trial-based economic evaluations in Vietnam in the future. Another novelty of this thesis is the detailed costing of a change of drug regimen, which economic evaluations considering drug policy change often do not include; this cost, covering retraining of staff and publication of new guidelines, can be substantial for the healthcare system. The thesis documents the costs incurred by the Vietnamese government in changing the first-line treatment of malaria from a single first-line therapy (ACT) to multiple first-line therapies.
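Cost-effectiveness comparisons of the kind described above are usually summarised with an incremental cost-effectiveness ratio (ICER): the extra cost per extra QALY of one strategy over another. The snippet below shows the arithmetic with made-up numbers; it is illustrative only and uses none of the thesis's actual cost or outcome data, and the willingness-to-pay threshold shown is just a commonly cited rule of thumb.

```python
# Hypothetical strategies: (mean cost in USD, mean QALYs gained per patient)
deferred = (1200.0, 0.48)       # comparator
immediate = (1350.0, 0.50)      # intervention

d_cost = immediate[0] - deferred[0]
d_qaly = immediate[1] - deferred[1]
icer = d_cost / d_qaly          # incremental cost-effectiveness ratio

threshold = 3 * 2500            # e.g. three times a (hypothetical) GDP per capita
print(f"ICER = {icer:.0f} USD per QALY gained")
print("cost-effective at threshold" if icer <= threshold else "not cost-effective")
```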
|
270 |
Propriétés géométriques du nombre chromatique : polyèdres, structures et algorithmes / Geometric properties of the chromatic number: polyhedra, structure and algorithms
Benchetrit, Yohann, 12 May 2015
Computing the chromatic number and finding an optimal coloring of a perfect graph can be done efficiently, whereas both are NP-hard problems in general. Furthermore, testing perfection can be carried out in polynomial time. Perfect graphs are characterized by a minimal structure of their stable set polytope: the non-trivial facets are defined by clique inequalities only. Conversely, does a similar facet structure of the stable set polytope imply nice combinatorial and algorithmic properties of the graph? A graph is h-perfect if its stable set polytope is completely described by non-negativity, clique and odd-circuit inequalities. Statements analogous to the results on perfection are far from being understood for h-perfection, and negative results are missing. For example, testing h-perfection and determining the chromatic number of an h-perfect graph are unsolved. Besides, no upper bound is known on the gap between the chromatic and clique numbers of an h-perfect graph.
Our first main result states that the operations of t-minors keep h-perfection (this is a non-trivial extension of a result of Gerards and Shepherd on t-perfect graphs). We show that they also keep the Integer Decomposition Property of the stable set polytope, and use this to answer a question of Shepherd on 3-colorable h-perfect graphs in the negative. The study of minimally h-imperfect graphs with respect to t-minors may yield a combinatorial co-NP characterization of h-perfection. We review the currently known examples of such graphs, study their stable set polytopes and state several conjectures on their structure. On the other hand, we show that the (weighted) chromatic number of certain h-perfect graphs can be obtained efficiently by rounding up its fractional relaxation. This is related to conjectures of Goldberg and Seymour on edge-colorings. Finally, we introduce a new parameter on the complexity of the matching polytope and use it to give an efficient and elementary algorithm for testing h-perfection in line graphs.
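For concreteness, the polyhedral definition used above can be written explicitly. This is the standard formulation of h-perfection, stated here in the usual notation as general background rather than quoted from the thesis: a graph G = (V, E) is h-perfect when its stable set polytope STAB(G) equals

```latex
\mathrm{HSTAB}(G) \;=\; \Bigl\{\, x \in \mathbb{R}_{\ge 0}^{V} \;:\;
  \sum_{v \in K} x_v \le 1 \ \text{for every clique } K \subseteq V,\quad
  \sum_{v \in C} x_v \le \tfrac{|C|-1}{2} \ \text{for every odd circuit } C \,\Bigr\}.
```

Perfection is the special case in which the odd-circuit inequalities are implied by the clique and non-negativity inequalities, so that these alone describe STAB(G).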
|