  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
281

A new computational approach to the synthesis of fixed order controllers

Malik, Waqar Ahmad 10 October 2008 (has links)
The research described in this dissertation deals with an open problem concerning the synthesis of controllers of fixed order and structure. This problem is encountered in a variety of applications. Simply put, the problem is the determination of the set S of controller parameter vectors, K = (k1, k2, ..., kl), that render Hurwitz a family (indexed by F) of complex polynomials of the form {P0(s,a) + Σi=1..l Pi(s,a)ki, a ∈ F}, where the polynomials Pj(s,a), j = 0, ..., l, are given data. They are specified by the plant to be controlled, the structure of the controller desired and the performance that the controllers are expected to achieve. Simple examples indicate that the set S can be non-convex and even disconnected. While the determination of the non-emptiness of S is decidable and amenable to methods such as the quantifier elimination scheme, such methods have not been computationally tractable and, more importantly, do not provide a reasonable approximation for the set of controllers. Practical applications require the construction of a set of controllers that will enable a control engineer to check the satisfaction of performance criteria that may not be mathematically well characterized; transient performance criteria often fall into this category. From the practical viewpoint of the construction of approximations for S, this dissertation differs from earlier work in the literature on this problem. A novel feature of the proposed algorithm is the exploitation of the interlacing property of Hurwitz polynomials to provide arbitrarily tight outer and inner approximations to S. The approximation is given in terms of a union of polyhedral sets which are constructed systematically using the Hermite-Biehler theorem and generalizations of Descartes' rule of signs.
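The inner-approximation idea can be illustrated with a brute-force sketch (not the dissertation's algorithm): fix a plant, grid the controller parameter space, and keep the points whose closed-loop polynomial is Hurwitz. The PI-controlled plant below is a hypothetical example chosen only for illustration.

```python
import numpy as np

def is_hurwitz(coeffs):
    """A polynomial is Hurwitz if every root lies in the open left half-plane."""
    return bool(np.all(np.real(np.roots(coeffs)) < -1e-12))

# Hypothetical example: PI control of P(s) = 1/(s^2 + s + 1).
# Closed-loop polynomial: s^3 + s^2 + (1 + kp) s + ki,
# i.e. P0(s) + kp*P1(s) + ki*P2(s) with P0 = s^3 + s^2 + s, P1 = s, P2 = 1.
def closed_loop(kp, ki):
    return [1.0, 1.0, 1.0 + kp, ki]

# Brute-force inner approximation of the stabilizing set S on a (kp, ki) grid.
stable = [(kp, ki)
          for kp in np.linspace(-2, 5, 71)
          for ki in np.linspace(-1, 5, 61)
          if is_hurwitz(closed_loop(kp, ki))]
print(len(stable) > 0)  # the stabilizing set is non-empty
```

Gridding scales poorly with the number of parameters l, which is exactly why the dissertation's polyhedral construction via the Hermite-Biehler theorem is attractive.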
282

O teorema de Lefschetz-Hopf e sua relação com outros teoremas clássicos da topologia / The Lefschetz-Hopf theorem and its relation to other classical theorems of topology

Galves, Ana Paula Tremura. January 2009 (has links)
Advisor: Maria Gorete Carreira Andrade / Committee: Denise de Mattos / Committee: Ermínia de Lourdes Campello Fanti / Abstract: In Topology, more specifically in Algebraic Topology, there are several classical results that are related in some way. In this work, we studied some of these results, namely the Lefschetz-Hopf Theorem, the Lefschetz Fixed Point Theorem, the Brouwer Fixed Point Theorem, the Jordan Curve Theorem and the classical Borsuk-Ulam Theorem. Moreover, our main objective was to exhibit the relationships among these theorems, starting from the Lefschetz-Hopf Theorem. / Master's
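As a concrete aside on the Brouwer Fixed Point Theorem: in one dimension it reduces to the intermediate value theorem, so a fixed point of any continuous self-map of [0, 1] can be located by bisection on g(x) = f(x) - x. A minimal sketch (the map cos(x) is an illustrative choice, not from the thesis):

```python
import math

def brouwer_fixed_point_1d(f, a=0.0, b=1.0, tol=1e-10):
    """Locate a fixed point of a continuous map f: [a, b] -> [a, b] by
    bisection on g(x) = f(x) - x; g(a) >= 0 and g(b) <= 0 always hold."""
    lo, hi = a, b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) - mid >= 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: f(x) = cos(x) maps [0, 1] into [cos(1), 1], a subset of [0, 1].
x = brouwer_fixed_point_1d(math.cos)
print(x)  # approx. 0.739085, the unique fixed point of cos
```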
283

Kritische Ereignisse und private Überschuldung. Eine quantitative Betrachtung des Zusammenhangs / Critical events and private over-indebtedness: a quantitative analysis of the relationship

Angel, Stefan, Heitzmann, Karin 09 1900 (has links) (PDF)
We examine whether critical life events (e.g. unemployment), or a financial shock triggered by such events, significantly increase the probability that private households become over-indebted (shock thesis). We further test whether the effect of critical events can be mitigated by cost-saving behaviour (coping thesis), and whether it depends on the household's initial financial and social situation (vulnerability thesis). The analysis draws on Austrian survey data (ECHP 1995 to 2001; EU-SILC 2004 to 2008), on which panel regression models are estimated. For the critical events examined, no direct effect on the probability of over-indebtedness can be demonstrated; a financial shock, however, does have a significant effect. The evidence for the coping thesis is weak, but remains stable after controlling for unobserved time-constant factors. Estimates testing the vulnerability thesis yield differing results depending on the vulnerability indicator used. The findings underline the complexity of how over-indebtedness arises: it can be attributed neither solely to households' consumption behaviour or cost-benefit considerations, nor solely to exogenous shocks.
284

Évaluation analytique de la précision des systèmes en virgule fixe pour des applications de communication numérique / Analytical approach for evaluation of the fixed point accuracy

Chakhari, Aymen 07 October 2014 (has links)
Compared with floating-point arithmetic, fixed-point arithmetic is advantageous in terms of cost and power consumption, but converting an algorithm initially specified in floating point into fixed point is a tedious task. A key step in this conversion is evaluating the accuracy of the fixed-point specification: shortening data words introduces quantization noise that propagates through the system and degrades the accuracy of the computation at the application's output. This loss of precision must therefore be controlled and evaluated in order to guarantee the integrity of the algorithm and meet the application's initial specifications. Traditionally, accuracy is evaluated by simulating the fixed-point implementation, an approach that requires large computing capacity and leads to prohibitive evaluation times. To avoid this problem, the work in this thesis focuses on accuracy evaluation through analytical models, i.e. analytical expressions that evaluate a defined precision metric. Analytical models have previously been proposed for linear time-invariant (LTI) systems and for non-LTI recursive and non-recursive linear systems; the objective of this thesis is to propose analytical models for digital communication systems and digital signal processing algorithms built from non-smooth, non-linear decision operators. In a first step, analytical models are provided for the accuracy of non-smooth decision operators and cascades of decision operators, founded on a characterization of how quantization errors propagate through the cascade. These models are applied to the fixed-point accuracy evaluation of the SSFE (Selective Spanning with Fast Enumeration) sphere-decoding algorithm used in MIMO (Multiple-Input Multiple-Output) transmission systems. In a second step, the accuracy of iterative structures of decision operators is addressed: the quantization errors induced by fixed-point arithmetic are characterized, and analytical models based on estimating an upper bound on the decision error probability are proposed, which reduces evaluation time. These models are applied to the fixed-point specification of the Decision Feedback Equalizer (DFE). The second aspect of the work concerns the optimization of fixed-point data word lengths. This optimization process minimizes the decision error probability for an FPGA (Field-Programmable Gate Array) implementation of the complex DFE algorithm under a given accuracy constraint: for each fixed-point specification, accuracy is evaluated with the proposed analytical models, and resource and power consumption on the FPGA are estimated with the Xilinx tools, enabling an adequate choice of data widths that trades accuracy against cost. The last part of the work addresses the fixed-point modeling of iterative decoding algorithms based on turbo decoding and LDPC (Low-Density Parity-Check) decoding. The proposed approach takes the specific structure of these algorithms into account, so that the quantities computed inside the decoder (and the operations on them) are quantized following an iterative approach. Moreover, the fixed-point representation used (based on the pair of dynamic range and total number of bits) differs from the conventional representation based on the numbers of bits assigned to the integer and fractional parts; choosing the dynamic range directly provides more flexibility, since it is no longer restricted to powers of two. Finally, memory-size reduction through saturation and truncation techniques is proposed in order to target low-complexity architectures.
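The quantization noise that drives this accuracy analysis can be sketched numerically: rounding data to a fixed-point grid with step Δ produces an error whose power the classical uniform-noise model puts at Δ²/12. A minimal illustration (the signal and word length are arbitrary choices, not taken from the thesis):

```python
import numpy as np

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with `frac_bits` fractional bits
    (quantization step = 2**-frac_bits)."""
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200_000)      # arbitrary test signal

frac_bits = 8
e = quantize(x, frac_bits) - x           # quantization error
measured = np.mean(e ** 2)               # measured noise power
step = 2.0 ** -frac_bits
predicted = step ** 2 / 12.0             # classical uniform-noise model

print(measured, predicted)               # the two agree closely
```

Simulation-based accuracy evaluation repeats such measurements through the whole system for every candidate word length, which is precisely the cost the thesis's analytical models avoid.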
285

Applications in Fixed Point Theory

Farmer, Matthew Ray 12 1900 (has links)
Banach's contraction principle is probably one of the most important theorems in fixed point theory. It has been used to develop much of the rest of fixed point theory. Another key result in the field is a theorem due to Browder, Göhde, and Kirk involving Hilbert spaces and nonexpansive mappings. Several applications of Banach's contraction principle are made. Some of these applications involve obtaining new metrics on a space, forcing a continuous map to have a fixed point, and using conditions on the boundary of a closed ball in a Banach space to obtain a fixed point. Finally, a development of the theorem due to Browder et al. is given with Hilbert spaces replaced by uniformly convex Banach spaces.
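The constructive content of Banach's contraction principle is Picard iteration: repeatedly applying a contraction converges geometrically to its unique fixed point. A minimal sketch, using the Babylonian map for √2 as an illustrative contraction (not an example from the thesis):

```python
def banach_iterate(f, x0, tol=1e-12, max_iter=200):
    """Picard iteration: for a contraction f, x_{n+1} = f(x_n)
    converges to the unique fixed point at a geometric rate."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence; f may not be a contraction")

# f(x) = x/2 + 1/x is a contraction on [1, 2] (|f'| <= 1/2 there);
# its unique fixed point is sqrt(2).
root = banach_iterate(lambda x: x / 2 + 1 / x, 1.0)
print(root)  # approx. 1.41421356...
```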
286

Går mobbade elever miste om skolundervisning? : En studie om mobbningens inverkan på olovlig frånvaro / Do bullied students miss out on schooling? : A study of the effect of bullying on truancy

Oona, Tuominen January 2021 (has links)
It has been shown in the economics literature that being bullied at school may have negative long-run consequences, such as lower educational attainment and wages. This paper examines whether unauthorized absence could be one mechanism behind these long-run consequences by studying the causal relationship between bullying and truancy; the role of gender is also examined. OLS and the fixed-effects panel data method are applied to Finnish municipality-level data from the School Health Promotion survey (Hälsa i skola) for 2017 and 2019. No strong evidence that bullying leads victims to truancy is found, as the most reliable model detects no significant association. Other results, though, suggest that such a causal link might exist: alternative models estimate that a one percentage point increase in the share of bullied students causes an increase of approximately 0.2 percentage points in the share of truants. The results also indicate that there might be gender differences in this respect.
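The fixed-effects (within) estimator used in such panel studies can be sketched by demeaning each unit's observations over time, which removes time-constant heterogeneity before running OLS. The data below are synthetic, with an assumed true effect of 0.2 that echoes the paper's estimate only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_periods = 50, 4
beta = 0.2                                         # assumed true effect

unit_effect = rng.normal(0, 5, n_units)            # time-constant heterogeneity
x = rng.uniform(0, 30, (n_units, n_periods))       # e.g. share of bullied students
x += unit_effect[:, None]                          # x correlated with the unit effect
y = beta * x + unit_effect[:, None] + rng.normal(0, 0.5, (n_units, n_periods))

# Within transformation: subtract each unit's time mean from x and y.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)

# OLS on the demeaned data recovers beta despite the correlated unit effects.
beta_fe = (xd * yd).sum() / (xd ** 2).sum()
print(beta_fe)  # close to 0.2
```

A pooled OLS on the raw (x, y) here would be biased upward, since x is constructed to be correlated with the unit effects; the demeaning step removes exactly that bias.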
287

Enhetsarbetskostander och inflation : Finns det något samband? / Unit labour cost and inflation : Is there any Relationship?

Löfving, Carl, Nilsson, Samuel January 2022 (has links)
In this paper we study the relationship between unit labour costs and inflation among the OECD countries. Our method is partly a fixed-effects panel data regression and partly a dynamic model in which we try to isolate the lagged relationship between our variables of interest. The relationship has long been debated: several theories assert a connection, while empirical studies have suggested the opposite. We therefore aim to contribute to the earlier research by examining the relationship between the two variables with a larger number of countries than the majority of earlier studies, and with more recent data. In addition, we examine whether the relationship has weakened over time; to do this, we split our data into two parts, 1995-2008 and 2009-2020. As a final research question, we examine whether the relationship differs between the USA and the EU countries that are members of the OECD. The paper is organized into a theory section followed by our method, then a section on results and analysis, and finally a short summary. We find support for unit labour costs having a significant effect on inflation and vice versa, consistent with Blanchard's theory of a wage-price spiral. We also find that the first lag of inflation has a significant effect only when our control variables are excluded. Reversing the regression and studying the first lag of unit labour costs, we find that it has a weakly significant effect on inflation, distinct from zero. We further find differences with respect to the split in time: our results show that the effect between our variables differs between 1995-2008 and 2009-2020. On our third and final research question, whether the relationship differs across geographic areas, we find a significant difference between the USA and the EU countries.
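The lagged relationship studied above can be sketched as a regression of inflation on the first lag of unit-labour-cost growth. The series and the true coefficient below are synthetic assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 300
gamma = 0.4                      # assumed true effect of lagged ULC growth

ulc = rng.normal(0, 1, T)        # unit-labour-cost growth (synthetic)
pi = np.empty(T)                 # inflation (synthetic)
pi[0] = 0.0
for t in range(1, T):
    pi[t] = gamma * ulc[t - 1] + rng.normal(0, 0.3)

# Regress inflation on a constant and the first lag of ULC growth.
X = np.column_stack([np.ones(T - 1), ulc[:-1]])
coef, *_ = np.linalg.lstsq(X, pi[1:], rcond=None)
print(coef[1])  # close to 0.4
```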
288

Modeling land-cover change in the Amazon using historical pathways of land cover change and Markov chains. A case study of Rondônia, Brazil

Becerra-Cordoba, Nancy 15 August 2008 (has links)
The present dissertation research has three purposes. The first is to predict anthropogenic deforestation caused by small farmers, first using only pathways of past land cover change and second using demographic, socioeconomic and land cover data at the farm level. The second purpose is to compare the explanatory and predictive capacity of both approaches at identifying areas at high risk of deforestation among small farms in Rondônia, Brazil. The third purpose is to test the assumptions of stationary probabilities and homogeneous subjects, both commonly used in predictive stochastic models of small farmers' deforestation decisions. This study uses the following data: household surveys, maps, satellite images and their land cover classification at the pixel level, and pathways of past land cover change for each farm. These data are available for a panel sample of farms in three municípios in Rondônia, Brazil (Alto Paraiso, Nova União, and Rolim de Moura) and cover a ten-year period of study (1992-2002). Pathways of past land cover change are graphic representations, in the form of flow charts, that depict land cover change (LCC) in each farm during the ten-year period; they were constructed using satellite images, survey data and maps, and a set of interviews performed on a sub-sample of 70 farms. A panel data analysis of the estimated empirical probabilities was conducted to test for subject and time effects using a Fixed Group Effects Model (FGEM), specifically the Least Squares Dummy Variable (LSDV1) fixed effects technique. Finally, the two predictive modeling approaches are compared. The first modeling approach predicts future LCC using only past land cover change data, in the form of empirical transition probabilities of LCC obtained from the pathways of past LCC. 
These empirical probabilities are used in an LSDV1 model for fixed group effects, an LSDV1 model for fixed time effects, and an Ordinary Least Squares (OLS) model for the pooled sample. Results from these models are entered into a modified Markov chain model's matrix multiplication. The second modeling approach predicts future LCC using socio-demographic and economic survey variables at the household level; the survey data are used in a multinomial logit regression model to predict the land cover class of each pixel. To compare the explanatory and predictive capacity of both modeling approaches, LCC predictions at the pixel level are summarized as the percentage of cells whose future land cover class was predicted correctly, compared against the actual pixel classification from satellite images. The presence of differences among farmers in the LSDV1 fixed group effect suggests that small farmers are not a homogeneous group in terms of their probabilities of LCC, and that further classification of farmers into homogeneous subgroups would better depict their LCC decisions. Changes in the total area of landholdings proved to have a stronger influence on farmers' LCC decisions in their main property (primary lot) than changes in the area of the primary lot itself. The panel data analysis of the LCC empirical transition probabilities (LSDV1 fixed time effects model) does not find enough evidence to prefer the fixed time effects model over an OLS pooled version of the probabilities. When the results of the panel data analysis are fed into a modified Markov chain model, the LSDV1-farmer model provides slightly better accuracy (59.25%) than the LSDV1-time and OLS-pooled models (57.54% and 57.18%, respectively). 
The main finding for policy and planning purposes is that type 1 owners (those with stable total landholdings over time) tend to preserve forest with a much higher probability (0.9033) than owners with subdividing or expanding properties (probabilities of 0.0013 and 0.0030). The main implication for policy making and planning is to encourage primary forest preservation, given that the Markov chain analysis shows that once primary forest changes into another land cover class, it never returns to forest. Policy and planning recommendations are provided to encourage type 1 owners to continue their pattern of high forest conservation rates, including securing land titling and providing health care and alternative sources of income so that family members and elderly owners remain on the lot. Future research is encouraged to explore spatial autocorrelation in the pixels' probabilities of land cover change, and the effects of local policies and macro-economic variables on farmers' LCC decisions. / Ph. D.
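The Markov chain projection step can be sketched as repeated multiplication of a land-cover share vector by a transition matrix. The probabilities below are hypothetical, chosen so that forest never regenerates, mirroring the absorbing behaviour noted above:

```python
import numpy as np

# Hypothetical annual transition probabilities between land-cover classes
# (rows: from; columns: to). Forest receives no inflow from the other
# classes, so once converted it never returns to forest.
#            forest  pasture  crop
P = np.array([[0.90,   0.07,  0.03],   # forest
              [0.00,   0.85,  0.15],   # pasture
              [0.00,   0.20,  0.80]])  # crop

shares = np.array([1.0, 0.0, 0.0])     # start fully forested
for _ in range(30):                    # project 30 years ahead
    shares = shares @ P

print(shares)  # forest share has decayed to 0.9**30, about 0.042
```

With no regeneration, the forest share simply decays geometrically at the forest self-transition rate, which is why the dissertation's recommendation is to raise that retention probability rather than to count on recovery.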
289

Força de mordida em pacientes com fissura labiopalatina reabilitados com próteses parciais fixas sobre dentes naturais e implantes / Bite force in patients with cleft lip and palate rehabilitated with fixed partial prostheses on natural teeth and implants

Tavano, Rafael D'Aquino 31 July 2015 (has links)
Cleft palate involving the alveolar bone may leave the lateral incisor missing or severely compromised. In such cases, rehabilitation can be done with conventional fixed dental prostheses or implant-supported fixed dental prostheses, generally after bone graft surgery in the region. Beyond aesthetics, however, it is necessary to investigate the occlusal force potential that these rehabilitations provide. Thus, the purpose of this study was to evaluate maximum bite force in subjects with unilateral cleft lip and palate rehabilitated with conventional or implant-supported fixed dental prostheses, and to compare the results with the contralateral side and with individuals without clefts. The sample consisted of 50 subjects: 25 patients with clefts (15 rehabilitated with conventional fixed dental prostheses and 10 with implant-supported fixed dental prostheses) and 25 individuals without clefts and with natural teeth. Maximum bite force was measured by a single examiner with a gnathodynamometer, recorded in the rehabilitated lateral incisor and canine region, the molar region, and the central incisor region. The mean values obtained, analysed with Student's t test and the paired Student's t test, showed that within the same individual the bite force on the non-affected side was statistically higher than on the cleft side rehabilitated with a prosthesis (p=0.005). The group rehabilitated with conventional fixed prostheses showed maximum bite force statistically equal to the group with implant-supported fixed prostheses (p=0.781). Comparing the experimental and control groups in the molar region, the results were statistically equal (affected side p=0.082, non-affected side p=0.066). In the lateral incisor and canine region, the side corresponding to the affected side in the control group showed maximum bite force statistically higher than in the experimental group (p=0.004), while on the side corresponding to the non-affected side the results were equal. In the central incisor region, the mean result of the control group was also statistically higher than that of the experimental group (p=0.005)
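The paired Student's t test used for the side-to-side comparison can be sketched as follows; the bite-force readings are invented for illustration, not the study's data:

```python
import math

# Hypothetical bite-force readings (N) for the same 8 subjects:
# non-affected side vs. prosthesis-rehabilitated cleft side.
non_affected = [112.0, 98.5, 120.3, 105.7, 99.1, 130.2, 110.0, 101.4]
cleft_side   = [ 95.2, 90.1, 101.5,  99.8, 92.3, 115.0, 104.2,  96.0]

diffs = [a - b for a, b in zip(non_affected, cleft_side)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)   # sample variance
t = mean_d / math.sqrt(var_d / n)                         # paired t statistic
df = n - 1

print(t, df)  # compare t against the critical value t(0.975, df), about 2.365
```

The paired form is appropriate here because the two measurements come from the same mouth, so between-subject variation in overall bite strength cancels in the differences.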
290

Development of MEMS Technology Based Microwave and Millimeter-Wave Components

Cetintepe, Cagri 01 February 2010 (has links) (PDF)
This thesis presents the development of microwave lumped elements for a specific surface-micromachining based technology, a self-contained mechanical characterization of fixed-fixed beams, and the realization of a shunt, capacitive-contact RF MEMS switch for millimeter-wave applications. The interdigital capacitor, planar spiral inductor and microstrip patch lumped elements developed in this thesis are tailored to a surface-micromachining technology incorporating a single metallization layer, which allows an easy and low-cost fabrication process while permitting mass production. Utilizing these elements, a bandpass filter is successfully fabricated monolithically, exhibiting a measured in-band return loss better than -20 dB, an insertion loss of 1.2 dB, a pass-band located in S-band and a stop-band extending up to 20 GHz. Analytical expressions for the deflection profile and spring constant of fixed-fixed beams are derived for constant distributed loads, taking axial effects into account. Building on this mechanical groundwork, finite difference solution schemes are established for the pre-pull-in and post-pull-in electrostatic actuation problems. Using the developed numerical tools, pull-in, release and zipping phenomena are investigated. In particular, semi-empirical expressions are developed for the pull-in voltage, with associated errors not exceeding 3.7% of FEA (Finite Element Analysis) results for typical configurations. The shunt, capacitive-contact RF MEMS switch is designed in the electromagnetic and mechanical domains for Ka-band operation. Switches fabricated in the first process run could not meet the design specifications; after identifying the sources of the discrepancies, a design modification was made and the re-fabricated devices operated successfully. In particular, measured OFF-state return and insertion losses better than -16.4 dB and 0.27 dB are attained over 1-40 GHz, and by applying a 20-25 V actuation, ON-state resonances are tuned precisely to 35 GHz with an optimum isolation level of 39 dB.
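A first-order sketch of the fixed-fixed beam pull-in voltage can be built from the classical parallel-plate model, V_PI = sqrt(8 k g0^3 / (27 eps0 A)). The beam dimensions and material below are hypothetical, and the model ignores the residual stress, axial stretching and fringing effects that the thesis's semi-empirical expressions account for:

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m

def pull_in_voltage(E, L, w, t, g0):
    """Parallel-plate pull-in estimate for a fixed-fixed beam under a
    uniformly distributed electrostatic load. First-order sketch only:
    residual stress, axial stretching and fringing fields are neglected."""
    I = w * t ** 3 / 12.0                 # area moment of inertia
    k = 384.0 * E * I / L ** 3            # center stiffness, uniform load
    A = L * w                             # electrode area
    return math.sqrt(8.0 * k * g0 ** 3 / (27.0 * EPS0 * A))

# Hypothetical gold beam: E = 80 GPa, 300 um long, 50 um wide,
# 1 um thick, suspended 2 um above the actuation electrode.
V = pull_in_voltage(80e9, 300e-6, 50e-6, 1e-6, 2e-6)
print(round(V, 1))  # pull-in voltage in volts, on the order of 10 V
```

Values of this order are consistent with the 20-25 V actuation range quoted above once residual tensile stress (which the simple model omits) stiffens the beam.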
