  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
581

Polinômios ortogonais em várias variáveis / Orthogonal polynomials in several variables

Niime, Fabio Nosse [UNESP] 24 February 2011 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / O objetivo deste trabalho é estudar os polinômios ortogonais em várias variáveis com relação a um funcional linear L e suas propriedades análogas às dos polinômios ortogonais em uma variável, tais como: a relação de três termos, a relação de recorrência de três termos, o teorema de Favard, os zeros comuns e a cubatura gaussiana. Além disso, apresentamos um método para gerar polinômios ortonormais em duas variáveis e alguns exemplos. / The aim of this work is to study orthogonal polynomials in several variables with respect to a linear functional L, and their properties analogous to those of orthogonal polynomials in one variable, such as the three-term relation, the three-term recurrence relation, Favard's theorem, common zeros, and Gaussian cubature. A method for generating orthonormal polynomials in two variables and some examples are also presented.
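The three-term recurrence named in this abstract is the one-variable property being generalized. A minimal sketch of that classical case, using Chebyshev polynomials of the first kind as a concrete example (this illustrates the general idea only, not the thesis's multivariate construction):

```python
# One-variable three-term recurrence, illustrated with Chebyshev
# polynomials of the first kind:
#   T_0(x) = 1,  T_1(x) = x,  T_{n+1}(x) = 2x * T_n(x) - T_{n-1}(x).

def chebyshev_T(n, x):
    """Evaluate T_n(x) by iterating the three-term recurrence."""
    if n == 0:
        return 1.0
    prev, curr = 1.0, x
    for _ in range(n - 1):
        prev, curr = curr, 2 * x * curr - prev
    return curr

# The closed forms T_2(x) = 2x^2 - 1 and T_3(x) = 4x^3 - 3x follow
# directly from two and three applications of the recurrence.
```

Favard's theorem runs this construction in reverse: any sequence satisfying such a recurrence (with suitable coefficients) is orthogonal with respect to some linear functional.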
582

Modélisation et simulation des connexions intra et inter systèmes électroniques / Modeling and simulation of interconnects within and between electronic systems

Iassamen, Nadia 03 December 2013 (has links)
Les progrès constants en miniaturisation des transistors et l’augmentation des fréquences des signaux utilisés sont les principales tendances dans l’évolution des circuits électroniques. Avec ces évolutions apparaissent de nombreux effets indésirables qui perturbent le comportement des systèmes électroniques et sont soupçonnés d’être responsables de la majorité des dégradations de signaux dans les systèmes en haute fréquence. Des retards de propagation indésirables sont ainsi introduits par la présence des interconnexions, et la diaphonie, phénomène dû aux couplages entre lignes d’interconnexions, peut éventuellement provoquer des commutations non désirées des transistors. La prise en compte des interconnexions, dès les premières phases de conception d'un système, est par conséquent devenue une nécessité ces dernières années. Mais la simulation temporelle d’un réseau d’interconnexions est très gourmande en temps de calcul, ce qui impacte la durée globale de conception. Le remplacement des modèles électriques, décrivant précisément les interconnexions, par des modèles plus simples est primordial pour limiter les coûts de calcul. Une méthode de réduction d'ordre des modèles peut alors être employée pour effectuer cette opération efficacement. Le modèle final doit en effet décrire assez précisément certains aspects importants du modèle original et conserver les propriétés importantes du réseau d'interconnexions. Cette démarche permettra aux concepteurs d’effectuer des simulations temporelles rapides et d’étudier les paramètres d’intégrité du signal tel que le retard, le temps de montée, le dépassement….L'objectif de cette thèse est d’établir un nouvel outil de réduction de complexité des modèles de réseaux d'interconnexions. Différentes descriptions initiales des systèmes d'interconnexions sont envisagées : modèles circuits (fonctions de transfert) ou mesures fréquentielles. 
L’approche développée repose sur l’utilisation des fonctions orthogonales de Müntz-Laguerre et de Kautz afin de décrire mathématiquement, de manière précise, le système d'origine. Un opérateur linéaire, lié à ces fonctions de base, est ensuite appliqué pour déterminer un modèle rationnel de moindre complexité. La technique proposée est comparée à d'autres méthodes de la littérature d’abord sur des exemples académiques. Tout le potentiel de la méthode est ensuite illustré par sa mise en œuvre sur des réseaux d'interconnexions. / The ongoing progress in transistor miniaturization and a continuous frequency increase are the main trends in the present day evolution of electronic circuits. A number of undesired effects are intrinsic to these developments and are suspected to be responsible for most of the flawed signals present in high frequency systems. Parasitic delays are thus introduced by the presence of interconnect lines and crosstalk due to coupling may lead to undesired switching events in transistor circuits. Accounting for the presence of interconnect lines, at a very early stage in the design flow has become unavoidable in recent years. However, time domain simulations of massively coupled interconnect networks may be computationally costly and have a tremendous impact on the overall duration of the design process. Replacing complex, high order circuit models by more compact surrogates is thus necessary. Model order reduction is an effective way to derive such surrogates. The final model must mimic certain aspects of the original model with sufficient accuracy and preserve the interconnect network’s most important properties. This approach enables designers to account for the undesired effects of interconnect lines such as, delays, rise-times and overshoots while maintaining the overall duration of time-domain simulations within acceptable limits. 
The aim of this thesis is to create a new model order reduction tool applicable to complex interconnect networks. Different initial representations were considered: circuit models (transfer functions) or frequency-domain measurements. The proposed approach uses orthogonal basis functions such as Müntz-Laguerre and Kautz to build an accurate mathematical representation of the original system. A linear operator, related to these functions, is subsequently used to derive a simplified model. The technique is first compared to other approaches using examples available in the literature, and its full potential is then demonstrated on coupled interconnect models.
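The orthogonal-basis idea behind this reduction approach can be illustrated in a simple setting. A hedged sketch using plain orthonormal Laguerre functions with a single real pole `p` (an assumption for illustration; the thesis uses the more general Müntz-Laguerre and Kautz bases): an impulse response is projected onto the basis, and a reduced model keeps only the first few coefficients.

```python
import math

def laguerre_poly(k, x):
    # Laguerre polynomial L_k(x) via its three-term recurrence:
    # (j+1) L_{j+1}(x) = (2j + 1 - x) L_j(x) - j L_{j-1}(x).
    if k == 0:
        return 1.0
    prev, curr = 1.0, 1.0 - x
    for j in range(1, k):
        prev, curr = curr, ((2 * j + 1 - x) * curr - j * prev) / (j + 1)
    return curr

def laguerre_fn(k, t, p):
    # Orthonormal Laguerre function with pole p > 0:
    # l_k(t) = sqrt(2p) * e^{-p t} * L_k(2 p t).
    return math.sqrt(2 * p) * math.exp(-p * t) * laguerre_poly(k, 2 * p * t)

def project(f, n_terms, p, t_max=40.0, n_steps=20000):
    # Coefficients c_k = integral_0^inf f(t) l_k(t) dt, approximated
    # by the trapezoid rule on [0, t_max].
    dt = t_max / n_steps
    coeffs = []
    for k in range(n_terms):
        s = 0.0
        for i in range(n_steps + 1):
            t = i * dt
            w = 0.5 if i in (0, n_steps) else 1.0
            s += w * f(t) * laguerre_fn(k, t, p) * dt
        coeffs.append(s)
    return coeffs

# When the basis pole matches the system exactly, a single term suffices:
# for f(t) = e^{-t} and p = 1, c_0 = 1/sqrt(2) and c_k = 0 for k > 0.
```

The quality of such a reduced model hinges on the pole choice, which is precisely where the richer Müntz-Laguerre and Kautz bases (multiple, possibly complex poles) pay off for resonant interconnect responses.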
583

Ambiente atmosférico favorável ao desenvolvimento de complexos convectivos de mesoescala no sul do Brasil / Atmospheric environment favorable to the development of mesoscale convective complexes in Southern Brazil

Moraes, Flávia Dias de Souza January 2016 (has links)
Complexos Convectivos de Mesoescala (CCM) são eventos meteorológicos de difícil previsão, que resultam em tempestades severas e desastres. O objetivo deste trabalho é indicar as características em grande escala do ambiente atmosférico favorável para a formação de CCM no Sul do Brasil, entre 1998 e 2007. Fez-se uso da base de dados de CCM de Durkee e Mote (2009), assim como das variáveis de Potencial de Energia Convectiva Disponível (CAPE), ponto de orvalho, temperatura, altura geopotencial, componentes de vento u e v e umidade relativa da reanálise do National Center for Environmental Prediction (NCEP) Climate Forecast System Reanalysis (CFSR), coletadas entre 2,5 e 5,5 horas antes do desenvolvimento dos CCM. Com o método de Análise das Componentes Principais (ACP), geraram-se as composições do ambiente atmosférico médio favorável ao desenvolvimento dos CCM, para comparar o grupo dos que ocorreram no Sul do Brasil ao dos que atuaram em outras regiões da AS. Usando como dado de entrada as variáveis de altura geopotencial e temperatura (em 850 hPa), foram encontradas quatro componentes principais para cada um dos grupos de CCM. Com base nas componentes principais, nas variáveis atmosféricas e nas cartas sinóticas, foram reconstruídos os ambientes atmosféricos médios para identificar o comportamento das características atmosféricas prévias aos CCM para cada conjunto de eventos. Os resultados identificaram 303 CCM, 96 no Sul do Brasil, 168 em outras regiões da AS e 39 oceânicos. O ambiente atmosférico médio dos 168 CCM não apresentou características homogêneas, pois 75% das componentes não possuíam jatos de baixos níveis (JBN) dentro dos critérios adotados, mas a presença de um escoamento meridional. 
Esse fluxo, ao encontrar com a região de divergência dos jatos de altos níveis (JAN), foi um dos fatores favoráveis para a convecção, já que seus valores de CAPE (≥ 450 J kg⁻¹) eram menores que a média esperada para formação de tempestades e só uma das componentes teve frentes frias associadas. Por outro lado, o grupo dos 96 CCM que atuaram no Sul do Brasil mostrou-se cerca de 50.000 km² maior em extensão que os das outras regiões da AS e dos EUA e com duração de pelo menos 3 h a mais. Além disso, as características atmosféricas do grupo de CCM do Sul do Brasil mostraram padrões homogêneos, podendo indicar a formação de CCM nessa região quando: o campo de ventos médios em 850 e 200 hPa se encontrarem em posição ortogonal, indicando acoplamento entre os jatos de baixos e altos níveis; os valores de CAPE forem ≥ 600 J kg⁻¹ e o cisalhamento vertical estiver entre 7 e 12 m s⁻¹; houver atuação das frentes frias no sul da AS; a umidade relativa disponível estiver concentrada próxima à região Sul do Brasil, com valores maiores que 80%; a altura geopotencial (850 hPa) apresentar um cavado na região gênese dos CCM e a temperatura (850 hPa) estiver mais elevada próxima e ao norte da região de formação. / Mesoscale Convective Complexes (MCCs) are meteorological events that are difficult to forecast, which result in severe storms and other natural hazards. This study’s objective is to indicate the large-scale atmospheric environment favorable to the development of MCCs in Southern Brazil during the 1998–2007 period. The MCCs database used was from Durkee and Mote (2009) and the variables selected include CAPE (Convective Available Potential Energy), dewpoint temperature, temperature, geopotential height, and relative humidity from National Center for Environmental Prediction (NCEP) Climate Forecast System Reanalysis (CFSR), collected from 2.5 to 5.5 hours before the MCCs’ development.
The principal component analysis (PCA) method was used to construct the average atmospheric environments of the MCC group that occurred in Southern Brazil and compare them with those of MCCs that occurred in other regions of South America. Temperature and geopotential height were the variables used for the PCA, resulting in four principal components for each MCC group. Based on these principal components, meteorological variables, and synoptic charts, average atmospheric environments were built to understand the atmospheric parameters that indicate the development of MCCs in each group. Results show 303 MCCs: 96 were located in Southern Brazil, 168 elsewhere in South America, and 39 in the South Atlantic Ocean. The average atmospheric environment of the group of 168 MCCs did not indicate homogeneous characteristics, as 75% of its principal components could not be characterized as having a low-level jet (LLJ) in the wind field, showing instead only a meridional flux of humid and warm air at 850 hPa. This air, coupled with the upper-level jet (ULJ), was found to be responsible for the convection that developed the MCCs, as CAPE (≥ 450 J kg⁻¹) was below the average needed to produce storms and only one component was associated with a cold front. On the other hand, the MCC group of Southern Brazil is on the order of 50,000 km² larger and 3 hours longer-lived than MCCs from other regions of South America and from the United States.
Furthermore, the atmospheric characteristics of the Southern Brazil MCC group revealed homogeneous patterns, which suggest that the development of MCCs in this region starts when: the mean wind field indicates a coupled LLJ (jet streak between 10 and 12 m s⁻¹) and ULJ (jet streak ≥ 32 m s⁻¹); CAPE is ≥ 600 J kg⁻¹ and the vertical wind shear is from 7 to 12 m s⁻¹; cold fronts are active in southern South America; the relative humidity is concentrated in Southern Brazil and above 80%; and the geopotential height (850 hPa) indicates a trough in the genesis region of MCCs while the temperature (850 hPa) is higher near and to the north of the genesis region.
584

Analysis of High Fidelity Turbomachinery CFD Using Proper Orthogonal Decomposition

Spencer, Ronald Alex 01 March 2016 (has links)
Assessing the impact of inlet flow distortion in turbomachinery is desired early in the design cycle. This thesis introduces and validates the use of methods based on the Proper Orthogonal Decomposition (POD) to analyze clean and 1/rev static pressure distortion simulation results at design and near-stall operating conditions. The value of POD lies in its ability to efficiently extract both quantitative and qualitative information about dominant spatial flow structures as well as information about temporal fluctuations in flow properties. Observation of the modes allowed qualitative identification of shock waves as well as quantification of their location and range of motion. Modal coefficients revealed the location of the passage shock at a given angular location. Distortion amplification and attenuation between rotors were also identified, as was a relationship between downstream conditions and how the distortion manifests itself. POD provides an efficient means for extracting the most meaningful information from large CFD simulation data. Static pressure and axial velocity were analyzed to explore the flow physics of three rotors of a compressor with a distorted inlet. Based on the results of the analysis of static pressure using the POD modes, it was concluded that there was a decreased range of motion in passage shock oscillation. Analysis of axial velocity POD modes revealed the presence of a separated region on the low pressure surface of the blade which was most dynamic in rotor 1. The thickness of this structure decreased in the near-stall operating condition. The general conclusion is made that as the fan approaches stall the apparent effects of distortion are lessened, which leads to less variation in the operating condition. This is due to the change in operating condition placing the fan at a different position on the speedline such that distortion effects are less pronounced.
POD modes of entropy flux were used to identify three distinct levels of entropy flux in the blade row passage. The separated region was the region with the highest entropy due to the irreversibilities associated with separation.
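The snapshot-based POD used throughout this analysis reduces to an eigenproblem on a small temporal correlation matrix. A minimal sketch of the method of snapshots, extracting only the leading mode by power iteration (variable names and the toy data are illustrative, not the thesis's CFD fields):

```python
def leading_pod_mode(snapshots, iters=200):
    # Method of snapshots: build the M x M temporal correlation matrix
    # C[i][j] = <u_i, u_j> / M, find its leading eigenvector by power
    # iteration, then assemble the corresponding spatial POD mode as a
    # weighted combination of the snapshots.
    m = len(snapshots)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    C = [[dot(snapshots[i], snapshots[j]) / m for j in range(m)]
         for i in range(m)]
    v = [1.0] + [0.0] * (m - 1)
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Spatial mode: combine snapshots with the eigenvector weights.
    n = len(snapshots[0])
    mode = [sum(v[i] * snapshots[i][k] for i in range(m)) for k in range(n)]
    norm = sum(x * x for x in mode) ** 0.5
    return [x / norm for x in mode]
```

For a flow field, each snapshot is the (flattened) pressure or velocity field at one time step; the modal coefficients discussed in the abstract are the projections of each snapshot onto such modes.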
585

Combinações lineares de polinômios de Chebyshev e polinômios auto-recíprocos / Linear combinations of Chebyshev polynomials and self-reciprocal polynomials

Hancco Suni, Mijael January 2019 (has links)
Orientador: Vanessa Avansini Botta Pirani / Resumo: O presente trabalho tem como objetivo principal estudar o comportamento dos zeros de alguns tipos de polinômios auto-recíprocos gerados a partir de polinômios quase-ortogonais de Chebyshev de ordens um e dois. Os zeros dos polinômios auto-recíprocos que construímos estão ligados aos zeros de polinômios quase-ortogonais. Os polinômios quase-ortogonais podem ser obtidos a partir de uma sequência de polinômios ortogonais. Neste trabalho, usaremos os polinômios de Chebyshev para obter polinômios quase-ortogonais e usaremos resultados sobre o comportamento de zeros desses polinômios para obter informações sobre o comportamento dos zeros de polinômios auto-recíprocos. / Abstract: The main objective of this work is to study the behavior of the zeros of some classes of self-reciprocal polynomials related to Chebyshev quasi-orthogonal polynomials of order one and two. The zeros of self-reciprocal polynomials are linked to the zeros of quasi-orthogonal polynomials, which can be obtained from a sequence of orthogonal polynomials. In this work we use the Chebyshev polynomials to obtain classes of quasi-orthogonal polynomials and, from results on the behavior of their zeros, we obtain information about the zeros of some classes of self-reciprocal polynomials. / Mestre
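One standard bridge between Chebyshev polynomials and self-reciprocal polynomials is the substitution x = (z + 1/z)/2: the polynomial P_n(z) = z^n T_n((z + 1/z)/2) is self-reciprocal of degree 2n, with all zeros on the unit circle. A small numerical sketch of this correspondence (an illustration of the general theme only, not the specific quasi-orthogonal combinations studied in the dissertation):

```python
import cmath
import math

def chebyshev_T(n, x):
    # Three-term recurrence T_{n+1} = 2x T_n - T_{n-1};
    # works for real or complex x.
    if n == 0:
        return 1.0
    prev, curr = 1.0, x
    for _ in range(n - 1):
        prev, curr = curr, 2 * x * curr - prev
    return curr

def P(n, z):
    # P_n(z) = z^n * T_n((z + 1/z)/2) is self-reciprocal of degree 2n:
    # its coefficient list is palindromic, equivalently
    # z^(2n) * P_n(1/z) = P_n(z).
    return z ** n * chebyshev_T(n, (z + 1 / z) / 2)

# On the unit circle z = e^{i*theta}, P_n(z) = e^{i n theta} cos(n theta),
# so the 2n zeros sit at theta = (2k + 1) * pi / (2n).
```

Evaluating P_n at those angles confirms the zeros lie on the unit circle, and checking z^(2n) P_n(1/z) = P_n(z) at an off-circle point confirms the self-reciprocal property.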
586

CHARACTERIZATION OF ROTARY BELL ATOMIZERS THROUGH IMAGE ANALYSIS TECHNIQUES

Wilson, Jacob E. 01 January 2018 (has links)
Three methods were developed to better understand and characterize the near-field dynamic processes of rotary bell atomization. The methods were developed with the goal of possible integration into industry to identify equipment changes through changes in the primary atomization of the bell. The first technique utilized high-speed imaging to capture qualitative ligament breakup and, in combination with a developed image processing technique and PIV software, made it possible to obtain statistical size and velocity information about both ligaments and droplets in the image data. A second technique, using an Nd:YAG laser with an optical filter, was used to capture size statistics at even higher rotational speeds than the first technique, and was utilized to find differences between serrated and unserrated bell ligament and droplet data. The final technique applied proper orthogonal decomposition (POD) to image data of a side-profile view of a damaged and undamaged bell during operation. This was done to capture differences between the data sets and to develop a characterization for identifying whether a bell is damaged, for future industrial integration.
587

Developing Experimental Methods and Assessing Metrics to Evaluate Cerebral Aneurysm Hemodynamics

Melissa C Brindise (7469096) 17 October 2019 (has links)
Accurately assessing the risk of growth and rupture among intracranial aneurysms (IAs) remains a challenging task for clinicians. Hemodynamic factors are known to play a critical role in the development of IAs, but the specific mechanisms are not well understood. Many studies have sought to correlate specific flow metrics to risk of growth and rupture but have reported conflicting findings. Computational fluid dynamics (CFD) has predominantly been the methodology used to study IA hemodynamics. Yet, CFD assumptions and limitations coupled with the lack of CFD validation have precluded clinical acceptance of IA hemodynamic assessments and likely contributed to the contradictory results among previous studies. Experimental particle image velocimetry (PIV) studies have been noticeably limited in both scope and number among IA studies, in part due to the complexity associated with such experiments. Moreover, the limited understanding of the robustness of hemodynamic metrics across varying flow and measurement environments and the effect of transitional flow in IAs also remain open issues. In this work, techniques to enhance IA PIV capabilities were developed and the first volumetric pulsatile IA PIV study was performed. A novel blood analog solution, a mixture of water, glycerol, and urea, was developed, and an autonomous methodology for reducing experimental noise in velocity fields was introduced and demonstrated. Both of these experimental techniques can also be used in PIV studies extending beyond IA applications. Further, the onset and development of transitional flow in physiological, pulsatile waveforms was explored. The robustness of hemodynamic metrics such as wall shear stress, oscillatory shear index, and relative residence time across varying modalities, spatiotemporal resolutions, and flow assumptions was explored.
Additional hemodynamic metrics which have been demonstrated to be influential in other cardiovascular flows but have yet to be tested in IA studies were also identified and considered. Ultimately, this work provides a framework for future IA PIV studies as well as insight into using hemodynamic evaluations to assess the risk of growth and rupture of an IA, thereby taking steps towards enhancing the clinical utility of such analysis.
588

Towards Adaptation of OFDM Based Wireless Communication Systems

Billoori, Sharath Reddy 31 March 2004 (has links)
OFDM has been recognized as a powerful multi-carrier modulation technique that provides efficient spectral utilization and resilience to frequency selective fading channels. Adaptive modulation is a concept whereby the modulation modes are dynamically changed based on the perceived instantaneous channel conditions. In conjunction with OFDM systems, adaptive modulation is a very powerful technique to combat the frequency selective nature of mobile channels, while simultaneously attempting to maximize the time-varying capacity of the channel. This is based on the fact that frequency selective fading affects the sub-carriers unevenly, causing some of them to fade more severely than others. The modulation modes are adaptively selected on the sub-carriers depending on the amount of fading, to maximize throughput and improve the overall BER. Transmission parameter adaptation is the response of the transmitter to the time-varying channel quality. To efficiently react to the dynamic nature of the channel, adaptive OFDM systems rely on efficient algorithms in three key areas, namely channel quality estimation, transmission parameter selection, and signaling or blind detection of the modified parameters. Together, these are termed the enabling techniques that contribute to the effective performance of adaptive OFDM systems. This thesis develops more efficient, higher-performance parameter estimation algorithms that further improve the overall performance of adaptive OFDM systems. Traditional estimation of channel quality indicators, such as noise power and SNR, assumes that the noise has a flat power spectral density within the transmission band of the OFDM signal. Hence, a single estimate of the noise power is obtained by averaging the instantaneous noise power values across all the sub-carriers.
In reality, the noise within the OFDM bandwidth is a combination of white and correlated noise components, and has an uneven effect across the sub-carriers. It is this fact that has motivated the proposal of a windowing approach for noise power estimation. Windowing provides many local estimates of the dynamic noise statistics and allows better noise tracking across the OFDM transmission band. This method is particularly useful for better resource utilization and improved performance in sub-band adaptive modulation, where adaptation is performed on the sub-carriers on a group-by-group basis based on the observed channel conditions. Blind modulation mode detection is another relatively unexplored issue in regard to adaptation of OFDM systems. The receiver has to be informed of the appropriate modulation modes used at the transmitter for proper demodulation. If this can be done without any explicit signaling information embedded within the OFDM symbol, it has the advantage of improved throughput and data capacity. A model selection approach is taken, and a novel statistical blind modulation detection method based on the Kullback-Leibler (K-L) distance is proposed. This algorithm takes into account the distribution of the Euclidean distances from the received noisy samples on the complex plane to the closest legitimate constellation points of all the modulation modes used.
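The windowing idea for noise power estimation can be sketched simply: instead of a single global average over all sub-carriers, local estimates are formed over groups of adjacent sub-carriers. A hedged illustration of the concept only (the function name, window size, and per-sub-carrier values are made up for the example):

```python
def noise_power_estimates(inst_noise_power, window):
    # inst_noise_power: instantaneous noise power per sub-carrier.
    # Conventional approach: one global estimate, which is only valid
    # when the noise PSD is flat across the whole OFDM band.
    n = len(inst_noise_power)
    global_est = sum(inst_noise_power) / n
    # Windowed approach: one local estimate per group of `window`
    # adjacent sub-carriers, so colored/correlated noise components
    # can be tracked across the band.
    local_est = [
        sum(inst_noise_power[i:i + window]) / len(inst_noise_power[i:i + window])
        for i in range(0, n, window)
    ]
    return global_est, local_est
```

With flat noise on half the band and stronger correlated noise on the other half, the global estimate blurs the difference while the windowed estimates expose it, which is exactly what sub-band adaptive modulation needs.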
589

Multi-User Detection of Overloaded Systems with Low-Density Spreading

Fantuz, Mitchell 11 September 2019 (has links)
Future wireless networks will have applications that require many devices to be connected to the network. Non-orthogonal multiple access (NOMA) is a promising multiple access scheme that allows more users to simultaneously transmit in a common channel than orthogonal signaling techniques. This overloading allows for high spectral efficiencies, which can support the high demand for wireless access. One notable NOMA scheme is low-density spreading (LDS), which is a code domain multiple access scheme. Low-density spreading operates like code division multiple access (CDMA) in the sense that users use a spreading sequence to spread their data, but the spreading sequences have a low number of nonzero chips, hence the term low-density. The message passing algorithm (MPA) is typically used for multi-user detection (MUD) of LDS systems. The MPA detector has complexity that is exponential in the number of users contributing to each chip. LDS systems suffer from two inherent problems: high computational complexity, and vulnerability to multipath channels. In this thesis, these two problems are addressed. A lower complexity MUD technique is presented, whose complexity grows only quadratically with the number of users. The proposed detector is based on minimum mean square error (MMSE) and parallel interference cancellation (PIC) detectors. Simulation results show the proposed MUD technique reduces the number of multiplications by 81.84% and the number of additions by 67.87%, at a performance loss of about 0.25 dB with 150% overloading. In addition, a precoding scheme designed to mitigate the effects of the multipath channel is also presented. This precoding scheme applies an inverse channel response to the input signal before transmission, so that the received signal is free of the multipath effects that would otherwise destroy the low-density structure.
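The MMSE-plus-PIC structure of the proposed detector can be sketched for a toy two-user BPSK system. This is a hedged illustration of the two-stage idea only (the tiny spreading matrix and all names are made up; the thesis's detector targets overloaded low-density systems with many users):

```python
def mmse_pic_detect(S, y, sigma2):
    # S: list of chip rows, each [chip_user1, chip_user2].
    # Stage 1 (MMSE): x = (S^T S + sigma2 I)^(-1) S^T y,
    # written out with an explicit 2x2 inverse.
    a = sum(r[0] * r[0] for r in S) + sigma2
    b = sum(r[0] * r[1] for r in S)
    d = sum(r[1] * r[1] for r in S) + sigma2
    r0 = sum(r[0] * yc for r, yc in zip(S, y))
    r1 = sum(r[1] * yc for r, yc in zip(S, y))
    det = a * d - b * b
    x = [(d * r0 - b * r1) / det, (a * r1 - b * r0) / det]
    bits = [1.0 if v >= 0 else -1.0 for v in x]  # hard BPSK decisions
    # Stage 2 (PIC): for each user, subtract the other user's
    # reconstructed contribution in parallel, then re-detect with a
    # matched filter on the cleaned residual.
    out = []
    for u in range(2):
        o = 1 - u
        resid = [yc - bits[o] * r[o] for r, yc in zip(S, y)]
        mf = sum(r[u] * rc for r, rc in zip(S, resid))
        out.append(1.0 if mf >= 0 else -1.0)
    return out
```

The MMSE stage supplies tentative decisions good enough to seed the cancellation, and the PIC stage recovers most of the remaining loss at a cost quadratic in the number of users rather than exponential as in MPA.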
590

An albumin-binding domain as a scaffold for bispecific affinity proteins

Nilvebrant, Johan January 2012 (has links)
Protein engineering and in vitro selection systems are powerful methods to generate binding proteins. In nature, antibodies are the primary affinity proteins and their usefulness has led to widespread use both in basic and applied research. By means of combinatorial protein engineering and protein library technology, smaller antibody fragments or alternative non-immunoglobulin protein scaffolds can be engineered for various functions based on molecular recognition. In this thesis, a small, 46-amino-acid albumin-binding domain derived from streptococcal protein G was evaluated as a scaffold for the generation of affinity proteins. Using protein engineering, the albumin binding has been complemented with a new binding interface localized to the opposite surface of this three-helical bundle domain. By using in vitro selection from a combinatorial library, bispecific protein domains with the ability to recognize several different target proteins were generated. In paper I, a bispecific albumin-binding domain was selected by phage display and utilized as a purification tag for highly efficient affinity purification of fusion proteins. The results in paper II show how protein engineering, in vitro display, and multi-parameter fluorescence-activated cell sorting can be used to accomplish the challenging task of incorporating two high-affinity binding sites, for albumin and tumor necrosis factor-alpha, into this new bispecific protein scaffold. Moreover, the native ability of this domain to bind serum albumin provides a useful characteristic that can be used to extend the plasma half-lives of proteins fused to it or potentially of the domain itself. When combined with a second targeting ability, a new molecular format with potential use in therapeutic applications is provided. The engineered binding proteins generated against the epidermal growth factor receptors 2 and 3 in papers III and IV are aimed in this direction.
Over-expression of these receptors is associated with the development and progression of various cancers, and both are well-validated targets for therapy. Small bispecific binding proteins based on the albumin-binding domain could potentially contribute to this field. The new alternative protein scaffold described in this thesis is one of the smallest structured affinity proteins reported. The bispecific nature, with an inherent ability of the same domain to bind to serum albumin, is unique for this scaffold. These non-immunoglobulin binding proteins may provide advantages over antibodies in several applications, particularly when a small size and an extended half-life are of key importance.
