171

Emprego de modelos de campo médio para descrição termodinâmica de monocamadas de Langmuir / Thermodynamic description of Langmuir monolayers via mean-field models

Weber da Silva Robazzi 24 August 2007 (has links)
Monocamadas insolúveis localizadas sobre a superfície de um líquido são sistemas conhecidos e estudados há mais de 100 anos. Elas são formadas quando moléculas anfifílicas são depositadas sobre algum solvente em condições especiais. Quando sofrem compressão isotérmica, tais sistemas exibem um comportamento muito complexo, podendo sofrer várias transições de fase nesse processo. Embora o surgimento, na década de 1990, de técnicas experimentais tenha proporcionado um maior "insight" no entendimento das referidas transições, há muitas questões que permanecem em aberto, principalmente no que diz respeito à influência exercida pelas conformações intramoleculares, pelas interações entre as moléculas anfifílicas e pelas interações entre as moléculas anfifílicas e as moléculas do solvente sobre as referidas transições. Para ajudar a preencher esta lacuna são necessários modelos moleculares que auxiliem a obtenção da resposta destas questões. É neste contexto que se insere este trabalho, onde três diferentes modelos de campo médio são empregados a fim de se descrever o comportamento das transições de fase sofridas pelas monocamadas no que se refere aos aspectos acima mencionados. Cada modelo é diferente no que diz respeito ao comportamento das caudas hidrofóbicas erguidas em direção ao ar. O emprego de tais modelos proporcionou, em linhas gerais, um melhor entendimento das transições de fase nestes sistemas. / Insoluble monolayers lying on a liquid surface have been known and studied for about a century. They are formed when amphiphilic molecules are deposited on a solvent under special conditions. Under isothermal compression, these systems may exhibit very complex behavior, undergoing several phase transitions in the process. Although experimental techniques developed in the 1990s provided new insight into these transitions, many questions remain open, particularly regarding the influence of intramolecular conformations, of the interactions between the amphiphilic molecules, and of the interactions between the amphiphilic and solvent molecules on these transitions. Molecular models are a useful and valuable tool to help fill this gap. In this work, three different mean-field models are employed to describe the phase transitions undergone by the monolayers with respect to the aspects mentioned above. The models differ in how they treat the hydrophobic tails raised toward the air. Overall, the use of these models provided a better understanding of the phase transitions in these systems.
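As a generic illustration of how a mean-field treatment reproduces a monolayer phase transition under isothermal compression, the sketch below uses a two-dimensional van der Waals equation of state; it is not one of the three models used in the thesis, and all parameter values are arbitrary reduced units.

```python
import numpy as np

# Two-dimensional van der Waals equation of state for the surface pressure:
#   pi = kT / (A - b) - a / A**2,  with A the area per molecule.
# Below the mean-field critical temperature (kTc ~ 8a/(27b)) the isotherm
# develops a loop, the classic mean-field signature of a first-order transition.
a, b, kT = 1.0, 0.2, 1.0            # attraction, excluded area, thermal energy (illustrative)
A = np.linspace(0.3, 3.0, 300)      # area per molecule
pi = kT / (A - b) - a / A**2        # surface pressure along the isotherm

# A region where pi increases with A signals the coexistence (loop) region,
# which in practice is replaced by a flat plateau via the Maxwell construction.
print("isotherm has a van der Waals loop:", bool(np.any(np.diff(pi) > 0)))
```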
172

Fine-grained error detection techniques for fast repair of FPGAs

Nazar, Gabriel Luca January 2013 (has links)
Field Programmable Gate Arrays (FPGAs) são componentes reconfiguráveis de hardware que encontraram grande sucesso comercial ao longo dos últimos anos em uma grande variedade de nichos de aplicação. Alta vazão de processamento, flexibilidade e tempo de projeto reduzido estão entre os principais atrativos desses dispositivos, e são essenciais para o seu sucesso comercial. Essas propriedades também são valiosas para sistemas críticos, que frequentemente enfrentam restrições severas de desempenho. Além disso, a possibilidade de reprogramação após implantação é relevante, uma vez que permite a adição de novas funcionalidades ou a correção de erros de projeto, estendendo a vida útil do sistema. Tais dispositivos, entretanto, dependem de grandes memórias para armazenar o bitstream de configuração, responsável por definir a função presente do FPGA. Assim, falhas afetando esta configuração são capazes de causar defeitos funcionais, sendo uma grande ameaça à confiabilidade. A forma mais tradicional de remover tais erros, isto é, scrubbing de configuração, consiste em periodicamente sobrescrever a memória com o seu conteúdo desejado. Entretanto, devido ao seu tamanho significativo e à banda de acesso limitada, scrubbing sofre de um longo tempo médio de reparo, e que está aumentando à medida que FPGAs ficam maiores e mais complexos a cada geração. Partições reconfiguráveis são úteis para reduzir este tempo, já que permitem a execução de um procedimento local de reparo na partição afetada. Para este propósito, mecanismos rápidos de detecção de erros são necessários para rapidamente disparar este scrubbing localizado e reduzir a latência de erro. Além disso, diagnóstico preciso é necessário para identificar a localização do erro dentro do espaço de endereçamento da configuração. Técnicas de redundância de grão fino têm o potencial de prover ambos, mas normalmente introduzem custos significativos devido à necessidade de numerosos verificadores de redundância. Neste trabalho, propomos uma técnica de detecção de erros de grão fino que utiliza recursos abundantes e subutilizados encontrados em FPGAs do estado da arte, especificamente as cadeias de propagação de vai-um. Assim, a técnica provê os principais benefícios da redundância de grão fino enquanto minimiza sua principal desvantagem. Reduções bastante significativas na latência de erro são atingíveis com a técnica proposta. Também é proposto um mecanismo heurístico para explorar o diagnóstico provido por técnicas desta natureza. Este mecanismo tem por objetivo identificar as localizações mais prováveis do erro na memória de configuração, baseado no diagnóstico de grão fino, e fazer uso dessa informação de forma a minimizar o tempo de reparo. / Field Programmable Gate Arrays (FPGAs) are reconfigurable hardware components that have found great commercial success over the past years in a wide variety of application niches. High processing throughput, flexibility and reduced design time are among the main assets of such devices, and are essential to their commercial success. These features are also valuable for critical systems that often face stringent performance constraints. Furthermore, the possibility to perform post-deployment reprogramming is relevant, as it allows adding new functionalities or correcting design mistakes, extending the system lifetime. Such devices, however, rely on large memories to store the configuration bitstream, responsible for defining the current FPGA function. 
Thus, faults affecting this configuration are able to cause functional failures, posing a major dependability threat. The most traditional means to remove such errors, i.e., configuration scrubbing, consists in periodically overwriting the memory with its desired contents. However, due to its significant size and limited access bandwidth, scrubbing suffers from a long mean time to repair, which is increasing as FPGAs get larger and more complex with each generation. Reconfigurable partitions are useful to reduce this time, as they allow performing a local repair procedure on the affected partition. For that purpose, fast error detection mechanisms are required in order to quickly trigger this localized scrubbing and reduce error latency. Moreover, precise diagnosis is necessary to identify the error location within the configuration addressing space. Fine-grained redundancy techniques have the potential to provide both, but usually introduce significant costs due to the need for numerous redundancy checkers. In this work we propose a fine-grained error detection technique that makes use of abundant and underused resources found in state-of-the-art FPGAs, namely the carry propagation chains. Thereby, the technique provides the main benefits of fine-grained redundancy while minimizing its main drawback. Very significant reductions in error latency are attainable with the proposed approach. A heuristic mechanism to exploit the diagnosis provided by techniques of this nature is also proposed. This mechanism aims at identifying the most likely error locations in the configuration memory, based on the fine-grained diagnosis, and at using this information to minimize the repair time of scrubbing.
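A minimal sketch of the kind of diagnosis-driven repair described above: each fine-grained checker covers a known set of configuration frames, and when checkers flag errors the frames are scrubbed in decreasing order of how many flagged regions they could explain. The region-to-frame mapping, the voting rule and all identifiers are hypothetical illustrations, not the thesis's actual heuristic.

```python
from collections import Counter

# Hypothetical mapping from fine-grained checker regions (e.g., one per
# carry-chain-monitored area) to the configuration frames they cover.
region_to_frames = {
    "R0": {10, 11},
    "R1": {11, 12},
    "R2": {30, 31},
}

def scrub_order(flagged_regions):
    """Return configuration frames ordered by how many flagged regions they could explain."""
    votes = Counter()
    for region in flagged_regions:
        votes.update(region_to_frames[region])
    # Most-voted frames first: the most likely locations of the configuration upset.
    # Ties are broken arbitrarily here.
    return [frame for frame, _ in votes.most_common()]

print(scrub_order({"R0", "R1"}))   # frame 11 is scrubbed first (shared by both regions)
```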
173

Sobre a aplicação de Gauss para hipersuperfícies de curvatura média constante na esfera / On the Gauss map of hypersurfaces with constant mean curvature in the sphere

Adam Oliveira da Silva 21 January 2009 (has links)
O objetivo desta dissertação é apresentar um resultado similar ao Teorema de Bernstein sobre hipersuperfícies mínimas no espaço euclidiano, isto é, mostrar que tal resultado se generaliza para hipersuperfícies de Sn+1 com curvatura média constante, cuja aplicação de Gauss está contida em um hemisfério fechado de Sn+1 (Teorema 3.1). Porém, no caso em que a hipersuperfície é mínima, utilizaremos na demonstração deste teorema um resultado sobre a caracterização das hiperesferas de Sn+1 entre todas as hipersuperfícies de Sn+1 em termos de suas imagens de Gauss (Teorema 2.1). / The objective of this dissertation is to present a result similar to the Bernstein theorem on minimal hypersurfaces in Euclidean space, that is, to show that this result generalizes to hypersurfaces of Sn+1 with constant mean curvature whose Gauss image is contained in a closed hemisphere of Sn+1 (Theorem 3.1). However, in the case where the hypersurface is minimal, the proof of this theorem uses a result characterizing the hyperspheres of Sn+1 among all complete hypersurfaces of Sn+1 in terms of their Gauss images (Theorem 2.1).
174

Índice e estabilidade de hipersuperfícies mínimas e de curvatura média constante na esfera / Index and Stability of Minimal and Constant Mean Curvature Hypersurfaces in the Sphere

Raimundo Alves Leitão Junior 11 July 2009 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Conselho Nacional de Desenvolvimento Científico e Tecnológico / Neste trabalho estudaremos o índice de hipersuperfícies mínimas e de curvatura média constante imersas na esfera Euclidiana Sn+1. Mais precisamente, definiremos o operador de Jacobi de hipersuperfícies mínimas e de curvatura média constante usando as fórmulas de variação de área, e em seguida estabeleceremos estimativas por baixo para o índice de hipersuperfícies mínimas imersas em Sn+1. Além disso, caracterizaremos os toros de Clifford mínimos como as hipersuperfícies compactas, orientáveis e mínimas em Sn+1 tais que a = -2n, onde a é o primeiro autovalor do operador de Jacobi. Mostraremos que as esferas totalmente umbílicas Sn(r) em Sn+1, com 0 < r < 1, são as hipersuperfícies fracamente estáveis em Sn+1. Por último, estabeleceremos estimativas por baixo para o índice fraco de hipersuperfícies de curvatura média constante em Sn+1 e caracterizaremos os toros de Clifford Sk(r) x Sn-k(√(1 - r²)) de curvatura média constante como as hipersuperfícies de curvatura média constante tais que o índice fraco é igual a n + 2, onde √(k/(n + 2)) ≤ r ≤ √((k + 2)/(n + 2)). / The aim of this work is to study the index of compact minimal or constant mean curvature hypersurfaces immersed in the Euclidean unit sphere Sn+1. The main ingredient is the Jacobi operator, which appears in the second variation formula of the area. In the minimal case we present lower bounds for the index and show that the minimal Clifford tori are the only minimal hypersurfaces for which a = -2n, where a stands for the first eigenvalue of the Jacobi operator. Moreover, the totally umbilical spheres Sn(r) in Sn+1, with 0 < r < 1, are the weakly stable hypersurfaces. Finally, we show that the weak index is greater than or equal to n + 2 for compact constant mean curvature hypersurfaces of Sn+1, provided they have constant scalar curvature, and that the Clifford tori Sk(r) x Sn-k(√(1 - r²)) attain this index provided √(k/(n + 2)) ≤ r ≤ √((k + 2)/(n + 2)).
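For reference, the Jacobi operator mentioned in the abstract has the following standard form for a hypersurface of the unit sphere; this is the usual textbook expression arising from the second variation of area, not a formula quoted from the thesis.

```latex
% Jacobi (stability) operator of a hypersurface M^n immersed in S^{n+1},
% acting on normal variation functions f; |A|^2 is the squared norm of the
% second fundamental form, and Ric(N,N) = n on the unit sphere:
\[
  J f \;=\; \Delta f + \bigl(\lvert A\rvert^{2} + n\bigr) f .
\]
% The index counts the negative eigenvalues of -J; the weak index of a constant
% mean curvature hypersurface counts them only over mean-zero variations.
```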
175

Folheações por hipersuperfícies de curvatura média constante / Foliations by hypersurfaces with constant mean curvature

Samuel Barbosa Feitosa 03 September 2009 (has links)
O presente trabalho apresenta resultados objetivando classificar folheações de codimensão 1 em variedades Riemannianas cujas folhas têm curvatura média constante. O principal resultado é o teorema de Barbosa-Kenmotsu-Oshikiri ([3]). Teorema: Seja M uma variedade Riemanniana compacta com curvatura de Ricci não negativa e F uma folheação de codimensão 1 e classe C3 de M, transversalmente orientável, cujas folhas têm curvatura média constante. Então, qualquer folha de F é uma subvariedade totalmente geodésica de M. Além disso, M é localmente um produto Riemanniano de uma folha de F e uma curva normal, e a curvatura de Ricci na direção normal às folhas é zero. O resultado anterior não pode ser estendido para o caso onde M é não compacta. Uma folheação contra-exemplo pode ser construída a partir de uma função f que não satisfaz a conjectura de Bernstein. No final, são apresentados resultados recentes sobre os problemas abordados e uma prova da desigualdade de Heinz-Chern. / This work presents results aimed at classifying codimension-one foliations of Riemannian manifolds whose leaves have constant mean curvature. The main result is the theorem of Barbosa-Kenmotsu-Oshikiri ([3]). Theorem: Let M be a compact Riemannian manifold with nonnegative Ricci curvature and F a transversely orientable codimension-one C3 foliation of M whose leaves have constant mean curvature. Then any leaf of F is a totally geodesic submanifold of M. Furthermore, M is locally a Riemannian product of a leaf of F and a normal curve, and the Ricci curvature in the direction normal to the leaves is zero. This result cannot be extended to the case where M is not compact: a counterexample foliation can be built from a function f that does not satisfy the Bernstein conjecture. Finally, recent results on these problems are presented, together with a proof of the Heinz-Chern inequality.
176

Sustainable Throughput – QoE Perspective

Darisipudi, Veeravenkata Naga S Maniteja January 2017 (has links)
In recent years, there has been a significant increase in the demand for streaming high quality videos on smart mobile phones. In order to meet user quality requirements, it is important to maintain the end-user quality while taking resource consumption into consideration. This demand has led research communities and network providers to prioritize Quality of Experience (QoE) in addition to Quality of Service (QoS). Meeting users' expectations has made QoE studies highly important, creating the challenge of evaluating QoE in a way that takes quality, cost and energy consumption into account. This gave rise to the concept of QoE-aware sustainable throughput, which denotes the maximal throughput at which QoE problems can still be kept at a desired level. The aim of this thesis is to determine sustainable throughput values from the QoE perspective. The values are observed for different delay and packet loss values in wireless and mobile scenarios. The evaluation uses subjective video quality assessment with the ITU-T recommended Absolute Category Rating (ACR): video quality ratings are collected from the users and then averaged to obtain the Mean Opinion Score (MOS). The obtained scores are analyzed to determine sustainable throughput values from the users' perspective. The results show that, for all video test cases, the videos are rated as having better quality at low packet loss and low delay values. Video quality in the presence of delay is rated higher than in the case of packet loss. High resolution videos were observed to be more fragile under stronger disturbances, i.e., high packet loss and larger delays. Considering all cases, the QoE degradation due to delivery issues is at an acceptable minimum for the 360p video; hence, the 480x360 resolution is the threshold for sustaining video quality.
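A minimal sketch of the MOS computation and of reading off a QoE-aware sustainable throughput as described above; all ratings, throughput values and the acceptability threshold of 3.5 are hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical ACR scores (1-5) from individual viewers for each offered
# throughput; at higher throughputs the network can no longer deliver the
# stream cleanly, so perceived quality drops.
ratings = {                # throughput (kbit/s) -> ACR ratings
    400:  [4, 5, 4, 4, 5],
    800:  [4, 4, 3, 4, 4],
    1200: [3, 2, 3, 3, 2],
}
threshold = 3.5            # assumed minimum acceptable MOS

mos = {tp: float(np.mean(r)) for tp, r in ratings.items()}   # MOS = mean of ACR ratings
sustainable = max((tp for tp, m in mos.items() if m >= threshold), default=None)
print("MOS per throughput:", mos)
print("QoE-aware sustainable throughput (kbit/s):", sustainable)
```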
177

PERFORMANCE EVALUATION FOR DECISION-FEEDBACK EQUALIZER WITH PARAMETER SELECTION ON UNDERWATER ACOUSTIC COMMUNICATION

Nassr, Husam, Kosbar, Kurt 10 1900 (has links)
This paper investigates the effect of parameter selection for decision feedback equalization (DFE) on communication performance over a dispersive underwater acoustic wireless channel (UAWC). A DFE based on the minimum mean-square error criterion (MMSE-DFE) has been employed in the implementation for evaluation purposes. The output of the MMSE-DFE is fed to the decoder to estimate the transmitted bit sequence. The main goal of this experimental simulation is to determine the best parameter selection, such that the computational load is reduced without degrading system performance; in particular, complexity can be reduced by selecting an equalizer of appropriate length. System performance is tested for BPSK, QPSK, 8PSK and 16QAM modulation, and the system is simulated over Proakis channel A and a real underwater acoustic channel estimated during the SPACE08 measurements to verify the selection.
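For illustration only, a toy decision-feedback equalizer for BPSK over a short ISI channel, adapted with LMS (which converges toward the MMSE solution); the channel taps, filter lengths and step size below are hypothetical and are not the Proakis channel A or SPACE08 channels used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ISI channel (NOT Proakis channel A) and illustrative parameters.
h = np.array([0.9, 0.4, 0.2])      # channel impulse response
n_ff, n_fb = 8, 3                  # feedforward / feedback lengths ("parameter selection")
mu, delay = 0.01, 1                # LMS step size, decision delay
n_sym, n_train = 20000, 2000       # total symbols, symbols ignored while adapting

bits = rng.integers(0, 2, n_sym)
x = 2.0 * bits - 1.0                           # BPSK symbols
r = np.convolve(x, h)[:n_sym] + 0.05 * rng.standard_normal(n_sym)

w_ff, w_fb = np.zeros(n_ff), np.zeros(n_fb)
past = np.zeros(n_fb)                          # previously decided symbols
errs, counted = 0, 0
for k in range(n_ff, n_sym):
    u = r[k - n_ff + 1:k + 1][::-1]            # most recent n_ff received samples
    y = w_ff @ u - w_fb @ past                 # DFE output
    d = x[k - delay]                           # training symbol (decision-directed in practice)
    e = d - y
    w_ff += mu * e * u                         # LMS updates drive the taps toward the MMSE solution
    w_fb -= mu * e * past
    if k >= n_train:
        errs += (1.0 if y >= 0 else -1.0) != d
        counted += 1
    past = np.roll(past, 1)
    past[0] = d
print("symbol error rate after training:", errs / counted)
```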
178

Integrity of offshore structures

Adedipe, Oyewole January 2015 (has links)
Corrosion and fatigue have been dominant degradation mechanisms in offshore structures, and their combination, known as corrosion fatigue, has amplified effects on structures in harsh marine environments. Newer types of structure are now being developed for use in highly dynamic, harsh marine environments, particularly for renewable energy applications. However, they have significantly different structural details and design requirements compared to oil and gas structures, due to the magnitude and frequency of the operational and environmental loadings acting on the support structures. Therefore, the extent of corrosion-assisted fatigue crack growth in these structures needs to be better understood. In this research, fatigue crack growth in S355J2+N steel used for offshore wind monopile fabrications was investigated in air and under free corrosion conditions. Tests were conducted on parent, HAZ and weld materials at cyclic load frequencies similar to those experienced by offshore wind monopile support structures. The seawater used for testing was prepared according to ASTM D1141 specifications and was circulated past the specimens through a purpose-designed and built corrosion rig at a rate of 3 l/min, at a temperature of 8-10 °C and at a pH of 7.78-8.1. A new crack propagation method accompanied by constant amplitude loading was used. Crack growth rates in parent, HAZ and weld materials were significantly accelerated under free corrosion conditions, at all the stress ratios used, compared to the air environment. However, under free corrosion conditions, crack growth rates in the parent, HAZ and weld materials were similar, particularly at a lower stress ratio. The results are explained with respect to the interaction of the loading condition, the environment and the rate of material removal by corrosion in the weldments. A new model was developed to account for mean stress effects on crack growth rates in air and in seawater, and it was found to correlate well with experimental data as well as with the other mean stress models tested.
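A hedged sketch of how mean stress (stress ratio R) effects can be folded into a Paris-type crack growth law, here using the classical Walker correction purely as an illustration; the thesis's own model and the measured S355J2+N constants are not reproduced, and every value below is a hypothetical placeholder.

```python
import numpy as np

# Paris law with Walker's mean-stress correction:
#   da/dN = C * (dK_eff)**m,   dK_eff = dK / (1 - R)**(1 - gamma)
C, m, gamma = 3e-12, 3.0, 0.5     # da/dN in m/cycle, dK in MPa*sqrt(m) (illustrative)
Y, d_sigma, R = 1.12, 150.0, 0.1  # geometry factor, stress range (MPa), stress ratio

a, a_final = 1e-3, 20e-3          # crack grown from 1 mm to 20 mm
cycles, block = 0, 1000           # integrate in blocks of 1000 cycles
while a < a_final:
    dK = Y * d_sigma * np.sqrt(np.pi * a)          # stress intensity factor range
    dK_eff = dK / (1.0 - R) ** (1.0 - gamma)       # Walker mean-stress correction
    a += C * dK_eff ** m * block                   # crack extension over the block
    cycles += block
print(f"estimated life: {cycles:,} cycles")
```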
179

Dérivation des équations de Schrödinger non linéaires par une méthode des caractéristiques en dimension infinie / Derivation of the nonlinear Schrödinger equations by the characteristics method in an infinite-dimensional space

Liard, Quentin 08 December 2015 (has links)
Dans cette thèse, nous aborderons l'approximation de champ moyen pour des particules bosoniques. Pour un certain nombre d'états quantiques, la dérivation de la limite de champ moyen est connue, et il semble naturel d'étendre ces travaux à un cadre général d'états quantiques quelconques. L'approximation de champ moyen consiste à remplacer le problème à N corps quantique par un problème non linéaire, dit de Hartree, quand le nombre de particules est grand. Nous prouverons un résultat général pour un système de particules, confinées ou non, interagissant au travers d'un potentiel singulier. La méthode utilisée repose sur les mesures de Wigner. Notre contribution consiste en l'extension de la méthode des caractéristiques au cadre de champ de vitesse singulier associé à l'équation de Hartree. Cela complète les travaux d'Ammari et Nier et permet de prouver des résultats pour des potentiels critiques pour les équations de Hartree. En particulier, on s'intéressera à un système de bosons interagissant au travers d'un potentiel à plusieurs corps et nous démontrerons l'approximation de champ moyen sous une hypothèse de compacité forte sur ce dernier. Les résultats s'appuient en grande partie sur la flexibilité des mesures de Wigner, ce qui permet également de proposer une preuve alternative à l'approximation de champ moyen dans un cadre variationnel. / In this thesis, we justify the mean-field approximation in a general framework for bosonic systems. The derivation of the mean-field dynamics is known for some specific quantum states, so it is natural to extend these results to a general family of normal states. The mean-field approximation for bosons consists in replacing the many-body quantum problem by a nonlinear one, the so-called Hartree problem, when the number of particles tends to infinity. We establish a general result for bosons, confined or not, interacting through a singular potential. The method used is based on Wigner measures. Our contribution consists in extending the characteristics method to the case where the velocity field associated with the Hartree equation is subcritical or critical. It complements the work of Ammari and Nier and provides results for critical potentials for the Hartree equation. We also focus on bosonic systems interacting through a multi-body potential and prove the mean-field approximation under a strong compactness assumption on this potential. All these results essentially rely on the flexibility of Wigner measures, which also allows us to give an alternative proof of the mean-field approximation in a variational framework.
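For context, the Hartree equation referred to above has the standard form below (sign and normalization conventions vary between references; this is the generic textbook expression, not one quoted from the thesis).

```latex
% Mean-field (Hartree) dynamics replacing the N-body bosonic problem as N -> infinity;
% V is the (possibly singular) pair interaction potential and * denotes convolution:
\[
  i\,\partial_t u_t \;=\; -\Delta u_t + \bigl(V * \lvert u_t\rvert^{2}\bigr)\, u_t .
\]
```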
180

Robust Estimation of Mean Arterial Pressure in Atrial Fibrillation Using Oscillometry

Tannous, Milad January 2014 (has links)
Blood pressure measurement has been and continues to be one of the most important measurements in clinical practice, and yet it remains one of the most inaccurately performed. The use of oscillometric blood pressure monitors has become common in hospitals, clinics and even homes. Typically, these monitors assume that the heart rate remains stable, which is contrary to what happens in atrial fibrillation. In this thesis, a new method that provides a more precise estimate of Mean Arterial Pressure (MAP) is proposed using a non-invasive oscillometric blood pressure monitor. The proposed method is based on calculating the ratio of peak amplitude to trough amplitude for every pulse, and then identifying where this ratio first reaches a value of 2. The performance of the proposed method is assessed by comparing the accuracy and variability of the readings against reference monitors, first in healthy subjects and then in atrial fibrillation patients. In both healthy subjects and atrial fibrillation patients, the proposed method achieved an accuracy well within the ANSI/AAMI SP10 protocol requirements of the reference monitors. The presence of atrial fibrillation diminished the performance of the reference monitor by increasing the variability of the reference readings. The proposed algorithm, on the other hand, performed better, achieving substantially lower variability in the readings than the reference device.
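A simplified sketch of the pulse-ratio rule described above; pulse detection and signal conditioning are omitted, and all numeric values are hypothetical. This illustrates the stated rule, not the thesis implementation.

```python
import numpy as np

def estimate_map(cuff_pressure, pulse_peaks, pulse_troughs):
    """Return the cuff pressure at the first pulse whose peak/trough ratio reaches 2.

    cuff_pressure : deflating cuff pressure at each detected pulse (mmHg)
    pulse_peaks   : oscillometric pulse peak amplitudes
    pulse_troughs : oscillometric pulse trough amplitudes
    The arrays are assumed to contain per-pulse values already extracted
    from the cuff signal; that extraction step is not shown here.
    """
    ratios = np.asarray(pulse_peaks, float) / np.asarray(pulse_troughs, float)
    idx = np.argmax(ratios >= 2.0)       # first index where the ratio reaches 2
    if ratios[idx] < 2.0:                # argmax returns 0 when no element qualifies
        return None
    return cuff_pressure[idx]

# Hypothetical per-pulse data for illustration only.
cuff = np.array([140, 130, 120, 110, 100, 90, 80])
peaks = np.array([0.4, 0.7, 1.1, 1.6, 1.9, 1.4, 0.9])
troughs = np.array([0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7])
print("MAP estimate (mmHg):", estimate_map(cuff, peaks, troughs))
```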
