621
Surface-enhanced Raman spectroscopy for the forensic analysis of vaginal fluid. Zegarelli, Kathryn Anne, 05 November 2016
Vaginal fluid is most often found at crime scenes where a sexual assault has taken place or on clothing or other items collected from sexual assault victims or perpetrators. Because the victim is generally known in these cases, detection of vaginal fluid is not a matter of individual identification, as it might be for semen identification. Instead, linkages can be made between victim and suspect if the sexual assault was carried out digitally or with a foreign object (e.g., bottle, pool cue, cigarette, handle of a hammer or other tool, etc.). If such an object is only analyzed for DNA and the victim is identified, the suspect may claim that the victim’s DNA is present because she handled and/or is the owner of the object and not because it was used to sexually assault her; identification of vaginal fluid residue would alleviate such uncertainty. Most of the research conducted thus far regarding methods for the identification of vaginal fluid involves mRNA biomarkers and identification of various bacterial strains.1-3 However, these approaches require extensive sample preparation and laboratory analysis and have not fully explored the genomic differences among all body fluid RNAs. No existing methods of vaginal fluid identification incorporate both high specificity and rapid analysis.4 Therefore, a new rapid detection method is required. Surface-enhanced Raman spectroscopy (SERS) is an emerging technique with high sensitivity for the forensic analysis of various body fluids. This technique has the potential to improve current vaginal fluid identification techniques due to its ease-of-use, rapid analysis time, portability, and non-destructive nature.
For this experiment, all vaginal fluid samples were collected from anonymous donors by saturation of a cotton swab via vaginal insertion. Samples were analyzed on gold nanoparticle chips.4 This nanostructured metal substrate is essential for the large signal-enhancement effect of SERS and also quenches any background fluorescence that sometimes interferes with normal Raman spectroscopy measurements.5
Vaginal fluid SERS signal variation of a single sample over a six-month period was evaluated under both ambient and frozen storage conditions. Vaginal fluid samples were also taken from 10 individuals over the course of a single menstrual cycle. Four samples collected at one-week intervals were obtained from each individual and analyzed using SERS.
The SERS vaginal fluid signals showed very little variation as a function of time and storage conditions, indicating that the spectral pattern of vaginal fluid is not likely to change over time. The samples analyzed over the span of one menstrual cycle showed slight intra-donor differences; however, the overall spectral patterns remained consistent and reproducible.
When cycle spectra were compared between individuals, very little donor-to-donor variation was observed, indicating the potential for a universal vaginal fluid signature spectrum. A cross-validated partial least squares discriminant analysis (PLS-DA) model was built to classify all body fluids; vaginal fluid was identified with 95.0% sensitivity and 96.6% specificity, indicating that its spectral pattern was successfully distinguished from semen and blood. Thus, SERS has high potential for application in forensic vaginal fluid analysis.
622
Identification and analysis of the transfer function of the equivalent circuit of an eddy current measurement system. Tondo, Felipe Augusto, January 2016
This work presents the study of a generic measurement system based on the principle of eddy currents (Foucault currents). The system is modeled as an equivalent electric circuit in which R1 and L1 represent the resistance and inductance of the primary circuit, realized by an excitation coil. In the secondary, R2 and L2 represent the ohmic loss and the inductance of the sample in which the eddy currents are induced; two further parameters are M, the mutual inductance of the coupled inductors, and k, the coefficient of magnetic coupling between the primary and secondary circuits. The analysis traditionally used for this type of measurement evaluates the impedance of the secondary circuit, which represents the sample, reflected into the primary circuit. The work analyzes the mesh equations of the equivalent circuit in the frequency domain and identifies the parameters of the model. From the system identification performed on the experimental tests, it was possible to determine the inductive time constant τL of the system, which was observed to vary sharply with the equivalent impedance. Finally, estimates of R2 and L2 are obtained by combining the information from the identification with magnetic field measurements from a GMR-type sensor and with simulations in the finite element software COMSOL Multiphysics.
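The reflected-impedance analysis follows the standard coupled-coil model: with mutual inductance M = k·sqrt(L1·L2), the secondary loop appears in the primary mesh as a reflected term (ωM)²/(R2 + jωL2). A minimal sketch with illustrative component values (assumptions for the example, not the thesis's measured parameters):

```python
import math

# Illustrative values, not the thesis's identified parameters.
R1, L1 = 10.0, 1.0e-3       # excitation coil: resistance (ohm), inductance (H)
R2, L2 = 0.5, 1.0e-6        # sample loop: ohmic loss (ohm), inductance (H)
k = 0.3                     # magnetic coupling coefficient
M = k * math.sqrt(L1 * L2)  # mutual inductance of the coupled inductors

def input_impedance(f):
    """Primary-side impedance with the secondary (sample) reflected into it."""
    w = 2 * math.pi * f
    z_reflected = (w * M) ** 2 / (R2 + 1j * w * L2)
    return R1 + 1j * w * L1 + z_reflected

# Inductive time constant of the secondary loop (one reading of tau_L).
tau_L = L2 / R2

for f in (1e3, 1e4, 1e5):
    z = input_impedance(f)
    print(f"f = {f:8.0f} Hz  ->  Z = {z.real:.3f} {z.imag:+.3f}j ohm")
```

Sweeping f and watching how the real part of Z rises above R1 shows the sample loading the excitation coil, which is the effect the measurement exploits.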
623
Modeling Synchronous Systems in BIP. Sfyrla, Vasiliki, 21 June 2011
A central idea in systems engineering is that complex systems are built by assembling components. Components have different characteristics, from a large variety of viewpoints, each highlighting different dimensions of a system. A central problem is the meaningful composition of heterogeneous components to ensure their correct interoperation. A fundamental source of heterogeneity is the composition of subsystems with different execution and interaction semantics. At one extreme of the semantic spectrum are fully synchronized components, which proceed in lockstep with a global clock and interact through atomic transactions. At the other extreme are completely asynchronous components, which proceed at independent speeds and interact non-atomically. Between the two extremes a variety of intermediate models can be defined (e.g., globally asynchronous locally synchronous models). In this work, we study the combination of synchronous and asynchronous systems. To achieve this, we rely on BIP (Behavior-Interaction-Priority), a general component-based framework for rigorous design. We define an extension of BIP, called Synchronous BIP, dedicated to modeling synchronous data-flow systems. Steps are described by acyclic Petri nets equipped with data and priorities; the Petri nets model concurrent flows of computation, and the priorities enforce run-to-completion in the execution of a step. We study a class of well-triggered synchronous systems which are deadlock-free by construction and whose computation within a step is confluent. For this class, the behavior of components is modeled by modal flow graphs: acyclic graphs representing three different types of dependency between two events p and q: strong dependency (p must follow q), weak dependency (p may follow q), and conditional dependency (if both p and q occur, then p must follow q). We propose translations of LUSTRE and discrete-time MATLAB/Simulink into well-triggered synchronous systems.
The translations are modular and make explicit the data-flow connections between components and their synchronization using clocks. This allows the integration of synchronous models within heterogeneous BIP designs and enables the application of the validation and automatic implementation techniques already available for BIP. Both translations are currently implemented, and experimental results are provided. For Synchronous BIP models we achieve efficient code generation, with two methods: a sequential implementation, which produces single-loop code, and a distributed implementation, which transforms modal flow graphs into a particular class of Petri nets that can be mapped to Kahn process networks. Finally, we study the theory of latency-insensitive design (LID), which deals with interconnect latencies within synchronous systems. Using LID, synchronous systems can be "desynchronized" into networks of synchronous processes that may run at increased frequency. We propose a model for LID design in Synchronous BIP, representing the specific LID interconnect mechanisms as Synchronous BIP components.
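As a small illustration of the three dependency kinds between events p and q, the sketch below checks whether one execution step (an event sequence) respects a set of dependencies. The event names and the exact reading of each dependency kind are illustrative assumptions, not the thesis's formal definitions:

```python
# One plausible reading of the dependency kinds between events p and q:
#   strong:      if q occurs, p must also occur, and after q
#   weak:        p is optional, but may only occur after q
#   conditional: only when both occur must p come after q
deps = [
    ("compute", "read", "strong"),
    ("log", "compute", "weak"),
    ("flush", "log", "conditional"),
]

def step_is_valid(trace, deps):
    """Check one step (a finite event sequence) against a list of dependencies."""
    pos = {e: i for i, e in enumerate(trace)}
    for p, q, kind in deps:
        if kind == "strong":
            if q in pos and (p not in pos or pos[p] < pos[q]):
                return False
        elif kind == "weak":
            if p in pos and (q not in pos or pos[p] < pos[q]):
                return False
        elif kind == "conditional":
            if p in pos and q in pos and pos[p] < pos[q]:
                return False
    return True

print(step_is_valid(["read", "compute", "log", "flush"], deps))  # True
print(step_is_valid(["compute", "read"], deps))                  # False: strong dependency violated
```

In the actual framework these graphs are acyclic and drive code generation; this sketch only makes the ordering constraints concrete.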
624
Design, verification and implementation of systems of components. Quinton, Sophie, 21 January 2011
In this thesis, we have studied how component-based systems are designed, verified, and implemented. We have focused in particular on formalisms involving complex interactions, where connectors are not only used to transfer data but also play a role in the synchronization of components.
1. DESIGN AND VERIFICATION. Contracts are emerging as a concept of choice when systems are designed by teams working independently. They are design constraints on implementations which are maintained throughout the development and life cycle of the system, and are thus also useful for verification. Our goal is not to propose a new design framework but rather to define a minimal set of properties which a given contract theory should satisfy in order to offer certain reasoning rules. In that sense, we aim at a separation of concerns between framework-dependent properties and generic proof rules. We have sought definitions expressive enough to encompass a great variety of existing specification formalisms, in particular those in which interaction is complex, like Reo and BIP. For those, reasoning about the structure of the system is essential, which is why our contracts have a structural part. We show how so-called circular reasoning entails a rule for proving dominance (refinement between contracts) without composing contracts, and how it can be relaxed by combining several refinement relations. Our work has a practical motivation in the component frameworks HRC L0 and L1 defined in the SPEEDS IP project.
2. IMPLEMENTATION. The problem of synthesizing a distributed controller that imposes a global constraint on a system is, in general, undecidable. One can achieve decidability at the expense of reduced concurrency: we propose a method that synchronizes processes temporarily. In the work of Basu et al., distributed control is achieved by first using model checking to precompute the knowledge of each process, which reflects, in a given local state, all the possible configurations of the other processes. Then, at runtime, the local controller of a process decides whether an action of that process can be executed without violating the imposed constraint. We likewise use model checking techniques to precompute a minimal set of synchronization points at which joint knowledge, i.e., knowledge common to several processes, can be achieved during short coordination phases. After each synchronization, the participating processes can again progress independently until a further synchronization is called for. One practical motivation for this work is the distributed implementation of BIP systems.
625
Screening and deconvoluting complex mixtures of catalyst components in reaction development. Wolf, Eléna, 02 October 2015
Reaction development is a complex multidimensional problem that, in a representative scenario, often requires the unique convergence of multiple parameters for a desired reactivity. The incorrect choice of a single parameter, such as the pre-catalyst, the ligand, the solvent, or the acid/base, can completely eliminate the reactivity of the system. Thus, the process often requires extensive experimentation to obtain a lead hit. To avoid this time-consuming process, many creative screening approaches have been developed, but the large number of reactions necessary to explore the intersection of just three or four parameters is still a challenge for chemists who do not have access to high-throughput experimentation. A reaction-economic combinatorial strategy is described for lead hit identification in catalyst discovery directed towards a specific transformation. Complex mixtures of rationally chosen pre-catalysts and ligands are screened against various reaction parameters to identify lead conditions in a small number of reactions. Iterative deconvolution of the resulting hits then identifies which components contribute to the lead in situ generated catalyst. The application of this screening approach is described for the dehydrative Friedel-Crafts reaction, the ortho-C–H arylation of benzamides, the C3-indole alkylation, and the asymmetric hetero-Diels-Alder cycloaddition.
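The iterative-deconvolution idea can be illustrated with a simple pooled-screening sketch: an active pool is repeatedly halved and re-tested until a single responsible component remains, so n candidates need only about log2(n) pooled reactions instead of n individual ones. The component names and the activity oracle below are hypothetical; the actual work deconvolutes combinations of pre-catalysts and ligands forming catalysts in situ, not a single active species:

```python
def deconvolute(components, pool_is_active):
    """Iteratively halve an active pool until the responsible component is found."""
    pool = list(components)
    while len(pool) > 1:
        half = pool[:len(pool) // 2]
        # Test the first half; if inactive, the hit must lie in the other half.
        pool = half if pool_is_active(half) else pool[len(pool) // 2:]
    return pool[0]

# Hypothetical screen: 8 pre-catalyst/ligand combinations, only "Pd/L3" active.
combos = [f"Pd/L{i}" for i in range(1, 9)]
oracle = lambda pool: "Pd/L3" in pool  # stand-in for running the pooled reaction
print(deconvolute(combos, oracle))     # -> Pd/L3, in log2(8) = 3 pooled reactions
```

The halving assumes a single dominant hit per pool; with cooperating components, each half would be re-screened with the complementary components present.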
626
A study on factors which influence the choice of the number of replicates in experiments with poultry. Hilário, Reginaldo Francisco, 28 January 2014
The justification for the number of animals in experiments is an inherent concern for researchers seeking accurate results while following the recommendations of guides for the care of animals in research and teaching. Although researchers know the importance of the number of replicates, since increasing it yields more accurate estimates of the experimental error, its calculation when planning experiments still causes uncertainty. The determination of sample size is also of fundamental importance in this context, because the question arises: increase the sample size at the expense of the number of replicates, or decrease the sample size in favor of more replicates? The answer depends on the variability within the plot, on the residual variance (the variance between plots), and on the available resources. In this work, we studied the relationship between the within-plot variability and the error variance for weight data of broilers in a completely randomized experiment with different numbers of individuals per plot. This relationship was compared with the number of replicates needed to detect differences between treatment means of 5 and 50 grams at 7 and 42 days, respectively. There was large variability between individual weights within the plot, probably resulting from the difference between the weight distributions of the males and females placed in the same plot in the experiment described. The statistical test had low power for detecting differences of 5 and 50 grams at 7 and 42 days, respectively, and the number of replicates would need to be increased in similar experiments in order to detect such differences with a power of approximately 0.80 at the 5% significance level.
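The replicate-number question above is a standard power calculation. A minimal sketch using the normal approximation for a two-sided, two-sample comparison of means; the standard deviations below are illustrative assumptions, not the thesis's estimates:

```python
import math
from statistics import NormalDist

def replicates_needed(delta, sigma, power=0.80, alpha=0.05):
    """Replicates per treatment to detect a mean difference delta, given the
    between-plot standard deviation sigma (normal-approximation formula)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

# Targets from the study: 5 g at 7 days, 50 g at 42 days.
# The sigmas (8 g and 90 g) are assumed values for illustration only.
print(replicates_needed(delta=5, sigma=8))    # -> 41 replicates per treatment
print(replicates_needed(delta=50, sigma=90))  # -> 51 replicates per treatment
```

The formula makes the trade-off in the abstract concrete: the required replication grows with the square of sigma/delta, so pooling males and females in one plot (which inflates sigma) directly inflates the replicate count.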
627
Maestro: a middleware for supporting distributed applications based on software components. Ferreira, Cláudio Luís Pereira, 21 September 2001
It is the job of a middleware to organize the activities of its different component elements so as to operate in synchrony with the execution of an application. The result of this work should be transparent to whoever interacts with the system, who perceives it as a single cohesive and synchronized block orchestrated by a master agent. This is the subject of this work: the specification of a middleware and its internal components, indicating their main characteristics and functionalities as well as their operation in the execution of a distributed application. The new environments in which distributed applications are inserted were also taken into account, such as the diversity of devices managed by users, the need for constant system changes, the use of new technologies in software development, and the need for open-system definitions. For the specification of this middleware, the ISO/IEC Open Distributed Processing (ODP) reference model was used, which allows a system to be viewed from five distinct viewpoints. Finally, the system is specified using software component technology, and its use is illustrated in a commercial application.
628
History of cable-stayed bridges and their application in Brazil. Mazarim, Diego Montagnini, 28 June 2011
The structural principle of cable-stayed bridges is not as recent as the bridges themselves: in some structures, such as footbridges, boats, and tents, cables were already used as supports. With the evolution of technology and materials came the possibility of improving these techniques and applying them in several areas. Cable-stayed bridges have emerged as an effective alternative for crossing large spans, allowing the use of lighter, more slender, and more economical structures. This work presents the evolution of cable-stayed bridges in the world and in Brazil, emphasizing their historical aspects, the new technologies employed in these projects, the various possible geometries of the structure, and the construction methods employed in these bridges. For cable-stayed bridges around the world, a general analysis is presented, showing their importance throughout history and the advantages they have brought in fulfilling the needs of mankind. For Brazilian cable-stayed bridges in particular, a chronological list is presented with their main characteristics. Finally, for the most prominent Brazilian cable-stayed bridges, a more detailed analysis is made of their main characteristics: the central span, the bridge geometry, the construction process, curiosities about the project, and the construction period.
|
629 |
Study of Metal Whiskers Growth and Mitigation Technique Using Additive ManufacturingGullapalli, Vikranth 08 1900 (has links)
For years, the alloy of choice for electroplating electronic components has been tin-lead (Sn-Pb) alloy. However, legislation established in Europe on July 1, 2006, required significant reductions in the lead (Pb) content of electronic hardware due to its toxic nature. A popular alternative for coating electronic components is pure tin (Sn). However, pure tin has the tendency to spontaneously grow electrically conductive Sn whiskers during storage. A Sn whisker is usually a pure single-crystal tin filament or hair-like structure grown directly from the electroplated surface. Sn whiskers are highly conductive and can cause short circuits in electronic components, which is a very significant reliability problem. Failures caused by Sn whisker growth have been reported in very critical applications such as aircraft, spacecraft, satellites, and military weapons systems. Whiskers are also naturally very strong and are believed to grow from compressive stresses developed in the Sn coating during deposition or over time. The new directive, even though environmentally friendly, has placed all lead-free electronic devices at risk because of whisker growth in pure tin. Additionally, interest has grown in studying the nature of other metal whiskers, such as zinc (Zn) whiskers, and comparing their behavior to that of Sn whiskers. Zn whiskers can be found in the flooring of data centers; they can get inside electronic systems during equipment reorganization and movement and can likewise cause system failures. Even though metal whiskers have been a known reliability problem for several decades, to date there is no method that successfully eliminates their growth. This thesis gives further insight into the nature and behavior of Sn and Zn whisker growth, and recommends a novel manufacturing technique that has the potential to mitigate metal whisker growth and extend the life of many electronic devices.
|
630 |
Avaliação do método Wavelet-Galerkin multi-malha para caracterização das propriedades de petróleo e subprodutos. / Wavelet-Galerkin multigrid method\'s evaluation for characterization of the properties of petroleum and subproducts.Carranza Oropeza, María Verónica 22 February 2007 (has links)
Environmental constraints currently imposed on the petroleum refining industry make process optimization a necessary task. One way to achieve this is to improve the analytical methods used for characterization and the representation methods, in order to increase simulation accuracy. The most common representation method, based on pseudocomponents, has some disadvantages that prevent adequate accuracy in certain situations. This work presents a new methodology that overcomes these disadvantages, applied to a petroleum flash calculation as an example. The methodology involves several steps: implementing the algorithms needed to represent the mixture compositions by continuous distribution functions and to approximate them by wavelet functions, simplifying the flash model with a Wavelet-Galerkin discretization, and solving it through an adaptive multigrid approach. In the first part of the thesis, different aspects of the complex process of petroleum characterization are discussed, considering their economic and technological importance; the use of mathematical tools and their advantages for solving complex problems in various scientific fields are also presented. In the second part, the proposed methodology is developed, with all algorithms implemented in the MATLAB programming language. Two simulations of the flash model were then used to evaluate its accuracy and efficiency: the first was carried out without adaptive grid selection, while the second used this implementation, which allowed four cases to be built and their results analyzed.
Finally, with the objective of evaluating its potential for use in a simulation environment, these results were compared with a third simulation using the HYSYS simulator, which is based on pseudocomponent representation.
|