91

Adhesives for Load-Bearing Timber-Glass Elements: Elastic, plastic and time dependent properties

Phung, Kent, Chu, Charles January 2013 (has links)
This thesis work is part of an on-going project on load-bearing timber-glass composites within the EU programme WoodWisdom-Net. One major scope of that project is the adhesive material between the glass and timber parts. The underlying importance of the bonding material is related to the transfer of stress between the two materials – the influence of the adhesive stiffness and ductility on the possibility of obtaining uniform stress distributions. In this study the mechanical properties of two different adhesives are investigated, an epoxy (3M DP490) and an acrylate (SikaFast 5215). The adhesives differ in stiffness, strength and viscous behaviour. In long-term load-carrying design it is important to understand the material's behaviour under a constant load, since a permanent displacement within the structure can have major consequences. Therefore the main aim of this project is to identify the adhesives' strength, deformation capacity and possible viscous (time-dependent) effects. Because of limitations of equipment and time, this study is restricted to three different types of tensile tests: monotonic, cyclic and relaxation tests. The results of the experiments show that 3M DP490 has a higher strength and a smaller deformation capacity compared to SikaFast 5215; thus, SikaFast 5215 is more ductile. 3M DP490 exhibits a lower loss of strength under constant strain (in relaxation). SikaFast 5215 also showed a strong dependency of the stress loss in relaxation on the strain level.
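The time-dependent behaviour probed by the relaxation tests is commonly idealized with a standard linear solid (single-branch generalized Maxwell) model, in which the stress under constant strain decays exponentially toward a long-term plateau. The sketch below is purely illustrative; the retained-stress ratios and relaxation time are hypothetical placeholders, not measured values from the thesis:

```python
import math

def relaxation_stress(t, sigma0, r_inf, tau):
    """Standard-linear-solid relaxation: stress under constant strain
    decays from sigma0 toward r_inf * sigma0 with characteristic time tau."""
    return sigma0 * (r_inf + (1.0 - r_inf) * math.exp(-t / tau))

# Hypothetical parameters: a stiff epoxy-like adhesive retaining 80% of its
# stress at long times, and a more viscous acrylate-like adhesive retaining 40%.
epoxy = [relaxation_stress(t, sigma0=10.0, r_inf=0.8, tau=60.0) for t in (0, 60, 600)]
acrylate = [relaxation_stress(t, sigma0=5.0, r_inf=0.4, tau=60.0) for t in (0, 60, 600)]
```

Comparing the normalized curves of the two materials reproduces the qualitative finding above: the more viscous adhesive loses a larger fraction of its stress in relaxation.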
92

A influência das ações repetidas na aderência aço-concreto / The influence of repeated loads on the steel-concrete bond

Rejane Martins Fernandes 25 April 2000 (has links)
Este trabalho descreve o comportamento da aderência do concreto armado sob ações monotônicas e repetidas através de uma revisão bibliográfica e de ensaios de arrancamento padronizados. A influência de alguns parâmetros foi analisada, como diâmetro da armadura, tipo e amplitude de carregamento. Os resultados dos ensaios monotônicos foram comparados com as recomendações do CEB-FIP MC 1990, EUROCODE 2 e NB-1/78. Também foi realizada a análise numérica da aderência monotônica por meio de elementos finitos. Considerou-se a barra lisa, elementos de contato entre o aço e concreto e comportamento elástico-linear dos materiais; pois a ruína experimental da ligação ocorreu pelo corte do concreto entre as nervuras. A resistência monotônica da ligação ficou compreendida entre condições boas e ruins de aderência. Os resultados calculados de acordo com as normas foram muito diferentes em relação aos valores experimentais, e apresentaram uma dispersão muito grande. A força repetida ocasionou a perda de aderência pelo crescimento progressivo dos deslizamentos. Os modelos numéricos não representaram o comportamento experimental, devido à resposta força-deslizamento não-linear. / This research describes the bond behaviour of reinforced concrete under monotonic and repeated loading through a literature review and standard pull-out tests. The influence of several parameters was analysed, such as bar diameter and the type and amplitude of loading. The monotonic test results were compared with the recommendations of CEB-FIP MC 1990, EUROCODE 2 and NB-1/78. A numerical analysis of monotonic bond was also performed using finite elements. The model considered a smooth bar, contact elements between steel and concrete, and linear-elastic material behaviour, since the experimental failure of the bond occurred by shearing of the concrete between the ribs. The monotonic bond resistance fell between good and poor bond conditions. The results calculated according to the codes differed considerably from the experimental values and showed large scatter. The repeated loading caused bond degradation through a progressive increase of slip. The numerical models did not reproduce the experimental behaviour because of the non-linear load-slip response.
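The CEB-FIP MC 1990 recommendations that the pull-out results are compared against define a piecewise local bond stress-slip law: a power-curve ascending branch, a plateau, linear softening, and a residual friction branch. Below is a sketch using the commonly cited parameter values for ribbed bars in confined concrete with good bond conditions; the values are quoted from memory of MC90, so treat them as indicative rather than authoritative:

```python
def bond_stress(s, tau_max, s1=1.0, s2=3.0, s3=10.0, tau_f_ratio=0.4, alpha=0.4):
    """MC90-style local bond stress-slip law (s in mm, stresses in MPa):
    ascending power curve, plateau, linear softening, residual friction."""
    tau_f = tau_f_ratio * tau_max
    if s <= s1:
        return tau_max * (s / s1) ** alpha   # ascending branch
    if s <= s2:
        return tau_max                        # plateau
    if s <= s3:
        # linear descent from tau_max at s2 to tau_f at s3
        return tau_max - (tau_max - tau_f) * (s - s2) / (s3 - s2)
    return tau_f                              # residual friction

fck = 25.0                    # MPa, illustrative concrete strength
tau_max = 2.5 * fck ** 0.5    # peak bond stress, confined / good bond
curve = [bond_stress(s, tau_max) for s in (0.0, 0.5, 1.0, 2.0, 5.0, 12.0)]
```

The curve rises steeply (exponent 0.4), holds a plateau between s1 and s2, then softens to a residual plateau, which is the shape a pull-out failure by rib shearing follows.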
93

Os efeitos da desigualdade de renda sobre o crescimento econômico dos países da América Latina no período de 1970 a 2010

GOMES, Thiago Geovane Pereira 29 February 2016 (has links)
A discussão sobre os efeitos e os mecanismos/canais da desigualdade de renda sobre o crescimento econômico ganhou maior notoriedade a partir da década de 1990 com a adoção de modelos de crescimento endógeno. A principal preocupação encontra-se em responder o porquê alguns países crescem mais do que outros e o papel do capital humano ao longo desse processo. Um caso de estudo do binômio desigualdade-crescimento interessante de ser tratado é o da América Latina logo após a Segunda Guerra Mundial. Portanto, essa pesquisa tem o propósito de investigar os efeitos da desigualdade de renda sobre o crescimento econômico de países selecionados da América Latina entre 1970 e 2010. É exibido um modelo teórico com uma trajetória de ajustamento não-monotônica da produção que conduz à um modelo linear que representa a relação desigualdade-crescimento. A estratégia empírica é dividida em duas partes: a) uso dos estimadores de efeitos fixos e aleatórios; b) aplicação de um modelo dinâmico auto regressivo de defasagem distribuída para um painel cointegrado. Os resultados encontrados inferem uma relação negativa e estatisticamente significativa entre a desigualdade e o crescimento para os países da América Latina.
Estes resultados corroboram com a regularidade empírica, onde afirma-se que, a desigualdade de renda apresenta efeitos negativos sobre o crescimento econômico dos países em desenvolvimento. / The discussion about the effects and mechanisms/channels of income inequality on economic growth gained prominence from the 1990s onward with the adoption of endogenous growth models. The main concern is to answer why some countries grow more than others, and the role of human capital throughout this process. An interesting case study of the inequality-growth relationship is Latin America after World War II. Therefore, this research aims to investigate the effects of income inequality on the economic growth of selected Latin American countries between 1970 and 2010. A theoretical model with a non-monotonic adjustment path of output is presented, leading to a linear model that represents the inequality-growth relationship. The empirical strategy is divided into two parts: a) use of fixed- and random-effects estimators; b) application of a dynamic autoregressive distributed lag model to a cointegrated panel. The results indicate a negative and statistically significant relationship between inequality and growth for the countries of Latin America. These results corroborate the empirical regularity that income inequality has a negative effect on the economic growth of developing countries.
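Part (a) of the empirical strategy, the fixed-effects estimator, can be sketched with the within transformation: demean each country's series to eliminate country fixed effects, then run pooled OLS on the demeaned data. The panel below is simulated with a hypothetical negative coefficient, purely to illustrate the mechanics:

```python
import random

random.seed(0)
n_countries, n_years = 20, 40
beta_true = -0.5  # hypothetical negative effect of inequality on growth

# Simulate a panel: unobserved country effect + inequality regressor + noise
panel = []
for _ in range(n_countries):
    alpha = random.gauss(0, 2)                         # country fixed effect
    xs = [random.gauss(50, 5) for _ in range(n_years)] # e.g. a Gini index
    ys = [alpha + beta_true * x + random.gauss(0, 1) for x in xs]
    panel.append((xs, ys))

# Within transformation: demean per country, then pooled OLS slope
num = den = 0.0
for xs, ys in panel:
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    for x, y in zip(xs, ys):
        num += (x - xbar) * (y - ybar)
        den += (x - xbar) ** 2
beta_hat = num / den
```

Because the country effects are constant within each country, demeaning removes them exactly, and the slope recovers the simulated coefficient.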
94

Using quantum optimal control to drive intramolecular vibrational redistribution and to perform quantum computing

Santos, Ludovic 28 November 2017 (has links)
Quantum optimal control theory is applied to find optimal pulses for controlling the motion of an ion and a molecule in two different applications. These optimal pulses enable the control of the dynamics of the system by driving the atom or the molecule from an initial state to desired states. The evolution equations obtained by means of quantum optimal control theory are solved iteratively using a monotonically convergent algorithm. A number of simulation parameters are varied in order to obtain the optimal pulses, including the duration of the pulses, the time step of the time grid, a penalty factor that limits the maximal intensity of the fields, and a guess pulse used to start the optimal control. The optimal pulses obtained for each application are analyzed by Fourier transform, and also by looking at the time evolution of the populations that they generate in the system. The first application is the preparation of specific vibrational states of acetylene that are usually not reachable from the ground state, and that would remain unpopulated in usual spectroscopy. Relevant state energies and transition dipole moments are extracted from the experimental literature, and especially from the global acetylene Hamiltonian, conferring an uncommon precision on the control simulation. The control starts from the ground state. The target states belong to the polyad Ns=1, Nr=5 of acetylene, which includes two vibrational dark states and one vibrational bright state. First, the simulation is performed with the Schrödinger equation and, in a second step, with the Liouville--von Neumann equation, as mixed states are prepared. Indeed, the control starts from a Boltzmann distribution of population in the rotational levels of the vibrational ground state, chosen in order to simulate an experimental condition. The distribution is truncated to limit the computational effort. 
One of the dark states appears to be a potential target for a realistic experimental investigation because the average population of the Rabi oscillation remains high and decoherence is expected to be weak. The optimal pulses obtained have a high fidelity, have a spectrum with well-resolved peak frequencies, and their experimental feasibility seems achievable within the current abilities of experimental laboratories. The second application is to propose an experimental realization of a microscopic physical device able to simulate quantum dynamics. The idea is to use the motional states of a Cd^+ ion trapped in an anharmonic potential to realize a quantum dynamics simulator of a single-particle Schrödinger equation. In this way, the motional states store the information and the optimal pulse manipulates this information to realize operations. In the present case, the simulated dynamics was the propagation of a wave packet in a harmonic potential. Starting from an initial quantum state, the pulse acts on the system to modify the motional states of the ion in such a way that the final superposition of motional states corresponds to the results of the dynamics. This simulation is performed with the Liouville--von Neumann equation and also with the Lindblad equation, as dissipation is included to test the robustness of the pulse against perturbations of the potential. The optimal pulses that are obtained have a high fidelity, which shows that the ion trap system has correctly realized the quantum dynamics simulation. The optimal pulses are valid for any initial condition as long as the potential of the simulation and the mass of the propagated wave packet are unchanged. / La théorie du contrôle optimal quantique est utilisée pour trouver des impulsions optimales permettant de contrôler la dynamique d'un atome et d'une molécule les menant d'un état initial à un état final. 
Les équations d'évolution obtenues grâce au contrôle optimal limitent l'intensité maximale de l'impulsion et sont résolues itérativement grâce à l'algorithme de Zhu--Rabitz. Le contrôle optimal est utilisé pour réaliser deux objectifs. Le premier est la préparation d'états vibrationnels de l'acétylène qui sont généralement inaccessibles par transition au départ de l'état vibrationnel fondamental. Ces états, appelés états sombres, sont les états cibles de la simulation. Ils appartiennent à la polyade Ns=1, Nr=5 de l'acétylène qui en contient deux ainsi qu'un état, dit brillant, qui lui est accessible depuis l'état fondamental. Les énergies des états du système et les moments de transitions dipolaires sont déterminés à partir d'un Hamiltonien très précis qui confère une précision inhabituelle à la simulation. Un des états sombres apparaît être un candidat potentiel pour une réalisation expérimentale car la population moyenne de cet état reste élevée après l'application de l'impulsion.Les niveaux rotationnels des états vibrationnels sont également pris en compte.Les impulsions optimales obtenues ont une fidélité élevée et leur spectre en fréquence présente des pics résolus.Le deuxième objectif est de proposer la réalisation expérimentale d'un dispositif microscopique capable de simuler une dynamique quantique. Ce travail montre qu'on peut utiliser les états de mouvement d'un ion de Cd^+ piégé dans un potentiel anharmonique pour réaliser la propagation d'un paquet d'onde dans un potentiel harmonique. Ce dispositif stocke l'information de la dynamique simulée grâce aux états de mouvements et l'impulsion optimale manipule l'information pour réaliser les propagations. En effet, démarrant d'un état quantique initial, l'impulsion agit sur le système en modifiant les états de mouvements de l'ion de telle sorte que la superposition finale des états de mouvements corresponde aux résultats de la dynamique. 
De la dissipation est incluse pour tester la robustesse de l'impulsion face à des perturbations du potentiel anharmonique. Les impulsions optimales obtenues ont une fidélité élevée ce qui montre que le système a correctement réalisé la simulation de dynamique quantique. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
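The Rabi oscillation invoked for the dark-state target can be illustrated with the textbook two-level formula: a constant-amplitude resonant field transfers the population completely at the π-pulse time, while detuning lowers the maximum transfer. This is a generic two-level sketch, not the acetylene-specific control problem solved in the thesis:

```python
import math

def excited_population(t, omega_rabi, detuning=0.0):
    """Two-level Rabi formula for a constant-amplitude drive:
    P_e(t) = (Omega^2 / Omega'^2) * sin^2(Omega' * t / 2),
    with generalized Rabi frequency Omega' = sqrt(Omega^2 + Delta^2)."""
    omega_gen = math.sqrt(omega_rabi ** 2 + detuning ** 2)
    if omega_gen == 0.0:
        return 0.0
    return (omega_rabi ** 2 / omega_gen ** 2) * math.sin(omega_gen * t / 2) ** 2

pi_pulse = excited_population(math.pi, 1.0)               # full transfer at t = pi/Omega
detuned = excited_population(math.pi, 1.0, detuning=1.0)  # reduced maximum
```

On resonance the population reaches 1 at the π-pulse time; any detuning caps the transfer below 1, which is why a high average population over the Rabi cycle marks a promising experimental target.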
95

Caracterização e proposição de métodos estimativos das propriedades monotônicas e cíclicas dos ferros fundidos nodulares / Characterization and estimative models of monotonic and cyclic properties of ductile iron

Elton Franco de Magalhães 09 March 2012 (has links)
Para o correto dimensionamento da maioria dos componentes estruturais é necessário informações sobre a resposta do material quando submetido à fadiga de alto e baixo ciclo, bem como conhecer as propriedades monotônicas (não-cíclicas) e cíclicas dos materiais. Na literatura são encontradas amplas divulgações de dados sobre diversos materiais de engenharia (Ex. SAE J1099 Technical Report on Fatigue Properties). Porém, quando se trata de ferro fundido nodular estas informações são limitadas, sendo assim, visa-se neste trabalho caracterizar as propriedades monotônicas e cíclicas destes materiais em complemento aos trabalhos já publicados na literatura e propor métodos para a estimativa destas propriedades a partir da dureza. Faz-se necessário a proposição de métodos estimativos das propriedades mecânicas destes materiais baseados na dureza devido às suas grandes variações que são inerentes ao processo de fundição. Em um mesmo componente podem existir diferentes classes de ferro fundido, que apesar de possuir a mesma composição química, podem apresentar variações nas propriedades mecânicas devido à formação de diferentes estruturas metalúrgicas que são sensíveis às taxas de resfriamento do material que variam de acordo com as características geométricas da peça que está sendo fundida, principalmente a variação da espessura. Neste estudo a determinação das relações entre as propriedades monotônicas e cíclicas dos ferros fundidos nodulares foram obtidas a partir do tratamento dos dados publicados na literatura levando-se em consideração o índice de qualidade. Foi proposto um modelo contínuo com relação à dureza para a estimativa das propriedades monotônicas, do coeficiente de resistência cíclico e do expoente de encruamento cíclico e para a estimativa das propriedades cíclicas que experimentalmente demonstraram não ter correlação com a dureza foi proposto uma forma discreta, que consistiu na recomendação de valores típicos definidas por faixas de dureza. 
/ For the correct design of most structural components it is necessary to have information about the material response under both high-cycle and low-cycle fatigue, as well as knowledge of the monotonic and cyclic material properties. Extensive data on many engineering materials can be found in the literature (e.g. SAE J1099 - Technical Report on Fatigue Properties), but for ductile iron this information is quite limited. Therefore, this work aims to characterize the monotonic and cyclic properties of this material, complementing the data already available in the literature, and to propose methods to estimate these properties from hardness. A hardness-based estimation model for the mechanical properties is relevant because of the large variations inherent to the casting process: different grades can be found in the same part even for the same chemical composition. This occurs due to the formation of different metallurgical structures, which are sensitive to the cooling rate, which in turn varies according to the geometrical characteristics of the part being cast, especially thickness variation. In this study the relations between the monotonic and cyclic properties of ductile iron and hardness were determined from literature data, taking the Quality Index into account. A continuous hardness-based model is proposed for estimating the monotonic properties, the cyclic strength coefficient and the cyclic strain-hardening exponent; for the cyclic properties that proved experimentally uncorrelated with hardness, a discrete form is proposed, consisting of recommended typical values defined by hardness ranges.
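The two estimation forms described above, continuous in hardness versus discrete by hardness range, can be sketched as follows. The continuous part uses the classic rule of thumb Su(MPa) ≈ 3.45·HB, which is established for steels; the thesis fits its own coefficients for ductile iron, and those are not reproduced here. The discrete lookup values are hypothetical placeholders standing in for the recommended typical-value tables:

```python
def estimate_ultimate_strength(hb, coeff=3.45):
    """Continuous hardness-based estimate: Su ~ coeff * HB (MPa).
    3.45 is the classic steel rule of thumb, used here only as an
    illustration of the continuous model's form."""
    return coeff * hb

def typical_cyclic_properties(hb):
    """Discrete recommendation by Brinell hardness band (hypothetical
    values for the fatigue strength exponent b)."""
    bands = [(170, {"b": -0.07}),
             (240, {"b": -0.08}),
             (float("inf"), {"b": -0.09})]
    for upper, props in bands:
        if hb < upper:
            return props

su = estimate_ultimate_strength(200)  # continuous: 3.45 * 200 = 690 MPa
```

The split mirrors the thesis' finding: properties correlated with hardness get a continuous fit, while uncorrelated cyclic properties get banded typical values.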
96

A Lightweight Defeasible Description Logic in Depth: Quantification in Rational Reasoning and Beyond

Pensel, Maximilian 02 December 2019 (has links)
Description Logics (DLs) are increasingly successful knowledge representation formalisms, useful for any application requiring implicit derivation of knowledge from explicitly known facts. A prominent example domain benefiting from these formalisms since the 1990s is the biomedical field. This area contributes a vast amount of facts and relations between low- and high-level concepts such as the constitution of cells or interactions between studied illnesses, their symptoms and remedies. DLs are well-suited for handling large formal knowledge repositories and computing inferable coherences throughout such data, relying on their well-founded first-order semantics. In particular, DLs of reduced expressivity have proven tremendously valuable for handling large ontologies due to their computational tractability. In spite of these assets and prevailing influence, classical DLs are not well-suited to adequately model some of the most intuitive forms of reasoning. The capability for defeasible reasoning is imperative for any field subject to incomplete knowledge and the motivation to complete it with typical expectations. When such default expectations receive contradicting evidence, a defeasible formalism is able to retract previously drawn, conflicting conclusions. Common examples often include human reasoning or a default characterisation of properties in biology, such as the normal arrangement of organs in the human body. Treatment of such defeasible knowledge must be aware of exceptional cases - such as a human suffering from the congenital condition situs inversus - and therefore accommodate the ability to retract defeasible conclusions in a non-monotonic fashion. Specifically tailored non-monotonic semantics have been continuously investigated for DLs in the past 30 years. A particularly promising approach is rooted in the research by Kraus, Lehmann and Magidor for preferential (propositional) logics and Rational Closure (RC). 
The biggest advantages of RC are its good behaviour with respect to formal inference postulates and the efficient computation of defeasible entailments, relying on a tractable reduction to classical reasoning in the underlying formalism. A major contribution of this work is a reorganisation of the core of this reasoning method into an abstract framework formalisation. This framework is then easily instantiated to provide the reduction method for RC in DLs as well as more advanced closure operators, such as Relevant or Lexicographic Closure. In spite of their practical aptitude, we discovered that all reduction approaches fail to provide any defeasible conclusions for elements that only occur in the relational neighbourhood of the inspected elements. More explicitly, a distinguishing advantage of DLs over propositional logic is the capability to model binary relations and describe aspects of a related concept in terms of existential and universal quantification. Previous approaches to RC (and more advanced closures) are not able to derive typical behaviour for the concepts that occur within such quantification. The main contribution of this work is to introduce stronger semantics for the lightweight DL EL_bot with the capability to infer the expected entailments, while maintaining a close relation to the reduction method. We achieve this by introducing a new kind of first-order interpretation that allocates defeasible information on its elements directly. This allows comparing the level of typicality of such interpretations in terms of the defeasible information satisfied at elements in the relational neighbourhood. A typicality preference relation then provides the means to single out those sets of models with maximal typicality. Based on this notion, we introduce two types of nested rational semantics, a sceptical and a selective variant, each capable of deriving the missing entailments under RC for arbitrarily nested quantified concepts. 
As a proof of versatility for our new semantics, we also show that the stronger Relevant Closure can be imbued with typical information in the successors of binary relations. An extensive investigation into the computational complexity of our new semantics shows that the sceptical nested variant comes at considerable additional cost, while the selective semantics stays within the complexity of classical reasoning in the underlying DL, which remains tractable in our case.
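The materialization-based reduction behind Rational Closure can be sketched propositionally: rank defaults by iterated exceptionality, where a default is exceptional with respect to a set of defaults if their materializations classically entail the negation of its antecedent. Below is a minimal sketch on the classic penguin example, with brute-force entailment over three atoms; it illustrates the ranking idea only, not the DL-specific machinery of the thesis:

```python
from itertools import product

ATOMS = ("bird", "penguin", "flies")

def entails(formulas, goal):
    """Classical entailment by truth-table enumeration over ATOMS."""
    for values in product([False, True], repeat=len(ATOMS)):
        w = dict(zip(ATOMS, values))
        if all(f(w) for f in formulas) and not goal(w):
            return False
    return True

# Defeasible KB: bird ~> flies, penguin ~> bird, penguin ~> not flies.
# Each default is (antecedent, materialization as a classical implication).
defaults = [
    (lambda w: w["bird"],    lambda w: not w["bird"] or w["flies"]),
    (lambda w: w["penguin"], lambda w: not w["penguin"] or w["bird"]),
    (lambda w: w["penguin"], lambda w: not w["penguin"] or not w["flies"]),
]

def rc_ranks(defaults):
    """Rational-closure ranking: a default is exceptional w.r.t. a set of
    defaults if their materializations entail the negation of its
    antecedent; iterate on the exceptional subset until a fixpoint."""
    ranks, current, rank = {}, list(defaults), 0
    while current:
        mats = [m for (_, m) in current]
        exceptional = [d for d in current
                       if entails(mats, lambda w, a=d[0]: not a(w))]
        if len(exceptional) == len(current):
            break  # remaining defaults are totally exceptional
        for d in current:
            if d not in exceptional:
                ranks[id(d)] = rank
        current, rank = exceptional, rank + 1
    return [ranks.get(id(d)) for d in defaults]
```

Running `rc_ranks(defaults)` ranks "birds fly" at level 0 and both penguin defaults at level 1, so the more specific penguin information overrides the generic bird default.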
97

Centrifuge Modelling and Numerical Simulation of Novel Hybrid Foundations for Offshore Wind Turbines

Li, Xinyao 07 September 2020 (has links)
No description available.
98

Closed-World Semantics for Query Answering in Temporal Description Logics

Forkel, Walter 10 February 2021 (has links)
Ontology-mediated query answering is a popular paradigm for enriching answers to user queries with background knowledge. For querying the absence of information, however, there exist only few ontology-based approaches. Moreover, these proposals conflate the closed-domain and closed-world assumptions, and therefore are not suited to deal with the anonymous objects that are common in ontological reasoning. Many real-world applications, like processing electronic health records (EHRs), also contain a temporal dimension, and require efficient reasoning algorithms. Moreover, since medical data is not recorded on a regular basis, reasoners must deal with sparse data with potentially large temporal gaps. Our contribution consists of three main parts: Firstly, we introduce a new closed-world semantics for answering conjunctive queries with negation over ontologies formulated in the description logic ELH⊥, which is based on the minimal universal model. We propose a rewriting strategy for dealing with negated query atoms, which shows that query answering is possible in polynomial time in data complexity. Secondly, we introduce a new temporal variant of ELH⊥ that features a convexity operator. We extend this minimal-world semantics for answering metric temporal conjunctive queries with negation over the logic and obtain similar rewritability and complexity results. Thirdly, apart from the theoretical results, we evaluate the minimal-world semantics in practice by selecting patients, based on their EHRs, that match given criteria.
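The closed-world treatment of negated query atoms can be illustrated over plain facts: a negated atom succeeds exactly when the corresponding fact is absent. This toy sketch covers only the closed-world base case; the minimal universal model of the thesis additionally handles ontology-implied anonymous individuals, which a bare fact set cannot express. All names below are invented for illustration:

```python
# Closed-world evaluation of a conjunctive query with one negated atom:
# "patients diagnosed with X who have NO recorded diagnosis of Y".
diagnosed = {
    ("alice", "diabetes"),
    ("alice", "hypertension"),
    ("bob", "diabetes"),
}

def query(pos_diag, neg_diag, facts):
    """Answer the CQ q(p) = Diag(p, pos) AND NOT Diag(p, neg)
    under the closed-world assumption: absence of a fact means falsity."""
    patients = {p for (p, _) in facts}
    return sorted(p for p in patients
                  if (p, pos_diag) in facts         # positive atom
                  and (p, neg_diag) not in facts)   # negated atom, closed world

matches = query("diabetes", "hypertension", diagnosed)
```

Under the open-world reading, by contrast, the absence of a hypertension record would leave the negated atom undecided, which is precisely the gap the closed-world semantics closes.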
99

Le maintien de la cohérence dans les systèmes de stockage partiellement repliqués / Ensuring consistency in partially replicated data stores

Saeida Ardekani, Masoud 16 September 2014 (has links)
Dans une première partie, nous étudions la cohérence dans les systèmes transactionnels, en nous concentrant sur le problème de réconcilier la scalabilité avec des garanties transactionnelles fortes. Nous identifions quatre propriétés critiques pour la scalabilité. Nous montrons qu'aucun des critères de cohérence forte existants n'assurent l'ensemble de ces propriétés. Nous définissons un nouveau critère, appelé Non-Monotonic Snapshot Isolation ou NMSI, qui est le premier à être compatible avec les quatre propriétés à la fois. Nous présentons aussi une mise en œuvre de NMSI, appelée Jessy, que nous comparons expérimentalement à plusieurs critères connus. Une autre contribution est un canevas permettant de comparer de façon non biaisée différents protocoles. Elle se base sur la constatation qu'une large classe de protocoles transactionnels distribués est basée sur une même structure, Deferred Update Replication (DUR). Les protocoles de cette classe ne diffèrent que par les comportements spécifiques d'un petit nombre de fonctions génériques. Nous présentons donc un canevas générique pour les protocoles DUR. La seconde partie de la thèse a pour sujet la cohérence dans les systèmes de stockage non transactionnels. C'est ainsi que nous décrivons Tuba, un stockage clef-valeur qui choisit dynamiquement ses répliques selon un objectif de niveau de cohérence fixé par l'application. Ce système reconfigure automatiquement son ensemble de répliques, tout en respectant les objectifs de cohérence fixés par l'application, afin de s'adapter aux changements dans la localisation des clients ou dans le débit des requêtes. / In the first part, we study consistency in transactional systems, focusing on reconciling scalability with strong transactional guarantees. We identify four scalability properties, and show that none of the existing strong consistency criteria ensures all four. We define a new scalable consistency criterion called Non-Monotonic Snapshot Isolation (NMSI), which is the first compatible with all four properties. We also present a practical implementation of NMSI, called Jessy, which we compare experimentally against a number of well-known criteria. We also introduce a framework for performing fair comparisons among different transactional protocols. Our insight is that a large family of distributed transactional protocols share a common structure, called Deferred Update Replication (DUR). Protocols of the DUR family differ only in the behavior of a few generic functions. We present a generic DUR framework, called G-DUR, and implement and compare several transactional protocols using it. In the second part, we focus on ensuring consistency in non-transactional data stores. We introduce Tuba, a replicated key-value store that dynamically selects replicas in order to maximize the utility delivered to read operations according to a desired consistency defined by the application. In addition, unlike current systems, it automatically reconfigures its set of replicas while respecting application-defined constraints, so that it adapts to changes in clients' locations or request rates. Compared with a statically configured system, our evaluation shows that Tuba increases the proportion of reads that return strongly consistent data by 63%.
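The common DUR structure can be sketched as: execute against a snapshot, buffer writes, then certify at commit time, with the certification test being one of the generic functions that concrete protocols specialize. Below is a minimal single-node sketch using a first-committer-wins write-write conflict check; it illustrates the DUR shape only, not Jessy or NMSI specifically:

```python
class DurStore:
    """Minimal Deferred Update Replication skeleton: transactions run
    against a snapshot, buffer their writes, and are certified at commit.
    Concrete protocols (serializability, SI, NMSI, ...) differ mainly in
    the certification test; here it is a simple write-write check."""

    def __init__(self):
        self.data = {}      # key -> committed value
        self.versions = {}  # key -> commit counter of the last writer
        self.clock = 0      # global commit counter

    def begin(self):
        # Record the snapshot point; writes are buffered inside the txn.
        return {"snapshot": self.clock, "writes": {}}

    def write(self, txn, key, value):
        txn["writes"][key] = value

    def read(self, txn, key):
        # Read-your-own-writes, else the committed state.
        return txn["writes"].get(key, self.data.get(key))

    def commit(self, txn):
        # Certification: abort if any written key was committed by a
        # transaction concurrent with (invisible to) this snapshot.
        for key in txn["writes"]:
            if self.versions.get(key, 0) > txn["snapshot"]:
                return False  # abort
        self.clock += 1
        for key, value in txn["writes"].items():
            self.data[key] = value
            self.versions[key] = self.clock
        return True

store = DurStore()
t1, t2 = store.begin(), store.begin()   # two concurrent transactions
store.write(t1, "x", 1)
store.write(t2, "x", 2)
ok1, ok2 = store.commit(t1), store.commit(t2)
```

With both transactions writing `x` from the same snapshot, the first commit succeeds and the second is aborted by certification, the behaviour a DUR protocol's generic functions decide.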
100

CMOS Design of an 8-Bit 1 MS/s Successive Approximation Register ADC

Ganguli, Ameya Vivekanand 01 June 2019 (has links) (PDF)
Rapid evolution of integrated circuit technologies has paved the way to develop smaller and more energy-efficient biomedical devices, which has put stringent requirements on data acquisition systems. These implantable devices are compact and have a very small footprint. Once implanted, these devices must rely on non-rechargeable batteries to sustain a life span of up to 10 years. Analog-to-digital converters (ADCs) are key components in these power-limited systems. Therefore, the development of ADCs with medium resolution (8-10 bits) and sampling rate (1 MHz) has been of great importance. This thesis presents an 8-bit successive approximation register (SAR) ADC incorporating an asynchronous control logic to avoid an external high-frequency clock, a dynamic comparator to improve linearity, and a differential charge-redistribution DAC with a monotonic capacitor-switching procedure to achieve better power efficiency. This ADC is developed on a 0.18um TSMC process using Cadence Integrated Circuit design tools. At a sampling rate of 1 MS/s and a supply voltage of 1.8V, this 8-bit SAR ADC achieves an effective number of bits (ENOB) of 7.39 and consumes 227.3uW of power, resulting in an energy-efficient figure of merit (FOM) of 0.338pJ/conversion-step. Measured results show that the proposed SAR ADC achieves a spurious-free dynamic range (SFDR) of 57.40dB and a signal-to-noise and distortion ratio (SNDR) of 46.27dB. Including the pad ring, the measured chip area is 0.335 sq-mm, with the ADC core taking up only 0.055 sq-mm.
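The successive-approximation principle underlying the design is a binary search over DAC codes: starting from the MSB, each trial bit is kept only if the DAC output does not exceed the sampled input. An idealized behavioural sketch, ignoring charge redistribution, settling, and comparator noise:

```python
def sar_convert(vin, vref=1.8, bits=8):
    """Idealized SAR conversion: binary-search the DAC code from MSB to
    LSB, keeping each trial bit iff the DAC voltage does not exceed vin."""
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)
        if vin >= trial * vref / (1 << bits):   # comparator decision
            code = trial
    return code

mid = sar_convert(0.9)        # half of vref  -> mid-scale code
quarter = sar_convert(0.45)   # quarter scale
```

One conversion thus needs only `bits` comparator decisions, which is why a SAR ADC with asynchronous control logic can hit 1 MS/s without an external high-frequency clock.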
