101

Shallow sediment transport flow computation using time-varying sediment adaptation length

Pu, Jaan H., Shao, Songdong, Huang, Y. 06 1900 (has links)
Yes / In the common approach, the adaptation length in sediment transport is estimated as temporally independent. However, this assumption may not be theoretically justified, because the process of reaching the sediment transport equilibrium stage is affected by the flow conditions over time, especially for fast sediment-moving flows such as scour-hole developing flow. In this study, the 2D shallow water formulation together with a sediment continuity-concentration (SCC) model was applied to flow with a mobile sediment boundary. A time-varying approach was proposed to determine the sediment transport adaptation length used to treat the flow sediment erosion-deposition rate. The proposed computational model was based on the finite volume (FV) method. The Monotone Upwind Scheme for Conservation Laws (MUSCL)-Hancock scheme was used with the Harten-Lax-van Leer-contact (HLLC) approximate Riemann solver to discretize the FV model. In the flow applications of this paper, a highly discontinuous dam-break fast sediment transport flow was used to calibrate the proposed time-varying sediment adaptation length model. The calibrated model was then applied to two separate experimental sediment transport flows documented in the literature, i.e. a highly concentrated sediment transport flow in a wide alluvial channel and a sediment aggradation flow. The proposed model simulations showed good agreement with the experimental data. The tests demonstrate that the proposed model, calibrated against the discontinuous dam-break bed-scouring flow, also performs well in representing rapid bed change and steady sediment mobility conditions. / The National Natural Science Foundation of China NSFC (Grant Number 20101311246), Major State Basic Research Development Program (973 program) of China (Grant Number 2013CB036402) and Open Fund of the State Key Laboratory of Hydraulics and Mountain River Engineering, Sichuan University of China (Grant Number SKLH-OF-1103).
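For context on the numerical ingredients named in this abstract, the sketch below shows an HLLC approximate Riemann flux for the 1D shallow water equations with a depth-averaged sediment concentration carried as a third component. It uses Toro's two-rarefaction wave-speed estimates and is only an illustration of the flux building block; the authors' 2D MUSCL-Hancock finite volume implementation and the time-varying adaptation length closure are not reproduced here.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def hllc_flux_swe(hL, huL, hcL, hR, huR, hcR):
    """HLLC flux for 1D shallow water with a depth-averaged concentration.
    State vector per side: (h, hu, hc). Dry states (h = 0) are not handled.
    Illustrative sketch only; wave speeds follow Toro's two-rarefaction estimate."""
    uL, uR = huL / hL, huR / hR
    cL, cR = np.sqrt(G * hL), np.sqrt(G * hR)      # gravity wave celerities

    # Two-rarefaction estimate of the star-region depth
    h_star = (0.5 * (cL + cR) + 0.25 * (uL - uR)) ** 2 / G

    def q(hK):
        # Shock correction factor for the wave-speed estimate
        if h_star > hK:
            return np.sqrt(0.5 * (h_star + hK) * h_star / hK ** 2)
        return 1.0

    sL = uL - cL * q(hL)
    sR = uR + cR * q(hR)
    # Contact (shear) wave speed
    s_star = (sL * hR * (uR - sR) - sR * hL * (uL - sL)) / (hR * (uR - sR) - hL * (uL - sL))

    def phys_flux(h, hu, hc):
        u = hu / h
        return np.array([hu, hu * u + 0.5 * G * h * h, hc * u])

    fL, fR = phys_flux(hL, huL, hcL), phys_flux(hR, huR, hcR)
    uvecL = np.array([hL, huL, hcL])
    uvecR = np.array([hR, huR, hcR])

    if sL >= 0.0:
        return fL
    if sR <= 0.0:
        return fR
    # HLL flux for depth and momentum in the star region
    f_hll = (sR * fL - sL * fR + sL * sR * (uvecR - uvecL)) / (sR - sL)
    # HLLC: the concentration component is upwinded across the contact wave
    conc = hcL / hL if s_star >= 0.0 else hcR / hR
    f_hll[2] = f_hll[0] * conc
    return f_hll
```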
102

A Language for Inconsistency-Tolerant Ontology Mapping

Sengupta, Kunal 01 September 2015 (has links)
No description available.
103

Monotonic and Cyclic Shear Response of a Needle-Punched Geosynthetic Clay Liner at High Normal Stresses

Sura, Joseph Michael 27 August 2009 (has links)
No description available.
104

Source term treatment of SWEs using surface gradient upwind method

Pu, Jaan H., Cheng, N., Tan, S.K., Shao, Songdong 16 January 2012 (has links)
No / Owing to unpredictable bed topography conditions in natural shallow flows, various numerical methods have been developed to improve the treatment of source terms in the shallow water equations. The surface gradient method is an attractive approach as it offers a numerically simple way to model flows over topographically-varied channels. To further improve the performance of this method, this study deals with the numerical improvement of the shallow-flow source terms. The so-called surface gradient upwind method (SGUM) integrates the source term treatment into the inviscid discretization scheme. A finite volume model (FVM) with the monotonic upwind scheme for conservation laws is used. The Harten–Lax–van Leer-contact approximate Riemann solver is used to reconstruct the Riemann problem in the FVM. The proposed method is validated against published analytical, numerical, and experimental data, indicating that the SGUM is robust and treats the source terms well in different flow conditions.
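To make the idea behind the surface gradient family of methods concrete, here is a minimal 1D sketch that reconstructs cell-face depths from the free-surface elevation eta = h + z rather than from the depth h, which is what keeps a still lake over an uneven bed exactly still. This is only an illustration of the underlying surface gradient idea under assumed conventions (minmod limiter, averaged face bed levels); it is not the SGUM upwind source-term treatment developed in the paper.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def surface_gradient_reconstruction(h, z, dx):
    """Reconstruct face water depths from eta = h + z instead of from h.

    h, z : 1D arrays of cell-centred water depth and bed elevation
    dx   : cell size
    Returns depths at the left and right face of each interior cell.
    Boundary cells simply fall back to their cell-centred depth.
    """
    eta = h + z
    n = len(h)
    h_left = np.array(h, dtype=float)
    h_right = np.array(h, dtype=float)
    for i in range(1, n - 1):
        slope = minmod((eta[i] - eta[i - 1]) / dx, (eta[i + 1] - eta[i]) / dx)
        z_l = 0.5 * (z[i - 1] + z[i])   # bed elevation at the left face
        z_r = 0.5 * (z[i] + z[i + 1])   # bed elevation at the right face
        h_left[i] = max(eta[i] - 0.5 * dx * slope - z_l, 0.0)
        h_right[i] = max(eta[i] + 0.5 * dx * slope - z_r, 0.0)
    return h_left, h_right
```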
105

Offline Reinforcement Learning for Downlink Link Adaption : A study on dataset and algorithm requirements for offline reinforcement learning. / Offline Reinforcement Learning för nedlänksanpassning : En studie om krav på en datauppsättning och algoritm för offline reinforcement learning

Dalman, Gabriella January 2024 (has links)
This thesis studies offline reinforcement learning as an optimization technique for downlink link adaptation, which is one of many control loops in radio access networks. The work studies the impact of the quality of pre-collected datasets, in terms of how well the data covers the state-action space and whether it is collected by an expert policy or not. The data quality is evaluated by training three different algorithms: Deep Q-networks, Critic regularized regression, and Monotonic advantage re-weighted imitation learning. The performance is measured for each combination of algorithm and dataset, and their need for hyperparameter tuning and their sample efficiency are studied. The results showed Critic regularized regression to be the most robust because it could learn well from any of the datasets used in the study and did not require extensive hyperparameter tuning. Deep Q-networks required careful hyperparameter tuning, but paired with the expert data they reached rewards as high as the agents trained with Critic regularized regression. Monotonic advantage re-weighted imitation learning needed data from an expert policy to reach a high reward. In summary, offline reinforcement learning can be applied successfully to a telecommunication use case such as downlink link adaptation. Critic regularized regression was the preferred algorithm because it performed well with all three datasets presented in the thesis. / Denna avhandling studerar offline reinforcement learning som en optimeringsteknik för nedlänks länkanpassning, vilket är en av många kontrollcyklar i radio access networks. Arbetet undersöker inverkan av kvaliteten på förinsamlade dataset, i form av hur mycket datan täcker state-action rymden och om den samlats in av en expertpolicy eller inte. Datakvaliteten utvärderas genom att träna tre olika algoritmer: Deep Q-nätverk, Critic regularized regression och Monotonic advantage re-weighted imitation learning. Prestanda mäts för varje kombination av algoritm och dataset, och deras behov av hyperparameterinställning och effektiv användning av data studeras. Resultaten visade att Critic regularized regression var mest robust, eftersom att den lyckades lära sig mycket från alla dataseten som användes i studien och inte krävde omfattande hyperparameterinställning. Deep Q-nätverk krävde noggrann hyperparameterinställning och tillsammans med expertdata lyckades den nå högst prestanda av alla agenter i studien. Monotonic advantage re-weighted imitation learning behövde data från en expertpolicy för att lyckas lära sig problemet. Det datasetet som var mest framgångsrikt var expertdatan. Sammanfattningsvis kan offline reinforcement learning vara framgångsrik inom telekommunikation, specifikt nedlänks länkanpassning. Critic regularized regression var den föredragna algoritmen för att den var stabil och kunde prestera bra med alla tre olika dataseten som presenterades i avhandlingen.
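As an illustration of the policy-extraction step in Critic regularized regression mentioned above, the sketch below computes CRR-style advantage weights for a discrete-action batch and plugs them into a weighted behavioural-cloning loss. The advantage estimator, the binary/exponential weighting modes, and the clipping value are assumptions taken from the general CRR formulation, not details of the thesis implementation.

```python
import numpy as np

def crr_policy_weights(q_values, actions, beta=1.0, mode="exp"):
    """CRR-style per-sample weights for weighted behavioural cloning.

    q_values : (batch, n_actions) critic estimates Q(s, .)
    actions  : (batch,) actions taken in the offline dataset
    """
    batch = np.arange(len(actions))
    q_taken = q_values[batch, actions]        # Q(s, a) for dataset actions
    v_est = q_values.mean(axis=1)             # simple value estimate V(s)
    advantage = q_taken - v_est
    if mode == "binary":
        # Clone only the actions the critic thinks improve on the average
        return (advantage > 0.0).astype(np.float64)
    # Exponential advantage weights, clipped for numerical stability
    return np.minimum(np.exp(advantage / beta), 20.0)

def weighted_bc_loss(log_probs, actions, weights):
    """Weighted cross-entropy: higher weight means imitate that action more.
    log_probs : (batch, n_actions) log pi(a|s) from the policy network."""
    batch = np.arange(len(actions))
    return -(weights * log_probs[batch, actions]).mean()
```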
106

Dégradation des aspérités des joints rocheux sous différentes conditions de chargement

Fathi, Ali January 2015 (has links)
Résumé: L'objectif de cette thèse est d'interpréter la dégradation des aspérités des joints rocheux sous différentes conditions de chargement. Pour cela, la variation des aspérités durant les différentes étapes du cisaillement d'un joint rocheux est observée. Selon le concept appelé “tiny windows”, une nouvelle méthodologie de caractérisation des épontes des joints a été développée. La méthodologie est basée sur les coordonnées tridimensionnelles de la surface des joints et elles sont mesurées après chaque essai. Après la reconstruction du modèle géométrique de la surface du joint, les zones en contact sont identifiées à travers la comparaison des hauteurs des “tiny windows” superposées. Ainsi, la distribution des zones de la surface en contact, endommagées et sans contact a été identifiée. La méthode d'analyse d'image a été utilisée pour vérifier les résultats de la méthodologie proposée. Les résultats indiquent que cette méthode est appropriée pour déterminer la taille et la distribution des surfaces du joint en contact et endommagées à différentes étapes du cisaillement. Un ensemble de 38 répliques ont été préparées en coulant du mortier sans retrait sur une surface de fracture obtenue à partir d'un bloc de granite. Différentes conditions de chargement, incluant des chargements statiques et cycliques ont été appliquées afin d'étudier la dégradation des aspérités à différentes étapes du procédé de cisaillement. Les propriétés géométriques des “tiny windows” en contact en phase pré-pic, pic, post-pic et résiduelle ont été analysées en fonction de leurs angles et de leurs hauteurs. Il a été remarqué que les facettes des aspérités faisant face à la direction de cisaillement jouent un rôle majeur dans le cisaillement. Aussi, il a été observé que les aspérités présentent différentes contributions dans le cisaillement. Les aspérités les plus aigües (“tiny windows” les plus inclinées) sont abîmées et les aspérités les plus plates glissent les unes sur les autres. Les aspérités d'angles intermédiaires sont définies comme “angle seuil endommagé” et “angle seuil en contact”. En augmentant la charge normale, les angles seuils diminuent d'une part et, d'autre part, le nombre de zones endommagées et en contact augmentent. Pour un petit nombre de cycles (avec faible amplitude et fréquence), indépendamment de l'amplitude, une contraction apparaît ; par conséquent, la surface en contact et les paramètres de résistance au cisaillement augmentent légèrement. Pour un grand nombre de cycles, la dégradation est observée à l'échelle des aspérités de second ordre, d'où une baisse des paramètres de résistance au cisaillement. Il a été aussi observé que les “tiny windows” avec différentes inclinaisons contribuent au processus de cisaillement, en plus des “tiny windows” les plus inclinées (aspérités plus aigües). Les résultats de la méthode proposée montrent que la différenciation entre les zones en contact et celles endommagées s'avère utile pour une meilleure compréhension du mécanisme de cisaillement des joints rocheux. / Abstract: The objective of the current research is to interpret the asperity degradation of rock joints under different loading conditions. For this aim, the changes of the asperities on the three-dimensional joint surface during the different stages of shearing are tracked. Based on a concept named 'tiny window', a new methodology for characterizing the joint surfaces was developed.
The methodology is based on the three-dimensional coordinates of the joint surface, which are captured before and after each test. After the reconstruction of geometric models of the joint surface, in-contact areas were identified by comparing the heights of face-to-face tiny windows. In this way, the distribution and size of the just-in-contact areas, in-contact damaged areas and not-in-contact areas are identified. An image analysis method was used to verify the results of the proposed method. The results indicated that the proposed method is suitable for determining the size and distribution of the contact and damaged areas at any shearing stage. A total of 38 replicas were prepared by pouring non-shrinking cement mortar on a fresh joint surface of a split granite block. Various loading conditions, including monotonic and cyclic loading, were applied to study the asperity degradation at different stages of shearing. The geometric properties of the in-contact tiny windows in the pre-peak, peak, post-peak softening and residual shearing stages were investigated based on their angle and height. It was found that the asperities facing the shear direction play the primary role in shearing. Notably, different parts of these asperities contribute to shearing in different ways: the steepest parts (steeper tiny windows) are worn, while the flatter parts (flatter tiny windows) slide over one another. The borderlines between these tiny windows are defined as the "damaged threshold angle" and the "in-contact threshold angle". As the normal load increases, both threshold angles decrease while the contact and damaged areas increase. During low numbers of cycles (with low amplitude and frequency), independent of the type of cycle, contraction occurs and, consequently, the contact area and the shear strength parameters increase slightly. During larger numbers of cycles, degradation occurs on the second-order asperities, and the shear strength parameters therefore decrease slowly. It was also observed that tiny windows with different heights participate in the shearing process, not just the highest ones. The results of the proposed method indicate that distinguishing between just-in-contact areas and damaged areas provides useful insight into the shear mechanism of rock joints.
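A minimal sketch of the kind of height comparison described above, assuming a common grid of tiny windows scanned before and after a test and purely illustrative tolerance values; the thesis derives the actual contact and damage criteria from the reconstructed surfaces.

```python
import numpy as np

def classify_tiny_windows(gap, height_before, height_after,
                          contact_tol=0.05, damage_tol=0.02):
    """Label each tiny window as not in contact (0), just in contact (1),
    or in contact and damaged (2).

    gap           : 2D array, aperture between the two facing walls per window
    height_before : 2D array, window heights scanned before the shear test
    height_after  : 2D array, window heights scanned after the shear test
    contact_tol   : gap below which facing windows are considered in contact
    damage_tol    : height loss above which a contacted window counts as damaged
    Tolerances here are placeholders for illustration only.
    """
    in_contact = gap <= contact_tol
    worn = (height_before - height_after) >= damage_tol
    labels = np.zeros(gap.shape, dtype=int)
    labels[in_contact] = 1
    labels[in_contact & worn] = 2
    return labels
```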
107

Default reasoning and neural networks

Govender, I. (Irene) 06 1900 (has links)
In this dissertation a formalisation of nonmonotonic reasoning, namely Default logic, is discussed. A proof theory is presented for Default logic and for a variant, Prioritised Default logic. We also investigate the relationship between default reasoning and making inferences in a neural network. The inference problem shifts from the logical problem in Default logic to the optimisation problem in neural networks, in which maximum consistency is aimed at. The inference is realised as an adaptation process that identifies and resolves conflicts between existing knowledge about the relevant world and external information. Knowledge and data are transformed into constraint equations, and the nodes in the network represent propositions and constraint equations. The violation of constraints is formulated in terms of an energy function. The Hopfield network is shown to be suitable for modelling optimisation problems and default reasoning. / Computer Science / M.Sc. (Computer Science)
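To make the optimisation view concrete, here is a minimal Hopfield-style sketch: constraint violations are expressed as an energy over +/-1 unit states, and asynchronous updates settle into a local energy minimum. The encoding of defaults and constraint equations into the weights is not reproduced; the weight matrix is assumed symmetric with zero diagonal so that the energy never increases.

```python
import numpy as np

def hopfield_energy(state, weights, bias):
    """Energy of a Hopfield network with +/-1 unit states.
    Constraint violations would be encoded in 'weights' and 'bias';
    minimising the energy corresponds to maximising consistency."""
    return -0.5 * state @ weights @ state - bias @ state

def hopfield_settle(state, weights, bias, sweeps=50, rng=None):
    """Asynchronous updates until the network settles in a local minimum.
    Assumes symmetric weights with zero diagonal."""
    rng = np.random.default_rng() if rng is None else rng
    state = state.copy()
    n = len(state)
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(n):
            activation = weights[i] @ state + bias[i]
            new_value = 1 if activation >= 0 else -1
            if new_value != state[i]:
                state[i] = new_value
                changed = True
        if not changed:      # fixed point reached
            break
    return state
```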
108

Monotonic and Fatigue Performance of RC Beams Strengthened with Externally Post-Tensioned CFRP Tendons

El Refai, Ahmed January 2007 (has links)
External post-tensioning is an attractive technique for strengthening reinforced concrete structures because of its ability to actively control stresses and deflections, speed of installation, minimal interruption to the existing structure, and ease of inspection under service conditions. However, external prestressing implies exposing the tendons to the environment outside the concrete section, which may lead to corrosion in steel tendons. Therefore, the interest in using fiber reinforced polymer (FRP) tendons, which are corrosion resistant, has increased. The present work investigated, experimentally and analytically, the flexural performance of reinforced concrete beams strengthened with externally post-tensioned Carbon FRP (CFRP) tendons, under monotonic and fatigue loadings. Initially, tensile fatigue tests were carried out on CFRP tendon-anchor assemblies to assess their response under repeated cyclic loads, before implementing them in the beam tests. New wedge-type anchors (Waterloo anchors) were used in gripping the CFRP specimens. The assemblies exhibited excellent fatigue performance with no premature failure occurring at the anchorage zone. The fatigue tests suggested a fatigue limit corresponding to a stress range of 10% of the tendon ultimate capacity (approximately 216 MPa). Monotonic and fatigue experiments on twenty-eight beams (152x254x3500 mm) were then undertaken. Test parameters included the tendon profile (straight and double draped), the initial loading condition of the beam prior to post-tensioning (in-service and overloading), the partial prestressing ratio (0.36 and 0.46), and the load ranges applied to the beam during the fatigue life (39% to 76% of the yield load). The CFRP tendons were post-tensioned at 40% of their ultimate capacity. The monotonic tests of the post-tensioned beams suggested that overloading the beam prior to post-tensioning increased the beam deflections and the strains developed in the steel reinforcing bars at any stage of loading. However, overloading had no significant effect on the yield load of the strengthened beam and the mode of failure at ultimate. It also had no discernible effect on the increase in the tendon stress at yielding. The maximum increase in the CFRP stress at yield load was approximately 20% of the initial post-tensioning stress, for the in-service and overloaded beams. Very good performance of the strengthened beams was observed under fatigue loading. The fatigue life of the beams was mainly governed by the fatigue fracture of the internal steel reinforcing bars at a flexural crack location. Fracture of the bars occurred at the root of a rib where high stress concentration was likely to occur. No evidence of fatigue-induced wear or stress concentration was observed at the deviation points of the CFRP tendons. The enhancement in the fatigue life of the strengthened beams was noticeable at all load ranges applied. Post-tensioning considerably decreased the stresses in the steel reinforcing bars and, consequently, increased the fatigue life of the beams. The increase in the fatigue life was slightly affected by the loading history of the beams. At the same load range applied to the beam, increasing the amount of the steel reinforcing bars for the same post-tensioning level decreased the stress range in the bars and significantly increased the fatigue life of the strengthened beams.
In the analytical study, a monotonic model that predicts the non-linear flexural response of the CFRP post-tensioned beams was developed and implemented into a computer program. The model takes into account the loading history of the strengthened beams prior to post-tensioning (in-service and overloading). Good agreement was obtained between the measured and the predicted monotonic results. A strain-life based fatigue model was proposed to predict the fatigue life of the CFRP post-tensioned beams. The model takes into consideration the stress-strain history at the stress raisers in the steel bars. It accounts for the inelastic deformation occurring at the ribs during cycling and the resulting changes in the induced local mean stresses. Good agreement between the experimental and predicted fatigue results was observed. A step-by-step fatigue design approach is proposed for the CFRP externally post-tensioned beams. General conclusions of the study and recommendations for future work are given.
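For illustration of the strain-life approach referred to above, the sketch below solves a Coffin-Manson-Basquin strain-life equation with Morrow's mean-stress correction for the number of cycles to failure. The material constants in the usage example are assumed, representative values; the thesis's own local stress-strain analysis at the rib stress raisers is not reproduced.

```python
from scipy.optimize import brentq

def strain_life_cycles(strain_amplitude, mean_stress, E, sigma_f, b, eps_f, c):
    """Solve the strain-life equation with Morrow's mean-stress correction:

        eps_a = (sigma_f' - sigma_m)/E * (2Nf)^b + eps_f' * (2Nf)^c

    Returns the number of cycles to failure Nf.  The bracket assumes the
    life lies between 1 and 1e9 reversals; outside that range brentq fails.
    """
    def residual(log_two_nf):
        two_nf = 10.0 ** log_two_nf
        return ((sigma_f - mean_stress) / E * two_nf ** b
                + eps_f * two_nf ** c
                - strain_amplitude)

    log_two_nf = brentq(residual, 0.0, 9.0)   # search 1 to 1e9 reversals
    return 0.5 * 10.0 ** log_two_nf           # cycles = reversals / 2

# Usage with assumed reinforcing-steel-like constants (MPa units):
# strain_life_cycles(strain_amplitude=2e-3, mean_stress=100.0, E=200e3,
#                    sigma_f=900.0, b=-0.09, eps_f=0.4, c=-0.55)
```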
109

Symétries locales et globales en logique propositionnelle et leurs extensions aux logiques non monotones

Nabhani, Tarek 09 December 2011 (has links)
La symétrie est par définition un concept multidisciplinaire. Il apparaît dans de nombreux domaines. En général, elle revient à une transformation qui laisse invariant un objet. Le problème de satisfaisabilité (SAT) occupe un rôle central en théorie de la complexité. Il est le problème de décision de référence de la classe NP-complet (Cook, 71). Il consiste à déterminer si une formule CNF admet ou non une valuation qui la rend vraie. Dans la première contribution de ce mémoire, nous avons introduit une nouvelle méthode complète qui élimine toutes les symétries locales pour la résolution du problème SAT en exploitant son groupe des symétries. Les résultats obtenus montrent que l'exploitation des symétries locales est meilleure que l'exploitation des symétries globales sur certaines instances SAT et que les deux types de symétries sont complémentaires, leur combinaison donne une meilleure exploitation. En deuxième contribution, nous proposons une approche d'apprentissage de clauses pour les solveurs SAT modernes en utilisant les symétries. Cette méthode n'élimine pas les modèles symétriques comme font les méthodes statiques d'élimination des symétries. Elle évite d'explorer des sous-espaces correspondant aux no-goods symétriques de l'interprétation partielle courante. Les résultats obtenus montrent que l'utilisation de ces symétries et ce nouveau schéma d'apprentissage est profitable pour les solveurs CDCL. En Intelligence Artificielle, on inclut souvent la non-monotonie et l'incertitude dans le raisonnement sur les connaissances avec exceptions. Pour cela, en troisième et dernière contribution, nous avons étendu la notion de symétrie à des logiques non classiques (non-monotones) telles que les logiques préférentielles, les X-logiques et les logiques des défauts. Nous avons montré comment raisonner par symétrie dans ces logiques et nous avons mis en évidence l'existence de certaines symétries dans ces logiques qui n'existent pas dans les logiques classiques. / Symmetry is by definition a multidisciplinary concept. It appears in many fields. In general, it is a transformation that leaves an object invariant. The satisfiability problem (SAT) plays a central role in complexity theory. It was the first decision problem shown to be NP-complete (Cook, 71). It consists in determining whether a CNF formula admits a valuation that makes it true. First, we introduce a new complete method that eliminates all local symmetries during the resolution of a SAT problem by exploiting its symmetry group. Our experimental results show that, for some SAT instances, exploiting local symmetries is better than exploiting only global symmetries, and that the two types of symmetries are complementary. As a second contribution, we propose a new symmetry-based approach to conflict-driven clause learning (CDCL). This method does not eliminate the symmetric models as static symmetry elimination methods do. It avoids exploring sub-spaces corresponding to symmetric no-goods of the current partial interpretation. Our experimental results show that using symmetries in clause learning is advantageous for CDCL solvers. In artificial intelligence, non-monotonicity and uncertainty are usually involved when reasoning about knowledge with exceptions. Finally, we extended the concept of symmetry to non-classical (non-monotonic) logics, namely preferential logics, X-logics and default logics. We showed how to reason by symmetry in these logics and demonstrated the existence of symmetries in these non-classical logics that do not exist in classical logics.
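A small sketch of the idea behind symmetry-based clause learning described above: if sigma is a symmetry of the formula and C is a learned clause (no-good), then sigma(C) is also entailed and can be added or used to prune the search. Representing symmetries as plain variable permutations is an assumption for illustration; general syntactic symmetries may also flip literal signs.

```python
def apply_symmetry(clause, permutation):
    """Map a DIMACS-style clause (list of non-zero ints) to its image
    under a variable permutation, preserving literal signs."""
    return [permutation[abs(lit)] * (1 if lit > 0 else -1) for lit in clause]

# Example: sigma swaps variables 1 and 2 and fixes 3.
sigma = {1: 2, 2: 1, 3: 3}
learned = [-1, 3]                              # learned no-good: (not x1 or x3)
symmetric = apply_symmetry(learned, sigma)     # -> [-2, 3], also entailed
```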
110

Schedulability Tests for Real-Time Uni- and Multiprocessor Systems / Planbarkeitstests für Ein- und Mehrprozessor-Echtzeitsysteme unter besonderer Berücksichtigung des partitionierten Ansatzes

Müller, Dirk 07 April 2014 (has links) (PDF)
This work makes significant contributions in the field of sufficient schedulability tests for rate-monotonic scheduling (RMS) and their application to partitioned RMS. The goal is to maximize the achievable utilization in the worst or average case for a given number of processors. This scenario is more realistic than the dual case of minimizing the number of necessary processors for a given task set, since the hardware is normally fixed. Sufficient schedulability tests are useful for quick estimates of task set schedulability in automatic system-synthesis tools and in online scheduling, where exact schedulability tests are too slow. In particular, the approach of Accelerated Simply Periodic Task Sets (ASPTSs) and the concept of circular period similarity are cornerstones of the improvements in the success ratio of such schedulability tests. To the best of the author's knowledge, this is the first application of circular statistics in real-time scheduling. Finally, the thesis discusses the use of sharp total utilization thresholds for partitioned EDF. This enables constant-time admission control with a controlled residual risk. / Diese Arbeit liefert entscheidende Beiträge im Bereich der hinreichenden Planbarkeitstests für ratenmonotones Scheduling (RMS) und deren Anwendung auf partitioniertes RMS. Ziel ist die Maximierung der möglichen Last im Worst Case und im Average Case bei einer gegebenen Zahl von Prozessoren. Dieses Szenario ist realistischer als der duale Fall der Minimierung der Anzahl der notwendigen Prozessoren für eine gegebene Taskmenge, da die Hardware normalerweise fixiert ist. Hinreichende Planbarkeitstests sind für schnelle Schätzungen der Planbarkeit von Taskmengen in automatischen Werkzeugen zur Systemsynthese und im Online-Scheduling sinnvoll, wo exakte Einplanungstests zu langsam sind. Insbesondere der Ansatz der beschleunigten einfach-periodischen Taskmengen und das Konzept der zirkulären Periodenähnlichkeit sind Eckpfeiler für Verbesserungen in der Erfolgsrate solcher Einplanungstests. Nach bestem Wissen ist das die erste Anwendung zirkulärer Statistik im Echtzeit-Scheduling. Schließlich diskutiert die Arbeit plötzliche Phasenübergänge der Gesamtlast für partitioniertes EDF. Eine Zugangskontrolle konstanter Zeitkomplexität mit einem kontrollierten Restrisiko wird ermöglicht.
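For context, two classical sufficient RMS schedulability tests of the kind this work builds on and improves: the Liu & Layland utilization bound and the hyperbolic bound. Neither is the ASPTS-based test developed in the thesis; they are shown only to illustrate what a constant-time sufficient test looks like.

```python
def liu_layland_test(utilizations):
    """Sufficient RMS test (Liu & Layland, 1973): n independent periodic
    tasks are schedulable if sum(U_i) <= n * (2**(1/n) - 1)."""
    n = len(utilizations)
    return sum(utilizations) <= n * (2 ** (1.0 / n) - 1)

def hyperbolic_bound_test(utilizations):
    """Tighter sufficient RMS test (hyperbolic bound):
    schedulable if prod(U_i + 1) <= 2."""
    product = 1.0
    for u in utilizations:
        product *= (u + 1.0)
    return product <= 2.0

# Example: three tasks with utilizations U_i = C_i / T_i
tasks = [0.25, 0.30, 0.20]
print(liu_layland_test(tasks), hyperbolic_bound_test(tasks))
```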
