91

Phase, Frequency, and Timing Synchronization in Fully Digital Receivers with 1-bit Quantization and Oversampling

Schlüter, Martin 16 November 2021 (has links)
With the increasing demand for faster communication systems, data rates in the terabit regime (100 Gbit/s and beyond) will soon be required, which poses new challenges for the design of analog-to-digital converters (ADCs), since high bandwidths imply high sampling rates. For sampling rates above 300 MHz, which we now reach with 5G, the ADC power consumption per conversion step scales quadratically with the sampling rate. ADCs thus become a major energy consumption bottleneck. To circumvent this problem, we consider digital receivers based on 1-bit quantization and oversampling. We motivate this concept with a brief comparison of the energy efficiency of a recently proposed system employing 1-bit quantization and oversampling against the conventional approach using high-resolution quantization and Nyquist-rate sampling. Our numerical results show that the energy efficiency can be improved significantly by employing 1-bit quantization and oversampling at the receiver, at the cost of increased bandwidth.

The main part of this work is concerned with the synchronization of fully digital receivers using 1-bit quantization and oversampling. As a first step, we derive performance bounds for phase, timing, and frequency estimation in order to gain deeper insight into the impact of 1-bit quantization and oversampling. We identify uniform phase and sample dithering as crucial to combat the nonlinear behavior introduced by 1-bit quantization. This dithering can be implemented by sampling at an irrational intermediate frequency and with an oversampling factor, with respect to the symbol rate, that is irrational, respectively. Since oversampling results in noise correlation, a closed-form expression of the likelihood function is not available. To enable an analytical treatment, we therefore study a system model with white noise by adapting the receive filter bandwidth to the sampling rate. Considering the aforementioned dithering, we obtain very tight closed-form lower bounds on the Cramér-Rao lower bound (CRLB) in the large-sample regime. We show that with uniform phase and sample dithering, all large-sample properties of the CRLB of the unquantized receiver are preserved under 1-bit quantization, except for a signal-to-noise ratio (SNR) dependent performance loss that can be decreased by oversampling. For the more realistic colored-noise case, we discuss a numerically computable upper bound on the CRLB and show that the properties of the CRLB for white noise still hold for colored noise, except that the performance loss due to 1-bit quantization is reduced.

Assuming a negligible frequency offset, we use the least-squares objective function to derive a typical digital matched-filter receiver with a data- and timing-aided phase estimator and a timing estimator based on square timing recovery. We show that both estimators are consistent under very general assumptions, e.g., arbitrary colored noise and stationary ergodic transmit symbols. Performance is evaluated via simulations and compared against the numerically computable upper bound on the CRLB. For low SNR the estimators perform well, but for high SNR they converge to an error floor. The performance loss of the phase estimator due to decision-directed operation or estimated timing information is marginal. In summary, we have derived practical solutions for the design of fully digital receivers using 1-bit quantization and oversampling and presented a mathematical analysis of the proposed receiver structure. This is an important step towards enabling energy-efficient future wireless communication systems with data rates of 100 Gbit/s and beyond.
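The SNR-dependent loss under 1-bit quantization, and the role of dithering, can be reproduced in a few lines. The following is a minimal toy sketch (our illustration, not the receiver structure derived in the thesis): a constant pilot phase is estimated from the angle of the sample mean, once from unquantized samples and once from sign-quantized I/Q samples with a known uniform phase dither; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_phase(one_bit, snr_db, n=4000, phi=0.7):
    # Toy pilot model: r[k] = exp(j*phi) + complex Gaussian noise.
    # The least-squares/ML phase estimate is the angle of the sample mean.
    sigma = 10 ** (-snr_db / 20)
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    r = np.exp(1j * phi) + sigma * noise
    if one_bit:
        # 1-bit I/Q quantization with a known uniform phase dither applied
        # before the quantizer and removed afterwards -- a stand-in for the
        # dithering effect of sampling at an irrational intermediate frequency.
        dither = 2 * np.pi * rng.random(n)
        rd = r * np.exp(1j * dither)
        r = (np.sign(rd.real) + 1j * np.sign(rd.imag)) * np.exp(-1j * dither)
    return np.angle(r.mean())

for snr in (0.0, 10.0, 20.0):
    err_full = abs(estimate_phase(False, snr) - 0.7)
    err_1bit = abs(estimate_phase(True, snr) - 0.7)
    print(f"SNR {snr:4.1f} dB: unquantized {err_full:.4f} rad, 1-bit {err_1bit:.4f} rad")
```

With the dither removed after quantization, the 1-bit estimate remains consistent and the remaining gap to the unquantized estimator grows with SNR, which is the qualitative behavior the abstract describes.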
92

Essays on Financial Intermediation and Monetary Policy

Setayesh Valipour, Abolfazl 24 August 2022 (has links)
No description available.
93

Impact of Phase Information on Radar Automatic Target Recognition

Moore, Linda Jennifer January 2016 (has links)
No description available.
94

Cash is [no longer] king: is an e-krona the answer? A de lege ferenda investigation of the Swedish Riksbank's issuing mandate and other legal challenges in relation to economic effects on the payment market

Imamovic, Arnela January 2019 (has links)
For the past decades, the Swedish public's payment habits have changed: the majority of the public has abandoned the old way of making payments, using cash, and instead opted for more modern payment solutions, digital money. The difference between cash and digital money is that cash is physical and issued only by the Riksbank, whereas digital money is created by and stored on accounts at commercial banks. The question of what role the state should have on the payment market is an important point of discussion. But it is not categorically a new question; the Swedish government is tackling essentially the same problem today as it has many times before, though today's problem is to some extent manifested in a different way.

During the 20th century, discussions were held on whether the Riksbank should have the exclusive right to issue banknotes. It was considered unnecessary, inappropriate and dangerous. The idea that the Riksbank could cover the entire economy's need for banknotes was, according to the commercial banks, unreasonable. Nonetheless, in 1904 the exclusive right became a fait accompli; the government intervened and gave the Riksbank the banknote monopoly. We now find ourselves facing a similar situation, where there is a difference of opinion regarding the Riksbank's role on the payment market. It is therefore nothing new, but rather an expected task for the government, and thus the central bank, to analyze major changes and draw conclusions from them.

The problem is essentially that cash is being phased out by digital means of payment. To address it, the Riksbank has started a project to investigate whether it should issue digital cash to the Swedish public, what the Riksbank calls an e-krona. Introducing an e-krona would be a major step, but leaving the public without access to a government alternative, as cash usage declines, is also a major step. No decision has yet been made on whether the e-krona will be introduced on the market. What has been decided is that the Riksbank is now building an e-krona to develop and assess the technique. Nonetheless, an introduction would undoubtedly have consequences for both the Riksbank and the commercial banks, and ultimately for the economy as a whole.

What about regulatory aspects: is the Riksbank even allowed to issue an e-krona under current legislation? The answer is affirmative, to a certain extent. There are furthermore many other uncertainties regarding how an e-krona would affect the economy; the Riksbank does not fully answer many of the system issues in its project reports. Whether it even is up to the Riksbank to decide on an introduction is also questioned by the author in this thesis.
95

Nonlinear signal processing by noisy spiking neurons

Voronenko, Sergej Olegovic 12 February 2018 (has links)
Neurons are excitable cells that communicate with each other via electrical signals. In general, these signals are processed by neurons in a nonlinear fashion, the exact mathematical description of which is still an open problem in neuroscience. In this thesis, the broad topic of nonlinear signal processing is approached from two directions. The first part of the thesis is devoted to the question of how an input signal with a known time dependence modulates the neural firing rate. The second part is concerned with the nonlinear reconstruction of input signals from the neural output and with the estimation of the amount of transmitted information. The results of this thesis demonstrate how existing linear theories can be extended to capture nonlinear contributions of the signal to the neural response, or to incorporate nonlinear correlations into the estimation of the transmitted information. More importantly, however, our analysis demonstrates that these extensions do not merely provide small corrections to the existing linear theories but can account for qualitatively novel effects that are completely missed by the linear theories. These effects include, for example, the excitation of harmonic oscillations in the neural firing rate or the estimation of information for systems with a signal-dependent output variance.
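The kind of extension described here can be made concrete with a schematic second-order (Volterra-type) expansion of the firing rate; the notation below is our illustration, not the thesis's:

```latex
r(t) \;\approx\; r_0
  \;+\; \int K_1(\tau)\, s(t-\tau)\, \mathrm{d}\tau
  \;+\; \iint K_2(\tau_1,\tau_2)\, s(t-\tau_1)\, s(t-\tau_2)\, \mathrm{d}\tau_1\, \mathrm{d}\tau_2 .
```

The first-order kernel $K_1$ is the classical linear-response theory; for a weak periodic signal $s(t) = \varepsilon\cos(\omega t)$, the second-order term contributes components at $2\omega$ and at zero frequency, precisely the kind of harmonic excitation of the firing rate that a purely linear theory cannot capture.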
96

Essays on exchange rate policies and monetary integration

Sangare, Ibrahima 14 December 2015 (has links)
This thesis investigates the choice of exchange rate regimes in specific economic contexts. The first part (Chapters 1 and 2) considers the case of small open economies with foreign-currency denominated debt, and that of a region where the trade-weighted currency baskets of countries are similar. The second part (Chapters 3 and 4) focuses on exchange rate regimes and monetary integration in a liquidity trap environment relative to "tranquil" times. Based on dynamic stochastic general equilibrium (DSGE) models and Bayesian and panel-data econometrics, the thesis mainly uses the analysis of impulse responses, welfare and currency misalignments as comparison criteria among alternative currency regimes.

The key lessons from this work are summarized as follows. For small open economies heavily indebted in foreign currency, like those of Southeast Asia, the flexible exchange rate is the best regime, followed by intermediate and fixed exchange rate regimes. At the regional level, it is shown that the exchange rate targeting regime leads to a stability of intra-regional bilateral exchange rates, a sort of fixity of exchange rates similar to a "de facto currency area". In the context of a liquidity trap, we find that, contrary to common belief during the Euro area crisis, the currency union welfare-dominates the independent floating regime. Only a central bank intervention in the form of a managed float policy could allow independent floating to outperform the monetary union. Through both empirical and theoretical analyses of the effects of the liquidity trap on currency misalignments, it is shown that the ZLB constraint tends to reduce currency misalignments compared with the independent floating policy. This argues for reinforcing monetary integration within a monetary union during a liquidity trap.
97

Essays on the Liquidity Trap, Oil Shocks, and the Great Moderation

Nakov, Anton 19 November 2007 (has links)
The thesis studies three distinct issues in monetary economics using a common dynamic general equilibrium approach under the assumptions of rational expectations and nominal price rigidity.

The first chapter deals with the so-called "liquidity trap" - an issue raised originally by Keynes in the aftermath of the Great Depression. Since the nominal interest rate cannot fall below zero, the scope for expansionary monetary policy is limited when the interest rate is near its lower bound. The chapter studies the conduct of monetary policy in such an environment in isolation from other possible stabilization tools (such as fiscal or exchange rate policy). In particular, a standard New Keynesian model economy with Calvo staggered price setting is simulated under various alternative monetary policy regimes, including optimal policy. The challenge lies in solving the (otherwise linear) stochastic sticky-price model with an explicit occasionally binding non-negativity constraint on the nominal interest rate. This is achieved by parametrizing expectations and applying a global solution method known as "collocation". The results indicate that the dynamics, and sometimes the unconditional means, of the nominal rate, inflation and the output gap are strongly affected by uncertainty in the presence of the zero lower bound. Commitment to the optimal rule reduces unconditional welfare losses to around one-tenth of those achievable under discretionary policy, while constant price level targeting delivers losses which are only 60% larger than under the optimal rule. On the other hand, conditional on a strong deflationary shock, simple instrument rules perform substantially worse than the optimal policy, even if the unconditional welfare loss from following such rules is not much affected by the zero lower bound per se.

The second chapter (co-authored with Andrea Pescatori) studies the implications of imperfect competition in the oil market, and in particular the existence of a welfare-relevant trade-off between inflation and output gap volatility. In the standard New Keynesian model, exogenous oil shocks do not generate any such trade-off: under a strict inflation targeting policy, the output decline is exactly equal to the efficient output contraction in response to the shock. I propose an extension of the standard model in which the existence of a dominant oil supplier (such as OPEC) leads to inefficient fluctuations in the oil price markup, reflecting a dynamic distortion of the economy's production process. As a result, in the face of oil sector shocks, stabilizing inflation does not automatically stabilize the distance of output from first-best, and monetary policymakers face a trade-off between the two goals. The model is also a step away from discussing the effects of exogenous oil price changes and towards analyzing the implications of the underlying shocks that cause the oil price to change in the first place. This is an advantage over the existing literature, which treats the macroeconomic effects and policy implications of oil price movements as if they were independent of the underlying source of disturbance. In contrast, the analysis in this chapter shows that, conditional on the source of the shock, a central bank confronted with the same oil price change may find it desirable to either raise or lower the interest rate in order to improve welfare.

The third chapter (co-authored with Andrea Pescatori) studies the extent to which the rise in US macroeconomic stability since the mid-1980s can be accounted for by changes in oil shocks and the oil share in GDP. This is done by estimating the model developed in the second chapter with Bayesian methods over two samples - before and after 1984 - and conducting counterfactual simulations. In doing so we nest two other popular explanations for the so-called "Great Moderation": (1) smaller (non-oil) shocks; and (2) better monetary policy. We find that the reduced oil share can account for around one third of the inflation moderation and about 13% of the GDP growth moderation. At the same time, smaller oil shocks can explain approximately 7% of the GDP growth moderation and 11% of the inflation moderation. Thus, the oil share and oil shocks have played a non-trivial role in the moderation, especially of inflation, even if the bulk of the volatility reduction of output growth and inflation is attributed to smaller non-oil shocks and better monetary policy, respectively.
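The computational crux here, an otherwise tractable model made nonlinear by an occasionally binding constraint, can be illustrated on a one-dimensional toy functional equation (ours for illustration; the chapter solves the full sticky-price model). The unknown function is represented by its values at grid nodes, expectations are evaluated off-grid by interpolation, and the fixed point is found by iteration, which is the basic mechanic behind parametrized-expectations and collocation methods:

```python
import numpy as np

# Toy functional equation with an occasionally binding non-negativity
# constraint:  f(x) = max(0, x + beta * E[f(rho*x + eps)]),
# with eps uniform on {-s, 0, +s}. The max(.) plays the role of the zero
# lower bound; the operator is a beta-contraction, so simple iteration on
# grid values (a collocation-style projection) converges.
beta, rho, s = 0.95, 0.8, 0.5
grid = np.linspace(-2.0, 2.0, 41)
shocks = np.array([-s, 0.0, s])
f = np.zeros_like(grid)

for iteration in range(1000):
    x_next = rho * grid[:, None] + shocks[None, :]   # next-period states
    # expectation via linear interpolation (np.interp clamps at the ends)
    Ef = np.interp(x_next, grid, f).mean(axis=1)
    f_new = np.maximum(0.0, grid + beta * Ef)        # the binding constraint
    if np.max(np.abs(f_new - f)) < 1e-10:
        break
    f = f_new

print(f"converged in {iteration} iterations; "
      f"constraint binds for x <= {grid[f == 0].max():.2f}")
```

The same loop structure carries over to the real problem: the kink introduced by the max operator is exactly what rules out the usual linear rational-expectations solvers and forces a global method.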
98

Estimating the parameters of polynomial phase signals

Farquharson, Maree Louise January 2006 (has links)
Nonstationary signals are common in many environments such as radar, sonar, bioengineering and power systems. The nonstationary nature of the signals found in these environments means that classical spectral analysis techniques are not appropriate for estimating their parameters. It is therefore important to develop techniques that can accommodate nonstationary signals. This thesis seeks to achieve this by, firstly, modelling each component of the signal as having a polynomial phase and, secondly, developing techniques for estimating the parameters of these components.

Several approaches can be used for estimating the parameters of polynomial phase signals, each with varying degrees of success. Criteria to consider in potential estimation algorithms are (i) the signal-to-noise ratio (SNR) threshold of the algorithm, (ii) the amount of computation required for running the algorithm, and (iii) the closeness of the resulting estimates' mean-square errors to the minimum theoretical bound. These criteria are used to compare the new techniques developed in this thesis with existing techniques. The literature on polynomial phase signal estimation highlights the recurring trade-off between the accuracy of the estimates and the amount of computation required. For example, the Maximum Likelihood (ML) method provides near-optimal estimates above threshold, but incurs a heavy computational cost for higher order phase signals. On the other hand, multi-linear techniques such as the high-order ambiguity function (HAF) method require little computation, but have a significantly higher SNR threshold than the ML method. Of the existing techniques, the cubic phase (CP) function method is promising because it provides an attractive SNR threshold and computational complexity trade-off. For this reason, the analysis techniques developed in this thesis are derived from the CP function.

A limitation of the CP function is its inability to accurately process phase orders greater than three. The first novel contribution of this thesis therefore develops a broadened class of discrete-time higher order phase (HP) functions to address this limitation. This broadened class is achieved by providing a multi-linear extension of the CP function. Monte Carlo simulations are performed to demonstrate the statistical advantage of the HP functions compared to the HAFs, and a first-order statistical analysis of the HP functions is presented which verifies the simulation results.

The next novel contribution is a technique called the lower SNR cubic phase function (LCPF) method. It extends the CP function to enable performance at lower signal-to-noise ratios (SNRs). The improved SNR threshold is achieved by coherently integrating the CP function over a compact interval in the two-dimensional CP function space. The computation of the new algorithm is quite moderate, especially when compared to the ML method. Above threshold, the LCPF method's parameter estimates are asymptotically efficient. Monte Carlo simulation results are presented, and a threshold analysis of the algorithm closely predicts the thresholds observed in these results.

The final original contribution extends the LCPF method so that it is able to process multicomponent cubic phase signals and higher order phase signals. The LCPF method is extended to higher orders by applying a windowing technique, as opposed to adjusting the order of the kernel as implemented in the HP function method. Monte Carlo simulations demonstrate this extension for higher order phase signals and multicomponent cubic phase signals. Finally, these estimation techniques are applied to real-world scenarios in the fields of power systems analysis, neuroethology and speech analysis.
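For orientation, the CP function at the heart of this work has a compact discrete form, CP(n, Ω) = Σ_m s(n+m) s(n−m) e^{−jΩm²}. For a cubic phase φ(n) = a₀ + a₁n + a₂n² + a₃n³, the product s(n+m)s(n−m) has phase 2φ(n) + (2a₂ + 6a₃n)m², so |CP(n, Ω)| peaks at Ω = 2a₂ + 6a₃n, and peak-picking at two time instants yields linear equations for a₂ and a₃. The sketch below (with arbitrary illustrative coefficients) checks this numerically:

```python
import numpy as np

def cp_function(s, n0, omegas):
    # CP(n0, w) = sum_m s[n0+m] * s[n0-m] * exp(-j * w * m^2)
    mmax = min(n0, len(s) - 1 - n0)
    m = np.arange(mmax + 1)
    prod = s[n0 + m] * s[n0 - m]
    return np.array([abs(np.sum(prod * np.exp(-1j * w * m**2))) for w in omegas])

# cubic phase signal with illustrative (arbitrary) coefficients
N = 257
n = np.arange(N)
a2, a3 = 2e-4, 1e-6
s = np.exp(1j * (0.3 + 0.05 * n + a2 * n**2 + a3 * n**3))

n0 = N // 2                          # evaluate at the mid-point
omegas = np.linspace(0.0, 0.005, 2001)
peak = omegas[np.argmax(cp_function(s, n0, omegas))]
print(f"CP peak at {peak:.6f}; theory 2*a2 + 6*a3*n0 = {2*a2 + 6*a3*n0:.6f}")
```

The LCPF method described above sharpens this basic estimator by coherently integrating CP values over a neighborhood in the (n, Ω) plane rather than picking a single peak.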
99

Contribution to the static and dynamic analysis of retaining walls by theoretical and experimental methods

Κλουκίνας, Παναγιώτης 09 July 2013 (has links)
Earth retaining structures are still in widespread use, with growing interest due to the demands of modern infrastructure and building needs in a dense urban environment. Construction solutions and design methodologies that combine safety and economy are the objectives of modern research. Significant difficulties in the analysis of retaining structures arise from the soil-structure interaction nature of the problem, which often governs the behavior of the work. Understanding these mechanisms allows design under smaller uncertainties, leading to more economical and rational solutions.

The present thesis contributes in this direction through the development of analytical tools and theoretical findings that help in understanding the interaction mechanisms and in assessing the behavior of retaining walls under combined gravitational and seismic loading. Emphasis is given to the derivation of simple closed-form solutions and methodologies for the calculation of earth pressures and the static analysis of the wall-soil system. Specifically, approximate lower- and upper-bound solutions are derived for yielding walls, which are advantageous compared to the classical Coulomb and Mononobe-Okabe equations and can replace them. In special cases, such as L-shaped cantilever walls with a widened footing, these solutions lead to exact results based on a generalized Rankine stress field. Extensions of the above solutions are also presented which allow the calculation of non-hydrostatic earth pressure distributions, accounting for the wave propagation of the seismic excitation in the backfill according to an improved variant of the Steedman & Zeng approach, and for the different kinematic conditions arising from wall rotation about the top or the base, according to the technique of Dubrova.

For the case of non-yielding walls, a methodology is presented for the drastic simplification of available elastodynamic wave solutions, such as that of Veletsos & Younan, leading to closed-form expressions for the calculation of dynamic pressures. Finally, new theoretical findings are presented on the mathematical treatment of the intractable problem of the limit equilibrium of a stress fan in a soil medium subjected to gravitational and inertial body forces. This work contributes to the further investigation of the problem, founded theoretically by Levy, Boussinesq, von Karman and Caquot, through its drastic (but exact) simplification to a single non-linear ordinary differential equation, which can be solved by simple numerical and semi-analytical techniques. Apart from the exact numerical results, the proposed analysis provides deeper physical insight and opens the way to further investigation or extension of the method beyond the assumptions of classical limit analysis. The reliability of the proposed solutions is checked through comparisons with established solutions and experimental data from the literature, as well as recent experimental results obtained by the author and researchers on the shaking table of the University of Bristol, UK.
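For reference, the Mononobe-Okabe dynamic active earth pressure coefficient against which the proposed bound solutions are compared is usually quoted in the following textbook form (symbol conventions vary between references):

```latex
K_{AE} \;=\;
\frac{\cos^2(\varphi - \psi - \beta)}
     {\cos\psi\,\cos^2\!\beta\,\cos(\delta + \beta + \psi)
      \left[1 + \sqrt{\dfrac{\sin(\varphi+\delta)\,\sin(\varphi-\psi-i)}
                            {\cos(\delta+\beta+\psi)\,\cos(i-\beta)}}\,\right]^2},
\qquad
\psi = \arctan\frac{k_h}{1 - k_v},
```

where φ is the soil friction angle, δ the wall-soil friction angle, β the inclination of the wall back from the vertical, i the backfill slope, and k_h, k_v the horizontal and vertical seismic coefficients; the total active thrust is then P_AE = ½ γ H² (1 − k_v) K_AE.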
100

On the complexity of wait-free, abortable and/or solo-fast concurrent object implementations

Capdevielle, Claire 03 November 2016 (has links)
In a multiprocessor computer, processes must synchronize their accesses to shared memory. Usually this is done with locks, but locks raise issues such as deadlocks and poor fault tolerance. We are interested in implementing abstractions (such as consensus and universal constructions) that ease the programming of wait-free concurrent objects without using locks, based instead on atomic read/write operations (ARW). Using only ARW does not permit implementing wait-free consensus; primitives offering a higher synchronization power than ARW are needed, but these primitives are more expensive in computing time. In this thesis we are therefore interested in designing algorithms that restrict the use of such primitives to the cases where processes are in contention; these algorithms are said to be solo-fast. Another direction is to allow the object to abort the computation in progress - returning a special "abort" response - when there is contention; such objects are called abortable. On the one hand, we give wait-free, abortable and/or solo-fast concurrent object implementations: we propose a universal construction which ensures that the implemented object is abortable and solo-fast, and we present solo-fast consensus algorithms and abortable consensus algorithms. On the other hand, we study the space complexity of these implementations, proving space lower bounds on the implementation of abortable objects and on consensus.
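The contention-detection pattern underlying solo-fast and abortable algorithms can be sketched with a splitter. The toy below is our illustration, not the thesis's construction: a process that runs alone passes the splitter using only reads and writes and records its decision; a process that detects contention returns the special abort response. As the comments note, the sketch deliberately ignores the liveness obligations a full abortable object carries.

```python
ABORT = object()   # the special "abort" response

class AbortableConsensus:
    # Toy abortable consensus built on a splitter, using only reads and
    # writes of the registers X, Y and D ("decision"). Plain attribute
    # accesses stand in for atomic shared registers here.
    #
    # Agreement: the splitter lets at most one process pass (write D), and
    # every later caller adopts the recorded decision. Liveness after a
    # contended run is NOT handled: if the door closed without a decision,
    # later solo attempts keep aborting, which a full abortable
    # implementation must avoid.
    def __init__(self):
        self.X = None        # register X: last proposer seen entering
        self.Y = False       # register Y: the splitter's "door"
        self.D = None        # register D: the decided value, if any

    def propose(self, pid, value):
        if self.D is not None:            # a decision already exists
            return self.D
        self.X = (pid, value)             # announce entry
        if self.Y:                        # door closed: contention detected
            return self.D if self.D is not None else ABORT
        self.Y = True                     # close the door
        if self.X == (pid, value):        # still alone: splitter passed
            self.D = value
            return value
        return self.D if self.D is not None else ABORT

c = AbortableConsensus()
print(c.propose(1, "a"))   # 'a'  (solo run decides, reads/writes only)
print(c.propose(2, "b"))   # 'a'  (adopts the recorded decision)
```

A solo-fast object follows the same outline but, instead of returning ABORT on detected contention, falls back to a stronger (and costlier) primitive such as compare-and-swap.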
