41

Měnová intervence ČNB ve světovém kontextu / CNB Monetary Intervention in Global Context

Bejdová, Markéta January 2014 (has links)
This diploma thesis describes the November 2013 intervention of the Czech National Bank in terms of current economic theory and practice, and asks whether it was justified. The first, theoretical part surveys the current literature on the zero lower bound (ZLB): a general description of the ZLB, ways to prevent it, and the monetary policy options that remain once it binds. It then reviews the experience of Japan and other countries that reached the ZLB, and places the Czech National Bank's intervention in this context. The hypothesis that the intervention was justified is tested with linear and nonlinear Taylor rules, estimated by the least squares method on quarterly data from the CSO and the CNB. The results of the model confirm the hypothesis: the optimal monetary policy interest rate would have fallen below zero.
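For context on the test described above, a generic linear Taylor rule of the kind typically estimated by least squares is sketched below; the notation and coefficient names are assumptions for illustration, not the thesis's own specification.

```latex
% Illustrative linear Taylor rule (notation assumed):
% i_t - policy rate, pi_t - inflation, pi^* - inflation target, y_t - output gap
\[
  i_t = r^{*} + \pi_t + \alpha_{\pi}\,(\pi_t - \pi^{*}) + \alpha_{y}\, y_t + \varepsilon_t
\]
% If the fitted rule prescribes i_t < 0 while the actual rate is stuck at zero,
% a rate cut alone cannot deliver the required easing, which is the sense in which
% an exchange-rate intervention at the ZLB can be called "justified".
```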
42

Studies in Multiple-Antenna Wireless Communications

Peel, Christian Bruce 27 January 2004 (has links) (PDF)
Wireless communications systems are used today in a variety of milieux, with a recurring theme: users and applications regularly require higher throughput. Multiple antennas enable higher throughput and/or more robust performance than single-antenna communications, with no increase in power or frequency bandwidth. Systems are required which achieve the full potential of this "space-time" communication channel under the significant challenges of time-varying fading, multiple users, and the choice of appropriate coding schemes. This dissertation is focused on solutions to these problems. For the single-user case, there are many well-known coding techniques available; in the first part of this dissertation, the performance of two of these methods is analyzed. Trained and differential modulation are simple coding techniques for single-user time-varying channels. The performance of these coding methods is characterized for a channel having a constant specular component plus a time-varying diffuse component. A first-order auto-regressive model is used to characterize diffuse channel coefficients that vary from symbol to symbol, and is shown to lead to an effective SNR that decreases with time. A lower bound on the capacity of trained modulation is found for the specular/diffuse channel. This bound is maximized over the training length, training frequency, training signal, and training power. Trained modulation is shown to have higher capacity than differential coding, despite the effective SNR penalty of trained modulation versus differential methods. The second part of the dissertation considers the multi-user, multi-antenna channel, for which capacity-approaching codes were previously unavailable. Precoding with the channel inverse is shown to provide capacity that approaches a constant as the number of users and antennas simultaneously increase. To overcome this limitation, a simple encoding algorithm is introduced that operates close to capacity at sum-rates of tens of bits/channel-use. The algorithm is a variation on channel inversion that regularizes the inverse and uses a "sphere encoder" to perturb the data to reduce the energy of the transmitted signal. Simulation results are presented which support our analysis and algorithm development.
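As a rough companion to the precoding idea described above, the sketch below implements plain and regularized channel inversion for a multi-user downlink in Python/NumPy. It is not the dissertation's exact algorithm: the sphere-encoder perturbation step is omitted, and the K/SNR choice of the regularization constant is a commonly used heuristic assumed here for illustration.

```python
import numpy as np

def channel_inversion_precoder(H, s, alpha=0.0):
    """Precoded transmit vector for the K-user downlink channel H (K users x M antennas).

    alpha = 0 gives plain channel inversion (zero-forcing); alpha > 0 gives the
    regularized inverse, trading residual multi-user interference for a large
    reduction in transmit power.
    """
    K, _ = H.shape
    # x = H^H (H H^H + alpha I)^{-1} s
    return H.conj().T @ np.linalg.solve(H @ H.conj().T + alpha * np.eye(K), s)

# Tiny usage example on a random Rayleigh-fading channel (illustrative values only).
rng = np.random.default_rng(0)
K = M = 4
snr = 10.0                                    # assumed SNR used in the K/snr regularizer
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
s = np.ones(K)                                # unit data symbols, one per user
x_zf = channel_inversion_precoder(H, s)
x_reg = channel_inversion_precoder(H, s, alpha=K / snr)
print("transmit power:", np.linalg.norm(x_zf)**2, "vs", np.linalg.norm(x_reg)**2)
print("worst-case residual interference with regularization:",
      np.max(np.abs(H @ x_reg - s)))
```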
43

An Empirical Study on the Reversal Interest Rate / En empirisk studie på brytpunktsräntan

Berglund, Pontus, Kamangar, Daniel January 2020 (has links)
Previous research suggests that a policy interest rate cut below the reversal interest rate reverses the intended effect of monetary policy and becomes contractionary for lending. This paper is an empirical investigation into whether the reversal interest rate was breached in the Swedish negative interest rate environment between February 2015 and July 2016. We find that banks with a greater reliance on deposit funding were adversely affected by the negative interest rate environment, relative to other banks. This is because deposit rates are constrained by a zero lower bound, since banks are reluctant to introduce negative deposit rates for fear of deposit withdrawals. We show with a difference-in-differences approach that the most affected banks reduced loans to households and raised 5 year mortgage lending rates, as compared to the less affected banks, in the negative interest rate environment. These banks also experienced a drop in profitability, suggesting that the zero lower bound on deposits caused the lending spread of banks to be squeezed. However, we do not find evidence that the reversal rate has been breached. / Tidigare forskning menar att en sänkning av styrräntan under brytpunktsräntan gör att penningpolitiken får motsatt effekt och blir åtstramande för utlåning. Denna rapport är en empirisk studie av huruvida brytpunktsräntan passerades i det negativa ränteläget mellan februari 2015 och juli 2016 i Sverige. Våra resultat pekar på att banker vars finansiering till större del bestod av inlåning påverkades negativt av den negativa styrräntan, relativt till andra banker. Detta beror på att inlåningsräntor är begränsade av en lägre nedre gräns på noll procent. Banker är ovilliga att introducera negativa inlåningsräntor för att undvika att kunder tar ut sina insättningar och håller kontanter istället. Vi visar med en "difference-in-differences"-analys att de mest påverkade bankerna minskade lån till hushåll och höjde bolåneräntor med 5-åriga löptider, relativt till mindre påverkade banker, som konsekvens av den negativa styrräntan. Dessa banker upplevde även en minskning av lönsamhet, vilket indikerar att noll som en nedre gräns på inlåningsräntor bidrog till att bankernas räntemarginaler minskade. Vi hittar dock inga bevis på att brytpunktsräntan har passerats.
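A minimal sketch of the difference-in-differences design described above, assuming a hypothetical bank-quarter panel; the column names, toy numbers, and statsmodels call are illustrative placeholders, not the authors' dataset or code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: 'high_deposit' marks banks with above-median deposit funding
# (the treated group), 'negative_rate' marks quarters after February 2015.
df = pd.DataFrame({
    "log_household_loans": [4.1, 4.2, 4.0, 3.9, 3.8, 3.9, 3.9, 4.0],
    "high_deposit":        [1, 1, 1, 1, 0, 0, 0, 0],
    "negative_rate":       [0, 1, 0, 1, 0, 1, 0, 1],
})

# The coefficient on the interaction term is the difference-in-differences estimate:
# the extra change in lending for deposit-reliant banks once the policy rate turns negative.
model = smf.ols("log_household_loans ~ high_deposit * negative_rate", data=df).fit()
print(model.params["high_deposit:negative_rate"])
```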
44

Statistical Methods for Image Change Detection with Uncertainty

Lingg, Andrew James January 2012 (has links)
No description available.
45

Statistical Analysis of Geolocation Fundamentals Using Stochastic Geometry

O'Lone, Christopher Edward 22 January 2021 (has links)
The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, requires a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS. In the literature, benchmarking localization performance in these networks has traditionally been done in a deterministic manner. That is, for a fixed setup of anchors (nodes with known location) and a target (a node with unknown location), a commonly used benchmark for localization error, such as the Cramer-Rao lower bound (CRLB), can be calculated for a given localization strategy, e.g., time-of-arrival (TOA), angle-of-arrival (AOA), etc. While this CRLB calculation provides excellent insight into expected localization performance, its traditional treatment as a deterministic value for a specific setup is limited. Rather than trying to gain insight into a specific setup, network designers are more often interested in aggregate localization error statistics within the network as a whole. Questions such as: "What percentage of the time is localization error less than x meters in the network?" are commonplace. In order to answer these types of questions, network designers often turn to simulations; however, these come with many drawbacks, such as lengthy execution times and the inability to provide fundamental insights due to their inherent "black box" nature. Thus, this dissertation presents the first analytical solution with which to answer these questions. By leveraging tools from stochastic geometry, anchor positions and potential target positions can be modeled by Poisson point processes (PPPs). This allows for the CRLB of position error to be characterized over all setups of anchor positions and potential target positions realizable within the network. This leads to a distribution of the CRLB, which can completely characterize localization error experienced by a target within the network, and can consequently be used to answer questions regarding network-wide localization performance. The particular CRLB distribution derived in this dissertation is for fourth-generation (4G) and fifth-generation (5G) sub-6GHz networks employing a TOA localization strategy. Recognizing the tremendous potential that stochastic geometry has in gaining new insight into localization, this dissertation continues by further exploring the union of these two fields. First, the concept of localizability, which is the probability that a mobile is able to obtain an unambiguous position estimate, is explored in a 5G, millimeter wave (mm-wave) framework.
In this framework, unambiguous single-anchor localization is possible with either a line-of-sight (LOS) path between the anchor and mobile or, if blocked, then via at least two NLOS paths. Thus, for a single anchor-mobile pair in a 5G, mm-wave network, this dissertation derives the mobile's localizability over all environmental realizations this anchor-mobile pair is likely to experience in the network. This is done by: (1) utilizing the Boolean model from stochastic geometry, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment, (2) considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and (3) considering the possibility that reflectors can either facilitate or block reflections. In addition to the derivation of the mobile's localizability, this analysis also reveals that unambiguous localization, via reflected NLOS signals exclusively, is a relatively small contributor to the mobile's overall localizability. Lastly, using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time delay of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. Due to the random nature of the propagation environment, the NLOS bias is a random variable, and as such, its distribution is sought. As before, assuming NLOS propagation is due to first-order reflections, and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC) is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor time-of-flight (TOF) range measurements. This distribution is shown to match exceptionally well with commonly assumed gamma and exponential NLOS bias models in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model. In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over the entire ensemble of infrastructure or environmental realizations that a target is likely to experience in a network. / Doctor of Philosophy / The past two decades have seen a surge in the number of applications requiring precise positioning data. Modern cellular networks offer many services based on the user's location, such as emergency services (e.g., E911), and emerging wireless sensor networks are being used in applications spanning environmental monitoring, precision agriculture, warehouse and manufacturing logistics, and traffic monitoring, just to name a few. 
In these sensor networks in particular, obtaining precise positioning data of the sensors gives vital context to the measurements being reported. While the Global Positioning System (GPS) has traditionally been used to obtain this positioning data, the deployment locations of these cellular and sensor networks in GPS-constrained environments (e.g., cities, indoors, etc.), along with the need for reliable positioning, requires a localization scheme that does not rely solely on GPS. This has led to localization being performed entirely by the network infrastructure itself, or by the network infrastructure aided, in part, by GPS. When speaking in terms of localization, the network infrastructure consists of what are called anchors, which are simply nodes (points) with a known location. These can be base stations, WiFi access points, or designated sensor nodes, depending on the network. In trying to determine the position of a target (i.e., a user, or a mobile), various measurements can be made between this target and the anchor nodes in close proximity. These measurements are typically distance (range) measurements or angle (bearing) measurements. Localization algorithms then process these measurements to obtain an estimate of the target position. The performance of a given localization algorithm (i.e., estimator) is typically evaluated by examining the distance, in meters, between the position estimates it produces and the actual (true) target position. This is called the positioning error of the estimator. There are various benchmarks that bound the best (lowest) error that these algorithms can hope to achieve; however, these benchmarks depend on the particular setup of anchors and the target. The benchmark of localization error considered in this dissertation is the Cramer-Rao lower bound (CRLB). To determine how this benchmark of localization error behaves over the entire network, all of the various setups of anchors and the target that would arise in the network must be considered. Thus, this dissertation uses a field of statistics called stochastic geometry to model all of these random placements of anchors and the target, which represent all the setups that can be experienced in the network. Under this model, the probability distribution of this localization error benchmark across the entirety of the network is then derived. This distribution allows network designers to examine localization performance in the network as a whole, rather than just for a specific setup, and allows one to obtain answers to questions such as: "What percentage of the time is localization error less than x meters in the network?" Next, this dissertation examines a concept called localizability, which is the probability that a target can obtain a unique position estimate. Oftentimes localization algorithms can produce position estimates that congregate around different potential target positions, and thus, it is important to know when algorithms will produce estimates that congregate around a unique (single) potential target position; hence the importance of localizability. In fifth generation (5G), millimeter wave (mm-wave) networks, only one anchor is needed to produce a unique target position estimate if the line-of-sight (LOS) path between the anchor and the target is unimpeded. If the LOS path is impeded, then a unique target position can still be obtained if two or more non-line-of-sight (NLOS) paths are available.
Thus, over all possible environmental realizations likely to be experienced in the network by this single anchor-mobile pair, this dissertation derives the mobile's localizability, or in this case, the probability that the LOS path or at least two NLOS paths are available. This is done by utilizing another analytical tool from stochastic geometry known as the Boolean model, which statistically characterizes the random positions, sizes, and orientations of reflectors (e.g., buildings) in the environment. Under this model, considering the availability of first-order (i.e., single-bounce) reflections as well as the LOS path, and considering the possibility that reflectors can either facilitate or block reflections, the mobile's localizability is derived. This result reveals the roles that the LOS path and the NLOS paths play in obtaining a unique position estimate of the target. Using this first-order reflection framework developed under the Boolean model, this dissertation then statistically characterizes the NLOS bias present on range measurements. This NLOS bias is a common phenomenon that arises when trying to measure the distance between two nodes via the time-of-flight (TOF) of a transmitted signal. If the LOS path is blocked, then the extra distance that the signal must travel to the receiver, in excess of the LOS path, is termed the NLOS bias. As before, assuming NLOS propagation is due to first-order reflections and that reflectors can either facilitate or block reflections, the distribution of the path length (i.e., absolute time delay) of the first-arriving multipath component (MPC), or first-arriving "reflection path", is derived. This result is then used to obtain the first NLOS bias distribution in the localization literature that is based on the absolute delay of the first-arriving MPC for outdoor TOF range measurements. This distribution is shown to match exceptionally well with commonly assumed NLOS bias distributions in the literature, which were only attained previously through heuristic or indirect methods. Finally, the flexibility of this analytical framework is utilized by further deriving the angle-of-arrival (AOA) distribution of the first-arriving MPC at the mobile. This distribution yields the probability that the first-arriving reflection path arrives at the mobile from a given angle. This distribution gives novel insight into how environmental obstacles affect the AOA and also represents the first AOA distribution, of any kind, derived under the Boolean model. In summary, this dissertation uses the analytical tools offered by stochastic geometry to gain new insights into localization metrics by performing analyses over all of the possible infrastructure or environmental realizations that a target is likely to experience in a network.
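To make the CRLB benchmark above concrete, here is a small Python sketch, not taken from the dissertation, that computes the classical TOA position-error CRLB for one anchor layout and then brute-force samples its distribution over Poisson-distributed anchor positions; this is the simulation approach that the dissertation's closed-form results are meant to replace. The region size, anchor density, and ranging-error standard deviation are illustrative assumptions.

```python
import numpy as np

def toa_position_crlb(anchors, target, sigma_r=1.0):
    """2-D CRLB (m^2) on position error for TOA ranging with i.i.d. Gaussian
    range errors of standard deviation sigma_r, for fixed anchor/target positions."""
    diffs = anchors - target                        # (N, 2)
    units = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    fim = units.T @ units / sigma_r**2              # Fisher information matrix
    return np.trace(np.linalg.inv(fim))             # lower bound on E[||p_hat - p||^2]

# Monte-Carlo over PPP anchor layouts: the brute-force way of obtaining the CRLB
# distribution that the dissertation instead derives analytically.
rng = np.random.default_rng(1)
side, density = 1000.0, 1e-5                        # assumed 1 km x 1 km region, anchors per m^2
samples = []
for _ in range(2000):
    n = rng.poisson(density * side**2)
    if n < 3:                                       # need at least 3 anchors for a 2-D TOA fix
        continue
    anchors = rng.uniform(0, side, size=(n, 2))
    samples.append(toa_position_crlb(anchors, np.array([side / 2, side / 2]), sigma_r=5.0))
samples = np.sqrt(np.array(samples))                # RMS position-error bound in meters
print("P(localization error bound < 10 m) approx.", np.mean(samples < 10.0))
```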
46

Účinnost nekonvenční měnové politiky na nulové spodní hranici úrokových sazeb: využití DSGE přístupu / The Effectiveness of Unconventional Monetary Policy Tools at the Zero Lower Bound: A DSGE Approach

Malovaná, Simona January 2014 (has links)
The central bank is not able to further ease monetary conditions once it exhausts the space for managing the short-term policy rate. It then has to turn its attention to unconventional measures. The thesis discusses the suitability of different unconventional policy tools in the Czech situation, with foreign exchange (FX) interventions proving to be the most appropriate choice. A New Keynesian small open economy DSGE model estimated for the Czech Republic is enhanced to model the FX interventions and to compare different monetary policy rules at the zero lower bound (ZLB). The thesis provides three main findings. First, the volatility of the real and nominal macroeconomic variables is magnified in response to the domestic demand shock, the foreign financial shock, and the foreign inflation shock. Second, the volatility of prices decreases significantly if the central bank adopts a price-level or exchange-rate targeting rule. Third, intervening to fix the nominal exchange rate at some particular target, or to correct a misalignment of the real exchange rate from its fundamentals, serves as a good stabilizer of prices, while intervening to smooth nominal exchange rate movements increases overall macroeconomic volatility at the ZLB.
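As context for the rule comparison summarized above, generic inflation-targeting, price-level-targeting, and exchange-rate-augmented rules can be written as follows; this is standard textbook notation assumed for illustration, not the thesis's own model equations.

```latex
% Illustrative monetary policy rules compared at the ZLB (notation assumed):
% i_t - policy rate, pi_t - inflation, p_t - price level, y_t - output gap, e_t - nominal exchange rate
\begin{align*}
  \text{inflation targeting:}     &\quad i_t = \rho\, i_{t-1} + (1-\rho)\bigl[\phi_{\pi}\,\pi_t + \phi_{y}\, y_t\bigr] \\
  \text{price-level targeting:}   &\quad i_t = \rho\, i_{t-1} + (1-\rho)\bigl[\phi_{p}\,(p_t - p^{*}_t) + \phi_{y}\, y_t\bigr] \\
  \text{exchange-rate targeting:} &\quad i_t = \rho\, i_{t-1} + (1-\rho)\bigl[\phi_{\pi}\,\pi_t + \phi_{e}\,(e_t - e^{*})\bigr],
  \qquad i_t \ge 0 \ \text{(ZLB constraint)}
\end{align*}
```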
47

[en] ESSAYS ON MONETARY AND FISCAL POLICY / [pt] ENSAIOS SOBRE POLÍTICAS MONETÁRIAS E FISCAIS

ARTHUR GALEGO MENDES 21 January 2019 (has links)
[pt] Esta tese é composta por 3 capítulos. No primeiro capítulo mostro que quando um banco central não é totalmente apoiado financeiramente pelo tesouro e enfrenta uma restrição de solvência, um aumento no tamanho ou uma mudança na composição de seu balanço pode servir como um mecanismo de compromisso em um cenário de armadilha de liquidez. Em particular, quando a taxa de juros de curto prazo está em zero, operações de mercado aberto do banco central que envolvam compras de títulos de longo prazo podem ajudar a mitigar a deflação e recessão sob um equilíbrio de política discricionária. Usando um modelo simples com produto exógeno, mostramos que uma mudança no balanço do banco central, que aumenta seu tamanho e duração, incentiva o banco central a manter as taxas de juros baixas no futuro, a fim de evitar perdas e satisfazer a restrição de solvência, aproximando-se de sua política ótima de commitment. No segundo capítulo da tese, eu testo a validade do novo mecanismo desenvolvido no capítulo 1, incorporando um banco central financeiramente independente em um modelo DSGE de média escala baseado em Smets e Wouters (2007), e calibrando-o para replicar principais características da expansão do tamanho e composição do balanço do Federal Reserve no período pós-2008. Eu observo que os programas QE 2 e 3 geraram efeitos positivos na dinâmica da inflação, mas impacto modesto no hiato do produto. O terceiro capítulo da tese avalia as consequências em termos de bem-estar de regras fiscais simples em um modelo de um pequeno país exportador de commodities com uma parcela da população sem acesso ao mercado financeiro, onde a política fiscal assume a forma de transferências. Uma constatação principal é que as regras orçamentárias equilibradas para as receitas de commodities geralmente superam as regras fiscais mais sofisticadas, em que as receitas de commodities são salvas em um Fundo de Riqueza Soberana. Como os choques nos preços das commodities são tipicamente altamente persistentes, a renda atual das famílias está próxima de sua renda permanente, tornando as regras orçamentárias equilibradas próximas do ideal. / [en] This thesis is composed of 3 chapters. In the first chapter, It s shown that when a central bank is not fully financially backed by the treasury and faces a solvency constraint, an increase in the size or a change in the composition of its balance sheet (quantitative easing - QE) can serve as a commitment device in a liquidity trap scenario. In particular, when the short-term interest rate is at the zero lower bound, open market operations by the central bank that involve purchases of long-term bonds can help mitigate deflation and recession under a discretionary policy equilibrium. Using a simple endowment-economy model, it s shown that a change in the central bank balance sheet, which increases its size and duration, provides an incentive to the central bank to keep interest rates low in the future to avoid losses and satisfy its solvency constraints, approximating its full commitment policy. In the second chapter, the validity of the novel mechanism developed in chapter 1 is tested by incorporating a financiallyindependent central bank into a medium-scale DSGE model based on Smets and Wouters (2007), and calibrating it to replicate key features of the expansion of size and composition of the Federal Reserve s balance sheet in the post-2008 period. I find that the programs QE 2 and 3 generated positive effects on the dynamics of inflation, but mild effects on the output gap. 
The third chapter of the thesis evaluates the welfare consequences of simple fiscal rules in a model of a small commodity-exporting country with a share of financially constrained households, where fiscal policy takes the form of transfers. The main finding is that balanced budget rules for commodity revenues often outperform more sophisticated fiscal rules where commodity revenues are saved in a Sovereign Wealth Fund. Because commodity price shocks are typically highly persistent, households' current income is close to their permanent income, so commodity price shocks do not need smoothing, making simple balanced budget rules close to optimal.
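A stylized statement of the two transfer rules being compared may help; the notation and the particular wealth-fund formulation below are assumptions made for illustration, not the thesis's own rules.

```latex
% Stylized fiscal rules (notation assumed): T_t - transfers, R_t - commodity revenue,
% F_t - sovereign wealth fund, r - world interest rate, bars denote reference/permanent levels
\begin{align*}
  \text{balanced budget:} &\quad T_t = \bar{T} + R_t \\
  \text{wealth fund:}     &\quad T_t = \bar{T} + \bar{R} + r\,F_{t-1},
  \qquad F_t = F_{t-1} + R_t - \bar{R}
\end{align*}
% When commodity-price shocks are highly persistent, R_t stays close to its permanent level,
% so the balanced-budget transfer path is already close to the smoothed path of the fund rule.
```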
48

Diophantine perspectives to the exponential function and Euler’s factorial series

Seppälä, L. (Louna) 30 April 2019 (has links)
The focus of this thesis is on two functions: the exponential function and Euler's factorial series. By constructing explicit Padé approximations, we are able to improve lower bounds for linear forms in the values of these functions. In particular, the dependence on the height of the coefficients of the linear form will be sharpened in the lower bound. The first chapter contains some necessary definitions and auxiliary results needed in later chapters. We give precise definitions for a transcendence measure and Padé approximations of the second type. Siegel's lemma will be introduced as a fundamental tool in Diophantine approximation. A brief excursion to exterior algebras shows how they can be used to prove determinant expansion formulas. The reader will also be familiarised with valuations of number fields. In Chapter 2, a new transcendence measure for e is proved using type II Hermite-Padé approximations to the exponential function. An improvement to the previous transcendence measures is achieved by estimating the common factors of the coefficients of the auxiliary polynomials. The exponential function is the underlying topic of the third chapter as well. Now we study the common factors of the maximal minors of some large block matrices that appear when constructing Padé-type approximations to the exponential function. The factorisation of these minors is of interest both because of Bombieri and Vaaler's improved version of Siegel's lemma and because they are connected to finding explicit expressions for the approximation polynomials. In the beginning of Chapter 3, two general theorems concerning factors of Vandermonde-type block determinants are proved. In the final chapter, we concentrate on Euler's factorial series which has a positive radius of convergence in p-adic fields. We establish some non-vanishing results for a linear form in the values of Euler's series at algebraic integer points. A lower bound for this linear form is derived as well.
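For reference, a transcendence measure in the sense used above can be stated in its standard form as follows; the specific exponent obtained in the thesis is not reproduced here, only the shape of such a bound.

```latex
% Standard form of a transcendence measure for e (illustrative): for every nonzero
% polynomial P with integer coefficients, degree at most n and height (largest
% absolute coefficient) at most H,
\[
  \bigl| P(e) \bigr| \;>\; H^{-\,\omega(n,\,H)},
\]
% where \omega(n, H) is the transcendence measure. Improving the measure means making
% \omega grow more slowly in H, i.e., sharpening the dependence on the height mentioned above.
```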
49

Theory on lower bound energy and quantum chemical study of the interaction between lithium clusters and fluorine/fluoride / Théorie de l'énergie limite inférieure et étude de chimie quantique de l’interaction entre des agrégats de lithium et un fluor/fluorure

Bhowmick, Somnath 18 December 2015 (has links)
En chimie quantique, le principe variationnel est largement utilisé pour calculer la limite supérieure de l'énergie exacte d'un système atomique ou moléculaire. Des méthodes pour calculer la valeur limite inférieure de l'énergie existent mais sont bien moins connues. Une méthode précise pour calculer une telle limite inférieure permettrait de fournir une barre d'erreur théorique pour toute méthode de chimie quantique. Nous avons appliqué des méthodes de type variance pour calculer différentes énergies limites inférieures de l'atome d'hydrogène en utilisant des fonctions de base gaussiennes. L'énergie limite supérieure se trouve être toujours plus précise que ces différentes limites inférieures, i.e. plus proche de l'énergie exacte. L'importance de points singuliers sur l'évaluation de valeurs moyennes d'opérateurs quantiques a également été soulignée. Nous avons étudié les réactions d'adsorption d'un atome de fluor et d'un ion fluorure sur de petits agrégats de lithium Liₙ (n = 2-15), à l'aide de méthodes de chimie quantique précises. Pour le plus petit système, nous avons montré que la formation de complexes stables Li₂F et Li₂F⁻ se produit par un transfert d'électrons sans barrière et à longue portée, de Li₂ vers F pour le système neutre et l'inverse pour le système anionique. De telles réactions pourraient être rapides à très basse température. De plus, les complexes formés présentent des caractéristiques uniques de "longue liaison". Pour les systèmes plus gros LiₙF/LiₙF⁻ (n ≥ 4), nous avons montré que les énergies d'adsorption peuvent être aussi grandes que 6 eV selon le site d'adsorption et que plus d'un état électronique est impliqué dans le processus d'adsorption. Les complexes formés présentent des propriétés intéressantes de "super alcalins" et pourraient servir d'unités de base dans la synthèse de composés à transfert de charge avec des propriétés ajustables. / In quantum chemistry, the variational principle is widely used to calculate an upper bound to the true energy of an atomic or molecular system. Methods for calculating the lower bound value to the energy exist but are much less known. An accurate method to calculate such a lower bound would allow to provide a theoretical error bar for any quantum chemistry method. We have applied variance-like methods to calculate different lower bound energies of a hydrogen atom using Gaussian basis functions. The upper bound energy is found to be always more accurate than the lower bound energies, i.e. closer to the exact energy. The importance of singular points on mean value evaluation of quantum operators has also been brought to attention. The adsorption reactions of atomic fluorine (F) and fluoride (F⁻) on small lithium clusters Liₙ (n = 2-15) have been investigated using accurate quantum chemistry ab initio methods. For the smallest system, we have shown that the formation of the stable Li₂F and Li₂F⁻ complexes proceeds via a barrierless long-range electron transfer, from the Li₂ to F for the neutral and conversely from F⁻ to Li₂ for the anionic system. Such reactions could be fast at very low temperature. Furthermore, the formed complexes show unique long bond characteristics. For the bigger LiₙF/LiₙF⁻ systems (n ≥ 4), we have shown that the adsorption energies can be as large as 6 eV depending on the adsorption site and that more than one electronic state is implied in the adsorption process.
The formed complexes show interesting "superalkali" properties and could serve as building blocks in the synthesis of charge-transfer compounds with tunable properties.
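Two classical variance-based lower bounds of the kind investigated above are the Weinstein and Temple bounds. They are stated here as standard textbook results, not as the thesis's specific formulas.

```latex
% Classical variance-based lower bounds on the ground-state energy E_0.
% With mean energy <H> and variance sigma^2 = <H^2> - <H>^2 from a trial wavefunction:
\begin{align*}
  \text{Weinstein:} &\quad E_0 \;\ge\; \langle H \rangle - \sigma,
      && \text{valid when } \langle H \rangle \le \tfrac{1}{2}(E_0 + E_1), \\
  \text{Temple:}    &\quad E_0 \;\ge\; \langle H \rangle - \frac{\sigma^2}{E_1 - \langle H \rangle},
      && \text{valid when } \langle H \rangle < E_1,
\end{align*}
% where E_1 is the first excited-state energy. Both demand more information than the
% variational upper bound E_0 <= <H>, which is consistent with the finding above that
% the upper bound is the more accurate of the two in practice.
```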
50

Le meilleur des cas pour l’ordonnancement de groupes : Un nouvel indicateur proactif-réactif pour l’ordonnancement sous incertitudes / The best-case for groups of permutable operations : A new proactive-reactive parameter for scheduling under uncertainties

Yahouni, Zakaria 23 May 2017 (has links)
Cette thèse représente une étude d'un nouvel indicateur d'aide à la décision pour le problème d'ordonnancement d'ateliers de production sous présence d'incertitudes. Les contributions apportées dans ce travail se situent dans le contexte des groupes d'opérations permutables. Cette approche consiste à proposer une solution d'ordonnancement flexible caractérisant un ensemble fini non-énuméré d'ordonnancements. Un opérateur est ensuite censé sélectionner l'ordonnancement qui répond le mieux aux perturbations survenues dans l'atelier. Nous nous intéressons plus particulièrement à cette phase de sélection et nous mettons l'accent sur l'intérêt de l'humain pour la prise de décision. Dans un premier temps, nous présentons le meilleur des cas; indicateur d'aide à la décision pour le calcul du meilleur ordonnancement caractérisé par l'ordonnancement de groupes. Nous proposons des bornes inférieures pour le calcul des dates de début/fin des opérations. Ces bornes sont ensuite implémentées dans une méthode de séparation et d'évaluation permettant de calculer le meilleur des cas. Grâce à des simulations effectuées sur des instances de job shop de la littérature, nous mettons l'accent sur l'utilité et la performance d'un tel indicateur dans un système d'aide à la décision. Enfin, nous proposons une Interface Homme-Machine (IHM) adaptée à l'ordonnancement de groupes et pilotée par un système d'aide à la décision multicritères. L'implémentation de cette IHM sur un cas d'étude réel a permis de soulever certaines pratiques efficaces pour l'aide à la décision dans le contexte de l'ordonnancement sous incertitudes. / This thesis represents a study of a new decision-aid criterion for manufacturing scheduling under uncertainties. The contributions made in this work relate to the groups of permutable operations context. This approach consists of proposing a flexible scheduling solution characterizing a non-enumerated and finite set of schedules. An operator is then supposed to select the appropriate schedule that best copes with the disturbances that occur on the shop floor. We focus particularly on this selection phase and we emphasize the importance of the human for decision making. First, we present the best case: a decision-aid criterion for computing the best schedule characterized by the groups of permutable operations method. We propose lower bounds for computing the best starting/completion times of operations. These lower bounds are then implemented in a branch and bound procedure in order to compute the best case. Through several simulations carried out on literature benchmark instances, we stress the usefulness of such a criterion in a decision-aid system. Finally, we propose a Human-Machine Interface (HMI) adapted to the groups of permutable operations and driven by a multi-criteria decision-aid system. The implementation results of this HMI on a real case study provided some insight into the practice of decision-making and scheduling under uncertainties.
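As a rough companion to the branch-and-bound idea above, here is a deliberately simplified Python sketch: the best-case makespan for one group of permutable operations on a single machine, with a naive lower bound used for pruning. The single-machine setting, the data format, and the particular bound are illustrative assumptions; the thesis derives tighter bounds for job-shop group schedules.

```python
def best_case_makespan(ops):
    """Branch and bound over all permutations of a group of permutable operations
    on one machine. Each op is (release_date, processing_time). Returns the smallest
    achievable makespan, i.e. the 'best case' over all orderings of the group."""
    best = float("inf")

    def lower_bound(t, remaining):
        # The machine cannot finish earlier than: start no sooner than the earliest
        # remaining release date (or the current time), then process everything back to back.
        return max(t, min(r for r, _ in remaining)) + sum(p for _, p in remaining)

    def branch(t, remaining):
        nonlocal best
        if not remaining:
            best = min(best, t)
            return
        if lower_bound(t, remaining) >= best:
            return                                   # prune: this node cannot beat the incumbent
        for i, (r, p) in enumerate(remaining):
            finish = max(t, r) + p                   # schedule operation i next
            branch(finish, remaining[:i] + remaining[i + 1:])

    branch(0, list(ops))
    return best

# Example group of 4 permutable operations: (release date, processing time).
print(best_case_makespan([(0, 3), (2, 2), (5, 4), (1, 1)]))
```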
