261

Représentations des polynômes, algorithmes et bornes inférieures / Representations of polynomials, algorithms and lower bounds

Grenet, Bruno 29 November 2012 (has links)
La complexité algorithmique est l'étude des ressources nécessaires — le temps, la mémoire, … — pour résoudre un problème de manière algorithmique. Dans ce cadre, la théorie de la complexité algébrique est l'étude de la complexité algorithmique de problèmes de nature algébrique, concernant des polynômes. Dans cette thèse, nous étudions différents aspects de la complexité algébrique. D'une part, nous nous intéressons à l'expressivité des déterminants de matrices comme représentations des polynômes dans le modèle de complexité de Valiant. Nous montrons que les matrices symétriques ont la même expressivité que les matrices quelconques dès que la caractéristique du corps est différente de deux, mais que ce n'est plus le cas en caractéristique deux. Nous construisons également la représentation la plus compacte connue du permanent par un déterminant. D'autre part, nous étudions la complexité algorithmique de problèmes algébriques. Nous montrons que la détection de racines dans un système de n polynômes homogènes à n variables est NP-difficile. En lien avec la question « VP = VNP ? », version algébrique de « P = NP ? », nous obtenons une borne inférieure pour le calcul du permanent d'une matrice par un circuit arithmétique, et nous exhibons des liens unissant ce problème et celui du test d'identité polynomiale. Enfin nous fournissons des algorithmes efficaces pour la factorisation des polynômes lacunaires à deux variables. / Computational complexity is the study of the resources — time, memory, … — needed to solve a problem algorithmically. In this setting, algebraic complexity theory is the study of the computational complexity of problems of an algebraic nature, concerning polynomials. In this thesis, we study several aspects of algebraic complexity. On the one hand, we are interested in the expressiveness of matrix determinants as representations of polynomials in Valiant's model of complexity. We show that symmetric matrices have the same expressiveness as ordinary matrices as soon as the characteristic of the underlying field is different from two, but that this is no longer the case in characteristic two. We also build the smallest known representation of the permanent by a determinant. On the other hand, we study the computational complexity of algebraic problems. We show that detecting roots in a system of n homogeneous polynomials in n variables is NP-hard. In connection with the “VP = VNP?” question, the algebraic version of “P = NP?”, we obtain a lower bound for the computation of the permanent of a matrix by an arithmetic circuit, and we point out links between this problem and the polynomial identity testing problem. Finally, we give efficient algorithms for the factorization of lacunary bivariate polynomials.
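To make the determinant/permanent contrast concrete, here is a minimal Python sketch (an illustration of the general phenomenon, not the thesis's constructions): the determinant is computable in polynomial time by standard linear algebra, while the only fully general formula for the permanent below sums over all n! permutations, and no polynomial-time algorithm for it is known.

```python
from itertools import permutations

import numpy as np

def permanent(A):
    """Naive permanent: sum over all n! permutations, O(n * n!) time."""
    n = len(A)
    return sum(
        np.prod([A[i][p[i]] for i in range(n)])
        for p in permutations(range(n))
    )

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.linalg.det(A))  # ad - bc = -2.0, computable in O(n^3)
print(permanent(A))      # ad + bc = 10.0, same formula with all signs +
```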
262

Heterogeneous Multiscale Change-Point Inference and its Application to Ion Channel Recordings

Pein, Florian 20 October 2017 (has links)
No description available.
263

Problèmes d'ordonnancement avec production et consommation des ressources / Scheduling problems with production and consumption of resources

Sahli, Abderrahim 20 October 2016 (has links)
La plupart des travaux de recherches sur les problèmes d'ordonnancement traitent le cas des ressources renouvelables, c'est-à-dire des ressources qui sont exigées en début d'exécution de chaque tâche et sont restituées en fin d'exécution. Peu d'entre eux abordent les problèmes à ressources consommables, c'est-à-dire des ressources non restituées en fin d'exécution. Le problème de gestion de projet à contraintes de ressources (RCPSP) est le problème à ressources renouvelables le plus traité dans la littérature. Dans le cadre de cette thèse, nous nous sommes intéressés à une généralisation du problème RCPSP qui correspond au cas où les tâches sont remplacées par des événements liés par des relations de précédence étendues. Chaque événement peut produire ou consommer une quantité de ressources à sa date d'occurrence et la fonction économique reste la durée totale à minimiser. Nous avons nommé cette généralisation ERCPSP (Extended RCPSP). Nous avons élaboré des modèles de programmation linéaire pour résoudre ce problème. Nous avons proposé plusieurs bornes inférieures algorithmiques exploitant les travaux de la littérature sur les problèmes cumulatifs. Ensuite, nous avons élargi la portée des méthodes utilisées pour la mise en place de méthodes de séparation et évaluation. Nous avons traité aussi des cas particuliers par des méthodes basées sur la programmation dynamique. / This thesis investigates the Extended Resource Constrained Project Scheduling Problem (ERCPSP). ERCPSP is a general scheduling problem where the availability of a resource is depleted and replenished at the occurrence times of a set of events. It is an extension of the Resource Constrained Project Scheduling Problem (RCPSP) where activities are replaced by events, which have to be scheduled subject to generalized precedence relations. In this thesis, we are interested in proposing new methodologies and approaches to solve ERCPSP. First, we study some polynomial cases of this problem and we propose a dynamic programming algorithm to solve the parallel chain case. Then, we propose lower bounds, mixed integer programming models, and a branch-and-bound method to solve ERCPSP. Finally, we develop an instance generator dedicated to this problem.
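As a hedged illustration of the event-based resource model described above (a minimal sketch assuming a single resource and an already-fixed schedule; it is not one of the thesis's algorithms), one can check whether a dated set of producing and consuming events ever drives the resource level negative:

```python
def is_feasible(events, initial_stock=0):
    """Check that a dated schedule of events never drives the resource
    level negative. Each event is a (time, delta) pair, where delta > 0
    is production and delta < 0 is consumption."""
    level = initial_stock
    for _, delta in sorted(events):  # process events in time order
        level += delta
        if level < 0:
            return False             # consumption outpaces production
    return True

# Two producing events feeding one event that consumes 3 units:
print(is_feasible([(0, 2), (5, -3), (3, 1)]))  # True: level 2 -> 3 -> 0
print(is_feasible([(0, 2), (1, -3), (3, 1)]))  # False: level would hit -1
```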
264

Positioning in wireless networks: non-cooperative and cooperative algorithms

Destino, G. (Giuseppe) 06 November 2012 (has links)
Abstract In the last few years, location-awareness has emerged as a key technology for the future development of mobile, ad hoc and sensor networks. Thanks to location information, several network optimization strategies as well as services can be developed. However, the problem of determining accurate location, i.e. positioning, is still a challenge and robust algorithms are yet to be developed. In this thesis, we focus on the development of distance-based non-cooperative and cooperative algorithms, which are derived within a non-parametric, non-Bayesian framework, specifically with Weighted Least Squares (WLS) optimization. From a theoretical perspective, we study the WLS problem and establish its optimality through the relationship with a Maximum Likelihood (ML) estimator. We investigate the fundamental limits and derive the consistency conditions by creating a connection between Euclidean geometry and inference theory. Furthermore, we derive the closed-form expression of a distance-model based Cramér-Rao Lower Bound (CRLB), as well as the formulas that characterize information coupling in the Fisher information matrix. Non-cooperative positioning is addressed as follows. We propose a novel framework, namely Distance Contraction, to develop robust non-cooperative positioning techniques. We prove that distance contraction can mitigate the global minimum problem and that structured distance contraction yields nearly optimal performance in severe channel conditions. Based on these results, we show how classic algorithms such as the Weighted Centroid (WC) and the Non-Linear Least Squares (NLS) can be modified to cope with biased ranging. For cooperative positioning, we derive a novel, low-complexity and nearly optimal global optimization algorithm, namely the Range-Global Distance Continuation method, to use in centralized and distributed positioning schemes. We propose an effective weighting strategy to cope with biased measurements, which consists of a dispersion weight, which captures the effect of noise while maximizing the diversity of the information, and a geometry-based penalty weight, which penalizes the assumption of bias-free measurements. Finally, we show the results of a positioning test where we employ the proposed algorithms and utilize commercial Ultra-Wideband (UWB) devices. / Tiivistelmä Viime vuosina paikkatietoisuudesta on tullut eräs merkittävä avainteknologia mobiili- ja sensoriverkkojen tulevaisuuden kehitykselle. Paikkatieto mahdollistaa useiden verkko-optimointistrategioiden sekä palveluiden kehittämisen. Kuitenkin tarkan paikkatiedon määrittäminen, esimerkiksi kohteen koordinaattien, on edelleen vaativa tehtävä ja robustit algoritmit vaativat kehittämistä. Tässä väitöskirjassa keskitytään etäisyyspohjaisten, yhteistoiminnallisten sekä ei-yhteistoiminnallisten, algoritmien kehittämiseen. Algoritmit pohjautuvat parametrittömään ei-bayesilaiseen viitekehykseen, erityisesti painotetun pienimmän neliösumman (WLS) optimointimenetelmään. Väitöskirjassa tutkitaan WLS ongelmaa teoreettisesti ja osoitetaan sen optimaalisuus todeksi tarkastelemalla sen suhdetta suurimman todennäköisyyden (ML) estimaattoriin. Lisäksi tässä työssä tutkitaan perustavanlaatuisia raja-arvoja sekä johdetaan yhtäpitävyysehdot luomalla yhteys euklidisen geometrian ja inferenssiteorian välille. Väitöskirjassa myös johdetaan suljettu ilmaisu etäisyyspohjaiselle Cramér-Rao -alarajalle (CRLB) sekä esitetään yhtälöt, jotka karakterisoivat informaation liittämisen Fisherin informaatiomatriisiin.
Väitöskirjassa ehdotetaan uutta viitekehystä, nimeltään etäisyyden supistaminen, robustin ei-yhteistoiminnallisen paikannustekniikan perustaksi. Tässä työssä todistetaan, että etäisyyden supistaminen pienentää globaali minimi -ongelmaa ja jäsennetty etäisyyden supistaminen johtaa lähes optimaaliseen suorituskykyyn vaikeissa radiokanavan olosuhteissa. Näiden tulosten pohjalta väitöskirjassa esitetään, kuinka klassiset algoritmit, kuten painotetun keskipisteen (WC) sekä epälineaarinen pienimmän neliösumman (NLS) menetelmät, voidaan muokata ottamaan huomioon etäisyysmittauksen harha. Yhteistoiminnalliseksi paikannusmenetelmäksi johdetaan uusi, lähes optimaalinen algoritmi, joka on kompleksisuudeltaan matala. Algoritmi on etäisyyspohjainen globaalin optimoinnin menetelmä ja sitä käytetään keskitetyissä ja hajautetuissa paikannusjärjestelmissä. Lisäksi tässä työssä ehdotetaan tehokasta painotusstrategiaa ottamaan huomioon mittausharha. Strategia pitää sisällään dispersiopainon, joka tallentaa häiriön aiheuttaman vaikutuksen maksimoiden samalla informaation hajonnan, sekä geometrisen sakkokertoimen, joka rankaisee harhattomuuden ennakko-oletuksesta. Lopuksi väitöskirjassa esitetään tulokset kokeellisista mittauksista, joissa ehdotettuja algoritmeja käytettiin kaupallisissa erittäin laajakaistaisissa (UWB) laitteissa.
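A minimal sketch of the distance-based WLS formulation described in the abstract, assuming known anchor positions, hypothetical range measurements and inverse-deviation weights (the Distance Contraction framework and the dispersion/penalty weighting of the thesis are not reproduced here):

```python
import numpy as np
from scipy.optimize import least_squares

# Known anchor positions and hypothetical noisy ranges to an unknown node.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
measured = np.array([7.1, 7.2, 7.0])
weights = 1.0 / np.array([0.5, 0.5, 0.5])  # inverse noise std per link

def wls_residuals(x):
    """Weighted residuals between measured and model distances."""
    dists = np.linalg.norm(anchors - x, axis=1)
    return weights * (measured - dists)

# Non-cooperative WLS position estimate from a rough initial guess.
result = least_squares(wls_residuals, x0=np.array([1.0, 1.0]))
print(result.x)  # estimated 2-D position, close to (5, 5) here
```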
265

Korea's export performance: three empirical essays

Kang, Shin-jae January 1900 (has links)
Doctor of Philosophy / Department of Economics / Wayne Nafziger / This dissertation consists of three empirical essays. The first essay examines the causal relationship between output (GDP) growth and exports. Using the Modified Wald (MWald) test, we observe unidirectional causality from exports to GDP. For robustness, we also use a Vector Error Correction Model (VECM) and Generalized Impulse Response Function Analysis (GIRA). The VECM and the GIRA yield bidirectional causality between exports and GDP, which weakly supports the unidirectional result of the MWald test. Meanwhile, a structural break test confirms the presence of a structural break. These results are plausible and consistent with the expectations of our study regarding the Export Led Growth Hypothesis (ELGH). However, our results differ from previous studies on the ELGH for Korea: other studies find bidirectional causality, whereas this study finds only unidirectional causality. These differences may be caused by different observation data, different variables, and different econometric methodologies; model selection and omitted variables can also significantly change the results of causality testing. The second essay investigates the degree of competition between Korea's and China's exports in the U.S. market using the elasticity of substitution in a simple demand model. The market share of Korean exports has been decreasing while that of China has been increasing. The results of this study are as follows. First, using historical trade data, we find that Korea holds a dominant market share over China only in commodity group 27, while China holds the dominant share over Korea in the other export sections. Second, most estimates of the elasticity of substitution between the two countries' exports in the U.S. market are small (inelastic). However, the substitution elasticities of commodity groups 61 (apparel articles and accessories, knit or crochet), 62 (apparel articles and accessories, not knit etc.) and 85 (electric machinery etc., sound equipment, TV equipment, parts) are large (elastic), and these groups are competitive with China's exports in the U.S. market. A small value of the elasticity of substitution may be due to an identification problem in the simple standard model, as well as to measurement errors from using unit values as prices. To avoid such problems, we may need appropriate instrumental or proxy variables in the simple standard model, ones that correlate highly with the independent (unit price) variables and are uncorrelated with the measurement error terms; in practice, it is not easy to find good instrumental variables. The final essay evaluates the roles of price and income as important factors that affect Korea's exports, using the most recent monthly data. Using the Autoregressive Distributed Lag (ARDL) bounds testing approach, we find a long-run relationship among the variables and estimate the long-run price and income elasticities. However, the estimates of these long-run elasticities are statistically insignificant, which may be due to misspecification or measurement errors in our model. Given the existence of the long-run relationship between the variables, we construct an Error Correction Model (ECM) in order to observe the short-run dynamics of the elasticities.
Specifically, we add a dummy variable to our export demand model to achieve more efficient estimates, since the dummy variable reflects a shock to Korea's exports: Korea's economic crisis in 1997. In contrast to the long-run elasticities, we find that the short-run elasticity estimates are statistically more significant. When we use a structural break test to check the structural stability of Korea's export demand, we find no structural break point in 1997. Therefore, the shock of Korea's 1997 economic crisis might not significantly affect Korea's export demand in the given sample. However, structural break points do exist for the Information Technology (IT) bubble of the world economy in 2001 and for Korea's entry into the OECD, and both events appear to have triggered increases in Korea's export demand. In addition, we find that income elasticities are larger than price elasticities in the short run. This implies that income has more impact than price in the export demand model in the short run, and that the short-run change in Korea's exports is more sensitive to changes in foreign income (industrial production) than to changes in price (exchange rate). An interesting result, then, is that Korea's exports in the short run may respond more strongly to income than to price (exchange rate). This might be a consequence of Korea's dependence on growth in foreign income in recent years: developing countries have grown much faster than developed countries, and Korea's exports to these developing countries have increased. Thus, using recent data, we confirm that an increase in Korea's exports is driven mainly by income rather than price, specifically in the short run.
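As a hedged illustration of the causality analysis in the first essay (entirely synthetic data, and the standard Granger F tests from statsmodels rather than the essay's MWald procedure):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic stand-in for (GDP growth, export growth): exports lead GDP.
rng = np.random.default_rng(0)
exports = rng.normal(size=200)
gdp = 0.6 * np.roll(exports, 1) + 0.2 * rng.normal(size=200)

# Column order matters: the test asks whether the second column
# Granger-causes the first, i.e. exports -> GDP here.
data = np.column_stack([gdp, exports])
grangercausalitytests(data, maxlag=2)  # small p-values reject "no causality"
```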
266

Plan Bouquets: An Exploratory Approach to Robust Query Processing

Dutt, Anshuman January 2016 (has links) (PDF)
Over the last four decades, relational database systems, with their mathematical basis in first-order logic, have provided a congenial and efficient environment to handle enterprise data during its entire life cycle of generation, storage, maintenance and processing. An organic reason for their pervasive popularity is intrinsic support for declarative user queries, wherein the user only specifies the end objectives, and the system takes on the responsibility of identifying the most efficient means, called “plans”, to achieve these objectives. A crucial input to generating efficient query execution plans is the set of compile-time estimates of the data volumes that are output by the operators implementing the algebraic predicates present in the query. These volume estimates are typically computed using the “selectivities” of the predicates. Unfortunately, a pervasive problem encountered in practice is that these selectivities often differ significantly from the values actually encountered during query execution, leading to poor plan choices and grossly inflated response times. While the database research community has spent considerable effort to address the above challenge, the prior techniques all suffer from a systemic limitation — the inability to provide any guarantees on the execution performance. In this thesis, we materially address this long-standing open problem by developing a radically different query processing strategy that lends itself to attractive guarantees on run-time performance. Specifically, in our approach, the compile-time estimation process is completely eschewed for error-prone selectivities. Instead, from the set of optimal plans in the query's selectivity error space, a limited subset, called the “plan bouquet”, is selected such that at least one of the bouquet plans is 2-optimal at each location in the space. Then, at run time, an exploratory sequence of cost-budgeted executions from the plan bouquet is carried out, eventually finding a plan that executes to completion within its assigned budget. The duration and switching of these executions are controlled by a graded progression of isosurfaces projected onto the optimal performance profile. We prove that this construction provides viable guarantees on the worst-case performance relative to an oracular system that magically possesses accurate a priori knowledge of all selectivities. Moreover, it ensures repeatable execution strategies across different invocations of a query, an extremely desirable feature in industrial settings. Our second contribution is a suite of techniques that substantively improve on the performance guarantees offered by the basic bouquet algorithm. First, we present an algorithm that skips carefully chosen executions from the basic plan bouquet sequence, leveraging the observation that an expensive execution may provide better coverage as compared to a series of cheaper siblings, thereby reducing the aggregate exploratory overheads. Next, we explore randomized variants with regard to both the sequence of plan executions and the constitution of the plan bouquet, and show that the resulting guarantees are markedly superior, in expectation, to the corresponding worst-case values. From a deployment perspective, the above techniques are appealing since they are completely “black-box”, that is, non-invasive with regard to the database engine, implementable using only API features that are commonly available in modern systems.
As a proof of concept, the bouquet approach has been fully prototyped in QUEST, a Java-based tool that provides a visual and interactive demonstration of the bouquet identification and execution phases. In a similar spirit, we propose an efficient isosurface identification algorithm that avoids exploration of large portions of the error space and drastically reduces the effort involved in bouquet construction. The plan bouquet approach is ideally suited for “canned” query environments, where the computational investment in bouquet identification is amortized over multiple query invocations. The final contribution of this thesis is extending the advantage of compile-time sub-optimality guarantees to ad hoc query environments where the overheads of the off-line bouquet identification may turn out to be impractical. Specifically, we propose a completely revamped bouquet algorithm that constructs the cost-budgeted execution sequence in an “on-the-fly” manner. This is achieved through a “white-box” interaction style with the engine, whereby the plan output cardinalities exposed by the engine are used to compute lower bounds on the error-prone selectivities during plan executions. For this algorithm, the sub-optimality guarantees are in the form of a low-order polynomial of the number of error-prone selectivities in the query. The plan bouquet approach has been empirically evaluated on both PostgreSQL and a commercial engine ComOpt, over the TPC-H and TPC-DS benchmark environments. Our experimental results indicate that it delivers orders-of-magnitude improvements in the worst-case behavior, without impairing the average-case performance, as compared to the native optimizers of these systems. In absolute terms, the worst-case sub-optimality is upper bounded by 20 across the suite of queries, and the average performance is empirically found to be within a factor of 4 with respect to the optimal. Even with the on-the-fly bouquet algorithm, the guarantees are found to be within a factor of 3 as compared to those achievable in the corresponding canned query environment. Overall, the plan bouquet approach provides novel performance guarantees that open up exciting possibilities for robust query processing.
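The exploratory, cost-budgeted execution sequence at the heart of the approach can be sketched as follows. This is a deliberately simplified sketch with hypothetical plan costs: the actual algorithm draws its budgets from isosurfaces of the optimal cost profile and only attempts plans whose optimal cost fits the current budget.

```python
def bouquet_execute(bouquet, cost_of, budgets):
    """Run bouquet plans under a growing budget sequence, aborting any
    execution that exceeds its budget, until some plan completes."""
    spent = 0.0
    for budget in budgets:                 # e.g. geometrically doubling budgets
        for plan in bouquet:
            cost = cost_of(plan)           # true cost, unknown before running
            if cost <= budget:
                return plan, spent + cost  # first plan to finish within budget
            spent += budget                # aborted execution: budget is sunk
    raise RuntimeError("no plan finished within the largest budget")

# Hypothetical costs: p2 is cheapest at the (unknown) true selectivity.
costs = {"p1": 90.0, "p2": 14.0, "p3": 55.0}
plan, total = bouquet_execute(["p1", "p2", "p3"], costs.get, [10, 20, 40, 80])
print(plan, total)  # p2 completes under budget 20; earlier attempts are sunk
```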
267

Algorithmes de poursuite stochastiques et inégalités de concentration empiriques pour l'apprentissage statistique / Stochastic pursuit algorithms and empirical concentration inequalities for machine learning

Peel, Thomas 29 November 2013 (has links)
La première partie de cette thèse introduit de nouveaux algorithmes de décomposition parcimonieuse de signaux. Basés sur Matching Pursuit (MP), ils répondent au problème suivant : comment réduire le temps de calcul de l'étape de sélection de MP, souvent très coûteuse. En réponse, nous sous-échantillonnons le dictionnaire à chaque itération, en lignes et en colonnes. Nous montrons que cette approche fondée théoriquement affiche de bons résultats en pratique. Nous proposons ensuite un algorithme itératif de descente de gradient par blocs de coordonnées pour sélectionner des caractéristiques en classification multi-classes. Celui-ci s'appuie sur l'utilisation de codes correcteurs d'erreurs transformant le problème en un problème de représentation parcimonieuse simultanée de signaux. La deuxième partie expose de nouvelles inégalités de concentration empiriques de type Bernstein. En premier, elles concernent la théorie des U-statistiques et sont utilisées pour élaborer des bornes en généralisation dans le cadre d'algorithmes de ranking. Ces bornes tirent parti d'un estimateur de variance pour lequel nous proposons un algorithme de calcul efficace. Ensuite, nous présentons une version empirique de l'inégalité de type Bernstein proposée par Freedman [1975] pour les martingales. Ici encore, la force de notre borne réside dans l'introduction d'un estimateur de variance calculable à partir des données. Cela nous permet de proposer des bornes en généralisation pour l'ensemble des algorithmes d'apprentissage en ligne améliorant l'état de l'art et ouvrant la porte à une nouvelle famille d'algorithmes d'apprentissage tirant parti de cette information empirique. / The first part of this thesis introduces new algorithms for the sparse encoding of signals. Based on Matching Pursuit (MP), they address the following problem: how to reduce the computation time of the selection step of MP, which is often very costly. As an answer, we sub-sample the rows and columns of the dictionary at each iteration. We show that this theoretically grounded approach has good empirical performance. We then propose a block coordinate gradient descent algorithm for feature selection problems in the multiclass classification setting. Thanks to the use of error-correcting output codes, this task can be seen as a simultaneous sparse signal encoding problem. The second part presents new empirical Bernstein inequalities. First, they concern the theory of U-statistics and are applied to design generalization bounds for ranking algorithms. These bounds take advantage of a variance estimator, and we propose an efficient algorithm to compute it. Then, we present an empirical version of the Bernstein-type inequality for martingales by Freedman [1975]. Again, the strength of our result lies in the variance estimator computable from the data. This allows us to propose generalization bounds for online learning algorithms which improve the state of the art and pave the way to a new family of learning algorithms taking advantage of this empirical information.
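A minimal sketch of the dictionary sub-sampling idea from the first part, restricted to columns and assuming an orthonormal dictionary (the thesis also sub-samples rows and provides theoretical guarantees not reproduced here):

```python
import numpy as np

def subsampled_mp(y, D, n_iter=20, col_frac=0.5, seed=None):
    """Matching Pursuit whose selection step only scans a random subset
    of dictionary columns (atoms) at each iteration."""
    rng = np.random.default_rng(seed)
    n_atoms = D.shape[1]
    x, residual = np.zeros(n_atoms), y.copy()
    for _ in range(n_iter):
        cols = rng.choice(n_atoms, size=int(col_frac * n_atoms), replace=False)
        corr = D[:, cols].T @ residual   # cheaper selection step on a subsample
        best = cols[np.argmax(np.abs(corr))]
        coef = D[:, best] @ residual     # assumes unit-norm atoms
        x[best] += coef
        residual -= coef * D[:, best]
    return x

D = np.linalg.qr(np.random.default_rng(0).normal(size=(20, 20)))[0]  # orthonormal atoms
y = 3.0 * D[:, 4] - 2.0 * D[:, 11]
x = subsampled_mp(y, D, seed=1)
print(np.round(x[[4, 11]], 2))  # recovers roughly [3.0, -2.0]
```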
268

Méthodologies et outils de synthèse pour des fonctions de filtrage chargées par des impédances complexes / Methodologies and synthesis tools for filtering functions loaded by complex impedances

Martinez Martinez, David 20 June 2019 (has links)
Le problème de l'adaptation d'impédance en ingénierie des hyperfréquences et en électronique en général consiste à minimiser la réflexion de la puissance qui doit être transmise, par un générateur, à une charge donnée dans une bande de fréquence. Les exigences d'adaptation et de filtrage dans les systèmes de communication classiques sont généralement satisfaites en utilisant un circuit d'adaptation suivi d'un filtre. Nous proposons ici de concevoir des filtres d'adaptation qui intègrent à la fois les exigences d'adaptation et de filtrage dans un seul appareil et augmentent ainsi l'efficacité globale et la compacité du système. Dans ce travail, le problème d'adaptation est formulé en introduisant un problème d'optimisation convexe dans le cadre établi par la théorie de l'adaptation de Fano et Youla. De ce contexte, au moyen de techniques modernes de programmation semi-définie non linéaire, un problème convexe, et donc avec une optimalité garantie, est obtenu. Enfin, pour démontrer les avantages fournis par la théorie développée au-delà de la synthèse de filtres avec des charges complexes variables en fréquence, nous examinons deux applications pratiques récurrentes dans la conception de ce type de dispositifs. Ces applications correspondent, d'une part, à l'adaptation d'un réseau d'antennes dans le but de maximiser l'efficacité du rayonnement, et, d'autre part, à la synthèse de multiplexeurs où chacun des filtres de canal est adapté au reste du dispositif, notamment les filtres correspondant aux autres canaux. / The problem of impedance matching in electronics, and particularly in RF engineering, consists in minimizing the reflection of the power that is to be transmitted by a generator to a given load within a frequency band. The matching and filtering requirements in classical communication systems are usually satisfied by using a matching circuit followed by a filter. We propose here to design matching filters that integrate both matching and filtering requirements in a single device, thereby increasing the overall efficiency and compactness of the system. In this work, the matching problem is formulated by introducing a convex optimization problem within the framework established by the matching theory of Fano and Youla. As a result, by means of modern non-linear semi-definite programming techniques, a convex problem, and therefore one with guaranteed optimality, is obtained. Finally, to demonstrate the advantages provided by the developed theory beyond the synthesis of filters with frequency-varying loads, we consider two practical applications which are recurrent in the design of communication devices. These applications are, on the one hand, the matching of an array of antennas with the objective of maximizing the radiation efficiency, and, on the other hand, the synthesis of multiplexers where each of the channel filters is matched to the rest of the device, including the filters corresponding to the other channels.
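The quantity a matching filter drives down can be illustrated with a short sketch (a hypothetical series-RL load and a 50-ohm reference are assumed; the convex synthesis procedure itself is far beyond this):

```python
import numpy as np

# Hypothetical frequency-varying load: 50-ohm resistor in series with 2 nH.
freqs = np.linspace(1e9, 3e9, 5)               # 1-3 GHz band
z_load = 50.0 + 1j * 2 * np.pi * freqs * 2e-9  # Z(f) = R + j*omega*L
z_ref = 50.0                                   # generator/reference impedance

# Reflection coefficient the matching filter must keep small in the band.
gamma = (z_load - z_ref) / (z_load + z_ref)
print(20 * np.log10(np.abs(gamma)))            # reflection magnitude in dB
```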
269

Techniques de coopération appliquées aux futurs réseaux cellulaires / Cooperation strategies for next generation cellular systems

Cardone, Martina 24 April 2015 (has links)
Une qualité de service uniforme pour les utilisateurs mobiles et une utilisation distribuée du spectre représentent les ingrédients clés des réseaux cellulaires de prochaine génération. Dans ce but, la coopération au niveau de la couche physique entre les nœuds de l'infrastructure et les nœuds du réseau sans fil a émergé comme une technique à fort potentiel. La coopération s'appuie sur les propriétés de diffusion du canal sans fil, c'est-à-dire que la même transmission peut être entendue par plusieurs nœuds, ouvrant ainsi la possibilité pour les nœuds de s'aider à transmettre les messages à leur destination finale. La coopération promet aussi d'offrir une façon nouvelle et intelligente de gérer les interférences, au lieu de simplement les ignorer et les traiter comme du bruit. Comprendre comment concevoir ces systèmes radio coopératifs, afin que les ressources disponibles soient pleinement utilisées, est d'une importance fondamentale. L'objectif de cette thèse est de mener une étude du point de vue de la théorie de l'information, pour des systèmes sans fil pertinents dans la pratique, où les nœuds de l'infrastructure coopèrent en essayant d'améliorer les performances du réseau. Les systèmes radio avec des relais semi-duplex ainsi que les scénarios où une station de base aide à servir les utilisateurs mobiles associés à une autre station de base, sont les réseaux sans fil coopératifs étudiés dans cette thèse. Le but principal est la progression vers la caractérisation de la capacité de ces systèmes sans fil au moyen de dérivation de nouvelles bornes supérieures pour les performances et la conception de nouvelles stratégies de transmission permettant de les atteindre. / A uniform mobile user quality of service and a distributed use of the spectrum represent the key ingredients for next generation cellular networks. Toward this end, physical layer cooperation among the network infrastructure and the wireless nodes has emerged as a potential technique. Cooperation leverages the broadcast nature of the wireless medium, that is, the same transmission can be heard by multiple nodes, thus opening up the possibility that nodes help one another convey the messages to their intended destinations. Cooperation also promises to offer novel and smart ways to manage interference, instead of simply disregarding it and treating it as noise. Understanding how to properly design such cooperative wireless systems so that the available resources are fully utilized is of fundamental importance. The objective of this thesis is to conduct an information-theoretic study of practically relevant wireless systems where the network infrastructure nodes cooperate among themselves in an attempt to enhance the network performance in many critical aspects, such as throughput, robustness and coverage. Wireless systems with half-duplex relay stations, as well as scenarios where a base station overhears another base station and consequently helps serve that base station's associated mobile users, represent the wireless cooperative networks under investigation in this thesis. The primary focus is to make progress towards characterizing the capacity of such wireless systems by means of the derivation of novel outer bounds and the design of new provably optimal transmission strategies.
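As a textbook-level illustration of the kind of outer bound involved (a hedged sketch: the full-duplex Gaussian relay cut-set bound evaluated for independent source and relay inputs, whereas the thesis treats the harder half-duplex case, which also optimizes over listen/transmit schedules):

```python
import math

def relay_cutset_bound(snr_sd, snr_sr, snr_rd):
    """Cut-set upper bound for the full-duplex Gaussian relay channel
    with independent source/relay inputs, in bits per channel use."""
    broadcast_cut = math.log2(1 + snr_sd + snr_sr)  # cut around the source
    mac_cut = math.log2(1 + snr_sd + snr_rd)        # cut around the destination
    return min(broadcast_cut, mac_cut)

# Hypothetical link SNRs (linear scale): strong source-relay and relay-destination links.
print(relay_cutset_bound(snr_sd=1.0, snr_sr=10.0, snr_rd=10.0))
```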
270

Stanovení průtočných a zatěžovacích charakteristik hladinových klapkových uzávěrů pomocí fyzikálního a numerického modelování / Determination of flow and load characteristics of surface flap gates through physical and numerical modeling

Picka, Daniel January 2015 (has links)
The subject of this research was the effect of elevated water level in the downstream apron on the weir flow capacity and the load characteristics of the flap gate. The thesis focuses on the possibilities of physical and CFD modeling of flap gate weir flow. The introduction covers the current state of knowledge, model studies conducted in the Czech Republic and abroad, and flap gate weirs built both in the Czech Republic and worldwide. This is followed by experimental physical research on a family of weir structure models at scales 1:1 (prototype), 1:2 and 1:2.5. The options for CFD simulation of a flap gate weir are covered in the chapter on CFD modeling. In conclusion, the physical methods are compared with CFD simulations of the flap gate weir.
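For orientation, free-overflow weir capacity is commonly estimated with the Poleni formula; the sketch below uses assumed values (the thesis quantifies how an elevated downstream level reduces this capacity, an effect this formula ignores):

```python
import math

def weir_discharge(c_d, width, head):
    """Rectangular weir discharge by the Poleni formula,
    Q = (2/3) * C_d * b * sqrt(2 g) * h^(3/2), in m^3/s."""
    g = 9.81
    return (2.0 / 3.0) * c_d * width * math.sqrt(2.0 * g) * head ** 1.5

# Hypothetical flap gate crest: 4 m wide, 0.8 m head, discharge coefficient 0.62.
print(round(weir_discharge(0.62, 4.0, 0.8), 2))  # ~5.24 m^3/s
```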
