201

On the Value of Prediction and Feedback for Online Decision Making With Switching Costs

Ming Shi (12621637) 01 June 2022 (has links)
Online decision making with switching costs has received considerable attention in many practical problems that face uncertainty in the inputs and key problem parameters. Because the switching costs penalize changes of decision, making good online decisions under such uncertainty is known to be extremely challenging. This thesis aims to provide new online algorithms with strong performance guarantees that address this challenge.

In parts 1 and 2 of this thesis, motivated by Network Functions Virtualization and the smart grid, we study competitive online convex optimization with switching costs. Specifically, in part 1 we focus on the setting with an uncertainty set (one type of prediction) and hard infeasibility constraints. We develop new online algorithms that attain optimized competitive ratios while ensuring feasibility at all times. Moreover, we design a robustification procedure that helps these algorithms obtain good average-case performance simultaneously. In part 2 we focus on the setting with look-ahead (another type of prediction). We provide the first algorithm whose competitive ratio not only decreases to 1 as the look-ahead window size increases, but also remains upper-bounded for any ratio between the switching-cost coefficient and the service-cost coefficient.

In part 3 of this thesis, motivated by edge computing with artificial intelligence, we study bandit learning with switching costs where, in addition to bandit feedback, full feedback can be requested at a cost. We show that, when only 1 arm can be chosen at a time, adding costly full feedback does not fundamentally reduce the Θ(T^(2/3)) regret over a time horizon T. In contrast, when 2 (or more) arms can be chosen at a time, we provide a new online learning algorithm that achieves a significantly smaller regret of O(√T), without even using full feedback. To the best of our knowledge, this sharp transition from choosing 1 arm to choosing 2 (or more) arms has not been reported in the literature before.
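The Θ(T^(2/3)) barrier above is tied to batching: with switching costs, pulls must be grouped into contiguous blocks so that switches stay rare. The following toy sketch (invented parameters and a generic explore-then-commit scheme, not the thesis's algorithm) shows a blocked bandit strategy whose number of switches is bounded by the number of blocks rather than by T:

```python
import random

def blocked_etc(means, T, block=None):
    """Explore-then-commit with contiguous (blocked) arm pulls.

    Switching costs penalize changing arms, so exploration pulls each
    arm in one contiguous block: the number of switches is bounded by
    the number of blocks, not by the horizon T.  Toy sketch only.
    """
    k = len(means)
    if block is None:
        block = max(1, round(T ** (2 / 3)))   # classic T^(2/3) batching
    rewards = [0.0] * k
    pulls = [0] * k
    switches = 0
    last_arm = None
    t = 0
    # exploration phase: one contiguous block per arm
    for arm in range(k):
        for _ in range(min(block, T - t)):
            r = 1.0 if random.random() < means[arm] else 0.0
            rewards[arm] += r
            pulls[arm] += 1
            if last_arm is not None and arm != last_arm:
                switches += 1
            last_arm = arm
            t += 1
    # commit phase: play the empirically best arm for the rest
    best = max(range(k), key=lambda a: rewards[a] / max(1, pulls[a]))
    while t < T:
        if best != last_arm:
            switches += 1
        last_arm = best
        t += 1
    return best, switches

random.seed(0)
best, switches = blocked_etc([0.3, 0.7], T=1000)
print(best, switches)   # switches stays at most k here, far below T
```

With 2 arms the run incurs at most 2 switches regardless of T, which is the structural property that batching buys.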
202

Parallel and Decentralized Algorithms for Big-data Optimization over Networks

Amir Daneshmand (11153640) 22 July 2021 (has links)
Recent decades have witnessed a data deluge generated by heterogeneous sources, e.g., social networks, streaming, and marketing services, which has naturally created a surge of interest in the theory and applications of large-scale convex and non-convex optimization. For example, real-world instances of statistical learning problems such as deep learning and recommendation systems can generate sheer volumes of spatially/temporally diverse data (up to petabytes in commercial applications) with millions of decision variables to be optimized. Such problems are often referred to as big-data problems. Solving them by standard optimization methods demands an intractable amount of centralized storage and computational resources, which is infeasible and is the foremost motivation for the parallel and decentralized algorithms developed in this thesis.

This thesis consists of two parts: (I) Distributed Nonconvex Optimization and (II) Distributed Convex Optimization.

In Part (I), we start from a winning paradigm in big-data optimization, the Block Coordinate Descent (BCD) algorithm, which ceases to be effective when problem dimensions grow overwhelmingly. In particular, we consider a general family of constrained non-convex composite large-scale problems defined on multicore computing machines equipped with shared memory. We design a hybrid deterministic/random parallel algorithm that efficiently solves such problems by synergistically combining Successive Convex Approximation (SCA) with greedy/random dimensionality-reduction techniques. We provide theoretical and empirical results showing the efficacy of the proposed scheme in the face of huge-scale problems. The next step is to broaden the network setting to general mesh networks modeled as directed graphs, for which we propose a class of gradient-tracking-based algorithms with global convergence guarantees to critical points of the problem. We further explore the geometry of the landscape of the non-convex problems to establish second-order guarantees and to strengthen our convergence results from local to global optimal solutions for a wide range of machine learning problems.

In Part (II), we focus on a family of distributed convex optimization problems defined over meshed networks. Relevant state-of-the-art algorithms often consider limited problem settings and have pessimistic communication complexities with respect to their centralized variants, which raises an important question: can one achieve the rate of centralized first-order methods over networks, and moreover, can one improve upon their communication costs by using higher-order local solvers? To answer these questions, we propose an algorithm that utilizes surrogate objective functions in the local solvers (hence going beyond first-order realms, such as proximal-gradient) coupled with a perturbed (push-sum) consensus mechanism that tracks locally the gradient of the central objective function. The algorithm is proved to match the convergence rate of its centralized counterpart, up to multiplicative network factors. When considering, in particular, Empirical Risk Minimization (ERM) problems with statistically homogeneous data across the agents, our algorithm employing high-order surrogates provably achieves faster rates than what is achievable by first-order methods. Such improvements are made without exchanging any Hessian matrices over the network.

Finally, we focus on the ill-conditioning issue that impacts the efficiency of decentralized first-order methods over networks and renders them impractical in terms of both computation and communication cost. A natural solution is to develop distributed second-order methods, but their requirement for Hessian information incurs substantial communication overheads on the network. To work around such exorbitant communication costs, we propose a "statistically informed" preconditioned cubic regularized Newton method which provably improves upon the rates of first-order methods. The proposed scheme does not require communication of Hessian information over the network and yet achieves the iteration complexity of centralized second-order methods up to the statistical precision. In addition, the (second-order) approximate nature of the utilized surrogate functions improves upon the per-iteration computational cost of our earlier scheme in this setting.
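The gradient-tracking idea central to Part (I) can be sketched in a few lines: each agent mixes its neighbors' iterates through a doubly stochastic matrix while maintaining a tracker of the global gradient. This is a toy undirected-graph sketch with invented data and step size, not the thesis's directed-graph (push-sum) algorithm:

```python
# Each agent i privately holds f_i(x) = 0.5*(x - a_i)^2; the network
# goal is argmin of sum_i f_i, i.e. mean(a).  Gradient tracking lets
# every agent estimate the global gradient via neighbor exchanges.
a = [1.0, 2.0, 6.0]              # invented local data; optimum is 3.0
n = len(a)
W = [[0.50, 0.25, 0.25],         # doubly stochastic mixing weights
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]

def grad(i, xi):                 # local gradient of f_i
    return xi - a[i]

alpha = 0.05
x = [0.0] * n
y = [grad(i, x[i]) for i in range(n)]   # trackers start at local grads

for _ in range(600):
    x_new = [sum(W[i][j] * x[j] for j in range(n)) - alpha * y[i]
             for i in range(n)]
    # tracker update preserves sum(y) == sum of current local gradients
    y = [sum(W[i][j] * y[j] for j in range(n))
         + grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)]
    x = x_new

print(x)   # every agent's iterate approaches the consensus optimum 3.0
```

The invariant that the trackers sum to the current global gradient is what upgrades plain decentralized gradient descent to exact convergence with a constant step size.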
203

Methodologies and synthesis tools for filter functions loaded by complex impedances

Martinez Martinez, David 20 June 2019 (has links)
The problem of impedance matching in electronics, and particularly in RF engineering, consists of minimizing the reflection of the power that is to be transmitted by a generator to a given load within a frequency band. The matching and filtering requirements in classical communication systems are usually satisfied by using a matching circuit followed by a filter. We propose here to design matching filters that integrate both matching and filtering requirements in a single device and thereby increase the overall efficiency and compactness of the system. In this work, the matching problem is formulated by introducing convex optimisation into the framework established by the matching theory of Fano and Youla. As a result, by means of modern non-linear semi-definite programming techniques, a convex problem, and therefore one with guaranteed optimality, is obtained. Finally, to demonstrate the advantages provided by the developed theory beyond the synthesis of filters with frequency-varying loads, we consider two practical applications which are recurrent in the design of communication devices. These are, on the one hand, the matching of an array of antennas with the objective of maximizing the radiation efficiency, and, on the other hand, the synthesis of multiplexers where each of the channel filters is matched to the rest of the device, including the filters corresponding to the other channels.
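As a minimal numerical illustration of the matching objective (not the synthesis method of the thesis), one can evaluate the reflection coefficient Γ = (Z - Z0)/(Z + Z0) of a hypothetical frequency-varying load over a band and read off the worst-case mismatch; all component values below are invented:

```python
import math

# Reflection coefficient of a load Z against reference impedance Z0.
# Matching means keeping |Gamma| small over the whole frequency band.
def gamma(z_load, z0=50.0):
    return (z_load - z0) / (z_load + z0)

# hypothetical series-RL load: Z(f) = R + j*2*pi*f*L
R, L = 50.0, 10e-9                          # 50 ohm in series with 10 nH
band = [f * 1e9 for f in (0.5, 1.0, 2.0, 3.0)]
mismatch = [abs(gamma(complex(R, 2 * math.pi * f * L))) for f in band]
worst = max(mismatch)
print(worst)   # worst-case |Gamma| in the band (grows with frequency here)
```

A matching network would be synthesized precisely to push this worst-case value down across the band.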
204

Dynamic Graph Generation and an Asynchronous Parallel Bundle Method Motivated by Train Timetabling

Fischer, Frank 09 July 2013 (has links)
Lagrangian relaxation is a successful solution approach for many combinatorial optimisation problems, one of them being the train timetabling problem (TTP). We model this problem using time-expanded networks for the single-train schedules and coupling constraints to enforce restrictions like station capacities and headway times. Lagrangian relaxation of these coupling constraints leads to shortest-path subproblems in the time-expanded networks and is solved using a proximal bundle method. However, large instances from our practical partner Deutsche Bahn lead to computationally intractable models. In this thesis we develop two new algorithmic techniques to improve the solution process for this kind of optimisation problem. The first technique, Dynamic Graph Generation (DGG), aims at improving the computation of the shortest-path subproblems in large time-expanded networks. Without sacrificing any accuracy, DGG allows storing only small parts of the networks and extending them dynamically whenever the stored part proves too small. This is possible by exploiting the property of the objective function in many scheduling applications to prefer early paths or due times, respectively. We prove that DGG can be implemented very efficiently, and that its running time and the number of additionally stored nodes do not depend on the size of the time-expanded network but only on the length of the train routes. The second technique is an asynchronous parallel bundle method (APBM). Traditional bundle methods require one solution of each subproblem in each iteration. However, many practical applications, e.g. the TTP, consist of rather loosely coupled subproblems. The APBM chooses only small subspaces corresponding to the Lagrange multipliers of strongly violated coupling constraints and optimises only these variables while keeping all other variables fixed. Several subspaces of disjoint variables may be chosen simultaneously and optimised in parallel. The solutions of the subspace problems are incorporated into the global data as soon as they are available, without any synchronisation mechanism. However, in order to guarantee convergence, the algorithm automatically detects dependencies between different subspaces and respects these dependencies in future subspace selections. We prove the convergence of the APBM under reasonable assumptions for both the dual and the associated primal aggregate data. The APBM is then further extended to problems with unknown dependencies between subproblems and constraints in the Lagrangian relaxation problem. Again the algorithm automatically detects these dependencies and respects them in future iterations, and again we prove convergence under reasonable assumptions. Finally we test our solution approach for the TTP on some real-world instances of Deutsche Bahn. Using an iterative rounding heuristic based on the approximate fractional solutions obtained by the Lagrangian relaxation, we are able to compute feasible schedules for all trains in a subnetwork of about 10% of the whole German network in about 12 hours. In these timetables, 99% of all passenger trains are scheduled with no significant delay, and the travel time of the freight trains is reduced by about one hour on average.
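The decoupling effect of Lagrangian relaxation can be seen on a toy instance (invented costs, nowhere near a real TTP model): two trains both prefer a cheap route that only one may use; pricing the coupling constraint with a multiplier and taking subgradient steps drives the price toward the value at which the conflict resolves:

```python
# Toy Lagrangian relaxation: two trains each pick one of two routes;
# both prefer the cheap route, but a capacity (headway) constraint
# allows only one train on it.  Relaxing the constraint with a
# multiplier lam decouples the trains into independent subproblems.
routes = {"cheap": 1.0, "detour": 3.0}
capacity = 1                       # at most one train on the cheap route

def best_route(lam):
    # each train solves its own priced shortest-path subproblem
    priced = {"cheap": routes["cheap"] + lam, "detour": routes["detour"]}
    return min(priced, key=priced.get)

lam = 0.0
for _ in range(50):
    picks = [best_route(lam), best_route(lam)]
    used = picks.count("cheap")
    lam = max(0.0, lam + 0.1 * (used - capacity))   # subgradient step
print(lam, picks)   # lam settles near 2.0, the cost gap between routes
```

The multiplier oscillates around the route-cost gap (here 2.0), which is exactly the dual price at which one train becomes indifferent and the capacity can be respected.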
205

MODELING, DESIGN, AND ADJOINT SENSITIVITY ANALYSIS OF NANO-PLASMONIC STRUCTURES

Ahmed, Osman S. 04 1900 (has links)
The thesis explains in full detail the developed techniques and approaches for the modeling, design, and sensitivity analysis of nano-plasmonic structures. Some examples are, however, included for audiences with a general microwave background. Although the thesis is mainly focused on simulation-based techniques, analytical and convex optimization approaches are also demonstrated. The thesis is organized into two parts. Part 1 includes Chapters 2-4, which cover the simulation-based modeling and sensitivity analysis approaches and their applications. Part 2 includes Chapters 5 and 6, which cover the analytical optimization approaches. / We propose novel techniques for the modeling, adjoint sensitivity analysis, and optimization of photonic and nano-plasmonic devices. The scope of our work is general enough to cover the microwave, terahertz, and optical regimes. It contains original approaches developed for different categories of materials, including dispersive and plasmonic materials. Artificial materials (metamaterials) are also investigated and modeled. The modeling technique exploits the time-domain transmission line modeling (TD-TLM) technique. Generalized adjoint variable method (AVM) techniques are developed for the sensitivity analysis of the modeled devices. Although TLM-based, they can be generalized to other time-domain modeling techniques such as the finite-difference time-domain (FDTD) method and the time-domain finite element method (FEM).

We extend the application of TLM-based AVM to photonic devices and develop memory-efficient approaches that overcome the limitation of excessive memory requirements in TLM-based AVM. A memory reduction of 90% can be achieved without loss of accuracy, with an even more efficient calculation procedure. The developed technique is applied to slot-waveguide Bragg gratings and a challenging dielectric resonator antenna problem.

We also introduce a novel sensitivity analysis approach for materials with dispersive constitutive parameters. To our knowledge, this is the first wide-band AVM approach that takes into account the dependence of material properties on frequency. The approach can be utilized for the design optimization of innovative nano-plasmonic structures, making the design of engineered metamaterials systematic and efficient. Besides new engineered designs, dispersive AVM can be utilized in bio-imaging applications: the sensitivity of the objective function with respect to dispersive material properties enables parameter- and gradient-based optimization for imaging in the terahertz and optical regimes, and material resonance interactions can easily be investigated with the provided sensitivity information.

In addition to the developed techniques for simulation-based optimization, several analytical optimization algorithms are proposed to foster parameter extraction and design optimization in the terahertz and optical regimes. For terahertz time-domain spectroscopy, we have developed an efficient parameter-based approach that utilizes prior information about the material. The algorithm allows for the estimation of the optical properties of sample materials of unknown thickness. The approach is built on physical analytical dispersion models and has been applied with the Debye, Lorentz, Cole-Cole, and Drude models.

Furthermore, we propose various algorithms for the design optimization of coupled resonators. The proposed algorithms transform a highly non-linear optimization problem into a linear one. They exploit an approximate transfer function of the coupled resonators that neglects the negligible multiple reflections among them. The algorithms succeed in the optimization of very large-scale coupled microcavities (150 coupled ring resonators). / Doctor of Philosophy (PhD)
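The efficiency of the adjoint variable method rests on a simple identity: for an objective J(p) = c^T x(p) with a state equation A(p) x = b, a single extra "adjoint" solve A^T λ = c yields dJ/dp = -λ^T (dA/dp) x for every parameter p at once. A tiny dense 2x2 stand-in (purely illustrative, not a TLM system), checked against a finite difference:

```python
# Adjoint sensitivity sketch for J(p) = c^T x(p) with A(p) x = b.
def solve2(A, b):                        # Cramer's rule for 2x2 systems
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def J(p):                                # objective evaluated from scratch
    A = [[2.0 + p, 1.0], [0.0, 3.0]]     # invented parameterized system
    x = solve2(A, [1.0, 0.0])
    return x[0] + x[1]                   # c = [1, 1]

p = 0.0
A = [[2.0 + p, 1.0], [0.0, 3.0]]
x = solve2(A, [1.0, 0.0])                # forward solve
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
lam = solve2(At, [1.0, 1.0])             # adjoint solve A^T lam = c
dA_dp = [[1.0, 0.0], [0.0, 0.0]]         # only A[0][0] depends on p
grad_adj = -sum(lam[i] * dA_dp[i][j] * x[j]
                for i in range(2) for j in range(2))

h = 1e-6                                 # central finite-difference check
grad_fd = (J(h) - J(-h)) / (2 * h)
print(grad_adj, grad_fd)                 # both close to -0.25
```

A finite-difference gradient needs one extra forward solve per parameter, while the adjoint route needs exactly one extra solve total, which is why AVM scales to many design variables.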
206

A Real-Time Capable Adaptive Optimal Controller for a Commuter Train

Yazhemsky, Dennis Ion January 2017 (has links)
This research formulates and implements a novel closed-loop optimal control system that drives a train between two stations in a time-optimal, energy-efficient, or mixed-objective manner. The optimal controller uses sensor feedback from the train and computes in real time the most efficient control decision for the train to follow, given knowledge of the track profile ahead of the train, speed restrictions, and required arrival time windows. The control problem is solved both on an open track and while safely driving no closer than a fixed distance behind another locomotive. In contrast to other research in the field, this thesis achieves a real-time capable and embeddable closed-loop optimization with advanced modeling and numerical solving techniques for a non-linear optimal control problem. The controller is first formulated as a non-convex control problem and then converted to a convex second-order cone problem, with the intent of using a simple numerical solver, ensuring global optimality, and improving control robustness. Convex and non-convex numerical methods of solving the control problem are investigated, and closed-loop performance results with a simulated vehicle are presented under realistic modeling conditions on advanced tracks, on both desktop and embedded computer architectures. The controller is capable of robust vehicle driving in cases both with and without modeling uncertainty. The benefits of pairing the optimal controller with a parameter estimator are demonstrated for cases where very large mismatches exist between the controller model and the simulated vehicle. Stopping performance is consistently within 25 cm of target stations, and the worst-case closed-loop optimization time was within 100 ms for the computation of a 1000-point control horizon on an i7-6700 machine. / Thesis / Master of Applied Science (MASc) / This research formulates and implements a novel closed-loop optimal control system that drives a train between two stations in a time-optimal, energy-efficient, or mixed-objective manner. It is deployed on a commuter vehicle and directly manages the motoring and braking systems. The optimal controller uses sensor feedback from the train and computes in real time the most efficient control decision for the train to follow, given knowledge of the track profile ahead of the train, speed restrictions, and required arrival time windows. The final control implementation is capable of safe, high-accuracy, optimal driving, all while computing fast enough to deploy reliably on a rail vehicle.
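The flavor of the speed-profile trade-off can be conveyed by a toy dynamic program (invented costs and constraints, not the thesis's second-order cone formulation): choose one of a few speeds per track segment so that energy, growing with v², is minimized subject to an arrival deadline:

```python
import math

# Toy DP for an energy-optimal speed profile: the train crosses N unit
# segments, picking a speed per segment; energy cost is v^2 per segment
# and the total travel time must not exceed T_max.
N, T_max = 5, 10.0
speeds = [0.5, 1.0, 2.0]             # hypothetical allowed speeds
def energy(v):
    return v * v                     # invented quadratic running cost

best = {0.0: 0.0}                    # time used so far -> min energy
for seg in range(N):
    nxt = {}
    for t_used, e in best.items():
        for v in speeds:
            t2 = round(t_used + 1.0 / v, 6)   # segment time = 1/v
            if t2 <= T_max:                    # respect the deadline
                nxt[t2] = min(nxt.get(t2, math.inf), e + energy(v))
    best = nxt
print(min(best.values()))   # cheapest feasible profile over the run
```

With the generous deadline above the train can crawl at the lowest speed everywhere; tightening T_max forces faster, costlier segments, which is the tension the convex controller resolves continuously in real time.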
207

ASEMS: Autonomous Specific Energy Management Strategy

Amirfarhangi Bonab, Saeed January 2019 (has links)
This thesis addresses the problem of energy management of a hybrid electric power unit for an autonomous vehicle. We introduce, evaluate, and discuss the idea of an autonomous-specific energy management strategy. This method is an optimization-based strategy that improves powertrain fuel economy by exploiting motion planning data. First, to build a firm base for further evaluations, we develop a high-fidelity system-level model for our case study using MATLAB/Simulink. This model mostly concerns the energy-related aspects of the powertrain and the vehicle. We derive and implement the equations for each of the model subsystems, with model parameters taken from data available in the literature or online. Evaluation of the developed model shows acceptable conformity with actual dynamometer data. We use this model to replace the built-in rule-based logic with the proposed strategy and assess its performance.

Second, since we are considering an optimization-based approach, we develop a novel convex representation of the vehicle and powertrain model, i.e., we reformulate the model equations using convex functions. Consequently, we can express the fuel-efficient energy management problem as a convex optimization problem and solve it using dedicated numerical solvers. Extracting the control inputs with this approach and applying them to the high-fidelity model provides results similar to dynamic programming in terms of fuel consumption, but in substantially less time. This acts as a pivot for the subsequent real-time analysis.

Third, we perform a proof of concept for the autonomous-specific energy management strategy. We implement optimization-based path and trajectory planning for a vehicle in the simplified driving scenario of a racing track. Accordingly, we use the motion planning data to obtain the energy management strategy by solving an optimization problem. We let the vehicle travel around the circuit with the ability to perceive and plan up to an observable horizon using the receding-horizon approach. The developed energy management strategy shows a substantial reduction in the fuel consumption of the high-fidelity model compared to the rule-based controller. / Thesis / Master of Science in Mechanical Engineering (MSME) / The automotive industry is on the verge of groundbreaking transformations as a result of electrification and autonomous driving. The electrified autonomous car of the future is sustainable, energy-efficient, more convenient, and safer. In addition to the advantages of electrification and autonomous driving individually, the intersection and interaction of these mainstreams provide new opportunities for further improvements on vehicles. Autonomous cars generate an unprecedented amount of real-time data due to the extensive use of perception sensors and processing units. This thesis considers the case of an autonomous hybrid electric vehicle and presents the novel idea of an autonomous-specific energy management strategy. Specifically, this thesis is a proof of concept, an attempt to exploit the motion planning data of a self-driving car to improve the fuel economy of the hybrid electric power unit by adopting a more efficient energy management strategy. With the ever-increasing number of autonomous hybrid electric vehicles, particularly in self-driving fleets, the presented method shows extremely promising potential to reduce the fuel consumption of these vehicles.
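The pointwise structure of such a convex energy-management problem can be sketched with a toy power split (all coefficients invented, not the thesis's model): a convex quadratic fuel cost for the engine against a linearly priced battery draw has a closed-form minimizer:

```python
# Split a power request P between engine and battery:
# fuel(P_eng) = a*P_eng^2 + b*P_eng, battery energy priced by an
# equivalence factor s.  Convexity gives a closed-form optimum.
def split(P, a=0.02, b=0.1, s=0.5):
    # minimize a*p^2 + b*p + s*(P - p)  over  0 <= p <= P
    p_star = (s - b) / (2 * a)           # unconstrained stationary point
    p_eng = max(0.0, min(P, p_star))     # project onto the box [0, P]
    return p_eng, P - p_eng

print(split(15.0))   # roughly (10.0, 5.0): engine 10, battery 5
print(split(4.0))    # small request: cheapest to use the engine alone
```

In the thesis the split is decided jointly over a whole horizon (with battery state-of-charge dynamics), but the per-instant convexity above is what makes that larger problem tractable for dedicated solvers.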
208

Minimum Cost Distributed Computing using Sparse Matrix Factorization

Hussein, Seif January 2023 (has links)
Distributed computing is an approach where computationally heavy problems are broken down into more manageable sub-tasks, which can then be distributed across a number of different computers or servers, allowing for increased efficiency through parallelization. This thesis explores an established distributed computing setting in which the computationally heavy task involves a number of users requesting a linearly separable function to be computed across several servers. This setting results in a condition for feasible computation and communication that can be described by a matrix factorization problem. Moreover, the costs associated with computation and communication are directly related to the number of nonzero elements of the matrix factors, making sparse factors desirable for minimal costs. The Alternating Direction Method of Multipliers (ADMM) is explored as a possible method of solving the sparse matrix factorization problem. To obtain convergence results, extensive convex analysis is conducted on the ADMM iterates, resulting in a theorem that characterizes the limit points of the iterates as KKT points of the sparse matrix factorization problem. Using the results of this analysis, an algorithm is devised from the ADMM iterates and applied to the sparse matrix factorization problem. Furthermore, an additional implementation is considered for a noisy scenario, in which existing theoretical results are used to justify convergence. Finally, numerical implementations of the devised algorithms are used to perform sparse matrix factorization.
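The sparsity-inducing mechanics inside ADMM can be shown on a scalar analogue (a 1-D toy, not the thesis's matrix factorization algorithm): running ADMM on min 0.5(x - v)² + γ|z| subject to x = z converges to the soft-thresholded value of v, the basic building block of l1-driven sparsity:

```python
def soft(t, k):
    """Soft-thresholding: the proximal operator of k*|.|."""
    return (t - k) if t > k else (t + k) if t < -k else 0.0

def admm_l1(v, gam, rho=1.0, iters=200):
    # ADMM on: minimize 0.5*(x - v)**2 + gam*|z|  subject to  x = z
    x = z = u = 0.0                      # u is the scaled dual variable
    for _ in range(iters):
        x = (v + rho * (z - u)) / (1.0 + rho)   # smooth x-update
        z = soft(x + u, gam / rho)              # sparsifying z-update
        u += x - z                              # dual ascent step
    return z

print(admm_l1(3.0, 1.0))   # converges to 2.0, i.e. soft(3.0, 1.0)
print(admm_l1(0.5, 1.0))   # small input is thresholded exactly to 0.0
```

The second call shows why the z-update is the sparsity engine: anything below the threshold is set to an exact zero, which at matrix scale is what reduces the nonzero count and hence the computation and communication costs.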
209

Analysis of heterogeneous user behaviour in a network

Klok, Zacharie-Francis 08 1900 (has links)
Le nombre important de véhicules sur le réseau routier peut entraîner des problèmes d'encombrement et de sécurité. Les usagers des réseaux routiers qui nous intéressent sont les camionneurs qui transportent des marchandises, pouvant rouler avec des véhicules non conformes ou emprunter des routes interdites pour gagner du temps. Le transport de matières dangereuses est réglementé et certains lieux, surtout les ponts et les tunnels, leur sont interdits d'accès. Pour aider à faire appliquer les lois en vigueur, il existe un système de contrôles routiers composé de structures fixes et de patrouilles mobiles. Le déploiement stratégique de ces ressources de contrôle mise sur la connaissance du comportement des camionneurs que nous allons étudier à travers l'analyse de leurs choix de routes. Un problème de choix de routes peut se modéliser en utilisant la théorie des choix discrets, elle-même fondée sur la théorie de l'utilité aléatoire. Traiter ce type de problème avec cette théorie est complexe. Les modèles que nous utiliserons sont tels, que nous serons amenés à faire face à des problèmes de corrélation, puisque plusieurs routes partagent probablement des arcs. De plus, puisque nous travaillons sur le réseau routier du Québec, le choix de routes peut se faire parmi un ensemble de routes dont le nombre est potentiellement infini si on considère celles ayant des boucles. Enfin, l'étude des choix faits par un humain n'est pas triviale. Avec l'aide du modèle de choix de routes retenu, nous pourrons calculer une expression de la probabilité qu'une route soit prise par le camionneur. Nous avons abordé cette étude du comportement en commençant par un travail de description des données collectées. Le questionnaire utilisé par les contrôleurs permet de collecter des données concernant les camionneurs, leurs véhicules et le lieu du contrôle. 
La description des données observées est une étape essentielle, car elle permet de présenter clairement à un analyste potentiel ce qui est accessible pour étudier les comportements des camionneurs. Les données observées lors d'un contrôle constitueront ce que nous appellerons une observation. Avec les attributs du réseau, il sera possible de modéliser le réseau routier du Québec. Une sélection de certains attributs permettra de spécifier la fonction d'utilité et par conséquent la fonction permettant de calculer les probabilités de choix de routes par un camionneur. Il devient alors possible d'étudier un comportement en se basant sur des observations. Celles provenant du terrain ne nous donnent pas suffisamment d'information actuellement et même en spécifiant bien un modèle, l'estimation des paramètres n'est pas possible. Cette dernière est basée sur la méthode du maximum de vraisemblance. Nous avons l'outil, mais il nous manque la matière première que sont les observations, pour continuer l'étude. L'idée est de poursuivre avec des observations de synthèse. Nous ferons des estimations avec des observations complètes puis, pour se rapprocher des conditions réelles, nous continuerons avec des observations partielles. Ceci constitue d'ailleurs un défi majeur. Nous proposons pour ces dernières, de nous servir des résultats des travaux de (Bierlaire et Frejinger, 2008) en les combinant avec ceux de (Fosgerau, Frejinger et Karlström, 2013). Bien qu'elles soient de nature synthétiques, les observations que nous utilisons nous mèneront à des résultats tels, que nous serons en mesure de fournir une proposition concrète qui pourrait aider à optimiser les décisions des responsables des contrôles routiers. En effet, nous avons réussi à estimer, sur le réseau réel du Québec, avec un seuil de signification de 0,05 les valeurs des paramètres d'un modèle de choix de routes discrets, même lorsque les observations sont partielles. 
Ces résultats donneront lieu à des recommandations sur les changements à faire dans le questionnaire permettant de collecter des données. / Using transportation roads enables workers to reach their work facilities. Security and traffic jam issues are all the more important given that the number of vehicles is always increasing and we will focus on merchandise transporters in this study. Dangerous items transportation is under strict control as it is for example forbidden for them to be carried through a tunnel or across a bridge. Some transporters may drive a vehicle that has defects or/and they may be ta\-king some forbidden roads so as to reach their destination faster. Transportation of goods is regulated by the law and there exists a control system, whose purpose is to detect frauds and to make sure controlled vehicles are in order. The strategic deployment of control resources can be based on the knowledge of transporters behaviour, which is going to be studied through their route choice analysis. The number of routes can be unbounded especially if we consider loops, which leads to a complex problem to be solved. We can also mention issues closely related to route choice problem using discrete choice models such as correlation between routes sharing links and point out the fact that human decision process is not considered something easy. A route choice problem can be modelled based on the random utility theory and as a consequence we will focus on the discrete choice models. We are going to use such model on the real road network of Quebec and we will derive an expression of the probability, for a transporter, to pick one route. We are going to explain the way we did our study. It started first by doing a data description job as we are convinced this is a step that will help other analysts to have a clear view of the data situation. Some data are network related and the corresponding attributes collected will be used to model the road network of Quebec. 
We use some of these attributes to specify the utility function, which leads to the definition of the function giving the probability that a user takes a given route. Once this function is fully specified, the behaviour study can be carried out, except that the available observations are incomplete. When an observation is the set of data collected during a roadside inspection, the information it provides is not sufficient, and the parameter estimation fails. We might seem blocked, but here we introduce the idea of using simulated observations. We estimate the model parameters first with complete observations and then, to imitate real conditions, with partial observations. This constitutes a main challenge, which we overcome using the results presented in (Bierlaire et Frejinger, 2008) combined with those of (Fosgerau, Frejinger et Karlström, 2013). We show that even though the observations used are simulated, we can deliver conclusions useful to road network managers. The main result of this work is that estimation can be performed at a 0.05 significance level on the real road network of Quebec, even while the observations are incomplete. Eventually, our results should motivate network managers to improve the set of questions they use to collect data, as this would strengthen their knowledge of freight transporters and, hopefully, lead to optimized resource deployments.
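The route-choice probability and maximum-likelihood machinery described above can be sketched in a few lines. This is a generic multinomial logit sketch under assumed attribute matrices; the function names and attributes are illustrative, and it does not reproduce the thesis's actual estimator for partial observations (which builds on the cited Bierlaire-Frejinger and Fosgerau et al. approaches).

```python
import numpy as np

def route_probabilities(beta, X):
    """Multinomial logit choice probabilities for one choice set.

    X    : (n_routes, n_attributes) matrix of route attributes
           (e.g. length, travel time) -- illustrative assumption.
    beta : (n_attributes,) vector of taste parameters.
    """
    v = X @ beta          # deterministic utilities
    v = v - v.max()       # shift for numerical stability
    e = np.exp(v)
    return e / e.sum()

def log_likelihood(beta, observations):
    """Sum of log-probabilities of the chosen routes.

    observations : list of (X, chosen_index) pairs, one per observation.
    """
    ll = 0.0
    for X, chosen in observations:
        ll += np.log(route_probabilities(beta, X)[chosen])
    return ll
```

In practice one would maximize `log_likelihood` over `beta` with a numerical optimizer (e.g. `scipy.optimize.minimize` on the negative log-likelihood), using either complete or simulated partial observations as in the thesis.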
210

Engineering the near field of radiating systems at millimeter waves : from theory to applications / Manipulation du champ proche des systèmes rayonnants en ondes millimétriques : théorie et applications

Iliopoulos, Ioannis 20 December 2017 (has links)
The overall objective is to develop a new numerical tool dedicated to the 3-D focusing of energy in the very-near-field zone by an antenna system. This tool will define the complex spatial distribution of the fields over the radiating aperture in order to focus the energy on an arbitrary volume in the reactive near-field zone. Hybridizing this tool with a computation code dedicated to the fast analysis of SIW antennas by the method of moments will make it possible to synthesize an ad-hoc SIW antenna. The selected antenna structures will be planar, such as RLSA (Radial Line Slot Array) antennas. The antenna dimensions (positions, sizes and number of slots) will be defined using the tools described above. The numerical results thus obtained will be validated first numerically, through full-wave electromagnetic analysis with commercial simulators, then experimentally at millimeter waves (very-near-field measurements). To reach these objectives, we defined four main tasks: development of a field synthesis tool over the radiating aperture (theoretical formulation coupled with a so-called alternating projections method); development of a fast computation tool (based on FFT processing) for the electromagnetic field radiated in the near-field zone by a radiating aperture, together with back-propagation; hybridization of these algorithms with a computation code (method of moments) under development at IETR and dedicated to the very fast analysis of antennas in SIW technology; and design of one or several proofs of concept, with numerical and experimental validation of the proposed concepts. / With the demand for near-field antennas continuously growing, the antenna engineer is charged with the development of new concepts and design procedures for this regime.
From microwave up to terahertz frequencies, a vast number of applications, especially in the biomedical domain, need focused or shaped fields in the antenna's proximity. This work proposes new theoretical methods for near-field shaping based on different optimization schemes. Continuous planar radiating apertures are optimized to radiate a near field with the required characteristics. In particular, a versatile optimization technique based on the alternating projection scheme is proposed. It is demonstrated that, with this scheme, it is feasible to achieve 3-D control of focal spots generated by planar apertures. With the same setup, the vectorial problem (shaping the norm of the field) is also addressed. Convex optimization is additionally introduced for near-field shaping of continuous aperture sources, and its capabilities are demonstrated in different shaping scenarios. The discussion is then extended to shaping the field in lossy stratified media, based on a spectral Green's function approach. In addition, the biomedical applications of wireless power transfer to implants and breast cancer imaging are addressed. For the latter, an extensive study is included here, which delivers an outstanding improvement in penetration depth at higher frequencies. The thesis is completed by several prototypes used for validation. Four different antennas have been designed, based either on the radial line slot array topology or on metasurfaces. The prototypes have been manufactured and measured, validating the overall approach of the thesis.
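The alternating-projection scheme with FFT-based near-field propagation and back-propagation described in both abstracts can be illustrated with a minimal scalar sketch. The angular-spectrum propagator, grid parameters, and function names below are illustrative assumptions, not the thesis's vectorial implementation: the iteration alternately enforces a target amplitude in the near-field plane and the support constraint in the aperture plane.

```python
import numpy as np

def angular_spectrum_propagate(E0, dz, wavelength, dx):
    """Propagate a sampled scalar aperture field E0 a distance dz
    using the angular-spectrum (FFT) method; dz < 0 back-propagates."""
    n = E0.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies (cycles/m)
    FX, FY = np.meshgrid(fx, fx)
    kz2 = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz2, 0.0).astype(complex))  # drop evanescent part
    H = np.exp(1j * kz * dz)              # transfer function of free space
    return np.fft.ifft2(np.fft.fft2(E0) * H)

def alternating_projections(target_amp, support, dz, wavelength, dx, n_iter=50):
    """Alternately project onto the target-amplitude set at z = dz
    and the aperture-support set at z = 0."""
    E = support.astype(complex)           # initial guess: uniform aperture
    for _ in range(n_iter):
        Ez = angular_spectrum_propagate(E, dz, wavelength, dx)
        Ez = target_amp * np.exp(1j * np.angle(Ez))   # impose target amplitude
        E = angular_spectrum_propagate(Ez, -dz, wavelength, dx)
        E = E * support                                # impose aperture support
    return E
```

The resulting aperture distribution `E` would then serve as the target illumination for the antenna synthesis step (e.g. choosing RLSA slot positions), which this sketch does not cover.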
