211

Engineering the near field of radiating systems at millimeter waves : from theory to applications / Manipulation du champ proche des systèmes rayonnants en ondes millimétriques : théorie et applications

Iliopoulos, Ioannis 20 December 2017 (has links)
The overall objective is to develop a new numerical tool dedicated to 3-D focusing of energy in the very-near-field zone by an antenna system. This tool defines the complex spatial field distribution over the radiating aperture so as to focus energy on an arbitrary volume in the reactive near-field zone. Hybridizing this tool with a code dedicated to the fast method-of-moments analysis of SIW antennas enables the synthesis of an ad-hoc SIW antenna. The selected antenna structures are planar, for example RLSA (Radial Line Slot Array) antennas. The antenna dimensions (positions, sizes and number of slots) are defined with the tools described above. The numerical results are validated first numerically, by full-wave electromagnetic analysis with commercial simulators, and then experimentally at millimeter waves (very-near-field measurements). To reach these objectives, four main tasks are defined: development of a field-synthesis tool for the radiating aperture (a theoretical formulation coupled with the so-called alternating projections method); development of a fast, FFT-based tool for computing the electromagnetic field radiated in the near-field zone by a radiating aperture, together with back-propagation; hybridization of these algorithms with a method-of-moments code under development at IETR, dedicated to the very fast analysis of SIW antennas; and design of one or more proofs of concept, with numerical and experimental validation of the proposed concepts. / With the demand for near-field antennas continuously growing, the antenna engineer is charged with the development of new concepts and design procedures for this regime.
From microwave up to terahertz frequencies, a vast number of applications, especially in the biomedical domain, need focused or shaped fields in the antenna proximity. This work proposes new theoretical methods for near-field shaping based on different optimization schemes. Continuous radiating planar apertures are optimized to radiate a near field with the required characteristics. In particular, a versatile optimization technique based on the alternating projection scheme is proposed. It is demonstrated that, based on this scheme, it is feasible to achieve 3-D control of focal spots generated by planar apertures. The same setup also addresses the vectorial problem (shaping the norm of the field). Convex optimization is additionally introduced for near-field shaping of continuous aperture sources, and its capabilities are demonstrated in different shaping scenarios. The discussion is then extended to shaping the field in lossy stratified media, based on a spectral Green's function approach. The biomedical applications of wireless power transfer to implants and of breast cancer imaging are also addressed; for the latter, an extensive study is included, which delivers an outstanding improvement in penetration depth at higher frequencies. The thesis is completed by several prototypes used for validation. Four different antennas have been designed, based either on the radial line slot array topology or on metasurfaces. The prototypes have been manufactured and measured, validating the overall approach of the thesis.
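The alternating projection scheme referred to above iterates projections between constraint sets until a point compatible with all of them is reached. The following toy sketch is purely illustrative (not the thesis implementation): it alternates projections between a Euclidean ball (standing in for a power-type constraint) and a hyperplane (standing in for a linear field constraint), and converges to a point in their intersection. The 2-D setup and all names are hypothetical.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Projection onto the Euclidean ball of the given radius.
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def project_plane(x, a, b):
    # Projection onto the hyperplane {x : a^T x = b}.
    return x - (a @ x - b) / (a @ a) * a

def alternating_projections(x0, a, b, iters=200):
    # Alternate the two projections; for convex sets with nonempty
    # intersection this converges to a point in the intersection.
    x = x0.astype(float)
    for _ in range(iters):
        x = project_plane(project_ball(x), a, b)
    return x

a, b = np.array([1.0, 0.0]), 0.5
x = alternating_projections(np.array([3.0, 4.0]), a, b)
```

The returned point satisfies both constraints: it lies on the hyperplane and inside the unit ball.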
212

Évaluation de modèles computationnels de la vision humaine en imagerie par résonance magnétique fonctionnelle / Evaluating Computational Models of Vision with Functional Magnetic Resonance Imaging

Eickenberg, Michael 21 September 2015 (has links)
Blood-oxygen-level dependent (BOLD) functional magnetic resonance imaging (fMRI) makes it possible to measure brain activity through blood flow to areas with metabolically active neurons. In this thesis we use these measurements to evaluate the capacity of biologically inspired models of vision, coming from computer vision, to represent image content in a similar way as the human brain. The main vision models used are convolutional networks. Deep neural networks have made unprecedented progress in many fields in recent years. Even strongholds of biological systems such as scene analysis and object detection have been addressed with enormous success. A body of prior work has established firm links between the first and last layers of deep convolutional nets and brain regions: the first layer and V1 essentially perform edge detection, and the last layer, like inferotemporal cortex, permits a linear read-out of object category. In this work we have generalized this correspondence to all intermediate layers of a convolutional net. We found that each layer of a convnet maps to a stage of processing along the ventral stream, following the hierarchy of biological processing: along the ventral stream we observe a stage-by-stage increase in complexity. Between edge detection and object detection, for the first time we are given a toolbox to study the intermediate processing steps. A preliminary result was obtained by studying the response of the visual areas to the presentation of visual textures and analysing it using convolutional scattering networks. The other global aspect of this thesis is “decoding” models: in the preceding part, we predicted brain activity from the stimulus presented (this is called “encoding”). Predicting a stimulus from brain activity is the inverse inference mechanism and can be used as an omnibus test for the presence of this information in the brain signal.
Most often, generalized linear models such as linear or logistic regression or SVMs are used for this task, giving access to a coefficient vector the same size as a brain sample, which can thus be visualized as a brain map. However, interpretation of these maps is difficult, because the underlying linear system is either ill-posed and ill-conditioned or inadequately regularized, resulting in non-informative maps. Supposing a sparse and spatially contiguous organization of coefficient maps, we build on the convex penalty consisting of the sum of the total variation (TV) seminorm and the L1 norm (“TV+L1”) to develop a penalty grouping an activation term with a spatial derivative. This penalty sets most coefficients to zero but permits free smooth variations in active zones, as opposed to TV+L1, which creates flat active zones. This method improves the interpretability of the brain maps obtained in a cross-validation scheme that selects hyperparameters by predictive accuracy. In the context of encoding and decoding models, we also work on improving data preprocessing in order to obtain the best performance. We study the impulse response of the BOLD signal: the hemodynamic response function. To generate activation maps, instead of using a classical linear model with a fixed canonical response function, we use a bilinear model with a spatially variable hemodynamic response (but fixed across events). We propose an efficient optimization algorithm and show a gain in predictive capacity for encoding and decoding models on different datasets.
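The sparsity-inducing machinery behind such penalties can be made concrete with a small 1-D sketch. The example below is an assumption-laden illustration of ours, not the thesis code: it evaluates a 1-D TV+L1 penalty, implements the soft-thresholding proximal operator that such solvers rely on, and runs plain proximal-gradient (ISTA) steps on the L1 part only (the TV term is omitted from the solver for brevity).

```python
import numpy as np

def tv_l1_penalty(w, alpha=1.0, beta=1.0):
    # 1-D version of the TV+L1 penalty: sum of absolute spatial
    # differences (TV seminorm) plus the L1 norm.
    return alpha * np.sum(np.abs(np.diff(w))) + beta * np.sum(np.abs(w))

def soft_threshold(x, t):
    # Proximal operator of t*||.||_1: shrinks coefficients toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(X, y, lam, iters=500):
    # Proximal gradient for min_w 0.5*||Xw - y||^2 + lam*||w||_1.
    w = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, L = Lipschitz constant
    for _ in range(iters):
        grad = X.T @ (X @ w - y)
        w = soft_threshold(w - step * grad, step * lam)
    return w

# With an orthonormal design the solution is exactly soft-thresholding:
w = ista(np.eye(3), np.array([1.0, -2.0, 0.05]), lam=0.1)
```

Small coefficients are set exactly to zero, which is the source of the sparse, interpretable maps discussed above.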
213

Regularization of inverse problems in image processing

Jalalzai, Khalid 09 March 2012 (has links) (PDF)
Inverse problems consist in recovering data that have been transformed or perturbed. Being ill-posed, they require regularization. In image processing, total variation as a regularization tool has the advantage of preserving discontinuities while creating smooth regions; these results are established in this thesis in a continuous setting and for general energies. In addition, we propose and study a variant of total variation. We establish a dual formulation that allows us to show that this variant coincides with total variation on sets of finite perimeter. In recent years, non-local methods exploiting self-similarities in images have been particularly successful. We adapt this approach to the spectrum-completion problem for general inverse problems. The last part is devoted to the algorithmic aspects of minimizing the convex energies under consideration. We study the convergence and complexity of a recent family of so-called primal-dual algorithms.
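A minimal 1-D sketch can illustrate both the discontinuity-preserving behavior of total variation and one primal-dual algorithm of the kind studied. The code below is our own toy example (a Chambolle-Pock-style iteration for 1-D TV denoising), not the algorithms analyzed in the thesis; the step sizes and test signal are arbitrary choices.

```python
import numpy as np

def D(x):
    # 1-D discrete gradient (forward differences).
    return np.diff(x)

def Dt(p):
    # Adjoint of D (negative discrete divergence).
    return np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))

def tv_denoise_primal_dual(f, lam, iters=2000, tau=0.25, sigma=0.25):
    # Primal-dual iterations for min_x 0.5*||x - f||^2 + lam*||D x||_1.
    # tau*sigma*||D||^2 < 1 is required; ||D|| <= 2 in 1-D.
    x, x_bar = f.copy(), f.copy()
    p = np.zeros(len(f) - 1)
    for _ in range(iters):
        p = np.clip(p + sigma * D(x_bar), -lam, lam)        # dual prox
        x_new = (x - tau * Dt(p) + tau * f) / (1.0 + tau)   # primal prox
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

f = np.array([0.0, 0.0, 0.0, 5.0, 5.0, 5.0])  # piecewise-constant signal
u = tv_denoise_primal_dual(f, lam=1.0)
```

On this step signal the solution stays flat on each segment and keeps the jump, merely shrinking it by lam divided by the segment length: exactly the "discontinuities preserved, smooth zones created" behavior described above.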
214

Proximal Splitting Methods in Nonsmooth Convex Optimization

Hendrich, Christopher 25 July 2014 (has links) (PDF)
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end, we also make use of monotone operator theory, as some of the provided algorithms are originally designed to solve monotone inclusion problems. After introducing basic notation and preliminary results in convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem to a given a priori accuracy by applying two regularizations to its conjugate dual problem. A special fast gradient method then solves the regularized dual problem such that an approximate primal solution can be reconstructed from it. The second approach acts on the primal optimization problem directly by applying a single regularization to it, and is capable of using variable smoothing parameters, which lead to a more accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. In general, one considerable advantage of primal-dual algorithms is that they provide a complete splitting: the resolvents that arise in the iterative process are taken separately for each maximally monotone operator occurring in the problem description. We first analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem. Additionally, we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone.
Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel-sum-type monotone operators. To prove their asymptotic convergence, we use a product Hilbert space strategy, reformulating the corresponding inclusion problem so that the Douglas–Rachford algorithm can be applied to it. Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators. The last part of this thesis deals with numerical experiments in which we compare our methods against algorithms from the literature. The problems arising in this part are manifold, reflecting the importance of this field of research, as convex optimization problems appear in many applications of interest.
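The "complete splitting" property — each proximal (resolvent) step touches only one term of the objective — can be seen in a scalar toy version of Douglas–Rachford splitting. The sketch below is a generic illustration under our own assumptions (the objective |x| + 0.5*(x - a)^2 and all names are hypothetical), not a method from the thesis.

```python
def prox_l1(v, t):
    # prox of t*|x|: soft thresholding.
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def prox_quad(v, t, a):
    # prox of t*0.5*(x - a)^2.
    return (v + t * a) / (1.0 + t)

def douglas_rachford(a, t=1.0, iters=200):
    # Minimize |x| + 0.5*(x - a)^2; each prox handles one term only,
    # which is the splitting property discussed above.
    z, x = 0.0, 0.0
    for _ in range(iters):
        x = prox_l1(z, t)
        y = prox_quad(2.0 * x - z, t, a)
        z = z + y - x
    return x

x_star = douglas_rachford(3.0)  # closed-form minimizer: soft-threshold(3, 1) = 2
```

The iteration converges linearly here, matching the closed-form solution of this strongly convex toy problem.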
215

Decentralized multiantenna transceiver optimization for heterogeneous networks

Kaleva, J. (Jarkko) 19 June 2018 (has links)
Abstract This thesis focuses on transceiver optimization for heterogeneous multi-user multiple-input multiple-output (MIMO) wireless communications systems. The aim is to design decentralized beamforming methods with low signaling overhead for improved spatial spectrum utilization. A wide range of transceiver optimization techniques is covered, with particular consideration of decentralized optimization, fast convergence, computational complexity and signaling limitations. The proposed methods are shown to provide an improved rate of convergence compared to the conventional weighted minimum MSE (WMMSE) approach. This makes them suitable for time-correlated channel conditions, in which the ability to follow the changing channel is essential. Coordinated beamforming under quality of service (QoS) constraints is considered for the interfering broadcast channel. Decomposition-based decentralized processing approaches are shown to enable weighted sum rate maximization (WSRMax) in time-correlated channel conditions. Pilot-aided decentralized WSRMax beamformer estimation is studied for coordinated multi-point (CoMP) joint processing (JP). In stream-specific estimation (SSE), all effective channels are individually estimated, and the beamformers are then constructed from the locally estimated channels. With direct estimation (DE) of the beamformers, on the other hand, only the intended signal needs to be separately estimated, and the covariance matrices are implicitly estimated from the received pilot training matrices. This makes the pilot design more robust to pilot contamination. These methods show that CoMP JP is feasible, by employing decentralized beamformer processing, even in relatively fast fading channel conditions and with limited backhaul capacity.
In the final part of the thesis, a relay-assisted cellular system with decentralized processing is considered, in which users are served either directly by the base stations or via relays, for WSRMax or sum power minimization subject to rate constraints. Zero-forcing and coordinated beamforming provide a trade-off between complexity, in-band signaling and spectrum utilization. Relays are shown to be beneficial in many scenarios when the in-band signaling is accounted for. This thesis shows that decentralized downlink MIMO transceiver design with a reasonable computational complexity is feasible in various system architectures, even when signaling resources are limited and channel conditions are moderately fast fading.
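The zero-forcing baseline mentioned above admits a compact sketch: with full channel knowledge, transmit beamformers taken from the channel pseudoinverse null inter-user interference. The dimensions and random channel below are arbitrary illustrations of ours, not values from the thesis.

```python
import numpy as np

def zero_forcing_beamformers(H):
    # H: (K users) x (N tx antennas) complex channel matrix, K <= N.
    # Columns of the pseudoinverse satisfy H @ W = I before scaling,
    # so each user's beam nulls the interference toward the others.
    W = np.linalg.pinv(H)
    return W / np.linalg.norm(W, axis=0, keepdims=True)  # unit-power beams

rng = np.random.default_rng(1)
H = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))
W = zero_forcing_beamformers(H)
E = H @ W  # effective channel: diagonal up to numerical precision
```

The diagonal effective channel is what makes zero-forcing cheap to analyze, at the cost of the power inefficiency that the coordinated designs above improve upon.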
216

Coordinated beamforming in cellular and cognitive radio networks

Pennanen, H. (Harri) 08 September 2015 (has links)
Abstract This thesis focuses on the design of coordinated downlink beamforming techniques for wireless multi-cell multi-user multi-antenna systems. In particular, cellular and cognitive radio networks are considered. In general, coordinated beamforming schemes aim to improve system performance, especially in the cell-edge area, by controlling inter-cell interference. In this work, special emphasis is put on practical coordinated beamforming designs that can be implemented in a decentralized manner, relying on local channel state information (CSI) and low-rate backhaul signaling. The network design objective is the sum power minimization (SPMin) of the base stations (BSs) while providing a guaranteed minimum rate for each user. Decentralized coordinated beamforming techniques are developed for cellular multi-user multiple-input single-output (MISO) systems. The proposed iterative algorithms are based on classical primal and dual decomposition methods. The SPMin problem is decomposed into two optimization levels, i.e., BS-specific subproblems for the beamforming design and a network-wide master problem for inter-cell interference coordination. After acquiring local CSI, each BS can independently compute its transmit beamformers by solving its subproblem via standard convex optimization techniques. Interference coordination is managed by solving the master problem via a traditional subgradient method that requires scalar information exchange between the BSs. The algorithms satisfy the user-specific rate constraints at every iteration. Hence, delay and signaling overhead can be reduced by limiting the number of iterations performed. In this respect, the proposed algorithms are applicable to practical implementations, unlike most existing decentralized approaches. The numerical results demonstrate that the algorithms provide significant performance gains over zero-forcing beamforming strategies.
Coordinated beamforming is also studied in cellular multi-user multiple-input multiple-output (MIMO) systems. The corresponding non-convex SPMin problem is divided into transmit and receive beamforming optimization steps, which are alternately solved via the successive convex approximation method and the linear minimum mean square error criterion, respectively, until the desired level of convergence is attained. In addition to a centralized design, two decentralized primal decomposition-based algorithms are proposed, wherein the transmit and receive beamforming designs are facilitated by a combination of pilot and backhaul signaling. The results show that the proposed MIMO algorithms notably outperform the MISO ones. Finally, cellular coordinated beamforming strategies are extended to multi-user MISO cognitive radio systems, where primary and secondary networks share the same spectrum. Here, network optimization is performed for the secondary system, with additional interference constraints imposed for the primary users. Decentralized algorithms are proposed based on primal decomposition and an alternating direction method of multipliers.
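The decomposition pattern described in this abstract — independent per-BS subproblems coordinated by a master problem solved with a subgradient method exchanging only scalars — can be sketched on a toy power-allocation problem. Everything below (the quadratic per-BS costs, the shared budget, the step size) is an illustrative assumption of ours, not the thesis formulation.

```python
def dual_decomposition(a, c, step=0.1, iters=2000):
    # Toy dual decomposition for: min sum_i (x_i - a_i)^2  s.t. sum_i x_i <= c.
    # Each "base station" i solves min (x_i - a_i)^2 + lam*x_i in closed
    # form (x_i = a_i - lam/2); the master problem updates the scalar
    # price lam by a projected subgradient step on the coupling constraint.
    lam = 0.0
    x = list(a)
    for _ in range(iters):
        x = [ai - lam / 2.0 for ai in a]           # independent subproblems
        lam = max(0.0, lam + step * (sum(x) - c))  # master subgradient update
    return x, lam

# Two "base stations" sharing a budget of 3:
x_opt, price = dual_decomposition([3.0, 2.0], c=3.0)
```

Only the scalar price is exchanged per iteration, mirroring the low-rate backhaul signaling emphasized above; here the iterates converge to the constrained optimum x = (2, 1) with price 2.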
217

New Algorithms for Local and Global Fiber Tractography in Diffusion-Weighted Magnetic Resonance Imaging

Schomburg, Helen 29 September 2017 (has links)
No description available.
218

Stochastic approximation and least-squares regression, with applications to machine learning / Approximation stochastique et régression par moindres carrés : applications en apprentissage automatique

Flammarion, Nicolas 24 July 2017 (has links)
Many problems in machine learning are naturally cast as the minimization of a smooth function defined on a Euclidean space. For supervised learning, this includes least-squares regression and logistic regression. While small problems are efficiently solved by classical optimization algorithms, large-scale problems are typically solved with first-order techniques based on gradient descent. In this manuscript, we consider the particular case of the quadratic loss. In the first part, we are interested in its minimization when its gradients are only accessible through a stochastic oracle. In the second part, we consider two applications of the quadratic loss in machine learning: clustering and estimation with shape constraints. In the first main contribution, we provide a unified framework for optimizing non-strongly convex quadratic functions, which encompasses accelerated gradient descent and averaged gradient descent. This new framework suggests an alternative algorithm that exhibits the positive behavior of both averaging and acceleration. The second main contribution aims at obtaining the optimal prediction error rates for least-squares regression, both in terms of dependence on the noise of the problem and on forgetting the initial conditions. Our new algorithm rests upon averaged accelerated gradient descent. The third main contribution deals with the minimization of composite objective functions composed of the expectation of quadratic functions and a convex function. We extend earlier results on least-squares regression to any regularizer and any geometry represented by a Bregman divergence. As a fourth contribution, we consider the discriminative clustering framework.
We propose its first theoretical analysis, a novel sparse extension, a natural extension for the multi-label scenario and an efficient iterative algorithm with better running-time complexity than existing methods. The fifth main contribution deals with the seriation problem. We propose a statistical approach to this problem where the matrix is observed with noise and study the corresponding minimax rate of estimation. We also suggest a computationally efficient estimator whose performance is studied both theoretically and experimentally.
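To illustrate the two first-order families that the unified framework above brings together, the toy sketch below runs averaged gradient descent and Nesterov-accelerated gradient descent on a synthetic noiseless least-squares problem. The problem sizes, step size and iteration count are illustrative choices, not the thesis's experimental setup, and the combined averaged-accelerated algorithm of the thesis is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: f(w) = 0.5 * ||X w - y||^2 / n (noiseless).
n, d = 200, 20
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star
L = np.linalg.eigvalsh(X.T @ X / n).max()  # smoothness constant

def grad(w):
    return X.T @ (X @ w - y) / n

def loss(w):
    return 0.5 * np.mean((X @ w - y) ** 2)

def averaged_gd(steps, lr):
    """Plain gradient descent, returning the running average of the iterates."""
    w = np.zeros(d)
    avg = np.zeros(d)
    for t in range(1, steps + 1):
        w = w - lr * grad(w)
        avg += (w - avg) / t  # online update of the iterate average
    return avg

def accelerated_gd(steps, lr):
    """Nesterov's accelerated gradient descent for smooth convex functions."""
    w = np.zeros(d)
    z = w.copy()  # extrapolated point
    for t in range(1, steps + 1):
        w_next = z - lr * grad(z)
        z = w_next + (t - 1) / (t + 2) * (w_next - w)
        w = w_next
    return w

print("averaged:", loss(averaged_gd(500, 1 / L)))
print("accelerated:", loss(accelerated_gd(500, 1 / L)))
```

On noiseless problems acceleration shines, while averaging is mainly useful to control the noise of a stochastic oracle; the framework studied in the thesis interpolates between the two behaviors.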

Model-based co-design of sensing and control systems for turbo-charged, EGR-utilizing spark-ignited engines

Xu Zhang (9976460) 01 March 2021 (has links)
<div>Stoichiometric air-fuel ratio (AFR) and air/EGR flow control are essential control problems in today’s advanced spark-ignited (SI) engines to enable effective application of the three-way-catalyst (TWC) and generation of required torque. External exhaust gas recirculation (EGR) can be used in SI engines to help mitigate knock, reduce enrichment and improve efficiency [1]. However, the introduction of the EGR system increases the complexity of stoichiometric engine-out lambda and torque management, particularly for high BMEP commercial vehicle applications. This thesis develops advanced frameworks for sensing and control architecture designs to enable robust air handling system management, stoichiometric cylinder AFR control and three-way-catalyst emission control.</div><div><br></div><div><div>The first work in this thesis derives a physically-based, control-oriented model for turbocharged SI engines utilizing cooled EGR and flexible VVA systems. The model includes the impacts of modulation to any combination of 11 actuators, including the throttle valve, bypass valve, fuel injection rate, waste-gate, high-pressure (HP) EGR, low-pressure (LP) EGR, number of firing cylinders, intake and exhaust valve opening and closing timings. A new cylinder-out gas composition estimation method, based on the cylinder charge flow, injected fuel amount, residual gas mass and intake gas compositions, is proposed in this model. This method can be implemented in the control-oriented model as a critical input for estimating the exhaust manifold gas compositions. A new flow-based turbine-out pressure modeling strategy is also proposed in this thesis as a necessary input to estimate the LP EGR flow rate. Incorporated with these two sub-models, the control-oriented model is capable of capturing the dynamics of pressure, temperature and gas compositions in manifolds and the cylinder. 
Thirteen physical parameters, including intake, boost and exhaust manifolds’ pressures, temperatures, unburnt and burnt mass fractions as well as the turbocharger speed, are defined as state variables. The outputs such as flow rates and AFR are modeled as functions of selected states and inputs. The control-oriented model is validated with a high fidelity SI engine GT-Power model for different operating conditions. The novelty in this physical modeling work includes the development and incorporation of the cylinder-out gas composition estimation method and the turbine-out pressure model in the control-oriented model.</div></div><div><br></div><div><div>The second part of the work outlines a novel sensor selection and observer design algorithm for linear time-invariant systems with both process and measurement noise, based on <i>H</i>2 optimization, that trades off the observer error against the number of required sensors. The optimization problem is relaxed to a sequence of convex optimization problems that minimize a cost function consisting of the <i>H</i>2 norm of the observer error and the weighted <i>l</i>1 norm of the observer gain. An LMI formulation allows for efficient solution via semi-definite programming. The approach is applied here, for the first time, to a turbo-charged spark-ignited (SI) engine using exhaust gas recirculation to determine the optimal sensor sets for real-time intake manifold burnt gas mass fraction estimation. Simulation with the candidate estimator embedded in a high fidelity engine GT-Power model demonstrates that the optimal sensor sets selected using this algorithm have the best <i>H</i>2 estimation performance. Sensor redundancy is also analyzed based on the algorithm results. 
This algorithm is applicable to any type of modern internal combustion engine and reduces the system design time and experimental effort typically required for selecting optimal sensor sets.</div></div><div><br></div><div><div>The third study develops a model-based sensor selection and controller design framework for robust control of air-fuel-ratio (AFR), air flow and EGR flow for turbocharged stoichiometric engines using low pressure EGR, waste-gate turbo-charging, intake throttling and variable valve timing. Model uncertainties, disturbances, transport delays, sensor and actuator characteristics are considered in this framework. Based on the required control performance and candidate sensor sets, the framework synthesizes an H∞ feedback controller and evaluates the viability of the candidate sensor set through analysis of the structured singular value μ of the closed-loop system in the frequency domain. The framework can also be used to understand if relaxing the controller performance requirements enables the use of a simpler (less costly) sensor set. The sensor selection and controller co-design approach is applied here, for the first time, to turbo-charged engines using exhaust gas recirculation. High fidelity GT-Power simulations are used to validate the approach. The novelty of the work in this part can be summarized as follows: (1) A novel control strategy is proposed for stoichiometric SI engines using low pressure EGR to simultaneously satisfy both the AFR and air/EGR-path control performance requirements; (2) A parametrical method to simultaneously select the sensors and design the controller is proposed for the first time for internal combustion engines.</div></div><div><br></div><div><div>In the fourth part of the work, a novel two-loop estimation and control strategy is proposed to reduce the emissions of the three-way-catalyst (TWC). 
In the outer loop, an estimator consisting of a TWC model and an extended Kalman filter estimates the current TWC fractional oxygen state (FOS), and a robust controller regulates the FOS by manipulating the desired engine λ. The outer-loop estimator and controller are combined with an existing inner-loop controller, which controls the engine λ based on the desired value; its control inaccuracies are accounted for and compensated by the outer-loop robust controller. This control strategy achieves good emission reduction performance and has advantages over both the constant-λ control strategy and the conventional two-loop switch-type control strategy.</div></div>
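The outer-loop estimator can be sketched as a scalar extended Kalman filter. The sketch below is a minimal illustration only: the one-state oxygen-storage model, the post-catalyst lambda map, and every constant (DT, CAP, K_O2, BETA, the noise covariances) are assumptions for the demo, not the TWC model or calibration used in the thesis.

```python
import numpy as np

# Illustrative constants (assumed, not from the thesis).
DT = 0.01     # sample time [s]
CAP = 0.1     # oxygen storage capacity [kg]
K_O2 = 0.23   # O2 mass fraction exchanged with the exhaust flow
BETA = 0.05   # sensitivity of post-catalyst lambda to the FOS

def f(fos, lam_in, mdot_exh):
    """Assumed process model: a lean feed (lam_in > 1) fills the storage."""
    fos_next = fos + DT * K_O2 * mdot_exh * (lam_in - 1.0) / CAP
    return min(max(fos_next, 0.0), 1.0)

def h(fos):
    """Assumed measurement model: post-catalyst lambda, monotone in FOS."""
    return 1.0 + BETA * (2.0 * fos - 1.0)

def ekf_step(fos_est, P, lam_in, mdot_exh, lam_meas, Q=1e-6, R=1e-4):
    # Predict (scalar state; the Jacobian of f w.r.t. fos is 1 away from the clip).
    fos_pred = f(fos_est, lam_in, mdot_exh)
    P_pred = P + Q
    # Update with the post-catalyst lambda measurement.
    H = 2.0 * BETA                          # dh/dfos
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    fos_new = fos_pred + K * (lam_meas - h(fos_pred))
    P_new = (1.0 - K * H) * P_pred
    return fos_new, P_new

# Demo: the true FOS drifts under a slightly lean feed; the EKF tracks it
# from a deliberately wrong initial guess.
rng = np.random.default_rng(1)
fos_true, fos_est, P = 0.3, 0.6, 0.1
for _ in range(2000):
    fos_true = f(fos_true, 1.002, 0.05)
    lam_meas = h(fos_true) + rng.normal(0.0, 0.01)
    fos_est, P = ekf_step(fos_est, P, 1.002, 0.05, lam_meas)
```

In the thesis's architecture the estimated FOS would then feed the outer-loop robust controller, which sets the desired λ passed to the inner loop.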

Optimization framework for large-scale sparse blind source separation / Stratégies d'optimisation pour la séparation aveugle de sources parcimonieuses grande échelle

Kervazo, Christophe 04 October 2019 (has links)
Lors des dernières décennies, la Séparation Aveugle de Sources (BSS) est devenue un outil de premier plan pour le traitement de données multi-valuées. L’objectif de ce doctorat est cependant d’étudier les cas grande échelle, pour lesquels la plupart des algorithmes classiques obtiennent des performances dégradées. Ce document s’articule en quatre parties, traitant chacune un aspect du problème: i) l’introduction d’algorithmes robustes de BSS parcimonieuse ne nécessitant qu’un seul lancement (malgré un choix d’hyper-paramètres délicat) et fortement étayés mathématiquement; ii) la proposition d’une méthode permettant de maintenir une haute qualité de séparation malgré un nombre de sources important; iii) la modification d’un algorithme classique de BSS parcimonieuse pour l’application sur des données de grandes tailles; et iv) une extension au problème de BSS parcimonieuse non-linéaire. Les méthodes proposées ont été amplement testées, tant sur données simulées que réalistes, pour démontrer leur qualité. Des interprétations détaillées des résultats sont proposées. / During the last decades, Blind Source Separation (BSS) has become a key analysis tool to study multi-valued data. The objective of this thesis is however to focus on large-scale settings, for which most classical algorithms fail. More specifically, it is subdivided into four sub-problems rooted in the large-scale sparse BSS issue: i) introduce a mathematically sound robust sparse BSS algorithm which does not require any relaunch (despite a difficult hyper-parameter choice); ii) introduce a method able to maintain high-quality separations even when a large number of sources needs to be estimated; iii) make a classical sparse BSS algorithm scalable to large-scale datasets; and iv) extend sparse BSS to the non-linear problem. The methods we propose are extensively tested on both simulated and realistic data to demonstrate their quality. 
In-depth interpretations of the results are proposed.
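The core of sparse BSS can be sketched as an alternation between a least-squares update of each factor and a sparsity-promoting thresholding of the sources, in the spirit of GMCA-type algorithms. The sketch below is a simplified illustration under assumed settings (fixed threshold, toy data, no thresholding-strategy automation), not the algorithms developed in the thesis.

```python
import numpy as np

def soft_threshold(u, lam):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def sparse_bss(X, n_sources, n_iter=100, lam=0.1, seed=0):
    """GMCA-style sparse BSS sketch: factorize X ≈ A S with S sparse.

    Alternates a least-squares update of each factor, soft-thresholding
    the sources and keeping the mixing-matrix columns at unit norm.
    """
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    A = rng.standard_normal((m, n_sources))
    A /= np.linalg.norm(A, axis=0)
    for _ in range(n_iter):
        # Update the sources: least squares in A, then promote sparsity.
        S = soft_threshold(np.linalg.pinv(A) @ X, lam)
        # Update the mixing matrix: least squares in S.
        A = X @ np.linalg.pinv(S)
        # Renormalize A and transfer the scale to S so A @ S is unchanged.
        norms = np.linalg.norm(A, axis=0) + 1e-12
        A /= norms
        S *= norms[:, None]
    return A, S

# Demo: two sparse sources (10% active entries) mixed into three channels.
rng = np.random.default_rng(3)
S_true = rng.standard_normal((2, 500)) * (rng.random((2, 500)) < 0.1)
A_true = rng.standard_normal((3, 2))
X = A_true @ S_true
A_est, S_est = sparse_bss(X, n_sources=2)
```

The large-scale variants studied in the thesis replace these full-data least-squares updates with cheaper stochastic or block updates; this sketch only conveys the alternating structure they build on.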
