311

Breaking Privacy in Model-Heterogeneous Federated Learning

Haldankar, Atharva Amit 14 May 2024 (has links)
Federated learning (FL) is a communication protocol that allows multiple distrustful clients to collaboratively train a machine learning model. In FL, data never leaves client devices; instead, clients only share locally computed gradients or model parameters with a central server. As individual gradients may leak information about a given client's dataset, secure aggregation was proposed. With secure aggregation, the server only receives the aggregate gradient update from the set of all sampled clients without being able to access any individual gradient. One challenge in FL is the systems-level heterogeneity that is quite often present among client devices. Specifically, clients in the FL protocol may have varying levels of compute power, on-device memory, and communication bandwidth. These limitations are addressed by model-heterogeneous FL schemes, where clients are able to train on subsets of the global model. Despite the benefits of model-heterogeneous schemes in addressing systems-level challenges, the implications of these schemes on client privacy have not been thoroughly investigated. In this thesis, we investigate whether the nature of model distribution and the computational heterogeneity among client devices in model-heterogeneous FL schemes may result in the server being able to recover sensitive information from target clients. To this end, we propose two novel attacks in the model-heterogeneous setting, even with secure aggregation in place. We call these attacks the Convergence Rate Attack and the Rolling Model Attack. The Convergence Rate Attack targets schemes where clients train on the same subset of the global model, while the Rolling Model Attack targets schemes where model parameters are dynamically updated each round. We show that a malicious adversary is able to compromise the model and data confidentiality of a target group of clients. We evaluate our attacks on the MNIST dataset and show that using our techniques, an adversary can reconstruct data samples with high fidelity. / Master of Science / Federated learning (FL) is a communication protocol that allows multiple distrustful users to collaboratively train a machine learning model. In FL, data never leaves user devices; instead, users only share locally computed gradients or model parameters (e.g., weight and bias values) with an aggregation server. As individual gradients may leak information about a given user's dataset, secure aggregation was proposed. Secure aggregation is a protocol that users and the server run together, where the server only receives the aggregate gradient update from the set of all sampled users instead of each individual user update. In FL, users often have varying levels of compute power, on-device memory, and communication bandwidth. These differences between users are collectively referred to as systems-level (or system) heterogeneity. While there are a number of techniques to address system heterogeneity, one popular approach is to have users train on different subsets of the global model. This approach is known as model-heterogeneous FL. Despite the benefits of model-heterogeneous FL schemes in addressing systems-level challenges, the implications of these schemes on user privacy have not been thoroughly investigated. In this thesis, we investigate whether the nature of model distribution and the differences in compute power between user devices in model-heterogeneous FL schemes may result in the server being able to recover sensitive information.
To this end, we propose two novel attacks in the model-heterogeneous setting with secure aggregation in place. We call these attacks the Convergence Rate Attack and the Rolling Model Attack. The Convergence Rate Attack targets schemes where users train on the same subset of the global model, while the Rolling Model Attack targets schemes where model parameters may change each round. We first show that a malicious server is able to obtain individual user updates, despite secure aggregation being in place. Then, we demonstrate how an adversary can utilize those updates to reverse engineer data samples from users. We evaluate our attacks on the MNIST dataset, a commonly used dataset of handwritten digits and their labels. We show that by running our attacks, an adversary can accurately identify what images a user trained on.
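The pairwise-masking idea behind secure aggregation, which both attacks must work around, can be sketched as follows. This is an illustrative example only, not code from the thesis; the function name, the number of clients, and the toy update vectors are assumptions.

```python
import numpy as np

def masked_updates(raw_updates, seed=0):
    """Apply cancelling pairwise masks to each client's update vector."""
    n = len(raw_updates)
    dim = raw_updates[0].shape[0]
    rng = np.random.default_rng(seed)
    # One shared random mask per unordered client pair (i, j), i < j.
    pair_masks = {(i, j): rng.normal(size=dim)
                  for i in range(n) for j in range(i + 1, n)}
    masked = []
    for i, update in enumerate(raw_updates):
        m = update.astype(float).copy()
        for j in range(n):
            if i < j:
                m += pair_masks[(i, j)]   # smaller index adds the shared mask
            elif j < i:
                m -= pair_masks[(j, i)]   # larger index subtracts it
        masked.append(m)
    return masked

clients = [np.random.randn(5) for _ in range(4)]   # toy local gradient updates
server_view = masked_updates(clients)              # what the server receives
aggregate = sum(server_view)                       # pairwise masks cancel here
assert np.allclose(aggregate, sum(clients))        # server learns only the sum
```

Because the server only ever sees masked vectors whose masks cancel in the sum, attacks in this setting must exploit structural side channels, such as which sub-model each client was assigned, rather than reading individual updates directly.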
312

Algorithmic Modifications to a Multidisciplinary Design Optimization Model of Containerships

Ganguly, Sandipan 24 July 2002 (has links)
When designing a ship, a designer often begins with "an idea" of what the ship might look like and what specifications the ship should meet. The multidisciplinary design optimization model is a tool that combines an analysis and an optimization process and uses a measure of merit to obtain what it infers to be the best design. All that the designer has to know is the range of values of certain design variables that confine the design within a lower and an upper bound. The designer then feeds the MDO model with any arbitrary design within the bounds and the model searches for the best design that minimizes or maximizes a measure of merit and also meets a set of structural and stability requirements. The model is multidisciplinary because the analysis process, which calculates the measure of merit and other performance parameters, can be a combination of sub-processes used in various fields of engineering. The optimization process can also be a variety of mathematical programming techniques depending on the type of the design problem. The container ship design problem is a combination of discrete and continuous sub-problems. But to take advantage of gradient-based optimization algorithms, the design problem is molded into a fully continuous problem. The efficiency and effectiveness with which an optimization process achieves the best design depend on how well the design problem is posed for the optimizer and how well that particular optimization algorithm tackles the type of design problems posed before it. This led the author to investigate the details of the analysis and the optimization process within the MDO model and make modifications to each of the processes, so that the two become more compatible towards achieving a better final design. Modifications made within the optimization algorithm were then used to develop a generalized modification method that can be used to improve any gradient-based optimization algorithm. / Master of Science
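The relaxation step described above, where a discrete design variable is treated as continuous so that a gradient-based optimizer can be applied within bounds, can be sketched as follows. The objective, variable names, and bounds are hypothetical placeholders, not the containership model used in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def merit(x):
    # Toy measure of merit: x = (length, tiers); "tiers" is really an integer
    # in the true design problem but is relaxed to a continuous variable here.
    length, tiers = x
    capacity = 12.0 * length * tiers           # hypothetical capacity term
    cost = 0.8 * length ** 1.3 + 50.0 * tiers  # hypothetical cost term
    return -(capacity - cost)                  # minimize negative merit

bounds = [(100.0, 300.0), (4.0, 10.0)]         # lower/upper bounds on design variables
x0 = np.array([150.0, 6.0])                    # any arbitrary starting design within bounds

result = minimize(merit, x0, method="SLSQP", bounds=bounds)
best_length, best_tiers = result.x[0], int(round(result.x[1]))
print(best_length, best_tiers)                 # round the relaxed variable afterwards
```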
313

Ionospheric Scintillation Prediction, Modeling, and Observation Techniques for the August 2017 Solar Eclipse

Brosie, Kayla Nicole 16 August 2017 (has links)
A full solar eclipse will be visible from a range of states in the contiguous United States on August 21, 2017. Since the atmosphere of the Earth is charged by the sun, the blocking of the sunlight by the moon may cause short-term changes to the atmosphere, such as density and temperature alterations. There are many ways to measure these changes, one of which is ionospheric scintillation. Ionospheric scintillation consists of rapid amplitude and phase fluctuations of signals passing through the ionosphere, caused by electron density irregularities. At mid-latitudes, scintillation is not as common an occurrence as it is in equatorial or high-latitude regions. One of the theories this paper investigates is the possibility that the solar eclipse will produce an instability in the ionosphere causing the mid-latitude region to experience scintillations that would not normally be present. Instabilities that could produce scintillation are reviewed and adapted to model conditions similar to those that might occur during the eclipse. The satellites being used are then discussed, as are the hardware and software tools developed to record the scintillation measurements. Although this work was completed before the eclipse occurred, measurement tools were developed and verified, and a model was generated to predict whether the solar eclipse would produce an instability large enough to cause scintillation on high-frequency satellite downlinks. / Master of Science
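A common way to quantify the amplitude scintillation discussed above is the S4 index, the normalised standard deviation of received signal intensity over a short window. The sketch below is a generic illustration, not taken from the thesis hardware or software; the synthetic signals are assumptions.

```python
import numpy as np

def s4_index(intensity):
    """Normalised standard deviation of signal intensity (amplitude scintillation)."""
    intensity = np.asarray(intensity, dtype=float)
    mean_i = intensity.mean()
    return np.sqrt((np.mean(intensity ** 2) - mean_i ** 2) / mean_i ** 2)

rng = np.random.default_rng(1)
quiet = 1.0 + 0.01 * rng.standard_normal(600)              # calm ionosphere: tiny fluctuations
disturbed = rng.lognormal(mean=0.0, sigma=0.4, size=600)   # strong intensity fluctuations
print(s4_index(quiet), s4_index(disturbed))                # small value vs. value near ~0.4
```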
314

New approaches and algorithms for the analysis of vertical refractivity profile below 1 KM in a subtropical region

AbouAlmal, A., Abd-Alhameed, Raed, Jones, Steven M.R., AlAhmad, H. 26 September 2014 (has links)
In this paper, 17 years of high-resolution surface and radiosonde meteorological data from 1997 to 2013 for the subtropical Gulf region are analysed. Relationships between the upper-air refractivity, Nh, and the vertical refractivity gradient, ΔN, in the low troposphere and the commonly available surface refractivity, Ns, are investigated. A new approach is discussed to estimate Nh and ΔN from the analysis of the dry and wet components of Ns, which gives better results for certain cases. Results are compared with those obtained from existing linear and exponential models in the literature. The investigation focusses on three layer heights at 65 m, 100 m and 1 km above ground level. Correlations between the components of Ns and both Nh and ΔN are studied for each atmospheric layer. Where high correlations were found, empirical models are derived from best-fitting curves.
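The dry/wet decomposition of surface refractivity referred to above is conventionally written Ns = 77.6 P/T + 3.732×10^5 e/T² (after ITU-R P.453), with P the pressure in hPa, T the temperature in K, and e the water-vapour pressure in hPa. The sketch below uses these textbook constants and an approximate saturation-pressure fit; it is illustrative, not the authors' code, and the example numbers are assumptions.

```python
import math

def vapour_pressure_hpa(temp_c, rel_humidity_pct):
    # Approximate saturation vapour pressure over water (hPa), scaled by relative humidity.
    e_s = 6.1121 * math.exp(17.502 * temp_c / (temp_c + 240.97))
    return rel_humidity_pct / 100.0 * e_s

def surface_refractivity(pressure_hpa, temp_k, vapour_pressure):
    n_dry = 77.6 * pressure_hpa / temp_k              # dry (density) term
    n_wet = 3.732e5 * vapour_pressure / temp_k ** 2   # wet (humidity) term
    return n_dry, n_wet, n_dry + n_wet

e = vapour_pressure_hpa(35.0, 60.0)                   # hot, humid surface conditions
print(surface_refractivity(1005.0, 35.0 + 273.15, e)) # (N_dry, N_wet, N_s)
```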
315

Spectral edge image fusion: theory and applications

Connah, David, Drew, M.S., Finlayson, G. January 2014 (has links)
This paper describes a novel approach to the fusion of multidimensional images for colour displays. The goal of the method is to generate an output image whose gradient matches that of the input as closely as possible. It achieves this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is subsequently reintegrated to generate an output. Constraints on the output colours are provided by an initial RGB rendering to produce ‘naturalistic’ colours: we provide a theorem for projecting higher-D contrast onto the initial colour gradients such that they remain close to the original gradients whilst maintaining exact high-D contrast. The solution to this constrained optimisation is closed-form, allowing for a very simple and hence fast and efficient algorithm. Our approach is generic in that it can map any N-D image data to any M-D output, and can be used in a variety of applications using the same basic algorithm. In this paper we focus on the problem of mapping N-D inputs to 3-D colour outputs. We present results in three applications: hyperspectral remote sensing, fusion of colour and near-infrared images, and colour visualisation of MRI Diffusion-Tensor imaging.
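The per-pixel structure tensor of a multi-channel image, the quantity whose mapping the method is built around, can be sketched as follows. This is a generic illustration, not the authors' implementation; the array sizes and the random test image are assumptions.

```python
import numpy as np

def structure_tensor(image):
    """Per-pixel 2x2 structure tensor J = sum_c grad(I_c) grad(I_c)^T of an H x W x C image."""
    h, w, c = image.shape
    tensor = np.zeros((h, w, 2, 2))
    for ch in range(c):
        gy, gx = np.gradient(image[:, :, ch])   # per-channel spatial gradients
        tensor[:, :, 0, 0] += gx * gx
        tensor[:, :, 0, 1] += gx * gy
        tensor[:, :, 1, 0] += gx * gy
        tensor[:, :, 1, 1] += gy * gy
    return tensor

hyperspectral = np.random.rand(64, 64, 31)      # e.g. a 31-band hyperspectral patch
J = structure_tensor(hyperspectral)
contrast = np.linalg.eigvalsh(J)                # eigenvalues summarise local N-D contrast
print(contrast.shape)                           # (64, 64, 2)
```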
316

Long term evolution of the surface refractivity for arctic regions

Bettouche, Y., Kouki, A., Agba, B., Obeidat, Huthaifa A.N., Alhassan, H., Rodriguez, Jonathan, Abd-Alhameed, Raed, Jones, Steven M.R. 02 July 2019 (has links)
In this paper, local meteorological data for a period of 35 years (from 1979 to 2013) from Kuujuaq station have been used to calculate the surface refractivity, N and to estimate the vertical refractivity gradient, dN1, in the lowest atmospheric layer above the ground. Monthly and yearly variations of the mean of N and dN1 are provided. The values obtained are compared with the corresponding values from the ITU maps. The long-term trend of the surface refractivity is also investigated. The data demonstrate that the indices N and dN1 are subject to an evolution which may have significance in the context of climate change (CC). Monthly means of N show an increasing departure from ITU-R values since 1990. Yearly mean values of the dN1 show a progressive decrease over the period of study. Seasonal means of dN1 show a decrease over time, especially for summer. Such a trend may increase the occurrence of super-refraction. However, currently available ITU-R recommendations for microwave link design assume a stationary climate, so there is a need for a new modelling approach.
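A minimal sketch of the kind of long-term trend estimate described above, a least-squares straight-line fit to yearly means, is shown below. The data are synthetic placeholders, not the Kuujuaq measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2014)                               # 35 years of records
yearly_mean_n = 310.0 + 0.05 * (years - 1979) + rng.normal(0.0, 1.5, years.size)

slope, intercept = np.polyfit(years, yearly_mean_n, deg=1)  # least-squares line
print(f"long-term trend: {slope:+.3f} N-units per year")
```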
317

Block-decomposition and accelerated gradient methods for large-scale convex optimization

Ortiz Diaz, Camilo 08 June 2015 (has links)
In this thesis, we develop block-decomposition (BD) methods and variants of accelerated gradient methods for large-scale conic programming and convex optimization, respectively. The BD methods, discussed in the first two parts of this thesis, are inexact versions of proximal-point methods applied to two-block-structured inclusion problems. The adaptive accelerated methods, presented in the last part of this thesis, can be viewed as new variants of Nesterov's optimal method. In an effort to improve their practical performance, these methods incorporate important speed-up refinements motivated by theoretical iteration-complexity bounds and our observations from extensive numerical experiments. We provide several benchmarks on various important problem classes to demonstrate the efficiency of the proposed methods compared to the most competitive ones proposed earlier in the literature. In the first part of this thesis, we consider exact BD first-order methods for solving conic semidefinite programming (SDP) problems and the more general problem that minimizes the sum of a convex differentiable function with Lipschitz continuous gradient, and two other proper closed convex (possibly nonsmooth) functions. More specifically, these problems are reformulated as two-block monotone inclusion problems and exact BD methods, namely the ones that solve both proximal subproblems exactly, are used to solve them. In addition to being able to solve standard form conic SDP problems, the latter approach is also able to directly solve specially structured non-standard form conic programming problems without the need to add additional variables and/or constraints to bring them into standard form. Several ingredients are introduced to speed up the BD methods in their pure form, such as: adaptive (aggressive) choices of stepsizes for performing the extragradient step; and dynamic updates of scaled inner products to balance the blocks. Finally, computational results on several classes of SDPs are presented showing that the exact BD methods outperform the three most competitive codes for solving large-scale conic semidefinite programming. In the second part of this thesis, we present an inexact BD first-order method for solving standard form conic SDP problems which avoids computations of exact projections onto the manifold defined by the affine constraints and, as a result, is able to handle extra large-scale SDP instances. In this BD method, while the proximal subproblem corresponding to the first block is solved exactly, the one corresponding to the second block is solved inexactly in order to avoid finding the exact solution of a linear system corresponding to the manifolds consisting of both the primal and dual affine feasibility constraints. Our implementation uses the conjugate gradient method applied to a reduced positive definite dual linear system to obtain inexact solutions of the latter augmented primal-dual linear system. In addition, the inexact BD method incorporates a new dynamic scaling scheme that uses two scaling factors to balance three inclusions comprising the optimality conditions of the conic SDP. Finally, we present computational results showing the efficiency of our method for solving various extra large SDP instances, several of which cannot be solved by other existing methods, including some with at least two million constraints and/or fifty million non-zero coefficients in the affine constraints.
In the last part of this thesis, we consider an adaptive accelerated gradient method for a general class of convex optimization problems. More specifically, we present a new accelerated variant of Nesterov's optimal method in which certain acceleration parameters are adaptively (and aggressively) chosen so as to: preserve the theoretical iteration-complexity of the original method; and substantially improve its practical performance in comparison to the other existing variants. Computational results are presented to demonstrate that the proposed adaptive accelerated method performs quite well compared to other variants proposed earlier in the literature.
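For reference, the basic (non-adaptive) form of Nesterov's accelerated gradient method that the adaptive variant builds on can be sketched as follows, here applied to a toy least-squares problem. This is a textbook sketch, not the thesis implementation, and the adaptive parameter choices studied in the thesis are not reproduced.

```python
import numpy as np

def nesterov(grad, x0, lipschitz, iters=200):
    """Basic accelerated gradient method with the standard momentum schedule."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = y - grad(y) / lipschitz                   # gradient step at the extrapolated point
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))  # momentum parameter update
        y = x_new + (t - 1.0) / t_new * (x_new - x)       # extrapolation (momentum) step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
lipschitz = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of the gradient
solution = nesterov(lambda z: A.T @ (A @ z - b), np.zeros(20), lipschitz)
print(np.linalg.norm(A.T @ (A @ solution - b)))           # residual gradient norm, near zero
```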
318

Impact des perturbations anthropiques sur la végétation du complexe de milieux humides des Tourbières-de-Lanoraie / Impact of anthropogenic disturbances on the vegetation of the Tourbières-de-Lanoraie wetland complex

Tousignant, Marie-Eve January 2008 (has links)
Master's thesis digitized by the Division de la gestion de documents et des archives de l'Université de Montréal.
319

Le rôle des interactions biotiques dans la régénération des chênes au niveau des communautés de forêts dunaires de la région Aquitaine (Sud-Ouest de la France) / The role of biotic interactions for oak regeneration in the coastal sand dune forest communities of the Aquitaine region (south-western France)

Muhamed, Hassan 18 September 2012 (has links)
Bien que les interactions biotiques soient connues pour être déterminantes dans l’établissement des espèces, il est encore difficile de savoir quels facteurs sont impliqués dans l’équilibre entre interaction positive et interaction négative. Il est de fait difficile de savoir sous quelles conditions les interactions biotiques peuvent favoriser ou empêcher la régénération des espèces. Cette thèse vise à étudier le rôle des interactions biotiques d'arbustes avec des semis de chêne sur la régénération de trois espèces de chênes sur les forêts des dunes côtières d'Aquitaine en testant comment l’effet net de ces interactions varie le long d’un gradient d’aridité, sur deux étages de canopée et en fonction des stratégies fonctionnelles de trois espèces de Chêne dans un contexte de changement global. Ce travail a été effectué en utilisant deux approches, une approche descriptive en utilisant un patron de points répartis dans l’espace et une approche expérimentale en transplantant les semis cibles. Les résultats montrent que la variation spatiale, en terme d’interactions biotiques, est fortement corrélée avec la sévérité environnementale, avec des interactions entre jeunes pousses de chêne très sensibles aux sécheresses estivales et aux trouées dans les canopées. Les interactions testées étaient de nature facilitatrice dans les plots découverts dans les dunes sèches du nord de Soulac et tournaient à la compétition sous le couvert forestier dans les dunes plus humides du sud, à Seignosse. La nature des interactions était constant entre les stratégies fonctionnelles des espèces cibles de chêne. Les résultats de cette thèse montrent de manière générale une confirmation de la formulation originale du SGH qui prédit une augmentation de la facilitation en lien avec une augmentation de la sévérité environnementale et souligne le fait que la réduction du stress hydrique atmosphérique par des arbustes est nécessaire à la régénération des semis de chêne. Dans cette perspective, le sylviculteur doit conserver les arbustes du sous-étage, en particulier dans les trouées, afin de permettre une meilleure régénération des plants de chêne. Cette thèse met en évidence la nécessaire considération des interactions biotiques dans la régénération du chêne dans les actuelles sévères conditions climatiques et le rôle prépondérant de ces interactions dans la réponse aux changements climatiques futurs probables dans cette région Aquitaine. / Although biotic interactions are known to be important determinants of species establishment, it is uncertain what factors determine the net balance between positive and negative interactions and thus under what conditions biotic interactions could enhance or impede species regeneration. This thesis aims to study the role of biotic interactions between shrubs and oak seedlings in the regeneration of three oak species in the Aquitaine coastal dune forests, by testing how the net effect of these interactions varies along an aridity gradient, between two overstory canopies and with respect to the functional strategies of three oak species in the context of climate change. This was done using two approaches: a descriptive approach based on spatial point-pattern data and an experimental approach based on transplanting the target seedlings.
The results show that the spatial variation in the nature of biotic interactions is strongly related to the severity of environmental conditions: the shrub-oak seedling interactions were very sensitive to increasing summer drought and canopy opening, and the interaction was facilitative in gap plots in the dry northern dunes at Soulac but switched to competition under the forest canopy in the wetter southern dunes at Seignosse. The nature of the interactions was constant across the functional strategies of the target oak species. For the most part, the results of this thesis lend general support to the original formulation of the stress-gradient hypothesis (SGH), which predicts increasing facilitation with increasing environmental severity, and underscore the fact that reduction of atmospheric water stress by shrubs is required for oak seedling regeneration. In this perspective, silviculturists should conserve understory shrubs, in particular in gaps, in order to provide a better regeneration niche for oak seedlings. This thesis highlights the importance of considering biotic interactions in oak regeneration under the current harsh climatic conditions and the prominent role these interactions are likely to play in the response to future climate change in this region.
320

Contribution à l'optimisation globale : approche déterministe et stochastique et application / Contribution to global optimization : deterministic, stochastic approachs and application

Es-Sadek, Mohamed Zeriab 21 November 2009 (has links)
Dans les situations convexes, le problème d'optimisation globale peut être abordé par un ensemble de méthodes classiques, telles, par exemple, celles basées sur le gradient, qui ont montré leur efficacité en ce domaine. Lorsque la situation n'est pas convexe, ces méthodes peuvent être mises en défaut et ne pas trouver un optimum global. La contribution de cette thèse est une méthodologie pour la détermination de l'optimum global d'une fonction non convexe, en utilisant des algorithmes hybrides basés sur un couplage entre des algorithmes stochastiques issus de familles connues, telles, par exemple, celle des algorithmes génétiques ou celle du recuit simulé et des algorithmes déterministes perturbés aléatoirement de façon convenable. D'une part, les familles d'algorithmes stochastiques considérées ont fait preuve d'efficacité pour certaines classes de problèmes et, d'autre part, l'adjonction de perturbations aléatoires permet de construire des méthodes qui sont en théorie convergentes vers un optimum global. En pratique, chacune de ces approches a ses limitations et insuffisances, de manière que le couplage envisagé dans cette thèse est une alternative susceptible d'augmenter l'efficacité numérique. Nous examinons dans cette thèse quelques unes de ces possibilités de couplage. Pour établir leur efficacité, nous les appliquons à des situations test classiques et à un problème de nature stochastique du domaine des transports. / This thesis concerns the global optimization of a non-convex function under nonlinear constraints. Such problems cannot be solved by classical deterministic methods such as the projected gradient algorithm or the SQP method, since these are only reliable for convex problems; purely stochastic algorithms such as genetic algorithms and simulated annealing can also be inefficient on their own. To address this class of problems, we stochastically perturb the classical deterministic methods and couple the perturbed methods with genetic algorithms and simulated annealing. Specifically, we combine the perturbed projected gradient with a genetic algorithm, the perturbed SQP method with a genetic algorithm, the perturbed projected gradient with simulated annealing, and the Piyavskii algorithm with a genetic algorithm. The coupled algorithms are applied to classical test problems, and, as a real-world illustration, the coupled perturbed projected gradient and genetic algorithm is applied to a logistics (transport) problem, demonstrating the practical efficiency of the approach.
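The coupling idea of randomly perturbing a projected-gradient iteration so that it can escape local minima of a non-convex function can be sketched as follows. The test objective, step size, and perturbation schedule are illustrative assumptions, not the algorithms or problems used in the thesis (in particular, the genetic-algorithm coupling is omitted).

```python
import numpy as np

def f(x):
    # Rastrigin-like non-convex test function with many local minima.
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def grad_f(x):
    return 2.0 * x + 20.0 * np.pi * np.sin(2.0 * np.pi * x)

def perturbed_projected_gradient(x0, lower, upper, step=0.01, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    x, best = x0.copy(), x0.copy()
    for k in range(iters):
        noise = rng.normal(scale=1.0 / np.sqrt(k + 1.0), size=x.shape)  # decaying random perturbation
        x = np.clip(x - step * grad_f(x) + noise, lower, upper)         # projection onto the box
        if f(x) < f(best):
            best = x.copy()                                             # keep the best point seen
    return best

x_best = perturbed_projected_gradient(np.full(5, 4.0), -5.12, 5.12)
print(x_best, f(x_best))
```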
