11

Derivative-free optimization under uncertainty applied to costly simulators

Pauwels, Benoît 10 March 2016 (has links)
The modeling of complex phenomena encountered in industrial problems can lead to the study of numerical simulation codes. These simulators may require extensive execution time (from a few hours to several days), involve uncertain parameters and even be intrinsically stochastic. Importantly for simulation-based optimization, the derivatives of the outputs with respect to the inputs may be nonexistent, inaccessible or too costly to approximate reliably. This thesis is organized in four chapters. The first chapter reviews the state of the art in derivative-free optimization and uncertainty modeling. The next three chapters present three independent, although related, contributions to the field of derivative-free optimization in the presence of uncertainty. The second chapter addresses the emulation of costly stochastic simulation codes, stochastic in the sense that running simulations with the same input parameters may yield distinct outputs. Such was the subject of the CODESTOCH project carried out at the Summer mathematical research center on scientific computing and its applications (CEMRACS) during the summer of 2013, together with two Ph.D. students from Électricité de France (EDF) and the Atomic Energy and Alternative Energies Commission (CEA). We designed four methods for building emulators of functions whose values are probability density functions. These methods were tested on two toy examples and applied to industrial simulation codes concerned with three complex phenomena: the spatial distribution of molecules in a hydrocarbon system (IFPEN), the life cycle of large electric transformers (EDF) and the repercussions of a hypothetical accident in a nuclear plant (CEA). In the first two cases, emulation is a preliminary step toward solving an optimization problem. The third chapter studies the influence of inaccurate objective function evaluations on directional direct search, a classical derivative-free optimization algorithm. In real settings inaccuracy may never vanish, yet users usually apply direct search algorithms without taking it into account. We raise three questions. What precision can we hope to achieve, given the inaccuracy? At what cost can this precision be attained? What stopping criteria can guarantee this precision? We answer these three questions for directional direct search applied to objective functions whose evaluation inaccuracy, stochastic or not, is uniformly bounded. From our results we derive an adaptive algorithm for making efficient use of oracles with distinct accuracy levels. The theoretical results and the algorithm are validated with numerical tests and two industrial applications: surface minimization in mechanical design and oil-well placement in reservoir engineering. The fourth chapter is devoted to optimization problems with imprecise parameters, whose imprecision is modeled with fuzzy set theory. A number of methods have been published for solving linear programs involving fuzzy coefficients, but very few for nonlinear programs. We propose an algorithm that addresses a large class of fuzzy optimization problems by iterative non-dominated sorting. The distributions of the fuzzy parameters are assumed to be only partially known. We also provide a criterion to assess the precision of the solutions and compare our method with others from the literature. We show that our algorithm guarantees solutions whose level of precision at least equals the precision of the available data.
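As a concrete illustration of the third chapter's setting, the sketch below implements a generic directional direct search with a uniformly bounded noisy oracle. It is a minimal textbook-style version, not the thesis's algorithm; the forcing constant `c`, the 2*eps acceptance margin and the sqrt(eps)-based stopping rule are illustrative assumptions.

```python
import numpy as np

def direct_search(f_noisy, x0, eps, alpha0=1.0, c=1e-4, max_iter=1000):
    """Directional direct search with a noisy oracle, |f_noisy(x) - f(x)| <= eps.

    Polls x + alpha*d over the coordinate directions and their negatives,
    accepting a trial point only if its decrease dominates both the forcing
    term c*alpha^2 and the worst-case evaluation error 2*eps.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    D = np.vstack([np.eye(n), -np.eye(n)])    # positive spanning set
    fx, alpha = f_noisy(x), alpha0
    for _ in range(max_iter):
        # Below this resolution the noise drowns out genuine decrease,
        # so the step size doubles as a natural stopping criterion.
        if alpha < np.sqrt(eps):
            break
        for d in D:
            trial = x + alpha * d
            ft = f_noisy(trial)
            if ft < fx - c * alpha**2 - 2.0 * eps:
                x, fx = trial, ft             # successful poll step
                break
        else:
            alpha *= 0.5                      # unsuccessful iteration: shrink
    return x, fx

# Toy usage: a quadratic observed with bounded uniform noise.
rng = np.random.default_rng(0)
noisy = lambda x: float(np.sum(x**2)) + rng.uniform(-1e-6, 1e-6)
x_best, f_best = direct_search(noisy, x0=[2.0, -3.0], eps=1e-6)
```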
12

Derivative-free optimization: on the construction and quality of quadratic models for unconstrained optimization problems

Nascimento, Ivan Xavier Moura do, 1989- 25 August 2018 (has links)
Advisor: Sandra Augusta Santos / Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: Trust-region methods are a class of iterative algorithms widely applied to nonlinear unconstrained optimization problems for which derivatives of the objective function are unavailable or inaccurate. One of the classical approaches involves the optimization of a polynomial model of the objective function, built at each iteration from a sample set of points. In a recent work, Scheinberg and Toint [SIAM Journal on Optimization, 20 (6) (2010), pp. 3512-3532] proved that, despite being essential for convergence results, the improvement of the geometry (poisedness) of the sample set may be enforced directly only in the final stage of the algorithm. Incorporating these ideas into a theoretical algorithmic framework, the authors analytically investigate an interesting self-correcting geometry mechanism of the interpolation set, which becomes evident at unsuccessful iterations. Global convergence of the new algorithm is then proved as a consequence of this self-correcting property. In this work we study the positioning of the sample points in interpolation-based methods that rely on quadratic models, and we investigate the computational performance of the theoretical algorithm proposed by Scheinberg and Toint, with parameters based either on choices from previous works or on numerical experiments / Master's in Applied Mathematics
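To make the model-building step concrete, here is a minimal sketch (not the dissertation's code) that interpolates a full quadratic through (n+1)(n+2)/2 sample points in the monomial basis; the conditioning of the interpolation matrix is a rough proxy for the poisedness discussed above.

```python
import numpy as np
from itertools import combinations_with_replacement

def quadratic_model(Y, fvals):
    """Interpolate a full quadratic, in the monomial basis, through the
    rows of Y. Requires p = (n+1)(n+2)/2 points; the conditioning of the
    interpolation matrix M reflects the poisedness of the sample set."""
    p, n = Y.shape
    assert p == (n + 1) * (n + 2) // 2, "need (n+1)(n+2)/2 sample points"
    basis = [lambda x: 1.0]
    basis += [(lambda i: lambda x: x[i])(i) for i in range(n)]
    basis += [(lambda i, j: lambda x: x[i] * x[j])(i, j)
              for i, j in combinations_with_replacement(range(n), 2)]
    M = np.array([[phi(y) for phi in basis] for y in Y])
    coef = np.linalg.solve(M, fvals)      # fails if the set is not poised
    return coef, np.linalg.cond(M)

# Toy usage in R^2: six points recover f(x) = x0^2 + 2*x1^2 + x0 exactly.
Y = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 0], [0, 2]], dtype=float)
f = lambda x: x[0]**2 + 2 * x[1]**2 + x[0]
coef, cond = quadratic_model(Y, np.array([f(y) for y in Y]))
```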
13

On an unconstrained minimization method based on simplex derivatives

Cervelin, Bruno Henrique, 1988- 04 August 2013 (has links)
Advisor: Maria Aparecida Diniz Ehrhardt / Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: The aim of this dissertation is to present and compare some derivative-free methods for unconstrained minimization problems, namely Nelder-Mead, pattern search and SID-PSM. We also present the problem of optimizing algorithmic parameters, and apply SID-PSM to find optimal parameters for SID-PSM itself with respect to the number of function evaluations the method performs. The numerical experiments show that SID-PSM is more robust and more efficient than the classical derivative-free methods (pattern search and Nelder-Mead). Further experiments show the potential of algorithmic parameter optimization to improve both the efficiency and the robustness of these methods / Master's in Applied Mathematics
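For reference, one of the classical methods being compared, coordinate (pattern) search, fits in a few lines. This is a generic textbook sketch, not the SID-PSM implementation; the evaluation counter mirrors the comparison metric used above.

```python
import numpy as np

def pattern_search(f, x0, alpha=1.0, tol=1e-8, max_iter=100000):
    """Coordinate (pattern) search: poll x +/- alpha*e_i, move to the first
    improving point, and halve alpha whenever no poll direction improves."""
    x = np.asarray(x0, dtype=float)
    fx, evals = f(x), 1
    for _ in range(max_iter):
        if alpha < tol:
            break
        for d in np.vstack([np.eye(x.size), -np.eye(x.size)]):
            ft = f(x + alpha * d)
            evals += 1
            if ft < fx:
                x, fx = x + alpha * d, ft
                break
        else:
            alpha *= 0.5
    return x, fx, evals

# Usage on the Rosenbrock function; the evaluation count is the usual
# efficiency measure when benchmarking derivative-free methods.
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
x_star, f_star, evals = pattern_search(rosen, [-1.2, 1.0])
```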
14

Derivative-free optimization on thin domains

Sobral, Francisco Nogueira Calmon, 1984- 20 August 2018 (has links)
Advisor: José Mario Martínez Pérez / Doctoral thesis, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: Derivative-free optimization problems arise from models in which the derivatives of some of the involved functions are not available. This information may be missing because the functions are extremely complex black boxes originated from simulation procedures, or simply because users are unable or unwilling to code the derivatives. Following the growth in the number of applications, the number of derivative-free algorithms has increased in recent years. However, few algorithms are able to handle thin feasible domains efficiently, for example in the presence of nonlinear equality constraints. In the present work, we describe the theory and implementation of two algorithms capable of dealing with thin-constrained derivative-free problems. Both assume that evaluating the objective function is the most expensive part of the problem. Based on this principle, the solution process is split into two phases. In the restoration phase, we try to improve feasibility without evaluating the objective function. In the minimization phase, the aim is to decrease the objective function value using well-established algorithms for derivative-free problems with simple constraints. The first algorithm uses Inexact Restoration ideas together with a decreasing infeasibility tolerance. Under the usual hypotheses of direct search methods, we show global minimization results. The second algorithm extends to the derivative-free case all the theoretical results of a recent line-search Inexact Restoration algorithm; in this approach, only the derivatives of the objective function are unavailable. Numerical experiments show the advantages of each algorithm, in particular in comparison with penalty-like algorithms / Doctorate in Applied Mathematics
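The two-phase structure described above can be sketched as follows. The helper names `restore` and `minimize_simple` are hypothetical stand-ins for the thesis's far more elaborate restoration and minimization procedures; the toy problem is chosen so the simple alternation actually reaches the constrained minimizer.

```python
import numpy as np

def two_phase_dfo(f, restore, minimize_simple, x0, tol0=1e-1,
                  shrink=0.1, max_cycles=10):
    """Skeleton of a restoration/minimization scheme for thin domains.

    restore(x, tol)       -> roughly feasible point, without evaluating f
    minimize_simple(f, x) -> decreases f under simple constraints only
    The infeasibility tolerance shrinks across cycles, so the iterates
    approach the thin feasible set while f is evaluated sparingly.
    """
    x, tol = np.asarray(x0, dtype=float), tol0
    for _ in range(max_cycles):
        x = restore(x, tol)          # feasibility phase: no f evaluations
        x = minimize_simple(f, x)    # optimization phase: spends the f budget
        tol *= shrink                # tighten the infeasibility tolerance
    return restore(x, tol)           # final restoration onto the thin set

# Toy usage: minimize (x0-2)^2 + x1^2 on the circle x0^2 + x1^2 = 1.
f = lambda x: (x[0] - 2)**2 + x[1]**2
restore = lambda x, tol: x / np.linalg.norm(x)   # exact projection; tol unused
def minimize_simple(f, x, h=0.5, steps=40):
    for _ in range(steps):           # crude poll, stands in for a real solver
        for d in np.vstack([np.eye(2), -np.eye(2)]):
            if f(x + h * d) < f(x):
                x = x + h * d
                break
        else:
            h *= 0.5
    return x
x = two_phase_dfo(f, restore, minimize_simple, np.array([2.0, 0.5]))
```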
15

Hybrid derivative-free methods for nonlinear systems

Begiato, Rodolfo Gotardi, 1980- 09 May 2012 (has links)
Advisors: Márcia Aparecida Gomes Ruggiero, Sandra Augusta Santos / Doctoral thesis, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: This thesis addresses the solution of large-scale nonlinear systems in which all the involved functions are continuously differentiable. They are solved by means of a hybrid approach based on an iterative method with two phases. The first phase consists of derivative-free versions of a fixed-point method that employ spectral parameters to define the steplength along the residual direction. The second phase is a matrix-free inexact Newton method that uses GMRES to solve the linear system defining the search direction. The hybrid method combines the two phases so that the second is invoked only if the first one fails; in both phases, a nonmonotone decrease condition on a merit function must be verified before new points are accepted. We also develop a second method, with a third phase based on direct search, which acts whenever an excess of line searches has made the steplength along the inexact Newton direction too small. Convergence results for the proposed methods are established, and their computational performance is assessed in a set of numerical experiments with problems traditionally found in the literature. Both the theoretical and the numerical analyses corroborate the viability of the proposed approaches / Doctorate in Applied Mathematics
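The first phase is in the spirit of spectral residual methods such as DF-SANE; below is a simplified, monotone sketch (the thesis uses a nonmonotone acceptance condition and switches to inexact Newton-GMRES upon failure).

```python
import numpy as np

def spectral_residual(F, x0, tol=1e-8, max_iter=500):
    """Simplified DF-SANE-style iteration for F(x) = 0: the residual is the
    search direction, scaled by a spectral (Barzilai-Borwein) parameter,
    so no Jacobian information is required."""
    x = np.asarray(x0, dtype=float)
    Fx, sigma = F(x), 1.0
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            break
        d = -sigma * Fx                    # residual direction, spectral scale
        lam, Fnew = 1.0, F(x + d)
        # Monotone backtracking; the thesis uses a nonmonotone condition.
        while np.linalg.norm(Fnew) >= (1.0 - 1e-4 * lam) * np.linalg.norm(Fx):
            lam *= 0.5
            if lam < 1e-12:
                return x, Fx               # failure: hand over to Newton-GMRES
            Fnew = F(x + lam * d)
        s, y = lam * d, Fnew - Fx
        x, Fx = x + s, Fnew
        sigma = (s @ s) / (s @ y) if abs(s @ y) > 1e-16 else 1.0  # BB update
    return x, Fx

# Usage on a small smooth nonlinear system.
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]**2 + 1.0])
x_sol, res = spectral_residual(F, np.array([1.0, 1.0]))
```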
16

Simulation of heat transfer and automatic optimization of multiple probe trajectories for pre-operative planning of percutaneous thermoablation interventions

Jaberzadeh, Amir 13 February 2015 (has links)
Several minimally invasive techniques are available today for tumor ablation procedures. Cryosurgery is one of them: it works by decompressing argon gas very rapidly at the tip of a needle-like probe. Pre-operative planning of this kind of intervention is very difficult for the surgeon, who must mentally picture the final placement of the needles relative to complex anatomical structures. Over-ablation or under-ablation may lead to complications during treatment. Given the crucial need for such a planning tool, this thesis focuses on automated pre-operative planning for cryosurgery, with two goals: to assist the surgeon with a more realistic prediction of the ablation zones, and to automatically propose a needle placement with close to minimal risk to the patient and optimal coverage of the tumor by the iceball, within a time frame acceptable for use in the operating room.
17

Qini-based learning for conditional causal effect prediction

Belbahri, Mouloud-Beallah 08 1900 (has links)
Uplift models deal with cause-and-effect inference for a specific factor, such as a marketing intervention. In practice, these models are built on individual data from randomized experiments: a treatment group contains individuals who are subject to an action, while a control group serves for comparison. Uplift modeling is used to rank individuals with respect to the value of a causal effect, e.g., positive, neutral or negative. First, we propose a new way to perform model selection in uplift regression models. Our methodology is based on the maximization of the Qini coefficient. Because model selection corresponds to variable selection, the task is intractable if done in a straightforward manner when the number of variables to consider is large. To search realistically for a good model, we designed a search method based on an efficient exploration of the regression-coefficient space combined with a lasso penalization of the log-likelihood. There is no explicit analytical expression for the Qini surface, so unveiling it is not easy. Our idea is to uncover the Qini surface gradually, in a manner akin to derivative-free optimization. The goal is to find a reasonable local maximum of the Qini by exploring the surface near the optimal values of the penalized coefficients. We openly share our code through the R package tools4uplift. Although some computational methods are available for uplift modeling, most of them exclude statistical regression models; our package intends to fill this gap. It comprises tools for (i) quantization, (ii) visualization, (iii) variable selection, (iv) parameter estimation and (v) model validation, and it allows practitioners to use our methods with ease while referring to the methodological papers for details. Uplift is a particular case of causal inference, which tries to answer questions such as "What would the outcome be if we gave this patient treatment A instead of treatment B?"; the answer is then used as a prediction for a new patient. The second part of the thesis places more emphasis on prediction. Most existing approaches are adaptations of random forests to the uplift case. Several split criteria have been proposed in the literature, all relying on maximizing heterogeneity; in practice, however, these approaches are prone to overfitting. We bring a new vision to improve uplift prediction: a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk. Our solution is developed for a specific twin neural network architecture that jointly optimizes the marginal probabilities of success for treated and control individuals. We show that this model is a generalization of the uplift logistic interaction model. We also modify the stochastic gradient descent algorithm to allow for structured sparse solutions, which helps considerably in fitting our uplift models. We openly share our Python code for practitioners wishing to use our algorithms. We had the rare opportunity to collaborate with industry and thus gain access to data from large-scale marketing campaigns favorable to the application of our methods. We show empirically that our methods are competitive with the state of the art on real data and across several simulation scenarios.
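To make the central quantity concrete, here is one common way of estimating the Qini coefficient from a scored sample; this is a simplified sketch under standard definitions, not the tools4uplift implementation.

```python
import numpy as np

def qini_coefficient(scores, treated, outcome, n_bins=10):
    """Area between the Qini curve and the random-targeting diagonal.

    scores  : predicted individual uplift (higher = target first)
    treated : 1 if the individual received the action, 0 otherwise
    outcome : observed binary response
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    t = np.asarray(treated)[order]
    y = np.asarray(outcome)[order]
    n = len(y)
    fractions = np.linspace(0.0, 1.0, n_bins + 1)
    curve = [0.0]
    for frac in fractions[1:]:
        k = int(round(frac * n))
        nt = t[:k].sum()                       # treated among the top k
        nc = k - nt                            # controls among the top k
        rt = y[:k][t[:k] == 1].sum()           # treated responders
        rc = y[:k][t[:k] == 0].sum()           # control responders
        # Incremental responders, with controls rescaled to the treated size.
        curve.append(rt - rc * nt / max(nc, 1))
    curve = np.asarray(curve)
    diag = np.linspace(0.0, curve[-1], n_bins + 1)   # random-targeting baseline
    gap = curve - diag
    # Trapezoidal area between the Qini curve and the diagonal.
    return float(np.sum((gap[1:] + gap[:-1]) / 2.0 * np.diff(fractions)))

# Usage with synthetic randomized-experiment data.
rng = np.random.default_rng(1)
n = 1000
treated = rng.integers(0, 2, size=n)
scores = rng.normal(size=n)
outcome = rng.binomial(1, 0.3 + 0.1 * treated * (scores > 0))
q = qini_coefficient(scores, treated, outcome)
```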
