11

Piece-wise Linear Approximation for Improved Detection in Structural Health Monitoring

Essegbey, John W. 08 October 2012 (has links)
No description available.
12

Géométrie et topologie des processus périodiquement corrélés induit par la dilation : Application à l'étude de la variabilité des épidémies pédiatriques saisonnières / Geometry and topology of periodically correlated processes : Analysis of the variability of seasonal pediatric epidemics

Dugast, Maël 21 December 2018 (has links)
Each winter, epidemic outbreaks disrupt the organisation of pediatric emergency departments and degrade the quality of the care they provide. These outbreaks show a strong variability that makes them difficult to analyse. We study this variability in order to bring a new and enlightening perspective on the behaviour of these epidemics. To do so, we adopt an original geometric and topological point of view derived from dilation theory: since the variability process is periodically correlated, this theory yields a set of so-called dilation matrices that carry all the useful information about the process. These matrices let us represent stochastic processes as elements of a particular Lie group, namely the group of curves on a manifold, which makes it possible to compare processes. To gain a more intuitive picture of the variability process, we then focus on the point cloud formed by the set of dilation matrices, aiming to relate the temporal shape of a process to the organisation of its dilation matrices. Using and extending tools from persistent homology, we establish a link between the disorganisation of this point cloud and the type of underlying process, which allows us to classify non-stationary processes. Finally, we apply these methods directly to the arrivals process in order to detect the onset of the epidemic. Altogether, this provides a complete and coherent framework, both theoretical and applied.
13

Interpolação linear logarítmica / Logarithmic linear interpolation

Rossi, Rosângela de Lourdes 04 September 2015 (has links)
This dissertation addresses the teaching of logarithmic linear interpolation, aimed at high-school students and their teachers, so that values of logarithms can be found without the existing tables of common logarithms or calculating machines. The concepts, the properties of logarithms, and the interpolation activities, designed on the basis of Didactic Engineering, are the hallmark and driving force of this work. The manipulation of materials, the visualisation of results, and the activities carried out by public high-school students ensured the originality of the teaching/learning of mathematics, in particular of logarithmic linear interpolation, deliberately circumscribing that objective.
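The classroom technique this dissertation teaches, reading a logarithm off a coarse table and interpolating linearly between the two adjacent entries, can be sketched as follows. The function name and the table step are illustrative choices, not taken from the text:

```python
import math

def log10_interpolated(x, step=0.1):
    """Approximate log10(x) for 1 <= x < 10 by linear interpolation
    between tabulated values at multiples of `step`, mimicking the
    use of a coarse printed log table."""
    lo = math.floor(x / step) * step      # nearest table entry below x
    hi = lo + step                        # nearest table entry above x
    y_lo, y_hi = math.log10(lo), math.log10(hi)
    t = (x - lo) / (hi - lo)              # fractional position inside the cell
    return y_lo + t * (y_hi - y_lo)
```

With a step of 0.1 the interpolation error stays around 1e-4 on [1, 10], since the curvature of log10 is small there; that is why the method was practical for hand calculation.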
14

Developing and Utilizing the Concept of Affine Linear Neighborhoods in Flow Visualization

Koch, Stefan 07 May 2021 (has links)
In many research areas, such as medicine, the natural sciences, or engineering, scientific visualization plays an important role and helps scientists gain new insights, because visualizations can make the invisible visible. For example, visualizations can reveal the course of nerve fibers in the brain of test subjects or the air flow around obstacles. This thesis contributes in particular to the subfield of flow visualization, which targets the investigation of processes in fluids and gases. A popular way to gain insight into complex datasets is to identify simple, well-understood structures within a dataset. In flow visualization, this leads to the concept of local linearization and of linearity in general, because linear vector fields are the simplest class of non-trivial fields and are extremely well understood. Typically, simulated datasets are discretized into individual cells based on linear interpolation, and in vector field topology, stationary points can be characterized by the local linear flow behavior in their vicinity; linearity is therefore ubiquitous. By exploiting local linear flow behavior, several visualization methods have already been improved significantly, and similar gains can be expected for other methods. 
This thesis develops the use of linearity in visualization further. First, an existing definition of linear neighborhoods is extended to affine linear neighborhoods: regions of mostly linear flow behavior. A detailed discussion of the definition and of the chosen error measures is provided, and a region-growing algorithm is introduced that extracts affine linear neighborhoods around arbitrary positions up to a user-defined approximation error threshold. To measure local linearity in vector fields, a complementary approach is discussed that computes the quality of the best possible linear approximation for a given n-ring neighborhood. 
As a first application, affine linear neighborhoods around stationary points are used to visualize their region of influence, their interaction with the surrounding non-linear flow, and their interaction with closely neighboring stationary points. The analytic description of the flow within a linearized region can also be used to compress vector fields and accelerate existing visualization approaches, especially for very large datasets. In particular, the presented method improves on a series of compression algorithms for grid-based vector fields that are based on edge collapse. In contrast to previous approaches, affine linear neighborhoods serve as the basis for a segmentation that provides an upper error bound and thereby ensures a high quality of the compression results. To evaluate different compression approaches, the impact of their approximation errors on streamline integration and on integration-based visualization methods is discussed, using finite-time Lyapunov exponent computations as an example. The thesis concludes with a first possible extension of linearity to vector fields on two-dimensional manifolds, based on an adaptive, atlas-based vector field decomposition.
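The "best possible linear approximation" error measure discussed in this record can be sketched as an ordinary least-squares fit of an affine field v(x, y) ≈ A·(x, y) + b to flow samples. The solver and the maximum-residual error measure below are illustrative assumptions, not the thesis's exact definitions:

```python
import math

def solve3(M, r):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    A = [row[:] + [ri] for row, ri in zip(M, r)]
    for c in range(3):
        p = max(range(c, 3), key=lambda i: abs(A[i][c]))
        A[c], A[p] = A[p], A[c]
        for i in range(c + 1, 3):
            f = A[i][c] / A[c][c]
            for j in range(c, 4):
                A[i][j] -= f * A[c][j]
    x = [0.0] * 3
    for i in range(2, -1, -1):
        x[i] = (A[i][3] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

def affine_fit_error(points, vectors):
    """Fit v(x, y) ~ (a11*x + a12*y + b1, a21*x + a22*y + b2) by least
    squares (normal equations) and return the maximum residual norm."""
    G = [[0.0] * 3 for _ in range(3)]          # Gram matrix of (x, y, 1) rows
    for (x, y) in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                G[i][j] += row[i] * row[j]
    coeffs = []
    for comp in range(2):                       # fit each velocity component
        r = [0.0] * 3
        for (x, y), v in zip(points, vectors):
            for i, g in enumerate((x, y, 1.0)):
                r[i] += g * v[comp]
        coeffs.append(solve3(G, r))
    worst = 0.0
    for (x, y), v in zip(points, vectors):
        res = [coeffs[c][0] * x + coeffs[c][1] * y + coeffs[c][2] - v[c]
               for c in range(2)]
        worst = max(worst, math.hypot(*res))
    return worst
```

For samples drawn from an exactly affine field the error is numerically zero; a nonlinear field leaves a positive residual, which is the quantity a region-growing criterion of this kind would compare against a user-defined threshold.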
15

Fundamental Limits of Communication Channels under Non-Gaussian Interference

Le, Anh Duc 04 October 2016 (has links)
No description available.
16

Reliable Communications under Limited Knowledge of the Channel

Yazdani, Raman Unknown Date
No description available.
17

Projeção diamétrica com base em dados observados antes e após o desbaste em povoamentos de eucalipto / Diameter distribution projection based on data observed before and after thinning in eucalyptus stands

Lacerda, Talles Hudson Souza 16 February 2017 (has links)
The objective of this work was to evaluate, from a statistical and biological point of view, simulations performed by two diameter distribution models, fitted by the linear approximation and maximum likelihood methods, in eucalyptus plantations submitted to thinning. The data came from a hybrid Eucalyptus grandis x Eucalyptus urophylla stand under a thinning regime, located in northeastern Bahia and belonging to the company BAHIA SPECIALTY CELLULOSE, measured at ages 27, 40, 50, 61, 76, 87, 101, 112, 122, 137, 147, 158, and 165 months. The stand received selective removal treatments of 20%, 35%, and 50% at ages 58 and 142 months. Two diameter distribution models were used, with databases observed at 27 months (before the first thinning), 61 months (after the first thinning), and 147 months (after the second thinning). 
From these models, three systems were generated, differing in the method used to fit the Weibull function: in system 1 the Weibull parameters were fitted by the linear approximation method, while in systems 2 and 3 they were fitted by maximum likelihood. The projections produced by the systems were compared with the observed diameter distributions using the Kolmogorov-Smirnov goodness-of-fit test at 1% significance and Graybill's F test at 5% significance. All three systems produced projected diameter distributions statistically similar to those observed, before and after thinning; system 2 showed the highest percentage of non-significant projections for both statistical tests. The simulations showed statistical realism and reproduced the growth trend of the diameter distribution for the different thinning percentages, and the models were more efficient when the observed diameter distributions used came from ages immediately before thinning. Projections based on the distributions observed before the first thinning and immediately after each thinning (simulations 1, 2, and 3) were more accurate than projections that used only the distribution observed before the first thinning as the initial basis, then simulated the thinnings at the scheduled ages and, finally, projected the estimated post-thinning remaining distribution to subsequent ages (simulations 4, 5, and 6).
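The two fitting methods this study compares can be illustrated with the linearized ("linear approximation") fit of a two-parameter Weibull to diameters, together with the Kolmogorov-Smirnov adherence statistic used to judge the projections. This is a generic sketch, not the study's code; the plotting positions, demo parameters, and variable names are my own assumptions:

```python
import math
import random

def weibull_fit_linearized(diameters):
    """Fit a two-parameter Weibull (shape c, scale b) via the linearization
    ln(-ln(1 - F)) = c*ln(d) - c*ln(b), with median-rank plotting positions."""
    d = sorted(diameters)
    n = len(d)
    xs, ys = [], []
    for i, di in enumerate(d, start=1):
        F = (i - 0.3) / (n + 0.4)                  # Benard median-rank position
        xs.append(math.log(di))
        ys.append(math.log(-math.log(1.0 - F)))
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    c = slope
    b = math.exp(-intercept / c)
    return c, b

def ks_statistic(diameters, c, b):
    """Kolmogorov-Smirnov distance between the empirical CDF and Weibull(c, b)."""
    d = sorted(diameters)
    n = len(d)
    worst = 0.0
    for i, di in enumerate(d, start=1):
        F = 1.0 - math.exp(-((di / b) ** c))
        worst = max(worst, abs(F - i / n), abs(F - (i - 1) / n))
    return worst

# Demo on synthetic diameters with known (invented) parameters,
# generated by inverse-transform sampling:
rng = random.Random(42)
c_true, b_true = 2.5, 15.0
sample = [b_true * (-math.log(1.0 - rng.random())) ** (1.0 / c_true)
          for _ in range(500)]
c_hat, b_hat = weibull_fit_linearized(sample)
```

On synthetic data the linearized fit recovers the parameters closely and the KS distance stays small; the study applies the analogous comparison between projected and observed distributions.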
18

Borcení časové osy v oblasti biosignálů / Dynamic Time Warping in Biosignal Processing

Kubát, Milan January 2014 (has links)
This work is dedicated to dynamic time warping in biosignal processing, especially its application to ECG signals. First, theoretical notes on cardiography are summarized. The DTW analysis then follows, along with an assessment of the conditions and requirements for its successful application. Next, several variants and possible applications are described. The practical part covers the design of the method, interpretation of its outputs, optimization of its settings, and the implementation of DTW-related methods.
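The core DTW dynamic program applied here to ECG signals can be sketched in a few lines. The local cost (absolute difference) and the unconstrained warping window are illustrative defaults, not this thesis's exact settings:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences,
    via the classic O(len(a) * len(b)) dynamic program."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Because the path may stretch one sequence against the other, DTW tolerates the beat-to-beat timing variability of ECG that a point-wise Euclidean distance cannot.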
19

Výpočet optimálního skluzového kmitočtu asynchronního motoru pro minimalizaci ztrát / Calculation of optimum slip frequency of induction motor for minimisation of its losses

Bednařík, Václav January 2014 (has links)
This master's thesis focuses on minimising the losses of an induction motor by calculating its optimum slip frequency. A further topic is magnetic oversaturation: it must be accounted for when evaluating the losses, because with increasing saturation the current does not grow proportionally but several times faster, and this effect is included in the calculation of the slip frequency. The optimum slip frequency is derived for a modified gamma model of the induction machine. The main part of the thesis outlines the procedure by which the optimum can be found; equations had previously been derived by the same procedure, but they were too extensive. Finally, the optimum slip frequency is determined from the minimum with respect to the flux density.
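The idea of searching numerically for a loss-minimising slip frequency can be illustrated with a deliberately toy loss model; the constants and the 1/f + f shape are invented for this sketch and are not the thesis's gamma-model equations:

```python
def golden_section_min(f, lo, hi, tol=1e-8):
    """Minimise a unimodal function on [lo, hi] by golden-section search."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

# Toy loss model at constant torque: one loss term falls with slip
# frequency, another (saturation-driven) rises; k1, k2 are made up.
k1, k2 = 40.0, 2.5

def losses(fs):
    return k1 / fs + k2 * fs

fs_opt = golden_section_min(losses, 0.1, 20.0)
```

For this toy model the optimum has the closed form sqrt(k1/k2), which the numeric search recovers; a realistic loss model derived from the machine equations would be plugged into the same search.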
20

Stochastic Combinatorial Optimization / Optimisation combinatoire stochastique

Cheng, Jianqiang 08 November 2013 (has links)
In this thesis we study three types of stochastic problems: chance-constrained problems, distributionally robust problems, and simple recourse problems. Stochastic programming raises two main difficulties: the feasible set of a stochastic problem is in general not convex, and evaluating conditional expectations or probabilities requires multi-dimensional integration. Because of these difficulties, we solve all three classes of problems with approximation approaches. 
We first study two types of chance-constrained problems: linear programs with joint chance constraints (LPPC) and maximum probability problems (MPP). For both, we assume that the random matrix is normally distributed and that its row vectors are independent. LPPC, which is generally non-convex, is approximated by two second-order cone programming (SOCP) problems; under mild conditions, the optimal values of the two SOCPs are respectively a lower and an upper bound of the original problem. For MPP, we study a variant of the stochastic resource-constrained shortest path problem (SRCSP) that maximizes the probability of satisfying the resource constraints, and we propose a branch-and-bound algorithm to compute the optimal solution; since the corresponding linear relaxation is generally not convex, we provide an efficient convex approximation. Numerical tests on random instances show that our approach outperforms the Bonferroni and Jagannathan approximations for LPPC, and the individual chance-constraint approximation for MPP. 
We then study a distributionally robust stochastic quadratic knapsack problem in which only partial information on the random variables is known, namely their first and second moments. We prove that the single knapsack problem (SKP) becomes a semidefinite program (SDP) after applying the SDP relaxation scheme to the binary constraints. Although this result does not extend to the multidimensional knapsack problem (MKP), we provide two approximations whose upper and lower bounds are numerically close to each other for a range of instances. Our experiments indicate that the proposed lower-bounding approximation outperforms the approximations based on Bonferroni's inequality and on the more recent work of Zymler et al., and an extensive set of experiments illustrates how the conservativeness of the robust solutions pays off in keeping the chance constraint satisfied (or nearly satisfied) under a wide range of distribution fluctuations. Moreover, the approach applies to a large family of stochastic optimization problems with binary variables. 
Finally, a stochastic version of the shortest path problem is studied. We prove that in some cases it can be greatly simplified by reformulating it as the classic deterministic shortest path problem, which is solvable in polynomial time. For the general problem, we propose a branch-and-bound method over the set of feasible paths, where lower bounds are obtained by solving the continuous relaxation with a stochastic projected gradient algorithm involving an active-set method. Numerical examples illustrate the effectiveness of the algorithm; concerning the resolution of the continuous relaxation, our stochastic projected gradient algorithm clearly outperforms the Matlab optimization toolbox on large graphs.
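The SOCP approximations of Gaussian chance constraints discussed in this record rest on the classical deterministic equivalent of an individual constraint: P(xi^T x <= b) >= alpha iff mu^T x + Phi^{-1}(alpha) * sqrt(x^T Sigma x) <= b. A minimal sketch, assuming independent normal coefficients (diagonal Sigma) and made-up numbers, with a Monte Carlo check of tightness:

```python
import random
from statistics import NormalDist

def chance_constraint_lhs(x, mu, sigma2, alpha):
    """Deterministic-equivalent left-hand side of the individual Gaussian
    chance constraint P(xi^T x <= b) >= alpha, for independent coefficients
    xi_i ~ N(mu_i, sigma2_i):  mu^T x + Phi^{-1}(alpha) * ||diag(sigma)x||."""
    mean = sum(m * xi for m, xi in zip(mu, x))
    std = sum(s * xi * xi for s, xi in zip(sigma2, x)) ** 0.5
    return mean + NormalDist().inv_cdf(alpha) * std

# Monte Carlo check: with b set exactly to the deterministic left-hand side,
# the random constraint holds with probability ~ alpha (the equivalent is tight).
x, mu, sigma2, alpha = [1.0, 2.0], [1.0, 0.5], [0.25, 1.0], 0.9
b = chance_constraint_lhs(x, mu, sigma2, alpha)
rng = random.Random(0)
N = 200_000
hits = 0
for _ in range(N):
    xi = [rng.gauss(m, s ** 0.5) for m, s in zip(mu, sigma2)]
    if sum(c * xc for c, xc in zip(xi, x)) <= b:
        hits += 1
```

Because the left-hand side is an affine term plus a positively weighted Euclidean norm of x, the constraint is second-order-cone representable, which is what makes the SOCP bounds in the thesis tractable.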
