  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

Kooperativní hry s částečnou informací / Cooperative games with partial information

Černý, Martin January 2021 (has links)
Partially defined cooperative games are a generalisation of classical cooperative games in which the worth of some of the coalitions is not known. They are therefore one of the possible approaches to uncertainty in cooperative game theory. The main focus of this thesis is to collect and extend the existing results in this theory. We present results on superadditivity, convexity, positivity and 1-convexity of incomplete games. For each of these properties, a description of the set of all possible extensions (complete games extending the incomplete game) is studied. Different subclasses of incomplete games are considered, among others incomplete games with minimal information, incomplete games with a defined upper vector, and symmetric incomplete games. Some of the results also apply to fully generalised games. For superadditivity and 1-convexity, solution concepts (considering only partial information) are introduced and studied. For 1-convexity in particular, a thorough investigation of the defined solution concepts, consisting of different characterisations, is provided.
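For orientation, here is a minimal sketch of the standard definitions behind the abstract's terminology (the notation is ours, not quoted from the thesis): an incomplete game specifies the worth of only some coalitions, and an extension is any complete game agreeing with it, for instance a superadditive one.

```latex
\[
\text{An incomplete game is a triple } (N, \mathcal{K}, v), \qquad
\emptyset \in \mathcal{K} \subseteq 2^{N}, \qquad
v\colon \mathcal{K} \to \mathbb{R},\ v(\emptyset) = 0.
\]
\[
S(v) = \Bigl\{\, w\colon 2^{N} \to \mathbb{R} \;\Bigm|\;
w(S) = v(S)\ \forall S \in \mathcal{K},\;
w(S \cup T) \ge w(S) + w(T)\ \forall S, T \subseteq N,\ S \cap T = \emptyset \,\Bigr\}
\]
```

The thesis studies when sets of extensions such as \(S(v)\) are nonempty and how they can be described; analogous sets are defined for convexity, positivity and 1-convexity.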
272

Deep Learning-based Regularizers for Cone Beam Computed Tomography Reconstruction / Djupinlärningsbaserade regulariserare för rekonstruktion inom volymtomografi

Syed, Sabina, Stenberg, Josefin January 2023 (has links)
Cone Beam Computed Tomography is a technology for visualizing the 3D interior anatomy of a patient, and it is important for image-guided radiation therapy in cancer treatment. During a scan, iterative methods are often used for the image reconstruction step. A key challenge is the ill-posedness of the resulting inversion problem, which causes the images to become noisy. To combat this, regularizers can be introduced to help stabilize the problem. This thesis focuses on Adversarial Convex Regularization, which uses deep learning to regularize the scans toward a target image quality. It can be interpreted in a Bayesian setting by letting the regularizer be the prior, approximating the likelihood with the measurement error, and obtaining the patient image through the maximum-a-posteriori estimate. Adversarial Convex Regularization has previously shown promising results in regular Computed Tomography, and this study aims to investigate its potential in Cone Beam Computed Tomography. Three learned regularization methods have been developed, all based on Convolutional Neural Network architectures. One model is based on three-dimensional convolutional layers, while the remaining two rely on 2D layers; these two are later adapted to 3D reconstruction, either by stacking a 2D model or by averaging 2D models trained in three orthogonal planes. All neural networks are trained on simulated male pelvis data provided by Elekta. The 3D convolutional neural network model has proven to be heavily memory-consuming while not performing better than current reconstruction methods with respect to image quality. The two architectures based on merging multiple 2D neural network gradients for 3D reconstruction are novel contributions that avoid the memory issues. These two models outperform current methods on multiple image quality metrics, such as Peak Signal-to-Noise Ratio and Structural Similarity Index Measure, and they also generalize well to real Cone Beam Computed Tomography data. Additionally, the architecture based on a weighted average of 2D neural networks captures spatial interactions to a larger extent and can be adjusted to favor the plane that best shows the field of interest, a possibly desirable feature in medical practice.
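As a rough illustration of the MAP interpretation described above, the following sketch minimizes a data-fidelity term plus a regularization penalty by gradient descent. The quadratic stand-in regularizer, the toy forward operator, and all names here are illustrative assumptions, not the thesis code or Elekta's pipeline; in Adversarial Convex Regularization the penalty gradient would come from a trained input-convex network.

```python
import numpy as np

def map_reconstruct(A, y, reg_grad, lam=0.1, step=1e-3, iters=500):
    """Gradient descent on ||Ax - y||^2 + lam * R(x), i.e. a
    maximum-a-posteriori estimate with prior proportional to exp(-lam*R)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2 * A.T @ (A @ x - y) + lam * reg_grad(x)
        x -= step * grad
    return x

# Toy stand-in for a learned convex regularizer: R(x) = ||x||^2 / 2,
# so its gradient is simply x.
reg_grad = lambda x: x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))   # toy forward operator (not a CBCT projector)
x_true = rng.standard_normal(30)
y = A @ x_true + 0.05 * rng.standard_normal(50)
x_hat = map_reconstruct(A, y, reg_grad)
print(np.linalg.norm(x_hat - x_true))  # small reconstruction error
```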
273

Solving support vector machine classification problems and their applications to supplier selection

Kim, Gitae January 1900 (has links)
Doctor of Philosophy / Department of Industrial & Manufacturing Systems Engineering / Chih-Hang Wu / Recently, interdisciplinary collaboration research (management, engineering, science, and economics) has been growing to achieve synergy and to compensate for the weaknesses of each discipline. Along this trend, this research combines three topics: mathematical programming, data mining, and supply chain management. A new pegging algorithm is developed for solving the continuous nonlinear knapsack problem. An efficient solution approach is proposed for the ν-support vector machine classification problem in the field of data mining; the new pegging algorithm is used to solve the subproblem of the support vector machine problem. For supply chain management, this research proposes an efficient integrated solution approach for the supplier selection problem, in which the support vector machine is applied to select potential suppliers. In the first part of this research, a new pegging algorithm solves the continuous nonlinear knapsack problem with box constraints. The problem is to minimize a convex and differentiable nonlinear function subject to one equality constraint and box constraints. A pegging algorithm needs to calculate the primal variables to check the bounds on the variables at each iteration, which is frequently a time-consuming task. The newly proposed dual bound algorithm checks the bounds of the Lagrange multipliers without explicitly calculating the primal variables at each iteration. In addition, the calculation of the dual solution at each iteration is reduced by a proposed new method for updating the solution. In the second part, this research proposes several streamlined solution procedures for ν-support vector machine classification. The main solution procedure is a matrix splitting method: a specialized matrix splitting method combined with the gradient projection method, a line search technique, and the incomplete Cholesky decomposition. The proposed method can use a variety of methods for the line search and parameter updating, and large-scale problems are solved with the incomplete Cholesky decomposition and some efficient implementation techniques. To apply the research findings to real-world problems, this research develops an efficient integrated approach for supplier selection using the support vector machine and mixed integer programming. Supplier selection is an essential step in procurement processes. For companies seeking to maximize profits and reduce costs, supplier selection requires finding satisfactory suppliers and allocating proper orders to them. In the early stage of supplier selection, a company can use support vector machine classification to choose potential qualified suppliers according to specific criteria. However, the company may not need to purchase from all qualified suppliers. Once the company determines the amount of raw materials and components to purchase, it selects final suppliers and orders optimal quantities from them at the final stage of the process; a mixed integer programming model is used to determine the final suppliers and allocate the optimal orders at this stage.
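The continuous nonlinear knapsack problem mentioned above admits a classical multiplier-search solution. The sketch below uses bisection on the Lagrange multiplier of the equality constraint for a quadratic instance with box constraints; it illustrates the problem class only and is not the dissertation's pegging or dual bound algorithm.

```python
import numpy as np

def knapsack_bisection(c, b, lo, hi, tol=1e-10):
    """Solve min sum c_i x_i^2  s.t.  sum x_i = b, lo <= x <= hi (c > 0)
    by bisection on the Lagrange multiplier t of the equality constraint.
    For fixed t, the stationarity condition 2 c_i x_i = t gives
    x_i = t / (2 c_i), clipped to the box; sum x_i(t) is nondecreasing in t."""
    def x_of(t):
        return np.clip(t / (2 * c), lo, hi)
    # Bracket: at t_lo all variables sit at lo, at t_hi all sit at hi.
    t_lo, t_hi = 2 * np.min(c * lo), 2 * np.max(c * hi)
    while t_hi - t_lo > tol:
        t = 0.5 * (t_lo + t_hi)
        if x_of(t).sum() < b:
            t_lo = t
        else:
            t_hi = t
    return x_of(0.5 * (t_lo + t_hi))

c = np.array([1.0, 2.0, 4.0])
lo, hi = np.zeros(3), np.ones(3)
x = knapsack_bisection(c, b=1.5, lo=lo, hi=hi)
print(x, x.sum())  # feasible allocation summing to 1.5
```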
274

Portfolio optimization problems : a martingale and a convex duality approach

Tchamga, Nicole Flaure Kouemo 12 1900 (has links)
Thesis (MSc (Mathematics))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: The first approach, initiated by Merton [Mer69, Mer71], to solve utility maximization portfolio problems in continuous time is based on stochastic control theory. Merton's idea was to interpret the portfolio maximization problem as a stochastic control problem, where the trading strategies are the control process and the portfolio wealth is the controlled process. Merton derived the Hamilton-Jacobi-Bellman (HJB) equation, and for the special cases of power, logarithmic and exponential utility functions he produced a closed-form solution. A principal disadvantage of this approach is that it requires the Markov property for the stock prices. The so-called martingale method represents the second approach for solving utility maximization portfolio problems in continuous time. It was introduced, in different variants, by Pliska [Pli86], Cox and Huang [CH89, CH91] and Karatzas et al. [KLS87]. It is built on convex duality arguments and allows one to transform the initial dynamic portfolio optimization problem into a static one and to solve it without requiring any Markov assumption. A definitive answer (necessary and sufficient conditions) to the utility maximization portfolio problem for terminal wealth was obtained by Kramkov and Schachermayer [KS99].

In this thesis, we study the convex duality approach to the expected utility maximization problem (from terminal wealth) in continuous-time stochastic markets, which, as already mentioned, can be traced back to the seminal work of Merton [Mer69, Mer71]. Before we detail the structure of the thesis, we emphasize that the starting point of our work is Chapter 7 of the recent textbook by Pham [P09]. However, as the careful reader will notice, we have deepened and added important notions and results (such as the study of the upper (lower) hedge; the characterization of the essential supremum of all possible prices, for which compare Theorem 7.2.2 in Pham [P09] with our Theorem 2.4.9; the dynamic programming equation 2.31; and the superhedging theorem 2.6.1), and we have made a considerable effort in the proofs. Indeed, several proofs of theorems in Pham [P09] have serious gaps (not to mention typos) and even flaws (see, for example, the proof of Proposition 7.3.2 in Pham [P09] and our proof of Proposition 3.4.8).

In the first chapter, we state the expected utility maximization problem and motivate the convex dual approach following an illustrative example by Rogers [KR07, R03]. We also briefly review the von Neumann-Morgenstern Expected Utility Theory. In the second chapter, we begin by formulating the superreplication problem as introduced by El Karoui and Quenez [KQ95]. The fundamental result in the literature on super-hedging is the dual characterization of the set of all initial endowments leading to a super-hedge of a European contingent claim. El Karoui and Quenez [KQ95] first proved the superhedging theorem 2.6.1 in an Itô diffusion setting, and Delbaen and Schachermayer [DS95, DS98] generalized it to, respectively, a locally bounded and an unbounded semimartingale model, using a Hahn-Banach separation argument. The superreplication problem inspired a very nice result in stochastic analysis, the optional decomposition theorem for supermartingales 2.4.1. This important theorem, introduced by El Karoui and Quenez [KQ95] and extended in full generality by Kramkov [Kra96], is stated in Section 2.4 and proved at the end of Section 2.7.

The third chapter forms the theoretical core of this thesis; it contains the statement and a detailed proof of the famous Kramkov-Schachermayer theorem, which addresses the duality of utility maximization portfolio problems. Firstly, we show in Lemma 3.2.1 how to transform the dynamic utility maximization problem into a static maximization problem. This is done thanks to the dual representation of the set of European contingent claims that can be dominated (or super-hedged) almost surely from an initial endowment x and an admissible self-financing portfolio strategy, given in Corollary 2.5 and obtained as a consequence of the optional decomposition of supermartingales. Secondly, under some assumptions on the utility function, the existence and uniqueness of the solution to the static problem is given in Theorem 3.2.3. Because the solution of the static problem is not easy to find, we look at it in its dual form; we therefore synthesize the dual problem from the primal problem using convex conjugate functions. Before stating the Kramkov-Schachermayer Theorem 3.4.1, we present the Inada condition and the asymptotic elasticity condition for utility functions. For the sake of clarity, we divide the long and technical proof of the Kramkov-Schachermayer Theorem 3.4.1 into several lemmas and propositions of independent interest, where the required assumptions are clearly indicated for each step of the proof. The key argument in the proof is an infinite-dimensional version of the minimax theorem (the classical method of finding a saddle point for the Lagrangian is not enough in our situation), which is central in the theory of Lagrange multipliers; for this, we state and prove the technical Lemmata 3.4.5 and 3.4.6. The main steps in the proof of the Kramkov-Schachermayer Theorem 3.4.1 are as follows. We show in Proposition 3.4.9 that the solution to the dual problem exists, and we characterize it in Proposition 3.4.12. From the construction of the dual problem, we find a set of necessary and sufficient conditions (3.1.1), (3.1.2), (3.3.1) and (3.3.7) for the primal and dual problems to each have a solution. Using these conditions, we show the existence of the solution to the given problem and characterize it in terms of the market parameters and the solution to the dual problem.

In the last chapter we present and study concrete examples of the utility maximization portfolio problem in specific markets. First, we consider the complete market case, where closed-form solutions are easily obtained; the detailed solution to the classical Merton problem with power utility is provided. Lastly, we deal with incomplete markets under Itô processes in the Brownian filtration framework; the solutions for the logarithmic and power utility functions are presented.
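In standard notation (ours, not quoted from the thesis), the primal and dual problems linked by the Kramkov-Schachermayer theorem can be summarized as:

```latex
\[
u(x) = \sup_{X \in \mathcal{X}(x)} \mathbb{E}\bigl[U(X_T)\bigr],
\qquad
v(y) = \inf_{Y \in \mathcal{Y}(y)} \mathbb{E}\bigl[V(Y_T)\bigr],
\qquad
V(y) = \sup_{x > 0}\bigl[U(x) - xy\bigr],
\]
\[
u(x) = \inf_{y > 0}\bigl[v(y) + xy\bigr],
\]
```

where \(\mathcal{X}(x)\) is the set of admissible wealth processes starting from \(x\), \(\mathcal{Y}(y)\) the corresponding dual domain, and the conjugacy relation holds under the Inada and asymptotic elasticity conditions mentioned above.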
275

Cross-layer design for OFDMA wireless networks with finite queue length based on game theory

Nikolaros, Ilias G. January 2014 (has links)
In next-generation wireless networks such as 4G LTE and WiMAX, the demand for high data rates, the scarcity of wireless resources, and time-varying channel conditions have led to the adoption of more sophisticated and robust PHY techniques such as orthogonal frequency division multiplexing (OFDM) and the corresponding access technique, orthogonal frequency division multiple access (OFDMA). Cross-layer schedulers have been developed to describe the procedure of resource allocation in OFDMA wireless networks. Resource allocation in OFDMA wireless networks has received great attention in research, with many different proposals for exploiting frequency diversity and optimizing the system. Many cross-layer proposals for dynamic resource allocation have been investigated in the literature, approaching the optimization problem from different viewpoints: maximizing total data rate, minimizing total transmit power, satisfying minimum user requirements, or providing fairness among users. The design of a cross-layer scheduler for OFDMA wireless networks is the topic of this research. The scheduler utilizes game theory to make decisions on subcarrier and power allocation to users, with the main concern being to maintain fairness while maximizing overall system performance. A well-known solution concept in cooperative game theory, the Nash Bargaining Solution (NBS), is employed and solved in closed form, resulting in a Pareto-optimal solution. Two different cases are proposed. The first is the symmetric NBS (S-NBS), where all users have the same weight and therefore the same opportunity for resources; the second is the asymmetric NBS (A-NBS), where users have different weights and hence different priorities, so the scheduler favours users with higher priorities at the expense of lower-priority users. As the MAC layer is vital for cross-layer design, the scheduler is combined with a queuing model based on a Markov chain to describe more realistically the arrival process from the higher layers.
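As a toy illustration of the weighted NBS idea (assuming, for simplicity, a single total-rate constraint rather than per-subcarrier OFDMA allocation), the bargaining outcome has a direct closed form:

```python
import numpy as np

def nbs_rates(C, d, w):
    """Asymmetric Nash Bargaining Solution for rate allocation under a
    single total-capacity constraint sum R_i = C:
        maximize prod (R_i - d_i)^{w_i}  ==  maximize sum w_i log(R_i - d_i).
    Stationarity gives R_i = d_i + w_i * (C - sum d) / sum w."""
    d, w = np.asarray(d, float), np.asarray(w, float)
    surplus = C - d.sum()
    assert surplus > 0, "capacity must exceed the disagreement point"
    return d + w * surplus / w.sum()

# Three users, equal weights: symmetric NBS splits the surplus equally.
print(nbs_rates(10.0, d=[1.0, 2.0, 3.0], w=[1, 1, 1]))
# A higher weight earns a larger share: asymmetric NBS.
print(nbs_rates(10.0, d=[1.0, 2.0, 3.0], w=[2, 1, 1]))
```

The disagreement points d_i play the role of minimum user requirements, which is why the NBS scheduler can guarantee them while distributing the remaining capacity by priority.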
276

Supervised Descent Method

Xiong, Xuehan 01 September 2015 (has links)
In this dissertation, we focus on solving nonlinear least squares problems using a supervised approach. In particular, we developed a Supervised Descent Method (SDM), performed a thorough theoretical analysis, and demonstrated its effectiveness on optimizing analytic functions and on four real-world applications: Inverse Kinematics, Rigid Tracking, Face Alignment (frontal and multi-view), and 3D Object Pose Estimation. In Rigid Tracking, SDM was able to take advantage of more robust features, such as HoG and SIFT; such non-differentiable image features were not considered by previous work, which relied on gradient-based methods for optimization. In Inverse Kinematics, where we minimize a non-convex function, SDM achieved significantly better convergence than gradient-based approaches. In Face Alignment, SDM achieved state-of-the-art results; moreover, it is extremely computationally efficient, which makes it applicable to many mobile applications. In addition, we provide a unified view of several popular methods, including SDM, on sequential prediction, reformulating them as a sequence of function compositions. Finally, we suggest some future research directions for SDM and sequential prediction.
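A minimal sketch of the SDM idea on a one-dimensional toy problem follows; the function, feature map, and stage count are illustrative assumptions, not the dissertation's setup. The point is that each stage fits a linear map from feature residuals to parameter updates, so no gradient of the feature function is ever needed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy nonlinear least squares: recover x with h(x) = y*. In practice h
# could be a non-differentiable feature extractor (e.g. HoG/SIFT).
h = lambda x: x ** 3
x_star, y_star = 2.0, 8.0

# Training: sample perturbed starting points, then regress one descent
# map R per stage that sends the feature residual to the ideal update.
stages = []
X = x_star + rng.normal(0.0, 0.5, size=200)
for _ in range(4):
    phi = h(X) - y_star                  # feature residuals
    delta = x_star - X                   # ideal updates
    R = (phi @ delta) / (phi @ phi)      # 1-D least-squares fit
    stages.append(R)
    X = X + R * phi                      # apply the learned descent step

# Test: run the learned cascade from a new starting point.
x = 1.2
for R in stages:
    x = x + R * (h(x) - y_star)
print(x)  # approaches 2.0
```

In higher dimensions phi is a feature vector, R a matrix learned by ridge regression, and the same cascade structure applies.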
277

Algoritmes vir die maksimering van konvekse en verwante knapsakprobleme / Algorithms for the maximisation of convex and related knapsack problems

Visagie, Stephan E. 03 1900 (has links)
Thesis (PhD (Logistics))--University of Stellenbosch, 2007. / In this dissertation original algorithms are introduced to solve separable resource allocation problems (RAPs) with increasing nonlinear functions in the objective function and lower and upper bounds on each variable. Algorithms are introduced for three special cases. The first case arises when the objective function of the RAP consists of a sum of convex functions and all variables range over the same interval. In the second case, RAPs with a sum of convex functions in the objective are considered, but the variables may range over different intervals. In the last special case, RAPs with an objective comprising a sum of convex and concave functions are considered; here too the variables may range over different intervals. For the first case, two new algorithms, the fraction algorithm and the slope algorithm, are presented; both yield far better solution times than the existing branch and bound algorithm. A new heuristic and three new algorithms are presented for RAPs in the second case. The iso-bound heuristic yields, on average, good solutions relative to the optimal objective function value in faster times than exact algorithms. The three algorithms, namely the iso-bound algorithm, the branch and cut algorithm, and the iso-bound branch and cut algorithm, also yield considerably better solution times than the existing branch and bound algorithm. It is shown that, on average, the iso-bound branch and cut algorithm yields the fastest solution times, followed by the iso-bound algorithm and then the branch and cut algorithm. In the third case, the necessary and sufficient conditions for optimality are considered. From this, the conclusion is drawn that searching for points satisfying the necessary conditions would take too long relative to branch and bound techniques. Thus three new algorithms, the KL, SKL and IKL algorithms, are introduced for this case. These algorithms are generalisations of the branch and bound, branch and cut, and iso-bound algorithms respectively. The KL algorithm is used as a benchmark; only the IKL algorithm yields a considerable improvement on it.
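For orientation, the separable RAP studied here has the following general form (standard notation, not quoted from the dissertation):

```latex
\[
\max \; \sum_{i=1}^{n} f_i(x_i)
\quad \text{s.t.} \quad \sum_{i=1}^{n} x_i = b,
\qquad l_i \le x_i \le u_i, \quad i = 1, \dots, n,
\]
```

where each \(f_i\) is increasing on \([l_i, u_i]\) and, depending on the case, convex or concave; the three cases above differ in whether the intervals \([l_i, u_i]\) coincide and in the mix of convex and concave terms.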
278

The design of transmitter/receiver and high speed analog to digital converters in wireless communication systems: a convex programming approach

Zhao, Shaohua, 趙少華 January 2008 (has links)
published_or_final_version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
279

Decentralized probabilistic density control of swarm of autonomous agents with conflict avoidance constraints

Demir, Nazlı 01 October 2014 (has links)
This report describes a method to control the density distribution of a large number of autonomous agents. The approach is based on the fact that, with a large number of agents in the system, the time evolution of the probabilistic density distribution of agents can be described as a Markov chain. The main contribution of this paper is the synthesis of a Markov matrix which guides the multi-agent system density to a desired steady-state distribution, in a probabilistic sense, while satisfying motion and safety constraints. An adaptive density control method based on real-time density feedback is also introduced to synthesize a time-varying Markov matrix, which leads to better convergence to the desired density distribution. Finally, a decentralized density computation method is described; it guarantees that all agents obtain a best, and common, density estimate in a finite number of communication updates, with an explicit bound.
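A minimal sketch of the core mechanism follows, using a Metropolis-Hastings construction as a stand-in synthesis of the Markov matrix; the paper's method additionally enforces conflict-avoidance constraints and adaptive feedback, which are omitted here:

```python
import numpy as np

def metropolis_matrix(pi, A):
    """Build a column-stochastic Markov matrix M whose stationary
    distribution is pi, moving probability mass only along edges allowed
    by the adjacency matrix A (a simple motion constraint). Proposal:
    uniform over allowed neighbors; Hastings acceptance for the
    non-uniform proposal: min(1, (pi_j * deg_i) / (pi_i * deg_j))."""
    n = len(pi)
    deg = A.sum(axis=1)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and A[i, j]:
                M[j, i] = min(1.0, (pi[j] * deg[i]) / (pi[i] * deg[j])) / deg[i]
        M[i, i] = 1.0 - M[:, i].sum()   # stay put with leftover probability
    return M

pi = np.array([0.1, 0.2, 0.3, 0.4])   # desired steady-state swarm density
A = np.array([[0, 1, 0, 0],           # bins connected in a line graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
M = metropolis_matrix(pi, A)

x = np.full(4, 0.25)                  # initial agent density
for _ in range(200):
    x = M @ x
print(x)                              # converges to pi
```

Each agent can apply column i of M as its local transition rule, which is what makes this style of density control decentralized.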
280

FINITE DISJUNCTIVE PROGRAMMING METHODS FOR GENERAL MIXED INTEGER LINEAR PROGRAMS

Chen, Binyuan January 2011 (has links)
In this dissertation, a finitely convergent disjunctive programming procedure, the Convex Hull Tree (CHT) algorithm, is proposed to obtain the convex hull of a general mixed-integer linear program with bounded integer variables. The CHT algorithm constructs a linear program that has the same optimal solution as the associated mixed-integer linear program. The standard notion of sequential cutting planes is then combined with ideas underlying the CHT algorithm to guide the choice of disjunctions within a new cutting plane method, the Cutting Plane Tree (CPT) algorithm. We show that the CPT algorithm converges to an integer optimal solution of a general mixed-integer linear program with bounded integer variables in finitely many steps. We also enhance the CPT algorithm with several techniques, including a "round-of-cuts" approach and an iterative method for solving the cut generation linear program (CGLP); two normalization constraints for the CGLP are discussed in detail. For moderately sized instances, our study shows that the CPT algorithm provides significant gap closures with a pure cutting plane method.
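For orientation, a textbook form of the CGLP for a single two-term (split) disjunction on \(P = \{x \ge 0 : Ax \ge b\}\) is sketched below; the dissertation's CPT algorithm works with more general disjunctions, and the normalization shown is only one standard choice:

```latex
\begin{align*}
\min_{\alpha, \beta, u, v, u_0, v_0} \;\; & \alpha^{\top} \bar{x} - \beta \\
\text{s.t.} \;\; & \alpha \ge A^{\top} u - u_0 \pi, & \beta &\le b^{\top} u - u_0 \pi_0, \\
& \alpha \ge A^{\top} v + v_0 \pi, & \beta &\le b^{\top} v + v_0 (\pi_0 + 1), \\
& u, v \ge 0, \quad u_0, v_0 \ge 0, & \mathbf{1}^{\top} u &+ \mathbf{1}^{\top} v + u_0 + v_0 = 1.
\end{align*}
```

A negative optimal value certifies that the cut \(\alpha^{\top} x \ge \beta\) separates the fractional point \(\bar{x}\) from the convex hull of \(P \cap \{\pi^{\top} x \le \pi_0\}\) and \(P \cap \{\pi^{\top} x \ge \pi_0 + 1\}\); the last constraint is one of the normalizations whose alternatives the dissertation compares.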
