361 |
Adaptive Load Management: Multi-Layered And Multi-Temporal Optimization Of The Demand Side In Electric Energy Systems. Joo, Jhi-Young, 01 September 2013.
Well-designed demand response is expected to play a vital role in operating power systems by reducing economic and environmental costs. However, the current system is operated without much information on the benefits of end-users, especially the small ones, who use electricity. This thesis proposes a framework of operating power systems with demand models including the diversity of end-users' benefits, namely adaptive load management (ALM). Since there are a large number of end-users having different preferences and conditions in energy consumption, the information on the end-users' benefits needs to be aggregated at the system level. This leads us to model the system in a multi-layered way, including end-users, load serving entities, and a system operator. On the other hand, the information of the end-users' benefits can be uncertain even to the end-users themselves ahead of time. This information is discovered incrementally as the actual consumption approaches and occurs. For this reason ALM requires a multi-temporal model of a system operation and end-users' benefits within. Due to the different levels of uncertainty along the decision-making time horizons, the risks from the uncertainty of information on both the system and the end-users need to be managed. The methodology of ALM is based on Lagrange dual decomposition that utilizes interactive communication between the system, load serving entities, and end-users. We show that under certain conditions, a power system with a large number of end-users can balance at its optimum efficiently over the horizon of a day ahead of operation to near real time. Numerical examples include designing ALM for the right types of loads over different time horizons, and balancing a system with a large number of different loads on a congested network. We conclude that with the right information exchange by each entity in the system over different time horizons, a power system can reach its optimum including a variety of end-users' preferences and their values of consuming electricity.
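As a rough illustration of the Lagrange dual decomposition idea described in this abstract (and not the thesis's actual formulation), the Python sketch below lets a hypothetical system operator iterate a price against a few end-users with made-up quadratic utilities until supply and demand balance; every name and number in it is an assumption chosen for illustration.

```python
import numpy as np

# Hypothetical quadratic utilities u_i(x) = a_i*x - 0.5*b_i*x^2 for each end-user,
# and a fixed supply S that the operator must allocate. All numbers are illustrative.
a = np.array([4.0, 3.0, 5.0])   # marginal benefit at zero consumption
b = np.array([1.0, 0.5, 2.0])   # curvature of each user's utility
S = 6.0                         # total available supply

price, step = 0.0, 0.2
for it in range(200):
    # Each end-user responds to the posted price by maximizing u_i(x) - price*x locally.
    demand = np.clip((a - price) / b, 0.0, None)
    imbalance = demand.sum() - S
    # The operator (the dual master problem) nudges the price by a subgradient step.
    price += step * imbalance
    if abs(imbalance) < 1e-6:
        break

print(f"clearing price ~ {price:.3f}, demands ~ {np.round(demand, 3)}")
```

Each user solves only its own small problem at the posted price, which is what lets the layered, aggregated structure described above scale to a large number of end-users.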
362 |
Modélisation du langage à l'aide de pénalités structurées (Language Modelling with Structured Penalties). Nelakanti, Anil Kumar, 11 February 2014.
Modeling natural language is among the fundamental challenges of artificial intelligence and the design of interactive machines, with applications spanning various domains, such as dialogue systems, text generation and machine translation. We propose a discriminatively trained log-linear model to learn the distribution of words following a given context. Due to data sparsity, it is necessary to appropriately regularize the model using a penalty term. We design a penalty term that properly encodes the structure of the feature space to avoid overfitting and improve generalization while appropriately capturing long-range dependencies. Some nice properties of specific structured penalties can be used to reduce the number of parameters required to encode the model. The outcome is an efficient model that suitably captures long dependencies in language without a significant increase in time or space requirements. In a log-linear model, both training and testing become increasingly expensive with a growing number of classes. The number of classes in a language model is the size of the vocabulary, which is typically very large. A common trick is to cluster classes and apply the model in two steps; the first step picks the most probable cluster and the second picks the most probable word from the chosen cluster. This idea can be generalized to a hierarchy of larger depth with multiple levels of clustering. However, the performance of the resulting hierarchical classifier depends on the suitability of the clustering to the problem. We study different strategies to build the hierarchy of categories from their observations.
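As a minimal sketch of the two-step (class-based) trick mentioned above, the following snippet factorizes the word probability through a cluster; the vocabulary, cluster assignment, and probability tables are made-up placeholders rather than the thesis's model.

```python
# Hypothetical two-level factorization:
#   p(word | context) = p(cluster(word) | context) * p(word | cluster(word), context)
clusters = {"the": 0, "a": 0, "cat": 1, "dog": 1}
p_cluster = {0: 0.6, 1: 0.4}                        # p(cluster | context)
p_word_in_cluster = {"the": 0.7, "a": 0.3,          # p(word | cluster, context)
                     "cat": 0.5, "dog": 0.5}

def word_prob(word):
    """Two-step lookup: pick the word's cluster, then the word within that cluster."""
    c = clusters[word]
    return p_cluster[c] * p_word_in_cluster[word]

print(word_prob("cat"))  # 0.4 * 0.5 = 0.2
# With |V| words split into roughly sqrt(|V|) clusters, each prediction normalizes
# over about sqrt(|V|) scores instead of |V|, which is the speed-up the trick targets.
```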
363 |
Effects of GPS Error on Animal Home Range Estimates. Hyzer, Garrett, 01 January 2012.
This study examined how variables related to habitat cover types can affect the positional accuracy of Global Positioning System (GPS) data and, subsequently, how wildlife home range analysis can be influenced when utilizing this inaccurate data. This study focused on measuring GPS accuracy relative to five habitat variables: open canopy, sparse canopy, dense canopy, open water, and building proximity. The study took place in Hillsborough County, in residential areas that contain all of these habitat types. Five GPS devices, designed for wildlife tracking purposes, were used to collect the data needed for this study. GPS data was collected under the aforementioned scenarios in order to induce error into the data sets. Each data set was defined as a 1-hour data collecting period, with a fix rate of 60 seconds, which resulted in 60 points per sample. The samples were analyzed to determine the magnitude of effect the five variables have on the positional accuracy of the data. Thirty samples were collected for each of the following scenarios: (1) open grassland with uninhibited canopy closure, (2) sparse vegetation canopy closure, (3) dense vegetation canopy closure, (4) close proximity to buildings (<2 m), and (5) open water with uninhibited canopy closure. Then, GPS errors (in terms of mean and maximum distance from the mean center of each sample) were calculated for each sample using a geographic information system (GIS). Confidence intervals were calculated for each scenario in order to evaluate and compare the levels of error. Finally, this data was used to assess the effect of positional uncertainty on home range estimation through the use of a minimum convex polygon home range estimation technique. Open grassland and open water cover types were found to introduce the least amount of positional uncertainty into the data sets. The sparse coverage cover type introduces a higher degree of error into data sets, while the dense coverage and building proximity cover types introduce the greatest amount of positional uncertainty into the data sets. When used to create minimum convex polygon home range estimates, these data sets show that the home range estimates are significantly larger when the positional error is unaccounted for as opposed to when it is factored into the home range estimate.
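To illustrate the minimum convex polygon estimator referred to above, here is a small Python sketch on made-up coordinates (not the study's data or code) showing how positional noise tends to inflate the MCP area.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Made-up GPS fixes in projected coordinates (metres); a real analysis would use
# the study's 60-fix samples per scenario.
rng = np.random.default_rng(0)
true_positions = rng.uniform(0, 100, size=(60, 2))
observed = true_positions + rng.normal(0, 5, size=true_positions.shape)  # add positional error

def mcp_area(points):
    """Area of the 100% minimum convex polygon around the points."""
    hull = ConvexHull(points)
    return hull.volume  # for 2-D input, ConvexHull.volume is the polygon area

print(f"MCP area without error: {mcp_area(true_positions):.1f} m^2")
print(f"MCP area with error:    {mcp_area(observed):.1f} m^2")
# Noisy fixes push the hull outward, inflating the home range estimate,
# which is the effect quantified in the study.
```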
364 |
Convex Optimization Methods for System Identification. Dautbegovic, Dino, January 2014.
The extensive use of a least-squares problem formulation in many fields is partly motivated by the existence of an analytic solution formula which makes the theory comprehensible and readily applicable, but also easily embedded in computer-aided design or analysis tools. While the mathematics behind convex optimization has been studied for about a century, several recent research efforts have stimulated a new interest in the topic. Convex optimization, being a special class of mathematical optimization problems, can be considered as a generalization of both least-squares and linear programming. As in the case of a linear programming problem, there is in general no simple analytical formula that can be used to find the solution of a convex optimization problem. There exist, however, efficient methods and software implementations for solving a large class of convex problems. The challenge and the state of the art in using convex optimization come from the difficulty in recognizing and formulating the problem. The main goal of this thesis is to investigate the potential advantages and benefits of convex optimization techniques in the field of system identification. The primary work focuses on parametric discrete-time system identification models in which we assume or choose a specific model structure and try to estimate the model parameters for best fit using experimental input-output (IO) data. Developing a working knowledge of convex optimization and treating the system identification problem as a convex optimization problem allows us to reduce the uncertainties in the parameter estimation. This is achieved by reflecting prior knowledge about the system in terms of constraint functions in the least-squares formulation.
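As a hedged illustration of the constrained least-squares idea described above, the following CVXPY sketch estimates the parameters of a made-up first-order ARX model while encoding prior knowledge (a stable pole and a non-negative gain) as constraints; the model structure, data, and bounds are assumptions for illustration, not the thesis's case study.

```python
import numpy as np
import cvxpy as cp

# Simulate IO data from an assumed stable first-order ARX system:
#   y[k] = a*y[k-1] + b*u[k-1] + noise, with true a = 0.8, b = 0.5.
rng = np.random.default_rng(1)
N = 200
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.05 * rng.standard_normal()

Phi = np.column_stack([y[:-1], u[:-1]])      # regressors [y[k-1], u[k-1]]
target = y[1:]

theta = cp.Variable(2)                       # parameters [a_hat, b_hat]
residual = Phi @ theta - target
constraints = [cp.abs(theta[0]) <= 0.99,     # prior knowledge: stable pole
               theta[1] >= 0]                # prior knowledge: non-negative gain
prob = cp.Problem(cp.Minimize(cp.sum_squares(residual)), constraints)
prob.solve()
print("estimated [a, b]:", np.round(theta.value, 3))
```

Without the constraints this is an ordinary least-squares fit; the constraints shrink the feasible set and, as the abstract notes, reduce the uncertainty of the estimates.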
365 |
BEAMFORMING TECHNIQUES USING CONVEX OPTIMIZATION / Beamforming using CVX. Jangam, Ravindra Nath Vijay Kumar, January 2014.
The thesis analyses and validates beamforming methods using convex optimization. CVX, a MATLAB-supported tool for convex optimization, has been used to develop this concept. An algorithm is designed by which an appropriate system is identified by varying parameters such as the number of antennas, passband width, and stopband widths of a beamformer. We study the beamformer by minimizing the error under the least-squares and infinity norms. A graph of the optimal values obtained under the least-squares and infinity norms shows a trade-off between these two norms. We also apply convex optimization to a beamformer with a double passband, which demonstrates the flexibility of convex optimization. Extending this, we design a filter in which the stopband is arbitrary: a constraint is imposed so that the stopband response varies according to an upper boundary (limiting) line specified in dB. The feasibility of the beamformer is examined by varying parameters such as the number of antennas, arbitrary upper boundaries, stopbands, and passband. This shows the flexibility available in designing a beamformer as desired.
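The following CVXPY sketch shows one way to pose the kind of infinity-norm beamformer design discussed above for a uniform linear array; the array geometry, look direction, and stopband definition are illustrative assumptions rather than the thesis's setup.

```python
import numpy as np
import cvxpy as cp

# Uniform linear array with half-wavelength spacing (illustrative parameters).
n_ant = 10
angles = np.deg2rad(np.arange(-90, 91, 1))
steer = np.exp(1j * np.pi * np.outer(np.arange(n_ant), np.sin(angles)))  # n_ant x n_angles

look = np.deg2rad(0)                       # desired look direction
a_look = np.exp(1j * np.pi * np.arange(n_ant) * np.sin(look))
stop = np.abs(np.rad2deg(angles)) >= 30    # stopband: angles at least 30 degrees off boresight

w = cp.Variable(n_ant, complex=True)
stop_resp = steer[:, stop].conj().T @ w
# Minimize the worst-case (infinity-norm) stopband response subject to a
# distortionless response in the look direction.
prob = cp.Problem(cp.Minimize(cp.max(cp.abs(stop_resp))),
                  [a_look.conj() @ w == 1])
prob.solve()
print("peak stopband response (dB):",
      20 * np.log10(np.max(np.abs(stop_resp.value))))
```

Minimizing the sum of squared stopband magnitudes instead gives the least-squares counterpart, which is the trade-off between the two norms that the abstract refers to.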
366 |
Heuristics for Inventory Systems Based on Quadratic Approximation of L-Natural-Convex Value Functions. Wang, Kai, January 2014.
We propose an approximation scheme for single-product periodic-review inventory systems with L-natural-convex structure. We lay out three well-studied inventory models, namely the lost-sales system, the perishable inventory system, and the joint inventory-pricing problem. We approximate the value functions for these models by the class of L-natural-convex quadratic functions, through the technique of the linear programming approach to approximate dynamic programming. A series of heuristics is derived based on the quadratic approximation, and their performance is evaluated by comparison with existing heuristics. We present the numerical results and show that our heuristics outperform the benchmarks for the majority of cases and scale well with long lead times. In this dissertation we also discuss the alternative strategies we have tried but with unsatisfactory results. / Dissertation
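As a rough, self-contained sketch of the linear programming approach to approximate dynamic programming with a quadratic value function, the Python example below fits V(x) = theta0 + theta1*x + theta2*x^2 on a made-up lost-sales inventory problem and derives a greedy ordering heuristic from it; the state space, cost figures, and demand distribution are illustrative assumptions, not the dissertation's models.

```python
import numpy as np
from scipy.optimize import linprog

# Toy lost-sales inventory MDP (illustrative numbers only).
M, gamma = 10, 0.95                       # maximum inventory, discount factor
c_order, c_hold, c_lost = 1.0, 0.2, 4.0   # ordering, holding, lost-sales costs
demand_vals = np.array([0, 1, 2, 3])
demand_prob = np.array([0.3, 0.4, 0.2, 0.1])

def phi(x):
    """Quadratic basis for the approximate value function V(x) = theta . phi(x)."""
    return np.array([1.0, x, x * x])

def stage(x, q):
    """Expected one-period cost and the possible next states for state x, order q."""
    after = np.maximum(x + q - demand_vals, 0)
    cost = (c_order * q
            + c_hold * (demand_prob @ after)
            + c_lost * (demand_prob @ np.maximum(demand_vals - x - q, 0)))
    return cost, after

# Approximate-LP constraints:  theta . (phi(x) - gamma*E[phi(x')]) <= cost(x, q).
A_ub, b_ub = [], []
for x in range(M + 1):
    for q in range(M + 1 - x):
        cost, after = stage(x, q)
        expected_phi_next = demand_prob @ np.array([phi(s) for s in after])
        A_ub.append(phi(x) - gamma * expected_phi_next)
        b_ub.append(cost)

# Objective: maximize the sum of approximate values over states (uniform weights).
c_obj = -np.sum([phi(x) for x in range(M + 1)], axis=0)
res = linprog(c_obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * 3)
theta = res.x
print("fitted quadratic value function coefficients:", np.round(theta, 3))

def heuristic_order(x):
    """Greedy one-step-lookahead policy induced by the quadratic approximation."""
    def q_value(q):
        cost, after = stage(x, q)
        return cost + gamma * (demand_prob @ np.array([phi(s) @ theta for s in after]))
    return min(range(M + 1 - x), key=q_value)

print("order quantity suggested at zero inventory:", heuristic_order(0))
```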
367 |
Numerical Modelling of van der Waals Fluids. Odeyemi, Tinuade A., 19 March 2012.
Many problems in fluid mechanics and material sciences deal with liquid-vapour flows. In these flows, the ideal gas assumption is not accurate and the van der Waals equation of state is usually used. This equation of state is non-convex and causes the solution domain to have two hyperbolic regions separated by an elliptic region. Therefore, the governing equations of these flows have a mixed elliptic-hyperbolic nature.
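For reference, the van der Waals equation of state referred to above can be written in molar form (a standard statement, not anything specific to this thesis) as
\[
\left( p + \frac{a}{v^{2}} \right) (v - b) = R T,
\qquad \text{equivalently} \qquad
p = \frac{R T}{v - b} - \frac{a}{v^{2}},
\]
where $p$ is the pressure, $v$ the molar volume, $T$ the temperature, $R$ the gas constant, and $a$, $b$ substance-dependent constants. Below the critical temperature the isotherms $p(v)$ are non-monotone, which is the source of the non-convexity and of the elliptic region mentioned above.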
Numerical oscillations usually appear with standard finite-difference space discretization schemes, and they persist when the order of accuracy of the semi-discrete scheme is increased. In this study, we propose to use a Chebyshev pseudospectral method for solving the governing equations. A comparison of the results of this method with very high-order (up to tenth-order accurate) finite difference schemes is presented, which shows that the proposed method leads to a lower level of numerical oscillations than other high-order finite difference schemes, and also does not exhibit the fast-traveling packets of short waves which are usually observed in high-order finite difference methods. The proposed method can thus successfully capture various complex regimes of waves and phase transitions in both elliptic and hyperbolic regimes.
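For concreteness, a generic Chebyshev differentiation matrix of the kind a pseudospectral discretization builds on can be constructed as follows; this is the standard construction (after Trefethen's Spectral Methods in MATLAB) sketched in Python, not the thesis's code.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and points x on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal via negative row sums
    return D, x

# Differentiate a smooth test function and check the (spectral) accuracy.
D, x = cheb(24)
f = np.exp(np.sin(np.pi * x))
df_exact = np.pi * np.cos(np.pi * x) * f
print("max derivative error:", np.max(np.abs(D @ f - df_exact)))
```

Spatial derivatives in the semi-discrete governing equations are then evaluated by multiplying nodal values by D, in place of the finite-difference stencils against which the comparison above is made.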
368 |
Developing Parsimonious and Efficient Algorithms for Water Resources Optimization Problems. Asadzadeh Esfahani, Masoud, 13 November 2012.
In the current water resources scientific literature, a wide variety of engineering design problems are solved in a simulation-optimization framework. These problems can have single or multiple objective functions and their decision variables can have discrete or continuous values. The majority of the current literature in the field of water resources systems optimization reports using heuristic global optimization algorithms, including evolutionary algorithms, with great success. These algorithms have multiple parameters that control their behavior both in terms of computational efficiency and the ability to find near globally optimal solutions. Values of these parameters are generally obtained by trial and error and are case study dependent. On the other hand, water resources simulation-optimization problems often have computationally intensive simulation models that can require seconds to hours for a single simulation. Furthermore, analysts may have a limited computational budget to solve these problems and, as such, may not be able to spend part of that budget fine-tuning the algorithm settings and parameter values. So, in general, algorithm parsimony in the number of parameters is an important factor in the applicability and performance of optimization algorithms for solving computationally intensive problems.
A major contribution of this thesis is the development of a highly efficient, single objective, parsimonious optimization algorithm for solving problems with discrete decision variables. The algorithm is called Hybrid Discrete Dynamically Dimensioned Search, HD-DDS, and is designed based on Dynamically Dimensioned Search (DDS) that was developed by Tolson and Shoemaker (2007) for solving single objective hydrologic model calibration problems with continuous decision variables. The motivation for developing HD-DDS comes from the parsimony and high performance of the original version of DDS. Similar to DDS, HD-DDS has a single parameter with a robust default value. HD-DDS is successfully applied to several benchmark water distribution system design problems where decision variables are pipe sizes among the available pipe size options. Results show that HD-DDS exhibits superior performance in specific comparisons to state-of-the-art optimization algorithms.
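A bare-bones sketch of the continuous DDS search step that HD-DDS builds on, following the published description in Tolson and Shoemaker (2007); the bound handling and the test function below are illustrative simplifications, not the thesis's implementation.

```python
import numpy as np

def dds(objective, lo, hi, max_evals=1000, r=0.2, seed=0):
    """Greedy single-solution search that perturbs a shrinking random subset of variables."""
    rng = np.random.default_rng(seed)
    x_best = lo + rng.random(lo.size) * (hi - lo)
    f_best = objective(x_best)
    for i in range(1, max_evals):
        # Probability of perturbing each variable decays with the iteration count.
        p_select = 1.0 - np.log(i) / np.log(max_evals)
        mask = rng.random(lo.size) < p_select
        if not mask.any():
            mask[rng.integers(lo.size)] = True      # always perturb at least one variable
        x_new = x_best.copy()
        x_new[mask] += r * (hi[mask] - lo[mask]) * rng.standard_normal(mask.sum())
        # Reflect perturbations that leave the bounds, then clip as a safeguard.
        x_new = np.where(x_new < lo, 2 * lo - x_new, x_new)
        x_new = np.where(x_new > hi, 2 * hi - x_new, x_new)
        x_new = np.clip(x_new, lo, hi)
        f_new = objective(x_new)
        if f_new <= f_best:                         # greedy acceptance
            x_best, f_best = x_new, f_new
    return x_best, f_best

# Example: minimize a simple sphere function in 10 dimensions.
lo, hi = np.full(10, -5.0), np.full(10, 5.0)
x, f = dds(lambda v: float(np.sum(v ** 2)), lo, hi, max_evals=2000)
print("best objective found:", round(f, 4))
```

The single algorithm parameter is the perturbation size r, which is the parsimony the abstract emphasizes; a discrete variant, as in HD-DDS, perturbs the index of the candidate option (for example, a pipe size) instead of a continuous value.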
The parsimony and efficiency of the original and discrete versions of DDS and their successful application to single objective water resources optimization problems with discrete and continuous decision variables motivated the development of a multi-objective optimization algorithm based on DDS. This algorithm is called Pareto Archived Dynamically Dimensioned Search (PA-DDS). The algorithm parsimony is a major factor in the design of PA-DDS. PA-DDS has a single parameter from its search engine DDS. In each iteration, PA-DDS selects one archived non-dominated solution and perturbs it to search for new solutions. The solution perturbation scheme of PA-DDS is similar to the original and discrete versions of DDS depending on whether the decision variable is discrete or continuous. So, PA-DDS can handle both types of decision variables. PA-DDS is applied to several benchmark mathematical problems, water distribution system design problems, and water resources model calibration problems with great success.
It is shown that hypervolume contribution, HVC1, as defined in Knowles et al. (2003), is the superior selection metric for PA-DDS when solving multi-objective optimization problems with Pareto fronts that have a general (unknown) shape. However, one of the main contributions of this thesis is the development of a selection metric specifically designed for solving multi-objective optimization problems with a known or expected convex Pareto front, such as water resources model calibration problems. The selection metric is called convex hull contribution (CHC) and makes the optimization algorithm sample solely from a subset of archived solutions that form the convex approximation of the Pareto front. Although CHC is generally applicable to any stochastic search optimization algorithm, it is applied to PA-DDS for solving six water resources calibration case studies with two or three objective functions. These case studies are solved by PA-DDS with CHC and HVC1 selections using 1,000 solution evaluations, and by PA-DDS with CHC selection and two popular multi-objective optimization algorithms, AMALGAM and ε-NSGAII, using 10,000 solution evaluations. Results are compared based on the best case and worst case performances (out of multiple optimization trials) from each algorithm to measure the expected performance range for each algorithm. Comparing the best case performance of these algorithms shows that PA-DDS with CHC selection using 1,000 solution evaluations performs very well in five out of six case studies. Comparing the worst case performance of the algorithms shows that, with 1,000 solution evaluations, PA-DDS with CHC selection performs well in four out of six case studies. Furthermore, PA-DDS with CHC selection using 10,000 solution evaluations performs comparably to AMALGAM and ε-NSGAII. Therefore, it is concluded that PA-DDS with CHC selection is a powerful optimization algorithm for finding high quality solutions of multi-objective water resources model calibration problems with a convex Pareto front, especially when the computational budget is limited.
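As a simplified illustration of the CHC idea, the sketch below selects, from a two-objective (minimization) archive of nondominated points, only those that lie on the lower convex envelope, i.e. the convex approximation of the Pareto front; the thesis's CHC metric also quantifies each point's contribution, which this sketch omits.

```python
import numpy as np

def convex_front_subset(points):
    """Vertices of the lower convex envelope of a 2-objective (minimization) archive."""
    pts = points[np.argsort(points[:, 0])]   # nondominated points: f2 decreases as f1 increases
    hull = []
    for p in pts:
        # Pop the last kept point while it would make the envelope non-convex.
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            cross = (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0])
            if cross <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return np.array(hull)

# Toy nondominated archive: three of the five points lie on the convex front,
# so only those three would be candidates for selection under a CHC-style rule.
archive = np.array([[0.0, 5.0], [1.0, 4.5], [2.0, 2.0], [3.0, 1.8], [5.0, 0.0]])
print(convex_front_subset(archive))
```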
369 |
Isometry and convexity in dimensionality reduction. Vasiloglou, Nikolaos, 30 March 2009.
The size of data generated every year grows exponentially. The number of data points as well as the number of dimensions have increased dramatically over the past 15 years. The gap between the demand from industry in data processing and the solutions provided by the machine learning community is increasing. Despite the growth in memory and computational power, advanced statistical processing of data on the order of gigabytes is beyond any possibility. Most sophisticated machine learning algorithms require at least quadratic complexity. With the current computer architecture, algorithms with complexity higher than linear O(N) or O(N log N) are not considered practical. Dimensionality reduction is a challenging problem in machine learning. Often data represented as multidimensional points happen to have high dimensionality. It turns out that the information they carry can be expressed with far fewer dimensions. Moreover the reduced dimensions of the data can have better interpretability than the original ones. There is a great variety of dimensionality reduction algorithms under the theory of Manifold Learning. Most of the methods, such as Isomap, Local Linear Embedding, Local Tangent Space Alignment, Diffusion Maps, etc., have been extensively studied under the framework of Kernel Principal Component Analysis (KPCA). In this dissertation we study two current state-of-the-art dimensionality reduction methods, Maximum Variance Unfolding (MVU) and Non-Negative Matrix Factorization (NMF). These two dimensionality reduction methods do not fit under the umbrella of Kernel PCA. MVU is cast as a Semidefinite Program, a modern convex nonlinear optimization algorithm, that offers more flexibility and power compared to KPCA (see the sketch after the list of contributions below). Although MVU and NMF seem to be two disconnected problems, we show that there is a connection between them. Both are special cases of a general nonlinear factorization algorithm that we developed. Two aspects of the algorithms are of particular interest: computational complexity and interpretability. In other words, computational complexity answers the question of how fast we can find the best solution of MVU/NMF for large data volumes. Since we are dealing with optimization programs, we need to find the global optimum. The global optimum is strongly connected with the convexity of the problem. Interpretability is strongly connected with local isometry, which gives meaning to relationships between data points. Another aspect of interpretability is the association of data with labeled information. The contributions of this thesis are the following:
1. MVU is modified so that it can scale more efficiently. Results are shown on speech datasets of up to 1 million points. Limitations of the method are highlighted.
2. An algorithm for fast computation of furthest neighbors is presented for the first time in the literature.
3. Construction of optimal kernels for Kernel Density Estimation with modern convex programming is presented. For the first time we show that the Leave-One-Out Cross-Validation (LOOCV) function is quasi-concave.
4. For the first time NMF is formulated as a convex optimization problem.
5. An algorithm for the problem of Completely Positive Matrix Factorization is presented.
6. A hybrid algorithm of MVU and NMF, called isoNMF, is presented, combining advantages of both methods.
7. Isometric Separation Maps (ISM), a variation of MVU that incorporates classification information, are presented.
8. Large scale nonlinear dimensional analysis on the TIMIT speech database is performed.
9. A general nonlinear factorization algorithm is presented based on sequential convex programming.

Despite the efforts to scale the proposed methods up to 1 million data points in reasonable time, the gap between the industrial demand and the current state of the art is still orders of magnitude wide.
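To make the "MVU is cast as a Semidefinite Program" remark above concrete, here is a small self-contained CVXPY sketch of the standard MVU formulation on synthetic data; it is illustrative only and not the scaled-up solver developed in this thesis.

```python
import numpy as np
import cvxpy as cp

# Tiny synthetic dataset: points on a noisy arc in 3-D (illustrative only).
rng = np.random.default_rng(0)
t = np.linspace(0, np.pi, 20)
X = np.column_stack([np.cos(t), np.sin(t), 0.05 * rng.standard_normal(t.size)])
n, k = X.shape[0], 3

# k-nearest-neighbour pairs whose distances the embedding must preserve (local isometry).
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
pairs = {(i, j) if i < j else (j, i)
         for i in range(n) for j in np.argsort(D2[i])[1:k + 1]}

# MVU as an SDP on the Gram matrix K of the embedded points: maximize total
# variance (the trace) subject to centering and local distance preservation.
K = cp.Variable((n, n), PSD=True)
constraints = [cp.sum(K) == 0]
constraints += [K[i, i] + K[j, j] - 2 * K[i, j] == D2[i, j] for i, j in pairs]
prob = cp.Problem(cp.Maximize(cp.trace(K)), constraints)
prob.solve(solver=cp.SCS)

# A low-dimensional embedding is read off from the top eigenvectors of K.
w, V = np.linalg.eigh(K.value)
embedding = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0))
print("top eigenvalues of the learned Gram matrix:", np.round(w[-3:], 3))
```

The semidefinite variable grows with the square of the number of points and the constraint count with the number of neighbour pairs, which is exactly the scaling bottleneck that contribution 1 above addresses.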
370 |
Convex analysis and flows in infinite networks. Wattanataweekul, Hathaikarn, January 2006.
Thesis (Ph.D.) -- Mississippi State University. Department of Mathematics and Statistics. / Title from title screen. Includes bibliographical references.