11 |
Adaptive Fractionally-Spaced Equalization with Explicit Sidelobe Control Using Interior Point Optimization Techniques
Mittal, Ashish, 07 1900
This thesis addresses the design of fractionally-spaced equalizers for a digital communication system that is susceptible to Adjacent Channel Interference (ACI). ACI can render an otherwise well-designed system prone to excess bit errors. Algorithms for a trained adaptive FIR linear fractionally-spaced equalizer (FSE) with explicit sidelobe control are developed in order to provide robustness to ACI. The explicit sidelobe control is achieved by imposing a quadratic inequality constraint on the frequency response of the equalizer at a discrete set of frequency points in the sidelobe region.

Algorithms are developed for both block adaptive and symbol-by-symbol adaptive modes. These algorithms use interior point optimization techniques to find the optimal equalizer coefficients. In the block adaptive mode, the problem is reformulated as a Second Order Cone Program (SOCP). In the symbol-by-symbol adaptive mode, the philosophy of the barrier approach to interior point methods is adopted. The concept of a central path and the Method of Analytic Centers (MAC) are used to develop two practically implementable algorithms, namely IPM2 and SBM, for performing symbol-by-symbol adaptive, fractionally-spaced equalization with multiple quadratic inequality constraints.

The performance of the proposed algorithms is compared to that of the Wiener filter and the standard RLS algorithm with explicit diagonal loading. In the computer simulations, the proposed algorithms perform better in the sense that they provide the desired robustness when the communication model is prone to intermittent interferers in the sidelobe region of the frequency response of the FSE. Although the proposed algorithms have a moderately higher computational cost, their insensitivity to the deleterious effects of ACI makes them an attractive choice in certain applications.

Thesis / Master of Applied Science (MASc)
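The block-adaptive SOCP formulation can be sketched in a few lines of convex-optimization code. The fragment below is an illustrative reconstruction, not the thesis implementation: it assumes a least-squares training criterion, random stand-in data, and the cvxpy modeling library; the sidelobe frequencies, the bound EPS, and all matrix shapes are assumptions.

```python
# Illustrative reconstruction (not the thesis code) of the block-adaptive
# FSE design as an SOCP: minimize the least-squares training error subject
# to quadratic (norm) bounds on the equalizer's frequency response at a
# discrete set of sidelobe frequencies. Data and parameters are stand-ins.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N = 16                                    # equalizer taps (fractionally spaced)
M = 200                                   # training symbols
R = rng.standard_normal((M, N))           # received-sample data matrix (stand-in)
d = rng.standard_normal(M)                # desired training symbols (stand-in)

w = cp.Variable(N)
sidelobe_freqs = np.linspace(0.35, 0.5, 8)   # normalized sidelobe band (assumed)
EPS = 0.05                                   # sidelobe gain bound (assumed)

constraints = []
n = np.arange(N)
for f in sidelobe_freqs:
    # Real/imaginary parts of the frequency response at f;
    # |H(f)| <= EPS becomes a second-order cone constraint.
    Fk = np.vstack([np.cos(2 * np.pi * f * n), -np.sin(2 * np.pi * f * n)])
    constraints.append(cp.norm(Fk @ w, 2) <= EPS)

prob = cp.Problem(cp.Minimize(cp.norm(R @ w - d, 2)), constraints)
prob.solve()                               # SOCPs are solved by interior-point methods
print("optimal equalizer taps:", np.round(w.value, 3))
```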
|
12 |
Morphologically simplified conductance based neuron models: principles of construction and use in parameter optimization
Hendrickson, Eric B., 02 April 2010
The dynamics of biological neural networks are of great interest to neuroscientists and are frequently studied using conductance-based compartmental neuron models. For speed and ease of use, neuron models are often reduced in morphological complexity. This reduction may affect input processing and prevent the accurate reproduction of neural dynamics. However, such effects are not yet well understood. Therefore, for my first aim I analyzed the processing capabilities of 'branched' or 'unbranched' reduced models by collapsing the dendritic tree of a morphologically realistic 'full' globus pallidus neuron model while maintaining all other model parameters. Branched models maintained the original detailed branching structure of the full model while the unbranched models did not. I found that full model responses to somatic inputs were generally preserved by both types of reduced model but that branched reduced models were better able to maintain responses to dendritic inputs. However, inputs that caused dendritic sodium spikes, for instance, could not be accurately reproduced by any reduced model. Based on my analyses, I provide recommendations on how to construct reduced models and indicate suitable applications for different levels of reduction. In particular, I recommend that unbranched reduced models be used for fast searches of parameter space given somatic input-output data.
The intrinsic electrical properties of neurons depend on the modifiable behavior of their ion channels. Obtaining a quality match between recorded voltage traces and the output of a conductance-based compartmental neuron model depends on accurate estimates of the kinetic parameters of the channels in the biological neuron. Indeed, mismatches in channel kinetics may be detectable as failures to match somatic neural recordings when tuning model conductance densities. In my first aim, I showed that this is a task for which unbranched reduced models are ideally suited. Therefore, for my second aim I optimized unbranched reduced model parameters to match three experimentally characterized globus pallidus neurons by performing two stages of automated searches. In the first stage, I set conductance densities free and found that even the best matches to experimental data exhibited unavoidable problems. I hypothesized that these mismatches were due to limitations in channel model kinetics. To test this hypothesis, I performed a second stage of searches with free channel kinetics and observed decreases in the mismatches from the first stage. Additionally, some kinetic parameters consistently shifted to new values in multiple cells, suggesting the possibility for tailored improvements to channel models. Given my results and the potential for cell specific modulation of channel kinetics, I recommend that experimental kinetic data be considered as a starting point rather than as a gold standard for the development of neuron models.
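As a toy illustration of the automated parameter search in the second aim, the sketch below tunes the maximal sodium and potassium conductance densities of a single-compartment Hodgkin-Huxley-style model so that its voltage trace matches a synthetic "recorded" target. This is a deliberately simplified stand-in: the thesis optimizes multi-channel globus pallidus models, whereas the channel set, stimulus, and the direct trace-MSE objective with Nelder-Mead used here are all assumptions.

```python
# Toy conductance-density search: recover (gna, gk) of a textbook HH model
# by matching a target voltage trace. All parameter values are standard
# HH constants, not the thesis's globus pallidus model.
import numpy as np
from scipy.optimize import minimize

def simulate(gna, gk, dt=0.01, T=50.0, I=10.0):
    """Forward-Euler Hodgkin-Huxley simulation; returns the voltage trace (mV)."""
    gl, Ena, Ek, El, Cm = 0.3, 50.0, -77.0, -54.4, 1.0
    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = []
    for _ in range(int(T / dt)):
        am = 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
        bm = 4 * np.exp(-(v + 65) / 18)
        ah = 0.07 * np.exp(-(v + 65) / 20)
        bh = 1 / (1 + np.exp(-(v + 35) / 10))
        an = 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
        bn = 0.125 * np.exp(-(v + 65) / 80)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        ina = gna * m**3 * h * (v - Ena)   # sodium current
        ik = gk * n**4 * (v - Ek)          # potassium current
        il = gl * (v - El)                 # leak current
        v += dt * (I - ina - ik - il) / Cm
        trace.append(v)
    return np.array(trace)

target = simulate(120.0, 36.0)             # stand-in for a recorded neuron

def mismatch(params):
    return np.mean((simulate(*params) - target) ** 2)

fit = minimize(mismatch, x0=[100.0, 30.0], method="Nelder-Mead")
print("recovered conductance densities:", fit.x)   # should approach [120, 36]
```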
|
13 |
Méthodes analytiques d'étude pour la diminution des pertes de puissance dans les réseaux électriques maillés en utilisant des techniques d'optimisation pour le dimensionnement et l'emplacement des générateurs décentralisés / Analytical study methods for reducing power losses in meshed electrical networks using optimization techniques for the sizing and location of decentralized generators
Al Ameri, Ahmed, 04 April 2017
The research presented in this thesis aims at providing a strategic vision for the integration of distributed generators (DGs) into grid networks. This work focuses on the optimal location of the connection point, the sizing, and the type of production in order to maximize the benefits of DGs and minimize power losses in the networks. The work also concerns the impact of load and production variability on the planning and operational management of the networks. First, algorithms were developed for power flow studies in power systems using the Schur complement method and the "Run Length Encoding" method. Then, losses were estimated in the calculation of power output by developing a simple, efficient and flexible linear model.
Subsequently, decentralized generators connected to the electrical networks were modeled using a method that merges Kalman filtering and graph theory in order to estimate the optimal size of decentralized production. A two-step method is proposed: in the first step, the graph-based method is used to generate the incidence matrix that builds the linear model, and in the second step a Kalman algorithm is applied to obtain the optimal decentralized production size for each busbar. The challenges of using decentralized production were addressed by minimizing the objective function (real power losses) while taking into account the capacity of the decentralized generators, the transmission line capacity, and voltage profile constraints. Genetic algorithms and optimization techniques such as interior-point methods were proposed to determine, locally and globally, the optimal sizing and location of decentralized generators in electrical networks. Finally, an active load model was designed to study different types of load curves (residential, commercial and industrial). We also developed simulation algorithms to study the integration of wind farms into power grids, and designed analytical methods to select the size and location of a wind farm based on the reduction of active power losses. We showed that variations in the mean annual wind speed can have a significant effect on the calculation of active power losses. The analytical methods and simulation algorithms were developed in Matlab/Simulink.
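To make the genetic-algorithm stage concrete, here is a hedged Python sketch that searches jointly over the bus and size of a single DG unit to minimize losses. The quadratic loss surrogate, the 14-bus system size, and all numeric parameters are illustrative assumptions; the thesis evaluates candidates with actual load-flow calculations instead.

```python
# Hedged GA sketch for DG placement and sizing. The losses() function is
# a toy quadratic surrogate standing in for a real load-flow evaluation.
import numpy as np

rng = np.random.default_rng(1)
N_BUS, P_MAX = 14, 5.0                        # IEEE 14-bus-sized example (assumed)
base_loss = rng.uniform(1.0, 2.0, N_BUS)      # per-bus baseline losses (toy data)
p_opt = rng.uniform(1.0, 4.0, N_BUS)          # per-bus loss-minimizing DG size (toy)

def losses(bus, p):
    # Stand-in for load flow: losses fall as DG output nears the bus optimum.
    return base_loss[bus] + 0.05 * (p - p_opt[bus]) ** 2

def random_individual():
    return (int(rng.integers(N_BUS)), float(rng.uniform(0, P_MAX)))

pop = [random_individual() for _ in range(40)]
for gen in range(100):
    pop.sort(key=lambda ind: losses(*ind))     # rank by evaluated losses
    survivors = pop[:20]                       # truncation selection
    children = []
    for _ in range(20):
        i, j = rng.integers(20), rng.integers(20)
        bus = survivors[i][0]                  # crossover: bus from one parent,
        p = survivors[j][1]                    # size from the other
        if rng.random() < 0.2:                 # mutate placement
            bus = int(rng.integers(N_BUS))
        p = float(np.clip(p + rng.normal(0, 0.2), 0, P_MAX))  # mutate size
        children.append((bus, p))
    pop = survivors + children

best_bus, best_p = min(pop, key=lambda ind: losses(*ind))
print(f"place {best_p:.2f} MW of DG at bus {best_bus}")
```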
|
14 |
Optimization in an Error Backpropagation Neural Network Environment with a Performance Test on a Pattern Classification Problem
Fischer, Manfred M.; Staufer-Steinnocher, Petra, 03 1900
Various techniques of optimizing the multiple class cross-entropy error function to train single hidden layer neural network classifiers with softmax output transfer functions are investigated on a real-world multispectral pixel-by-pixel classification problem that is of fundamental importance in remote sensing. These techniques include epoch-based and batch versions of backpropagation of gradient descent, PR-conjugate gradient and BFGS quasi-Newton errors. The method of choice depends upon the nature of the learning task and whether one wants to optimize learning for speed or generalization performance. It was found that, comparatively considered, gradient descent error backpropagation provided the best and most stable out-of-sample performance results across batch and epoch-based modes of operation. If the goal is to maximize learning speed and a sacrifice in generalization is acceptable, then PR-conjugate gradient error backpropagation tends to be superior. If the training set is very large, stochastic epoch-based versions of local optimizers should be chosen, utilizing a larger rather than a smaller epoch size to avoid unacceptable instabilities in the generalization results. (authors' abstract) / Series: Discussion Papers of the Institute for Economic Geography and GIScience
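For readers who want the mechanics, the following NumPy sketch implements the model class under study: a single hidden layer, softmax outputs, and the multiple-class cross-entropy error minimized by batch gradient descent (the variant the authors found most stable out-of-sample). The synthetic data, layer sizes, and learning rate are assumptions for illustration only.

```python
# Single-hidden-layer softmax classifier trained by batch gradient descent
# on the multiple-class cross-entropy error. Synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 8))              # 300 pixels x 8 spectral bands (assumed)
y = rng.integers(0, 4, 300)                    # 4 land-cover classes (assumed)
Y = np.eye(4)[y]                               # one-hot targets

H = 12                                         # hidden units (assumed)
W1 = rng.standard_normal((8, H)) * 0.1
W2 = rng.standard_normal((H, 4)) * 0.1
lr = 0.5                                       # learning rate (assumed)

for epoch in range(500):
    hid = np.tanh(X @ W1)                      # hidden layer activations
    logits = hid @ W2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    prob = e / e.sum(axis=1, keepdims=True)    # softmax output transfer
    loss = -np.mean(np.sum(Y * np.log(prob + 1e-12), axis=1))

    # Backpropagation: softmax + cross-entropy yields (prob - Y) directly.
    d_logits = (prob - Y) / len(X)
    dW2 = hid.T @ d_logits
    d_hid = (d_logits @ W2.T) * (1 - hid**2)   # tanh derivative
    dW1 = X.T @ d_hid
    W2 -= lr * dW2                             # batch gradient descent step
    W1 -= lr * dW1

print("final cross-entropy:", loss)
```

Swapping the weight update for a conjugate-gradient or quasi-Newton step is where the compared techniques would differ.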
|
15 |
Offline Task Scheduling in a Three-layer Edge-Cloud Architecture
Mahjoubi, Ayeh, January 2023
Internet of Things (IoT) devices are increasingly being used everywhere, from the factory to the hospital to the house to the car. IoT devices typically have limited processing resources, so they must rely on cloud servers to accomplish their tasks. Thus, many obstacles need to be overcome when offloading tasks to the cloud. In practice, an excessive amount of data must be transferred between IoT devices and the cloud, resulting in issues such as slow processing, high latency, and limited bandwidth. As a result, the concept of edge computing was developed to place compute nodes closer to the end users. Because of the limited resources available at the edge nodes, tasks must be optimally scheduled between IoT devices, edge nodes, and cloud nodes to meet the needs of IoT devices. In this thesis, we model the offloading problem in an edge-cloud infrastructure as a Mixed-Integer Linear Programming (MILP) problem and look for efficient optimization techniques to tackle it, aiming to minimize the total delay of the system after completing all tasks of all services requested by all users. To accomplish this, we use exact approaches such as the simplex method to solve the MILP problem. Because exact techniques such as simplex require substantial processing resources and considerable time, we propose several heuristic and meta-heuristic methods and use the simplex results as a benchmark to evaluate them. Heuristics are quick and generate workable solutions in certain circumstances, but they cannot guarantee optimal results. Meta-heuristics are slower than heuristics and may require more computation, but they are more generic and capable of handling a variety of problems. We propose two meta-heuristic approaches, one based on a genetic algorithm and the other on simulated annealing. Compared to the heuristic algorithms, the genetic-algorithm-based method yields a more accurate solution but requires more time and resources to solve the MILP, while the simulated-annealing-based method is a better fit for the problem since it produces more accurate solutions in less time than the genetic-algorithm-based method. / Internet of Things (IoT) devices are increasingly being used everywhere. IoT devices typically have limited processing resources, so they must rely on cloud servers to accomplish their tasks. In practice, an excessive amount of data must be transferred between IoT devices and the cloud, resulting in issues such as slow processing, high latency, and limited bandwidth. As a result, the concept of edge computing was developed to place compute nodes closer to the end users. Because of the limited resources available at the edge nodes, tasks must be optimally scheduled between IoT devices, edge nodes, and cloud nodes. In this thesis, the offloading problem in an edge-cloud infrastructure is modeled as a Mixed-Integer Linear Programming (MILP) problem, and efficient optimization techniques seeking to minimize the total delay of the system are employed to address it. Exact approaches are used to find a solution to the MILP problem; because such techniques require substantial processing resources and a considerable amount of time, several heuristic and meta-heuristic methods are proposed. Heuristics are quick and generate workable solutions in certain circumstances, but they cannot guarantee optimal results, while meta-heuristics are slower than heuristics and may require more computation, but they are more generic and capable of handling a variety of problems.
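The simulated-annealing idea fits in a few dozen lines. The sketch below assigns each task to one of the three layers to minimize total delay; the random delay table stands in for the MILP's delay model, and capacity constraints are deliberately omitted, so this is an assumption-laden illustration rather than the thesis algorithm.

```python
# Simulated annealing for a toy three-layer offloading problem: minimize
# total delay over per-task layer assignments. Delays are random stand-ins;
# the MILP's resource and capacity constraints are intentionally left out.
import math
import random

random.seed(0)
N_TASKS = 30
LAYERS = ("device", "edge", "cloud")
# delay[t][l]: delay of completing task t at layer l (toy data)
delay = [[random.uniform(1, 10) for _ in LAYERS] for _ in range(N_TASKS)]

def total_delay(assign):
    return sum(delay[t][layer] for t, layer in enumerate(assign))

cur = [random.randrange(3) for _ in range(N_TASKS)]    # random initial schedule
cur_cost = total_delay(cur)
best, best_cost = cur[:], cur_cost
T = 10.0
while T > 1e-3:
    cand = cur[:]
    cand[random.randrange(N_TASKS)] = random.randrange(3)  # move one task
    cand_cost = total_delay(cand)
    # Always accept improvements; accept worse moves with Boltzmann probability.
    if cand_cost < cur_cost or random.random() < math.exp((cur_cost - cand_cost) / T):
        cur, cur_cost = cand, cand_cost
        if cur_cost < best_cost:
            best, best_cost = cur[:], cur_cost
    T *= 0.999                                             # geometric cooling
print("best total delay found:", round(best_cost, 2))
```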
|
16 |
Parallel optimization based operational planning to enhance the resilience of large-scale power systems
Gong, Lin, 01 May 2020
The resilience of power systems has attracted extensive attention in recent years and needs to be further enhanced, as potential threats from severe events such as extreme weather, geomagnetic storms, and extended fuel disruptions, which are difficult to quantify, predict, or anticipate, continue to challenge the modern power industry. To increase resilience, proper operational planning that considers the potential impacts of severe events can enable power systems to prepare for, operate through, and recover from those events, and can mitigate their negative economic, social, and humanitarian consequences by fully deploying existing system resources and operational measures. This dissertation focuses on operational planning problems in the bulk power system under potential threats from severe events, including the co-optimization of security-constrained unit commitment and transmission switching in view of transmission line outages caused by severe weather, security-constrained optimal power flow under the potential impacts of geomagnetic storms, and optimal operational planning to protect combined electricity-natural gas systems against natural gas supply disruptions. Systematic, comprehensive, and consistent operational strategies must be coordinated across the entire system to achieve a superior resilience-enhancement solution; together with the increased size and complexity of modern energy systems, this makes the proposed operational planning problems mathematically large and computationally complex, and practically difficult to solve, especially when comprehensive operational measures and resourceful components are incorporated. To tackle this challenge, parallel optimization based approaches are developed in the proposed research: they decompose an originally large and complex problem into multiple independent small subproblems, solve them simultaneously in a fully parallel manner on scalable multi-core computing platforms, and iteratively coordinate their results using mathematical programming methods to achieve optimal solutions that satisfy the engineering requirements of practical power system operations. By efficiently solving the optimal operational planning problems of large-scale power systems, their secure and economic operation in the presence of severe events such as hurricanes, geomagnetic storms, and natural gas supply disruptions can be ensured, effectively enhancing the resilience of power systems.
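The decompose-solve-coordinate loop described above follows a well-known pattern. The sketch below illustrates it with consensus ADMM on toy quadratic subproblems: each subproblem is solved independently in a worker process, and a coordination step plus dual update drive the copies of the shared decision variable to agreement. The quadratic costs, penalty value, and the choice of ADMM itself are assumptions made for illustration; the dissertation's subproblems are unit-commitment and power-flow models.

```python
# Minimal consensus-ADMM illustration of parallel decomposition and
# coordination. Each toy subproblem f_i(x) = a_i/2 * (x - c_i)^2 stands in
# for a scenario subproblem; A, C, and RHO are illustrative assumptions.
from multiprocessing import Pool

A = [2.0, 1.0, 4.0]       # per-subproblem curvatures
C = [1.0, 3.0, -2.0]      # per-subproblem targets
RHO = 1.0                 # ADMM penalty parameter

def solve_subproblem(args):
    a, c, z, u = args
    # argmin_x  a/2*(x - c)^2 + RHO/2*(x - z + u)^2  has a closed form here;
    # a real subproblem would call an optimization solver instead.
    return (a * c + RHO * (z - u)) / (a + RHO)

if __name__ == "__main__":
    z, u = 0.0, [0.0, 0.0, 0.0]       # shared decision and scaled duals
    with Pool(3) as pool:             # fully parallel subproblem solves
        for _ in range(50):
            xs = pool.map(solve_subproblem,
                          [(A[i], C[i], z, u[i]) for i in range(3)])
            z = sum(x + ui for x, ui in zip(xs, u)) / 3.0   # coordination step
            u = [ui + x - z for ui, x in zip(u, xs)]        # dual updates
    print("consensus decision:", round(z, 4))
```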
|
17 |
Learning in Stochastic Stackelberg Games
Pranoy Das (18369306), 19 April 2024
<p dir="ltr">The original definition of Nash Equilibrium applied to normal form games, but the notion has now been extended to various other forms of games including leader-follower games (Stackelberg games), extensive form games, stochastic games, games of incomplete information, cooperative games, and so on. We focus on general-sum stochastic Stackelberg games in this work. An example where such games would be natural to consider is in security games where a defender wishes to protect some targets through deployment of limited resources and an attacker wishes to strategically attack the targets to benefit themselves. The hierarchical order of play arises naturally since the defender typically acts first and deploys a strategy, while the attacker observes the strategy ofthe defender before attacking. Another example where this framework fits is in testing during epidemics, where the leader (the government) sets testing policies and the follower (the citizens) decide at every time step whether to get tested. The government wishes to minimize the number of infected people in the population while the follower wishes to minimize the cost of getting sick and testing. This thesis presents a learning algorithm for players to converge to their stationary policies in a general sum stochastic sequential Stackelberg game. The algorithm is a two time scale implicit policy gradient algorithm that provably converges to stationary points of the optimization problems of the two players. Our analysis allows us to move beyond the assumptions of zero-sum or static Stackelberg games made in the existing literature for learning algorithms to converge.</p><p dir="ltr"><br></p>
|
18 |
Regression models with an interval-censored covariate
Langohr, Klaus, 16 June 2004
Survival analysis deals with the evaluation of variables which measure the elapsed time until an event of interest. One particularity survival analysis has to account for is censored data, which arise whenever the time of interest cannot be measured exactly but partial information is available. Four types of censoring are distinguished: right-censoring occurs when the unobserved survival time is larger, left-censoring when it is smaller than an observed time, and in the case of interval-censoring, the survival time is observed within a time interval. We speak of doubly-censored data if the time origin is censored as well.

In Chapter 1 of the thesis, we first give a survey on statistical methods for interval-censored data, including both parametric and nonparametric approaches. In the second part of Chapter 1, we address the important issue of noninformative censoring, which is assumed in all the methods presented. Given the importance of optimization procedures in the further chapters of the thesis, the final section of Chapter 1 is about optimization theory. This includes some optimization algorithms, as well as the presentation of optimization tools, which have played an important role in the elaboration of this work. We have used the mathematical programming language AMPL to solve the maximization problems that arose. One of its main features is that optimization problems written in the AMPL code can be sent to the internet facility 'NEOS: Server for Optimization' and be solved by its available solvers.

In Chapter 2, we present the three data sets analyzed for the elaboration of this dissertation. Two correspond to studies on HIV/AIDS: one is on the survival of tuberculosis patients co-infected with HIV in Barcelona, the other on injecting drug users from Badalona and surroundings, most of whom became infected with HIV as a result of their drug addiction. The complex censoring patterns in the variables of interest of the latter study have motivated the development of estimation procedures for regression models with interval-censored covariates. The third data set comes from a study on the shelf life of yogurt. We present a new approach to estimate the shelf lives of food products taking advantage of the existing methodology for interval-censored data.

Chapter 3 deals with the theoretical background of an accelerated failure time model with an interval-censored covariate, putting emphasis on the development of the likelihood functions and the estimation procedure by means of optimization techniques and tools. Their use in statistics can be an attractive alternative to established methods such as the EM algorithm. In Chapter 4 we present further regression models such as linear and logistic regression with the same type of covariate, for the parameter estimation of which the same techniques are applied as in Chapter 3. Other possible estimation procedures are described in Chapter 5. These comprise mainly imputation methods, which consist of two steps: first, the observed intervals of the covariate are replaced by an imputed value, for example, the interval midpoint; then, standard procedures are applied to estimate the parameters.

The application of the proposed estimation procedure for the accelerated failure time model with an interval-censored covariate to the data set on injecting drug users is addressed in Chapter 6.
Different distributions and covariates are considered and the corresponding results are presented and discussed. To compare the estimation procedure with the imputation-based methods of Chapter 5, a simulation study is carried out, whose design and results are the contents of Chapter 7. Finally, in the closing Chapter 8, the main results are summarized and several aspects which remain unsolved or might be approximated in another way are addressed.
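A hedged sketch of the simultaneous-maximization idea from Chapters 3 and 6: when the covariate X is only known to lie in an interval, each subject's likelihood sums the AFT density over a discrete support for X, and the support probabilities are estimated jointly with the regression parameters. Everything concrete below, the simulated data, the support grid, the log-normal AFT error, and the use of scipy rather than AMPL/NEOS, is an assumption for illustration.

```python
# Joint maximization for a log-normal AFT model with an interval-censored
# covariate: log T = b0 + b1*X + s*eps, with X observed only in [lo, hi].
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 150
support = np.array([0.0, 1.0, 2.0, 3.0])            # assumed support of X
x_true = rng.choice(support, n, p=[0.4, 0.3, 0.2, 0.1])
logt = 1.0 + 0.5 * x_true + 0.3 * rng.standard_normal(n)
lo = x_true - rng.integers(0, 2, n)                  # observed censoring
hi = x_true + rng.integers(0, 2, n)                  # intervals [lo, hi]

def negloglik(theta):
    b0, b1, log_s = theta[:3]
    s = np.exp(log_s)                                # keep sigma positive
    p = np.exp(theta[3:]); p /= p.sum()              # support probabilities
    ll = 0.0
    for i in range(n):
        k = (support >= lo[i]) & (support <= hi[i])  # support points in interval
        # Sum the AFT density over the possible covariate values, weighted
        # by their (jointly estimated) probabilities.
        ll += np.log(np.sum(p[k] * norm.pdf(logt[i], b0 + b1 * support[k], s))
                     + 1e-300)
    return -ll

fit = minimize(negloglik, x0=np.zeros(7), method="Nelder-Mead",
               options={"maxiter": 20000, "maxfev": 20000})
b0, b1, s = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(f"b0 = {b0:.2f}, b1 = {b1:.2f}, sigma = {s:.2f}")  # should move toward (1.0, 0.5, 0.3)
```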
|
19 |
Efektivní techniky pro měření výkonu programů / Efficient Techniques for Program Performance Analysis
Pavela, Jiří, January 2020
This thesis introduces optimization techniques focused on the performance-data collection process within the performance analysis and profiling of programs in the Perun tool. Extending the architecture and implementing these new optimization techniques in Perun (mainly in its Tracer module) improves its scalability and thus makes performance analysis feasible even for large projects. We focus primarily on increasing the precision of data collection, reducing the number of instrumented program points, limiting the time overhead of the data collection and performance profiling process, and decreasing the volume of collected data and the size of the resulting performance profile. The optimization is achieved by applying statistical methods, a number of static and dynamic analysis techniques (or their combinations), and by exploiting the advanced capabilities of the SystemTap and eBPF tools. Based on an evaluation conducted on two selected projects and a number of experiments, we can conclude that we successfully achieved considerable optimization in almost all of the monitored metrics and criteria.
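One way to read "reducing the number of instrumented program points" is a two-pass scheme: a cheap counting pass first, then full timing only for probe points that are not overwhelmingly frequent. The Python sketch below illustrates that idea with sys.setprofile as a stand-in for SystemTap/eBPF probes; the threshold, workload, and the specific filtering rule are assumptions, not Perun's actual logic.

```python
# Toy two-pass instrumentation filter: count calls cheaply, then time only
# the functions below a call-count threshold, pruning hot low-value probes.
import sys
import time
from collections import defaultdict

calls, timings = defaultdict(int), defaultdict(float)

def counting_pass(frame, event, arg):
    if event == "call":
        calls[frame.f_code.co_name] += 1

def workload():
    def tiny():            # called very often -> pruned from timing
        return 1
    def heavy():           # called rarely -> kept
        time.sleep(0.001)
    for _ in range(5000):
        tiny()
    for _ in range(20):
        heavy()

sys.setprofile(counting_pass)      # pass 1: cheap call counting
workload()
sys.setprofile(None)

THRESHOLD = 1000                   # assumed pruning rule
instrumented = {name for name, c in calls.items() if c < THRESHOLD}

starts = {}
def timing_pass(frame, event, arg):
    name = frame.f_code.co_name
    if name not in instrumented:
        return
    if event == "call":
        starts[name] = time.perf_counter()
    elif event == "return" and name in starts:
        timings[name] += time.perf_counter() - starts.pop(name)

sys.setprofile(timing_pass)        # pass 2: time surviving probe points only
workload()
sys.setprofile(None)
print({name: round(t, 4) for name, t in timings.items()})
```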
|
20 |
Vysokoúrovňové objektově orientované genetické programování pro optimalizaci logistických skladů / High-Level Object Oriented Genetic Programming in Logistic Warehouse Optimization
Karásek, Jan, January 2014
This dissertation focuses on optimizing the execution of work operations in logistic warehouses and distribution centers. The main goal is to optimize the planning, scheduling, and dispatching processes. Because the problem belongs to the NP-hard complexity class, finding an optimal solution is computationally very demanding. The motivation for this work is to fill the imaginary gap between the methods investigated in scientific and academic settings and the methods used in commercial production environments. The core of the optimization algorithm is based on genetic programming driven by a context-free grammar. The main contributions of this work are: a) to propose a new optimization algorithm which respects the following optimization criteria: total processing time, resource utilization, and the congestion of warehouse aisles that may occur during task processing; b) to analyze historical data from warehouse operation and develop a set of benchmark instances that may serve as reference results for further research; and c) to attempt to outperform the established reference results achieved by a skilled and trained operations manager of one of the largest warehouses in Central Europe.
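As a hedged illustration of grammar-driven genetic programming, the sketch below evolves task-priority rules from a context-free grammar in the style of grammatical evolution: integer genomes select productions, and fitness is the total tardiness of the greedy schedule a rule produces on toy tasks. The grammar, task attributes, and genetic operators are simplified assumptions, not the thesis's system.

```python
# Grammar-guided GP sketch (grammatical-evolution style): a context-free
# grammar defines priority expressions over toy task attributes; integer
# genomes choose productions; fitness is the schedule's total tardiness.
import random

random.seed(2)
GRAMMAR = {
    "expr": [["term"], ["term", "+", "expr"], ["term", "*", "expr"]],
    "term": [["dur"], ["due"], ["dist"]],
}

def decode(genome):
    """Leftmost derivation: genome integers choose grammar productions."""
    i = 0
    def expand(symbol, depth):
        nonlocal i
        if symbol not in GRAMMAR:
            return [symbol]                       # terminal token
        prods = GRAMMAR[symbol]
        # Past a depth cap, force the first (non-recursive) production.
        k = 0 if depth > 5 else genome[i % len(genome)] % len(prods)
        i += 1
        tokens = []
        for s in prods[k]:
            tokens.extend(expand(s, depth + 1))
        return tokens
    return " ".join(expand("expr", 0))

TASKS = [{"dur": random.randint(1, 9), "due": random.randint(5, 40),
          "dist": random.randint(1, 15)} for _ in range(25)]  # toy task data

def fitness(expr):
    order = sorted(TASKS, key=lambda t: eval(expr, {}, t))  # lower value = earlier
    clock, tardiness = 0.0, 0.0
    for t in order:
        clock += t["dur"] + 0.1 * t["dist"]       # processing plus travel time
        tardiness += max(0.0, clock - t["due"])
    return tardiness

pop = [[random.randrange(100) for _ in range(12)] for _ in range(30)]
for _ in range(40):
    pop.sort(key=lambda g: fitness(decode(g)))
    parents = pop[:10]                            # truncation selection
    pop = parents + [
        [random.randrange(100) if random.random() < 0.1      # mutation
         else (a if random.random() < 0.5 else b)            # uniform crossover
         for a, b in zip(random.choice(parents), random.choice(parents))]
        for _ in range(20)
    ]
pop.sort(key=lambda g: fitness(decode(g)))
print("best evolved priority rule:", decode(pop[0]))
```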
|