181 |
Farkas-type results for convex and non-convex inequality systems. Hodrea, Ioan Bogdan, 22 January 2008 (has links) (PDF)
As the title already suggests, the aim of the present work is to present Farkas-type results for inequality systems involving convex and/or non-convex functions. To be able to give the desired results, we treat optimization problems which involve convex and composed convex functions or non-convex functions like DC functions or fractions.
To be able to use the fruitful Fenchel-Lagrange duality approach, we attach to the primal problem an equivalent problem which is a convex optimization problem. After assigning a dual problem to the problem we initially treat, we provide weak sufficient conditions which secure strong duality, i.e., the case when the optimal objective value of the primal problem coincides with the optimal objective value of the dual problem and, moreover, the dual problem has an optimal solution.
Further, two ideas are followed. Firstly, using the weak and strong duality between the primal problem and the dual problem, we are able to give necessary and sufficient optimality conditions for the optimal solutions of the primal problem. Secondly, provided that no duality gap lies between the primal problem and its Fenchel-Lagrange-type dual, we are able to demonstrate some Farkas-type results and thus to underline once more the connections between the theorems of the alternative and the theory of duality. One of the Farkas-type results mentioned above is characterized using only epigraphs of functions.
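To give the flavour of such an epigraph-based statement, one representative Farkas-type result from the Fenchel-Lagrange duality literature reads, schematically, as follows (stated here for proper, convex, lower semicontinuous f and g with conjugates f* and g*; the precise closedness-type regularity condition used in the thesis may differ):

\[
g(x) \le 0 \;\Longrightarrow\; f(x) \ge 0
\qquad\Longleftrightarrow\qquad
(0,0) \in \operatorname{cl}\Bigl(\operatorname{epi} f^{*} + \bigcup_{\lambda \ge 0} \operatorname{epi}(\lambda g)^{*}\Bigr).
\]

Here the union of epigraphs plays the role of the Lagrange multipliers, so the theorem of the alternative is encoded entirely by epigraphs of conjugate functions.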
We conclude our investigations by providing necessary and sufficient optimality conditions for a multiobjective programming problem involving composed convex functions. Using the well-known linear scalarization, a family of scalar optimization problems is attached to the primal multiobjective program. Further, to each of these scalar problems the Fenchel-Lagrange dual problem is determined. Making use of the weak and strong duality between the scalarized problem and its dual, the desired optimality conditions are proved. Moreover, the form of the dual of the scalarized problem suggests how to construct a vector dual problem to the initial one. Further, weak and strong vector duality assertions are provided.
|
182 |
Novel computational methods to predict protein-protein interactions on the structural level. Popov, Petr, 28 January 2015 (has links)
Molecular docking is a method that predicts the orientation of one molecule with respect to another when they form a complex. The first computational method of molecular docking was applied to find new candidates against the HIV-1 protease in 1990. Since then, the use of docking pipelines has become standard practice in drug discovery. Typically, a docking protocol comprises different phases. Exhaustive sampling of the binding site under a rigid-body approximation of the docking subunits is required. Clustering algorithms are used to group similar binding candidates. Refinement methods are applied to take into account the flexibility of the molecular complex and to eliminate possible docking artefacts. Finally, scoring algorithms are employed to select the best binding candidates. The current thesis presents novel algorithms for docking protocols that facilitate structure prediction of protein complexes, which belong to one of the most important target classes in structure-based drug design. First, DockTrina, a new algorithm to predict conformations of triangular protein trimers (i.e. trimers with pair-wise contacts between all three pairs of proteins), is presented. The method takes as input pair-wise contact predictions from a rigid-body docking program. It then scans and scores all possible combinations of pairs of monomers using a very fast root mean square deviation (RMSD) test. Being fast and efficient, DockTrina outperforms state-of-the-art computational methods dedicated to predicting the structure of protein oligomers on the collected benchmark of protein trimers. Second, RigidRMSD, a C++ library that computes, in constant time, RMSDs between molecular poses corresponding to rigid-body transformations, is presented. The library is practically useful for clustering docking poses, resulting in a tenfold speed-up compared to standard RMSD-based clustering algorithms. Third, KSENIA, a novel knowledge-based scoring function for protein-protein interactions, is developed. The problem of scoring-function reconstruction is formulated and solved as a convex optimization problem. As a result, KSENIA is a smooth function and, thus, is suitable for the gradient-based refinement of molecular structures. Remarkably, it is shown that native interfaces of protein complexes provide sufficient information to reconstruct a well-discriminative scoring function. Fourth, CARBON, a new algorithm for the rigid-body refinement of docking candidates, is proposed. The rigid-body optimization problem is viewed as the calculation of quasi-static trajectories of rigid bodies influenced by the energy function. To circumvent the typical problem of incorrect step sizes for rotation and translation movements of molecular complexes, the concept of controlled advancement is introduced. CARBON works well both in combination with a classical force field and with a knowledge-based scoring function. CARBON is also suitable for the refinement of molecular complexes with moderate and large steric clashes between their subunits. Finally, a novel method to evaluate the prediction capability of scoring functions is introduced. It allows one to rigorously assess the performance of the scoring function of interest on benchmarks of molecular complexes. The method operates on score distributions rather than on scores of particular conformations, which makes it advantageous compared to the standard hit-rate criteria. The methods described in the thesis are tested and validated on various protein-protein benchmarks. The implemented algorithms are successfully used in the CAPRI contest for structure prediction of protein-protein complexes. The developed methodology can easily be adapted to the recognition of other types of molecular interactions, involving ligands, polysaccharides, RNAs, etc. The C++ versions of the presented algorithms will be made available as SAMSON Elements for the SAMSON software platform at http://www.samson-connect.net or at http://nano-d.inrialpes.fr/software.
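To illustrate the constant-time RMSD idea behind RigidRMSD, here is a minimal Python re-derivation (a sketch, not the library's actual C++ API): after a single O(N) pass over the structure, every pairwise RMSD between rigid-body poses costs O(1), independent of the number of atoms.

import numpy as np

def precompute_moments(X):
    """One O(N) pass over the (N, 3) reference coordinates X."""
    return len(X), X.sum(axis=0), X.T @ X  # N, sum x_i, sum x_i x_i^T

def rigid_rmsd(moments, R1, t1, R2, t2):
    """O(1) RMSD between poses (R1, t1) and (R2, t2) of the same structure.

    With A = R1 - R2 and b = t1 - t2,
    sum_i |A x_i + b|^2 = tr(A S A^T) + 2 b^T A m + N |b|^2,
    where m = sum_i x_i and S = sum_i x_i x_i^T.
    """
    n, m, S = moments
    A, b = R1 - R2, t1 - t2
    sq = np.trace(A @ S @ A.T) + 2.0 * b @ (A @ m) + n * (b @ b)
    return np.sqrt(max(sq, 0.0) / n)

Since each call touches only 3x3 quantities, clustering thousands of docking poses scales with the number of pose pairs rather than with the size of the molecule, consistent with the tenfold speed-up reported above.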
|
183 |
Integrality Gaps for Strong Linear Programming and Semidefinite Programming Relaxations. Georgiou, Konstantinos, 17 February 2011 (has links)
The inapproximability of NP-hard combinatorial optimization problems lies at the heart of theoretical computer science. A negative result can be either conditional, where the starting point is a complexity assumption, or unconditional, where the inapproximability holds for a restricted model of computation.
Algorithms based on Linear Programming (LP) and Semidefinite Programming (SDP) relaxations are among the most prominent models of computation. The related and common measure of efficiency is the integrality gap, which sets the limitations of the associated algorithmic schemes. A number of systematic procedures, known as lift-and-project systems, have been proposed to improve the integrality gap of standard relaxations. These systems build strong hierarchies of either LP relaxations, such as the Lovasz-Schrijver (LS) and the Sherali-Adams (SA) systems, or SDP relaxations, such as the Lovasz-Schrijver SDP (LS+), the Sherali-Adams SDP (SA+) and the Lasserre (La) systems.
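For readers new to the notion, the textbook example (not taken from the thesis) is the standard LP relaxation of Vertex Cover, which the hierarchies above are designed to strengthen:

\[
\min \sum_{i \in V} x_i
\quad \text{s.t.} \quad
x_u + x_v \ge 1 \ \ \forall\, uv \in E,
\qquad 0 \le x_i \le 1 .
\]

On the complete graph K_n the fractional optimum is n/2 (set every x_i = 1/2), while any integral vertex cover needs n-1 vertices, so the ratio (n-1)/(n/2) tends to 2. The results below show that this gap of 2 essentially survives many rounds of the stronger lift-and-project tightenings.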
In this thesis we prove integrality gap lower bounds for the aforementioned lift-and-project systems and for a number of combinatorial optimization problems whose inapproximability is as yet unresolved. Given that lift-and-project systems produce the relaxations behind the best algorithms known for a series of combinatorial problems, the lower bounds can be read as strong evidence of the inapproximability of the corresponding optimization problems. From the results in the thesis we highlight the following:
For every epsilon>0, the level-Omega(sqrt(log n/ log log n)) LS+ relaxation of the Vertex Cover polytope has integrality gap 2-epsilon.
The integrality gap of the standard SDP for Vertex Cover remains 2-o(1) even if all hypermetric inequalities are added to the relaxation. The resulting relaxations are incomparable to the SDP relaxations derived by the LS+ system. Finally, the addition of all ℓ1 inequalities eliminates all solutions not in the integral hull.
For every epsilon>0, the level-Omega(sqrt(log n/ log log n)) SA relaxation of Vertex Cover has integrality gap 2-epsilon. The integrality gap remains tight even for superconstant-level SA+ relaxations.
We prove a tight lower bound for the number of tightenings that the SA system needs in order to prove the Pigeonhole Principle. We also prove sublinear and linear rank bounds for the La and SA systems respectively for the Tseitin tautology.
Linear levels of the SA+ system treat highly unsatisfiable instances of constraint satisfaction problems with a fixed predicate P over q-ary alphabets as fully satisfiable, whenever the satisfying assignments of the predicate P can be equipped with a balanced and pairwise independent distribution.
We study the performance of the Lasserre system on the cut polytope. When the input is the complete graph on 2d+1 vertices, we show that the integrality gap is at least 1+1/(4d(d+1)) for the level-d SDP relaxation.
|
184 |
Contribution to the Study of Hybrid Localization Techniques in Heterogeneous Wireless Networks. Laaraiedh, Mohamed, 15 December 2010 (has links) (PDF)
Recent advances in wireless technologies have seen the emergence of localization techniques that constitute a useful and cost-effective basis for offering new services. These location-based services have become increasingly beneficial to operators and telecommunications companies. Various location-based services can be offered to the user, such as tracking, advertising, security, and management. Wireless networks themselves can benefit from location information to improve the performance of their different layers. Routing, synchronization, and interference cancellation are a few examples where location information can be fruitful. A localization system must be able to perform two main tasks: the measurement of location-dependent parameters (RSSI, TOA, and TDOA) and the estimation of the position using appropriate estimators. The main objective of this thesis is the study of different position estimation techniques, both algebraic and geometric. The algebraic techniques studied are least squares, maximum likelihood, and semidefinite programming. The proposed geometric technique, RGPA, is based on interval analysis and the geometric representation of location-dependent parameters. Emphasis is placed on the fusion of different location-dependent parameters and its influence on positioning accuracy. The estimation and measurement of location-dependent parameters are also investigated using a UWB measurement campaign, in order to obtain a complete understanding of the localization field.
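As a concrete illustration of the algebraic side, below is a minimal sketch of TOA-based positioning by nonlinear least squares (illustrative only: the anchor layout, noise level, and use of SciPy are assumptions, not taken from the thesis).

import numpy as np
from scipy.optimize import least_squares

# Four anchors at known positions and one device at an unknown position.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])

rng = np.random.default_rng(0)
# TOA measurements converted to ranges, corrupted by Gaussian noise.
ranges = np.linalg.norm(anchors - true_pos, axis=1) + 0.1 * rng.standard_normal(4)

def residuals(p):
    # Mismatch between predicted and measured ranges.
    return np.linalg.norm(anchors - p, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([5.0, 5.0])).x
print(estimate)  # close to (3, 7) for small noise

The same residual structure accommodates RSSI (via a path-loss model) or TDOA (via range differences), which is what makes fusing heterogeneous location-dependent parameters natural in this framework.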
|
185 |
Performance evaluation and enhancement for AF two-way relaying in the presence of channel estimation error. Wang, Chenyuan, 30 April 2012 (has links)
Cooperative relaying is a promising diversity-achieving technique for providing reliable transmission, high throughput, and extensive coverage in wireless networks across a variety of applications. Two-way relaying is a spectrally efficient protocol that offers one solution to overcome the half-duplex loss of one-way relay channels. Moreover, incorporating multiple-input multiple-output (MIMO) technology can further improve the spectral efficiency and diversity gain. A lot of related work has been carried out on the two-way relay network (TWRN), but most of it assumes perfect channel state information (CSI). In a realistic scenario, however, the channel must be estimated, and estimation error is inevitable. In this thesis, we therefore explicitly take the CSI error into account and investigate its impact on the performance of amplify-and-forward (AF) TWRNs in which either multiple distributed single-antenna relays or a single multiple-antenna relay station is exploited.
For the distributed relay network, we consider imperfect self-interference cancellation at both sources, which exchange information with the help of multiple relays; maximal ratio combining (MRC) is then applied to improve the decision statistics under imperfect signal detection. The system performance degradation in terms of outage probability and average bit-error rate (BER) is analyzed, as well as its asymptotic trend. To further improve the spectral efficiency while maintaining the spatial diversity, we utilize maximum-minimum (Max-Min) relay selection (RS) and examine the impact of imperfect CSI on this single-RS scheme. To mitigate the negative effect of imperfect CSI, we resort to adaptive power allocation (PA) by minimizing either the outage probability or the average BER, which can be cast as a Geometric Programming (GP) problem. Numerical results verify the correctness of our analysis and show that the adaptive PA scheme outperforms the equal PA scheme under the aggregated effect of imperfect CSI.
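To make the GP connection concrete, here is a toy power-allocation program in CVXPY's geometric-programming mode (the posynomial objective is invented for illustration and is not the thesis' outage or BER expression).

import cvxpy as cp

# Two transmit powers, constrained by a total power budget.
p1, p2 = cp.Variable(pos=True), cp.Variable(pos=True)

# An inverse-SNR-flavoured posynomial: decreasing in each power, with a
# cross term; any posynomial objective keeps the problem a valid GP.
objective = cp.Minimize(1.0 / p1 + 2.0 / p2 + 0.1 * p1 / p2)
prob = cp.Problem(objective, [p1 + p2 <= 1.0])
prob.solve(gp=True)  # solved after the standard log-log convexification
print(p1.value, p2.value)

Because such problems become convex after the log-log change of variables, they can be solved to global optimality despite being nonconvex in the original power variables.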
When employing a single MIMO relay, the problem of robust MIMO relay design is addressed, taking into account the fact that only imperfect CSI is available. We design the MIMO relay based upon the CSI estimates, where the estimation errors are included to attain a robust design under the worst-case philosophy. The optimization problem corresponding to the robust MIMO relay design is shown to be nonconvex. This motivates the pursuit of semidefinite relaxation (SDR) coupled with the randomization technique to obtain computationally efficient, high-quality approximate solutions. Numerical simulations compare the proposed MIMO relay with an existing nonrobust method and thereby validate its robustness against channel uncertainty.
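The SDR-plus-randomization recipe follows a standard pattern, sketched here on a toy Boolean quadratic problem (illustrative only; the thesis applies the same machinery to the robust relay design, not to this toy problem).

import numpy as np
import cvxpy as cp

n = 8
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
Q = A + A.T  # symmetric objective matrix

# Lift x x^T to a PSD matrix X and drop the (nonconvex) rank-1 constraint.
X = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Maximize(cp.trace(Q @ X)), [cp.diag(X) == 1])
prob.solve()

# Gaussian randomization: draw samples ~ N(0, X), round to feasible points,
# and keep the best one.
L = np.linalg.cholesky(X.value + 1e-9 * np.eye(n))
best = max((np.sign(L @ rng.standard_normal(n)) for _ in range(100)),
           key=lambda x: x @ Q @ x)
print(best @ Q @ best, "<= relaxation bound", prob.value)

The SDP value upper-bounds the true optimum, and the randomized rounding converts the relaxed solution into a feasible one whose quality can be checked against that bound.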
|
187 |
A NEURAL NETWORK FOR ONLINE PORTFOLIO SELECTION WITH SIDE INFORMATION. GUILHERME AUGUSTO SCHUTZ, 15 January 2019 (has links)
The financial market is essential to the economy, bringing stability, access to new types of investments, and increasing the ability of companies to access credit. The constant search to reduce the role of human specialists in decision making aims to reduce the risk inherent in the intrinsic emotions of human beings, which the machine does not share, thereby reducing speculative effects in the market and increasing the precision of the decisions taken. In this work, we discuss the problem of online portfolio selection, where a vector of asset allocations is required at each step. The proposed algorithm is the multilayer perceptron with side information, MLPi. This algorithm uses neural networks to solve the problem when the investor has access to future information on the price of the assets. To evaluate the use of side information in portfolio selection, we empirically tested MLPi against two algorithms: a baseline and the state-of-the-art. As a baseline, buy-and-hold is used. The state-of-the-art is the online moving average mean reversion algorithm proposed by Li and Hoi (2012). To evaluate the use of side information in the MLPi algorithm, a benchmark is defined based on a simple optimal solution that uses the side information but does not take into account the accuracy of the future information. For the experiments, we use minute-level information from the Brazilian stock market, traded on the B3 stock exchange. A price predictor is simulated with 7 different accuracy levels for 200 portfolios. The results show that both the benchmark and MLPi outperform the two selected algorithms for asset accuracy levels greater than 50 percent, and on average, MLPi outperforms the benchmark at all levels of simulated accuracy.
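For concreteness, the online protocol evaluated here can be sketched in a few lines (a schematic of the setting, not of the MLPi network itself; the uniform policy below is only a stand-in).

import numpy as np

def run_policy(price_relatives, policy):
    """price_relatives: (T, n) array of p_t / p_{t-1}; policy maps history -> weights."""
    wealth, history = 1.0, []
    for x_t in price_relatives:
        if history:
            b_t = policy(np.array(history))
        else:
            b_t = np.full(len(x_t), 1.0 / len(x_t))  # no history yet: go uniform
        wealth *= float(b_t @ x_t)  # multiply by the gross return of the allocation
        history.append(x_t)
    return wealth

# Uniform constant rebalancing as a stand-in policy (note the thesis baseline
# is buy-and-hold, which instead lets the initial weights drift with prices).
uniform = lambda history: np.full(history.shape[1], 1.0 / history.shape[1])

A side-information-aware policy such as MLPi simply replaces the function passed as `policy` with one that also consumes the simulated price predictions.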
|
188 |
Traffic aware resource allocation for multi-antenna OFDM systems. Venkatraman, G. (Ganesh), 14 September 2018 (links)
Abstract
This thesis focuses on two important challenges in wireless downlink transmission: multi-user (MU) precoder design and scheduling of users over time, frequency, and spatial resources at any given instant. Data streams intended for different users are transmitted by a multiple-input multiple-output (MIMO) multi-antenna orthogonal frequency division multiplexing (OFDM) system. The transmit precoders are designed jointly across space-frequency resources to minimize the number of backlogged packets waiting at the coordinating base stations (BSs), thereby implicitly performing user scheduling.
Then the problem of multicast beamformer design is considered, wherein a subset of users belonging to a multicasting group is served with common group-specific data. The design objective is either to minimize the transmit power for a guaranteed quality of service, or to maximize the minimum achievable rate among users for a given transmit power. Unlike existing techniques, the proposed design utilizes the spatial and frequency resources jointly when designing multi-group beamformers.
As an extension to coordinated precoding, the problem of beamformer design for cloud radio access network is considered wherein beamformers are designed centrally, quantized and sent along with data to the respective BSs via backhaul. Since the users can be served by multiple BSs, beamformer design becomes a nonconvex combinatorial problem. Unlike existing solutions, beamformer overhead is also included in the backhaul utilization along with the associated data. As the number of antennas increases, backhaul utilization is dominated by the beamformers. Thus, to reduce the overhead, two techniques are proposed: varying the quantization precision, and reducing the number of active antennas used for transmission.
Finally, to reduce the complexity involved in the joint space-frequency design, a two-step procedure is proposed, in which a MU-MIMO scheduling algorithm is employed to find a subset of users for each scheduling block. The precoders are then designed only for the chosen users, thus reducing the complexity without compromising much on throughput. In contrast to existing null-space-based techniques, a low-complexity scheduling algorithm based on vector projections is proposed. The real-time performance of all the schedulers is evaluated by implementing them on both Xilinx ZYNQ-ZC702 system-on-chip (SoC) and TI TCI6636K2H multi-core SoC platforms.
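A minimal sketch of projection-based greedy user selection of this general kind is given below (hypothetical implementation; the actual scheduler's metric and stopping rule may differ). Users whose channels are nearly orthogonal to those already selected retain a large projected norm and are therefore preferred.

import numpy as np

def greedy_schedule(H, k):
    """H: (num_users, num_tx_antennas) complex channel rows; pick k users."""
    selected, basis = [], []
    for _ in range(k):
        best_user, best_h, best_gain = None, None, -1.0
        for u in range(H.shape[0]):
            if u in selected:
                continue
            h = H[u].astype(complex)
            for q in basis:  # project out the directions already scheduled
                h = h - (q.conj() @ h) * q
            gain = np.linalg.norm(h)
            if gain > best_gain:
                best_user, best_h, best_gain = u, h, gain
        selected.append(best_user)
        basis.append(best_h / np.linalg.norm(best_h))
    return selected

Compared to recomputing null spaces from scratch for every candidate set, the running cost here is a handful of inner products per user, which is what makes such schedulers attractive for SoC implementations.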
|
189 |
Linear Parameter Varying (LPV) control of flexible robotic manipulators. Halalchi, Houssem, 13 September 2012 (links)
Flexible robots are becoming more and more common in practical applications. This type of robot is characterized by the use of lightweight materials, which reduces size and power consumption and improves safety. However, accurate trajectory tracking of these systems is difficult to achieve because of the transient vibrations they undergo. This PhD thesis is devoted to the position control of flexible robotic manipulators at the joint and end-effector levels. Advanced control methods, based on tools from robust control theory and convex optimization, are proposed. These methods rely on the theory of Linear Parameter Varying (LPV) systems and Linear Matrix Inequalities (LMIs). Compared to nonlinear control laws available in the literature that involve model inversion, the proposed LPV control laws make it possible to consider performance and robustness constraints in a simple and systematic manner. Our work places particular emphasis on the appropriate handling of the parametric dependence of the LPV model, especially polynomial and rational dependences. Numerical simulations carried out under realistic operating conditions have shown better robustness of the LPV control compared to inversion-based nonlinear control with respect to measurement noise, high-frequency inputs, and model uncertainties.
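To give a flavour of the LPV/LMI machinery, the following toy CVXPY script certifies quadratic stability of a two-vertex polytopic LPV system (a deliberately simplified sketch: the matrices are invented, and the thesis addresses far richer synthesis problems with polynomial and rational parameter dependence).

import numpy as np
import cvxpy as cp

# dx/dt = A(theta) x with A(theta) in conv{A1, A2}; quadratic stability
# requires one P > 0 with Ai^T P + P Ai < 0 at every polytope vertex.
A1 = np.array([[-1.0, 0.5], [0.0, -2.0]])
A2 = np.array([[-1.5, 1.0], [-0.2, -1.0]])

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2)]
constraints += [Ai.T @ P + P @ Ai << -eps * np.eye(2) for Ai in (A1, A2)]
cp.Problem(cp.Minimize(0), constraints).solve()
print("quadratically stable" if P.value is not None else "no certificate found")

Convexity in P is what makes performance and robustness constraints easy to stack: each additional requirement is just another LMI appended to the same feasibility problem.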
|
190 |
Adaptive signals recovery by convex optimization. Ostrovskii, Dmitrii, 11 January 2018 (links)
We consider the problem of denoising a signal observed in Gaussian noise. In this problem, classical linear estimators are quasi-optimal provided that the set of possible signals is convex, compact, and known a priori. However, when the set is unspecified, designing an estimator which does not "know" the underlying structure of the signal, yet has favorable theoretical guarantees of statistical performance, remains a challenging problem. In this thesis, we study a new family of estimators for the statistical recovery of signals satisfying certain time-invariance properties. Such signals are characterized by their harmonic structure, which is usually unknown in practice. We propose new estimators which are capable of exploiting the unknown harmonic structure of the signal to be reconstructed. We demonstrate that these estimators admit theoretical performance guarantees, in the form of oracle inequalities, in a variety of settings. We provide efficient algorithmic implementations of these estimators via first-order optimization algorithms with non-Euclidean geometry, and evaluate them on synthetic data, as well as on real-world signals and images.
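Schematically (notation ours, not the thesis'), the setting is the Gaussian observation model

\[
y_\tau = x_\tau + \sigma \xi_\tau, \qquad \xi_\tau \sim \mathcal{N}(0,1) \ \text{i.i.d.},
\]

and an estimator \(\widehat{x}\) is said to satisfy an oracle inequality over a reference class \(\Phi\) of linear filters if

\[
\mathbb{E}\,\|\widehat{x} - x\|_2^2 \;\le\; C \inf_{\varphi \in \Phi} \mathbb{E}\,\|\varphi * y - x\|_2^2 + R,
\]

that is, it matches the risk of the best filter in \(\Phi\), up to a constant factor C and a residual term R, without knowing that best filter in advance.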
|