191. Forêts aléatoires et sélection de variables : analyse des données des enregistreurs de vol pour la sécurité aérienne / Random forests and variable selection: analysis of the flight data recorders for aviation safety. Gregorutti, Baptiste, 11 March 2015
New recommendations require airlines to establish a safety management strategy to keep reducing the number of accidents. The flight data recorders have to be systematically analysed in order to identify, measure and monitor the evolution of risks. The aim of this thesis is to propose methodological tools to address the problem of flight data analysis. Our work revolves around two statistical topics: variable selection in supervised learning and functional data analysis.
Random forests are used because they implement importance measures which can be embedded in selection procedures. First, we study the permutation importance measure in the case where the variables are correlated. This criterion is then extended to groups of variables, and a new selection algorithm for functional variables is introduced. These methods are applied to the risks of long landing and hard landing, two important questions for airlines. Finally, we present the integration of the proposed methods in the software FlightScanner developed by Safety Line. This innovative solution for air transport helps safety managers to monitor risks and identify the contributing factors.
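The permutation importance criterion studied in this thesis can be sketched in a few lines. The example below is an illustrative toy (a fixed linear predictor stands in for a trained random forest; the data and coefficients are invented): importance is measured as the increase in prediction error after shuffling one feature column.

```python
import random

# Toy permutation importance: a fixed linear "model" stands in for a trained
# random forest (illustrative only). Importance of feature j = increase in
# mean squared error after randomly shuffling column j.
def model(x):
    return 3.0 * x[0] + 0.5 * x[1]  # assumed predictor: x[0] matters most

def mse(X, y, predict):
    return sum((predict(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, predict, j, rng):
    baseline = mse(X, y, predict)
    col = [x[j] for x in X]
    rng.shuffle(col)                 # break the link between feature j and y
    Xp = [list(x) for x in X]
    for row, v in zip(Xp, col):
        row[j] = v
    return mse(Xp, y, predict) - baseline

rng = random.Random(0)
X = [[rng.random(), rng.random()] for _ in range(500)]
y = [model(x) for x in X]            # noise-free target for clarity

imp0 = permutation_importance(X, y, model, 0, rng)
imp1 = permutation_importance(X, y, model, 1, rng)
# the strongly used feature gets the larger importance
```

Note that when features are correlated, as studied in the thesis, this naive criterion becomes harder to interpret, which motivates the grouped extension.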
192. Source-channel coding for wireless networks. Wernersson, Niklas, January 2006
The aim of source coding is to represent information as accurately as possible using as few bits as possible; in order to do so, redundancy from the source needs to be removed. The aim of channel coding is in some sense the contrary, namely to introduce redundancy that can be exploited to protect the information when it is transmitted over a nonideal channel. Combining these two techniques leads to the area of joint source-channel coding, which in general makes it possible to achieve better performance when designing a communication system than when source and channel codes are designed separately. In this thesis two particular areas in joint source-channel coding are studied: multiple description coding (MDC) and soft decoding. Two new MDC schemes are proposed and investigated. The first is based on sorting a frame of samples and transmitting, as side information/redundancy, an index that describes the resulting permutation. In case some of the transmitted descriptors are lost during transmission, this side information (if received) can be used to estimate the lost descriptors based on the received ones. The second scheme uses permutation codes to produce different descriptions of a block of source data. These descriptions can be used jointly to estimate the original source data. Finally, the MDC method of multiple description coding using pairwise correlating transforms, introduced by Wang et al., is also studied. A modification of the quantization in this method is proposed which yields a performance gain. A well-known result in joint source-channel coding is that the performance of a communication system can be improved by using soft decoding of the channel output, at the cost of higher decoding complexity. An alternative is to quantize the soft information and store the pre-calculated soft decision values in a lookup table.
In this thesis we propose new methods for quantizing soft channel information, to be used in conjunction with soft-decision source decoding. The issue of how best to construct finite-bandwidth representations of soft information is also studied.
193. The Impact of the COVID-19 Lockdown on the Urban Air Quality: A Machine Learning Approach. Bobba, Srinivas, January 2021
"SARS-CoV-2", the virus responsible for the current pandemic of COVID-19 disease, was first reported from Wuhan, China, on 31 December 2019. Since then, to prevent its propagation around the world, a set of rapid and strict countermeasures has been taken. While most researchers around the world have studied the effect of the COVID-19 lockdown on air quality and concluded that pollution was reduced, the most reliable methods for quantifying the reduction of pollutants in the air are still under debate. In this study, we analysed how COVID-19 lockdown procedures impacted air quality in selected cities around the world: New Delhi, Diepkloof, Wuhan, and London. The results show that the air quality index (AQI) improved by 43% in New Delhi, 18% in Wuhan, 15% in Diepkloof, and 12% in London during the initial lockdown, from 19 March 2020 to 31 May 2020, compared to the four-year pre-lockdown average. Furthermore, the concentrations of four main pollutants, i.e., NO2, CO, SO2, and PM2.5, were analyzed before and during the lockdown in India. The quantification of the pollution drop is supported by statistical measures such as the ANOVA test and the permutation test. Overall, decreases of 58%, 61%, 18% and 55% are observed in NO2, CO, SO2, and PM2.5 concentrations, respectively. To check whether a change in weather played any role in the reduction of pollution levels, we analyzed how weather factors correlate with pollutants using a correlation matrix. Finally, machine learning regression models are constructed to assess the lockdown impact on air quality in India by incorporating weather data. Gradient Boosting performed well in predicting the drop in PM2.5 concentration for individual cities in India.
Comparing the feature importance rankings of the regression models with the correlation factors for PM2.5, this study concludes that the COVID-19 lockdown had a significant effect on the natural environment and led to an improvement in air quality.
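The permutation test mentioned above can be sketched in a few lines; the pollutant values below are invented for illustration, not the thesis data:

```python
import random

# Minimal one-sided permutation test: is the observed drop in mean pollutant
# concentration between pre-lockdown and lockdown samples larger than
# expected under random relabelling of the observations?
def perm_test(a, b, n_perm=2000, seed=1):
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)          # random relabelling of the pooled samples
        diff = sum(pooled[:len(a)]) / len(a) - sum(pooled[len(a):]) / len(b)
        if diff >= observed:
            count += 1
    return count / n_perm            # one-sided p-value

pre = [52, 60, 58, 55, 61, 57, 59, 54]    # hypothetical pre-lockdown NO2 levels
lock = [23, 25, 21, 27, 24, 22, 26, 20]   # hypothetical lockdown NO2 levels
p = perm_test(pre, lock)
# a small p-value means the drop is not explained by chance
```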
194. Sémantický parsing nezávislý na uspořádání vrcholů / Permutation-Invariant Semantic Parsing. Samuel, David, January 2021
Deep learning has been successfully applied to semantic graph parsing in recent years. However, to the best of our knowledge, all graph-based parsers depend on a strong assumption about the ordering of graph nodes. This work explores a permutation-invariant approach to sentence-to-graph semantic parsing. We present a versatile, cross-framework, and language-independent architecture for universal modeling of semantic structures. To empirically validate our method, we participated in the CoNLL 2020 shared task, Cross-Framework Meaning Representation Parsing (MRP 2020), which evaluated the competing systems on five different frameworks (AMR, DRG, EDS, PTG, and UCCA) across four languages. Our parsing system, called PERIN, was one of the winners of this shared task. Thus, we believe that permutation invariance is a promising new direction in the field of semantic parsing.
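The core idea of permutation invariance can be illustrated with a toy loss that is minimized over all orderings of the target nodes (a brute-force sketch for tiny graphs; PERIN's actual objective and matching procedure differ):

```python
import itertools

# Toy permutation-invariant loss: the cost between predicted and gold graph
# nodes is minimized over all node orderings, so a parser is never penalized
# for emitting the same set of nodes in a different order.
def node_cost(p, g):
    return abs(p - g)  # stand-in for a per-node loss

def permutation_invariant_loss(pred, gold):
    # brute force over orderings; only feasible for very small node sets
    return min(
        sum(node_cost(p, g) for p, g in zip(pred, perm))
        for perm in itertools.permutations(gold)
    )

# the same gold nodes in two different orders yield the same (zero) loss
loss_a = permutation_invariant_loss([1.0, 5.0, 9.0], [9.0, 1.0, 5.0])
loss_b = permutation_invariant_loss([1.0, 5.0, 9.0], [1.0, 5.0, 9.0])
```

In practice, systems replace the factorial search with a polynomial-time assignment step (e.g. Hungarian matching), but the invariance property shown here is the same.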
195. Propriétés différentielles des permutations et application en cryptographie symétrique / Differential properties of permutations and application to symmetric cryptography. Suder, Valentin, 05 November 2014
The work carried out in this thesis lies at the interface of discrete mathematics, finite fields and symmetric cryptography. In block ciphers, as well as in hash functions, S-boxes are small non-linear functions that form the indispensable confusion layer. In the first part of this document, we are interested in the design of bijective S-boxes with the best resistance to differential attacks.
We study the compositional inverse of the so-called Almost Perfect Nonlinear power functions. Then, we extensively study a class of sparse permutation polynomials with low differential uniformity. Finally, we build functions over finite fields from their discrete derivatives. In the second part, we carry out an automatic study of a certain class of differential attacks: impossible differential cryptanalysis. This chosen-plaintext attack has been shown to be very efficient against iterative block ciphers. It exploits the knowledge of a differential that occurs with probability zero to discard candidate keys. However, this cryptanalysis is very technical and many flaws have been discovered, invalidating several attacks realized in the past. Our goal is to formalize, improve and automate the complexity evaluation in order to unify and optimize the obtained results. We also propose new techniques that aim at reducing the necessary data and time complexities. We finally prove the efficiency of our approach by providing some of the best impossible differential cryptanalyses against the Feistel-oriented block ciphers CLEFIA, Camellia, LBlock and Simon.
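Resistance to differential attacks is commonly quantified by the differential uniformity of an S-box, which can be computed directly from its difference distribution table. A minimal sketch follows (the 4-bit S-box shown is an arbitrary illustrative permutation, not one from the thesis):

```python
# Differential uniformity of an S-box: the largest entry of its difference
# distribution table over all nonzero input differences. Lower is better
# for resistance to differential cryptanalysis.
def differential_uniformity(sbox):
    n = len(sbox)
    best = 0
    for a in range(1, n):                       # nonzero input difference
        counts = [0] * n
        for x in range(n):
            counts[sbox[x ^ a] ^ sbox[x]] += 1  # output difference tally
        best = max(best, max(counts))
    return best

# an arbitrary 4-bit permutation used purely for illustration
sbox = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]
du = differential_uniformity(sbox)
# the identity map is maximally weak: every input difference a maps to a
```

For 4-bit bijective S-boxes the best achievable value is 4; solutions pair up as (x, x ^ a), so every table entry, and hence the uniformity, is even.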
196. Addressing nonlinear systems with information-theoretical techniques. Castelluzzo, Michele, 07 July 2023
The study of experimental recordings of dynamical systems often consists in the analysis of signals produced by those systems. Time series analysis comprises a wide range of methodologies ultimately aiming at characterizing the signals and, eventually, gaining insight into the underlying processes that govern the evolution of the system. A standard way to tackle this issue is spectrum analysis, which uses Fourier or Laplace transforms to convert time-domain data into a more useful frequency space. These analytical methods make it possible to highlight periodic patterns in the signal and to reveal essential characteristics of linear systems. Most experimental signals, however, exhibit strange and apparently unpredictable behavior, which requires more sophisticated analytical tools in order to gain insight into the nature of the underlying processes generating those signals. This is the case when nonlinearity enters the dynamics of a system. Nonlinearity gives rise to unexpected and fascinating behavior, among which is the emergence of deterministic chaos. In recent decades, chaos theory has become a thriving field of research for its potential to explain complex and seemingly inexplicable natural phenomena. The peculiarity of chaotic systems is that, despite being governed by deterministic principles, their evolution shows unpredictable behavior and a lack of regularity. These characteristics make standard techniques, like spectrum analysis, ineffective when trying to study such systems. Furthermore, the irregular behavior gives the appearance that these signals are governed by stochastic processes, even more so when dealing with experimental signals that are inevitably affected by noise. Nonlinear time series analysis comprises a set of methods which aim at overcoming the strange and irregular evolution of these systems by measuring characteristic invariant quantities that describe the nature of the underlying dynamics.
Among those quantities, the most notable are possibly the Lyapunov exponents, which quantify the unpredictability of the system, and measures of dimension, like the correlation dimension, which unravel the peculiar geometry of a chaotic system's state space. These are ultimately analytical techniques, which can often be estimated exactly in the case of simulated systems, where the differential equations governing the system's evolution are known, but can nonetheless prove difficult or even impossible to compute on experimental recordings. A different approach to signal analysis is provided by information theory. Despite being initially developed in the context of communication theory, through the seminal work of Claude Shannon in 1948, information theory has since become a multidisciplinary field, finding applications in biology and neuroscience, as well as in the social sciences and economics. From the physical point of view, the most phenomenal contribution of Shannon's work was the discovery that entropy is a measure of information, and that computing the entropy of a sequence, or a signal, answers the question of how much information is contained in the sequence. Alternatively, considering the source, i.e. the system that generates the sequence, entropy gives an estimate of how much information the source is able to produce. Information theory comprises a set of techniques which can be applied to study, among others, dynamical systems, offering a complementary framework to standard signal analysis techniques. The concept of entropy, however, was not new in physics, since it had first been defined in the deeply physical context of heat exchange in thermodynamics in the 19th century. Half a century later, in the context of statistical mechanics, Boltzmann revealed the probabilistic nature of entropy, expressing it in terms of statistical properties of the particles' motion in a thermodynamic system.
A first link between entropy and the dynamical evolution of a system was thus made. In the following years, after Shannon's work, the concept of entropy was further developed through the works of, to cite only a few, Von Neumann and Kolmogorov, and used as a tool in computer science and complexity theory. It is in particular in Kolmogorov's work that information theory and entropy are revisited from an algorithmic perspective: given an input sequence and a universal Turing machine, Kolmogorov found that the length of the shortest set of instructions, i.e. the program, that enables the machine to compute the input sequence is related to the sequence's entropy. This definition of the complexity of a sequence already hints at the difference between random and deterministic signals: a truly random sequence requires a program as long as the sequence itself, since there is no option other than instructing the machine to copy the sequence point by point. On the other hand, a sequence generated by a deterministic system simply requires knowing the rules governing its evolution, for example the equations of motion in the case of a dynamical system. It is therefore through the work of Kolmogorov, and independently Sinai, that entropy was directly applied to the study of dynamical systems and, in particular, deterministic chaos. The so-called Kolmogorov-Sinai entropy is, in fact, a well-established measure of how complex and unpredictable a dynamical system can be, based on the analysis of trajectories in its state space. In recent decades, the use of information theory in signal analysis has contributed to the elaboration of many entropy-based measures, such as sample entropy, transfer entropy, mutual information and permutation entropy, among others.
These quantities make it possible to characterize not only single dynamical systems, but also to highlight correlations between systems and even more complex interactions like synchronization and chaos transfer. The wide spectrum of applications of these methods, as well as the need for theoretical studies to provide them with a sound mathematical background, make information theory a still-thriving topic of research. In this thesis, I approach the use of information theory on dynamical systems starting from fundamental issues, such as estimating the uncertainty of Shannon entropy measures on a sequence of data in the case of an underlying memoryless stochastic process. This result, besides giving insight into sensitive and still-unsolved aspects of entropy-based measures, provides a relation between the maximum uncertainty of Shannon entropy estimations and the size of the available sequences, thus serving as a practical rule for experiment design. Furthermore, I investigate the relation between entropy and characteristic quantities in nonlinear time series analysis, namely Lyapunov exponents. Examples of this analysis on recordings of a nonlinear chaotic system are also provided. Finally, I discuss other entropy-based measures, among them mutual information, and how they compare to analytical techniques aimed at characterizing nonlinear correlations between experimental recordings. In particular, the complementarity between information-theoretical tools and analytical ones is shown on experimental data from the field of neuroscience, namely magnetoencephalography and electroencephalography recordings, as well as meteorological data.
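One of the entropy-based measures mentioned, permutation entropy, is simple enough to sketch in full: it is the Shannon entropy of the ordinal patterns of consecutive samples, normalized to [0, 1]. The example below uses invented toy series:

```python
import math
import random
from collections import Counter

# Permutation entropy (Bandt-Pompe style): count the ordinal patterns of
# windows of `order` consecutive samples, then take the normalized Shannon
# entropy of the pattern distribution.
def permutation_entropy(series, order=3):
    patterns = Counter(
        tuple(sorted(range(order), key=lambda k: series[i + k]))
        for i in range(len(series) - order + 1)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * math.log2(c / total) for c in patterns.values())
    return h / math.log2(math.factorial(order))  # normalize to [0, 1]

monotone = list(range(100))                      # fully predictable signal
rng = random.Random(0)
noisy = [rng.random() for _ in range(1000)]      # close to maximal entropy

h_low = permutation_entropy(monotone)            # single ordinal pattern
h_high = permutation_entropy(noisy)
```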
197. Permutation als kompositions- und analysetechnische Aufgabe: Jörg Herchets komposition für vier klaviere (2001) / Permutation as a compositional and analytical task: Jörg Herchet's komposition für vier klaviere (2001). Herchet, Jörg; Weißgerber, Lydia, 28 October 2024
No description available.
198. Inteligencia computacional en la programación de la producción con recursos adicionales / Computational intelligence in production scheduling with additional resources. Alfaro Fernández, Pedro, 26 October 2023
In this Doctoral Thesis, the permutation flowshop problem is addressed considering additional renewable resources, a more realistic version of the classic permutation flowshop problem that is widely studied in the literature. The inclusion of resources helps to bring the academic-scientific world closer to the real world of industry.
A complete bibliographic review has been carried out, not limited to flowshop problems but also covering similar scheduling problems that consider resources. In this review, no articles were found in the literature for the specific problem studied in this thesis. Therefore, the main contribution of this Doctoral Thesis is the first study of this problem and the proposal and adaptation of methods for its resolution. Initially, the problem is modeled through a mixed integer linear programming (MILP) model. Given the complexity of the problem, the MILP is only capable of solving very small instances. Therefore, it is necessary to adapt, design and implement constructive heuristics and metaheuristics to obtain good solutions in a reasonable computation time. In order to evaluate the effectiveness and efficiency of the proposed methods, problem instances are generated starting from the sets most used in the literature for the permutation flowshop. These instances are used both to calibrate the different methods and to evaluate their performance through massive computational experiments. The experiments show that the proposed heuristics are simple methods that achieve feasible solutions very quickly. To improve the solutions obtained with the heuristics and facilitate movement to other solution spaces, three metaheuristics are proposed: a method based on iterated local search (ILS), an iterated greedy method (IG) and a hybrid genetic algorithm (HGA). All of them use the most effective proposed heuristics as initial solution or solutions. The metaheuristics obtain the best solutions using reasonable computation times, even for the largest instances. All the methods have been implemented within the FACOP platform (Framework for Applied Combinatorial Optimization Problems).
This platform is capable of incorporating new optimization algorithms for operations research problems related to decision-making in organizations, and it is designed to address real cases in companies. Incorporating all the methodologies proposed in this Doctoral Thesis into this platform brings the academic world closer to the business world. / Alfaro Fernández, P. (2023). Inteligencia computacional en la programación de la producción con recursos adicionales [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/198891
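The permutation flowshop objective and a constructive insertion heuristic in the NEH style can be sketched as follows (a simplification: the additional resource constraints studied in the thesis are omitted, and the processing times are invented):

```python
import itertools

# Makespan of a job permutation in a flowshop: c[k] is the completion time
# on machine k of the last scheduled job.
def makespan(seq, p):
    m = len(p[0])
    c = [0] * m
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

# NEH-style constructive heuristic: order jobs by decreasing total processing
# time, then insert each job at its best position in the partial sequence.
def neh(p):
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    seq = []
    for j in jobs:
        seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                  key=lambda s: makespan(s, p))
    return seq

p = [[3, 4, 2], [2, 5, 1], [4, 1, 3], [1, 2, 5]]  # 4 jobs x 3 machines
seq = neh(p)
# brute-force optimum, feasible only for tiny instances
best_exact = min(makespan(list(s), p) for s in itertools.permutations(range(4)))
```

Metaheuristics such as ILS and IG then improve a sequence like this one by perturbation (or destruction and greedy reinsertion) followed by local search.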
199. Pattern posets: enumerative, algebraic and algorithmic issues. Cervetti, Matteo, 22 March 2021
The study of patterns in combinatorial structures has grown over the past few decades into one of the most active trends of research in combinatorics. Historically, the study of permutations constrained by not containing subsequences ordered in various prescribed ways was motivated by the problem of sorting permutations with certain devices. However, the richness of this notion became especially evident from its plentiful appearances in several very different disciplines, such as pure mathematics, mathematical physics, computer science, biology, and many others. In the last decades, similar notions of patterns have been considered on discrete structures other than permutations, such as integer sequences, lattice paths, graphs, matchings and set partitions. In the first part of this talk I will introduce the general framework of pattern posets and some classical problems about patterns. In the second part I will present some enumerative results obtained in my PhD thesis about patterns in permutations, lattice paths and matchings. In particular, I will describe a generating tree with a single label for permutations avoiding the vincular pattern 1-32-4, a finite automata approach to enumerating lattice excursions avoiding a single pattern, and some results about matchings avoiding juxtapositions and liftings of patterns.
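Classical pattern containment, the basic notion underlying pattern posets, can be checked by brute force on small permutations (an illustrative sketch, not an efficient algorithm):

```python
from itertools import combinations

# A permutation contains a pattern if some subsequence of its values has
# the same relative order as the pattern. Brute force: test every
# subsequence of the right length.
def contains(perm, pattern):
    k = len(pattern)
    rank = {v: i for i, v in enumerate(sorted(pattern))}
    target = tuple(rank[v] for v in pattern)   # relative order of the pattern
    for idx in combinations(range(len(perm)), k):
        vals = [perm[i] for i in idx]
        r = {v: i for i, v in enumerate(sorted(vals))}
        if tuple(r[v] for v in vals) == target:
            return True
    return False

# 3 1 4 2 contains the descent pattern 2-1 (e.g. the subsequence 3, 1);
# a strictly increasing permutation avoids it
```

Pattern posets order permutations (or lattice paths, matchings, etc.) by exactly this containment relation.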
200. Carleman Linearization for Nonlinear Systems and Their Explicit Error Bounds. Chen, Panpan, 01 January 2024
Carleman linearization has been widely employed in mathematical modeling and control theory, and it can be used to investigate the stability of nonlinear systems, particularly in situations requiring higher-order accuracy. The essence of Carleman linearization is to lift a finite-dimensional nonlinear system to an infinite-dimensional linear system.
We consider nonlinear dynamical systems with periodic vector fields with multiple fundamental frequencies. We employ Fourier basis functions for Carleman-Fourier linearization, a method that transforms these systems into an infinite-dimensional linear model with an unbounded operator. This approach provides more accurate approximations than classical Carleman linearization, particularly in larger regions around the equilibrium and over extended time periods. For certain classes of systems, we demonstrate that exponential convergence is achievable across the entire time horizon. Our results are practically significant, as the proposed error bound estimates can guide the choice of truncation lengths for various applications, such as determining the appropriate sampling period for model predictive control, conducting reachability analysis for safety verification, and developing efficient algorithms for quantum computing.
To apply Carleman linearization to engineering applications, we must address both its accuracy and its efficiency, as the latter poses a significant challenge. We have proposed a permutation-equivariant Carleman linearization, called PeCaL, which reduces the dimension of the finite-section approximation of Carleman linearization when the nonlinear system is permutation-equivariant. We compare the time costs of classical Carleman linearization and PeCaL at the same truncation order through simulations, along with explicit error bounds as in previous works.
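The lifting at the heart of Carleman linearization can be sketched for the scalar ODE x' = a*x + b*x^2: setting y_k = x^k gives the linear system y_k' = k*a*y_k + k*b*y_{k+1}, truncated below at order N and integrated with explicit Euler (the parameters are illustrative; this is not the Carleman-Fourier construction of the thesis):

```python
# One Euler step of the truncated Carleman system for x' = a*x + b*x^2,
# with state y = (y_1, ..., y_N) and y_k = x^k; the term y_{N+1} is dropped,
# which is exactly the finite-section (truncation) error.
def carleman_step(y, a, b, dt):
    N = len(y)
    out = list(y)
    for k in range(1, N + 1):
        up = y[k] if k < N else 0.0          # truncate: drop y_{N+1}
        out[k - 1] += dt * (k * a * y[k - 1] + k * b * up)
    return out

def simulate(x0, a, b, dt, steps, N):
    y = [x0 ** k for k in range(1, N + 1)]   # lifted initial condition
    x = x0
    for _ in range(steps):
        y = carleman_step(y, a, b, dt)
        x = x + dt * (a * x + b * x * x)     # direct Euler on the nonlinear ODE
    return y[0], x

# near the equilibrium x = 0 the truncated linear model tracks the
# nonlinear integration closely
lin, ref = simulate(x0=0.1, a=-1.0, b=0.5, dt=0.01, steps=200, N=6)
```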