631

Expected Complexity and Gradients of Deep Maxout Neural Networks and Implications to Parameter Initialization

Tseran, Hanna 10 November 2023 (has links)
Learning with neural networks depends on the particular parametrization of the functions represented by the network, that is, the assignment of parameters to functions. It also depends on which functions are realized by typical parameters at initialization and, later, by the parameters that arise during training. The choice of the activation function is a critical aspect of network design that influences these function properties and requires investigation. This thesis focuses on analyzing the expected behavior of networks with maxout (multi-argument) activation functions. Besides enhancing the practical applicability of maxout networks, these findings add to the theoretical exploration of activation functions beyond the common choices. We believe this work can advance the study of activation functions and complicated neural network architectures. We begin by taking the number of activation regions as a complexity measure and showing that the practical complexity of deep networks with maxout activation functions is often far from the theoretical maximum. This analysis extends previous results that held for deep neural networks with single-argument activation functions such as ReLU. Additionally, we demonstrate that a similar phenomenon occurs when considering the decision boundaries in classification tasks. We also show that the parameter space has a multitude of full-dimensional regions with widely different complexity, and we obtain nontrivial lower bounds on the expected complexity. Finally, we investigate different parameter initialization procedures and show that they can increase the speed of gradient descent convergence in training. Continuing the investigation of the expected behavior, we then study the gradients of a maxout network with respect to inputs and parameters and obtain bounds on their moments depending on the architecture and the parameter distribution. We observe that the distribution of the input-output Jacobian depends on the input, which complicates a stable parameter initialization. Based on the moments of the gradients, we formulate parameter initialization strategies that avoid vanishing and exploding gradients in wide networks. Experiments with deep fully-connected and convolutional networks show that this strategy improves SGD and Adam training of deep maxout networks. In addition, we obtain refined bounds on the expected number of linear regions, results on the expected curve length distortion, and results on the neural tangent kernel (NTK). As part of this research, we develop multiple experiments and helpful components and make their code publicly available.
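
For context, a maxout unit computes the maximum of K affine pre-activations, in contrast to single-argument activations such as ReLU. The following is a minimal NumPy sketch of a maxout layer with a He-style Gaussian initialization; the scaling constant c is a generic placeholder, not the maxout-specific constant derived in the thesis.

```python
import numpy as np

def maxout_layer(x, W, b):
    """One maxout layer: each output unit is the max of K affine maps.

    W has shape (K, fan_out, fan_in); b has shape (K, fan_out)."""
    pre = np.einsum('kof,f->ko', W, x) + b  # K pre-activations per unit
    return pre.max(axis=0)                  # elementwise max over the K maps

def init_maxout(fan_in, fan_out, K, c=1.0, rng=np.random.default_rng(0)):
    """He-style Gaussian initialization, W ~ N(0, c / fan_in).

    The thesis derives a maxout-specific scaling from the gradient
    moments; c=1.0 here is a placeholder, not that constant."""
    W = rng.normal(0.0, np.sqrt(c / fan_in), size=(K, fan_out, fan_in))
    b = np.zeros((K, fan_out))
    return W, b

W, b = init_maxout(fan_in=4, fan_out=3, K=5)
print(maxout_layer(np.ones(4), W, b).shape)  # (3,)
```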
632

Factors Affecting the Perceived Rhythmic Complexity of Auditory Rhythms

Vinke, Louis Nicholas 26 April 2010 (has links)
No description available.
633

Complexity management in variant-rich product development

Vogel, Wolfgang 10 December 2019 (has links)
Complexity is the paradigm of the 21st century and has been discussed in several fields of research. In recent years, increasing complexity in manufacturing companies has been one of the biggest issues in science and practice. Companies in high-technology marketplaces are confronted with technology innovations, dynamic environmental conditions, changing customer requirements, globalization of markets and competition, and market uncertainty, all of which induce an increasing amount of complexity. Manufacturing companies cannot escape these trends. In today's highly competitive environment, a company's success depends on developing and launching new products quickly, tailored to individual customer requirements. Companies cope with these trends by developing new product variants, which leads to increased complexity in the company and in product development. Complexity is influenced by internal and external sources of complexity, so-called complexity drivers, which affect companies and the entire value chain. Managing a system's complexity requires an optimal fit between internal and external complexity. Identifying, analyzing and understanding complexity drivers is the first step in developing and implementing complexity management. For managing and optimizing a company's complexity, a vast number of different single approaches are applied for different purposes. Complexity management is a strategic issue for companies seeking to remain competitive. The most important strategies underlying the application of single approaches are complexity reduction, mastering and avoidance. Complexity management requires approaches for understanding, simplifying, transforming and evaluating complexity. A successful complexity management approach enables a balance between the external market's complexity and the internal company's complexity. The purpose of this dissertation is to close existing gaps in the scientific literature by providing a complexity management approach for variant-rich product development. Therefore, a systematic literature review was performed on the issues 'complexity drivers in manufacturing companies and along the value chain and their effects on company complexity', 'application of specific single approaches and their targeted strategy', and 'approaches for complexity management and especially for resource planning'. An empirical study was conducted to document the current state of the German manufacturing industry regarding the issues 'complexity drivers in product development and their effects on company complexity' and 'application of specific single approaches for complexity management'. The empirical data were collected through questionnaires between 2015 and 2016, and the empirical findings are compared with the literature to identify commonalities and differences. Based on the results from the literature, a new general approach for managing complexity in variant-rich product development was developed that brings the relevant steps for handling complexity into a sequence. In this approach, complexity in product development is systematically analyzed and evaluated to create the conditions for target-oriented management and control of complexity. Furthermore, the general complexity management approach is modified and structurally optimized to establish a target-oriented approach for resource planning in variant-rich product development. The new approaches are applied in the automotive industry to verify the results and the approach's applicability.
634

Communication Complexity over a Channel with Delay

Lapointe, Rébecca 02 1900 (has links)
We introduce a new communication complexity model in which we want to determine how much communication time two players need in order to execute arbitrary tasks on a channel with delay d. We establish a few basic lower and upper bounds and compare this new model to existing models such as the classical and quantum two-party models of communication. We show that the standard communication complexity of a function, up to a multiplicative factor of d/lg d, constitutes an upper bound on its communication complexity on a delayed channel. We then present a few examples of functions for which a clever strategy exploiting the idle time provides a significant advantage over the naïve implementation of an optimal communication protocol. Finally, we show that a delayed channel can be used to implement a cryptographic bit swap, but is insufficient on its own to implement the cryptographic primitive of oblivious transfer.
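
In symbols, and with notation assumed here rather than taken from the thesis (writing CC(f) for the standard two-party communication complexity of f, and T_d(f) for the time needed to compute f over a channel with delay d), the upper bound stated above reads:

```latex
% Notation assumed for this note, not the thesis's own:
% CC(f)  -- standard two-party communication complexity of f
% T_d(f) -- time to compute f over a channel with delay d
T_d(f) \;\le\; O\!\left(\frac{d}{\lg d}\right) \cdot CC(f)
```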
636

Exploring complexity metrics for artifact-centric business process models

Marin, Mike A. 02 1900 (has links)
This study explores complexity metrics for business artifact process models described by the Case Management Model and Notation (CMMN). Process models are usually described using Business Process Management (BPM), which is a relatively mature discipline with a large number of practitioners. Over the last few decades a new way of describing data-intensive business processes has emerged in the BPM literature, for which traditional BPM is no longer adequate. This emerging method, used to describe more flexible processes, is called business artifacts with Guard-Stage-Milestone (GSM). The work on GSM influenced CMMN, which was created to fill a market need for more flexible case management processes for knowledge workers. Complexity metrics have been developed for traditional BPM models, such as the Business Process Model and Notation (BPMN). However, traditional BPM is not suitable for describing GSM or CMMN process models. Therefore, complexity metrics developed for traditional process models may not be applicable to business artifact process models such as CMMN. This study addresses this gap by exploring complexity metrics for business artifact process models using CMMN. The findings of this study have practical implications for the CMMN standard and for the commercial products implementing CMMN. This research makes the following contributions:
• The development of a formal description of CMMN using first-order logic.
• An exploration of the relationship between CMMN and GSM and the development of transformation procedures between them.
• A comparison between the method complexity of CMMN and other popular process methods, including BPMN, Unified Modeling Language (UML) activity diagrams, and Event-driven Process Chains (EPC).
• A systematic literature review of complexity metrics for process models, conducted to inform the creation of CMMN metrics.
• The identification of a set of complexity metrics for the CMMN standard, which underwent theoretical and empirical validation.
This research advances the literature in the areas of method complexity, complexity metrics for process models, declarative processes, and research on CMMN by characterizing CMMN method complexity, identifying complexity metrics for CMMN, and exploring the relationship between CMMN and GSM. / Ph.D. (Computer Science)
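
As a toy illustration of the simplest kind of metric such a study must go beyond, the sketch below counts the elements of a CMMN-like case model by type; both the list-of-tuples encoding and the size metric are hypothetical, not the metrics identified in the thesis.

```python
from collections import Counter

# Hypothetical encoding of a CMMN-like case model as (element_type, name)
# pairs; both the encoding and the metric are illustrative only.
case_model = [
    ("stage", "Review claim"),
    ("task", "Collect documents"),
    ("task", "Assess claim"),
    ("milestone", "Claim assessed"),
    ("sentry", "on Collect documents complete"),
]

def size_metric(elements):
    """The simplest complexity proxy: element counts by type."""
    by_type = Counter(kind for kind, _ in elements)
    return sum(by_type.values()), dict(by_type)

total, by_type = size_metric(case_model)
print(total, by_type)  # 5 {'stage': 1, 'task': 2, 'milestone': 1, 'sentry': 1}
```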
637

Resolving the Complexity of Some Fundamental Problems in Computational Social Choice

Dey, Palash January 2016 (has links) (PDF)
In many real-world situations, especially those involving multiagent systems and artificial intelligence, participating agents often need to agree upon a common alternative even if they have differing preferences over the available alternatives. Voting is one of the tools of choice in these situations. Common and classic applications of voting include collaborative filtering and recommender systems, metasearch engines, and coordination and planning among multiple automated agents. Agents in these applications usually have computational power at their disposal, which makes the study of computational aspects of voting crucial. This thesis is devoted to a study of the computational complexity of several fundamental algorithmic and complexity-theoretic problems arising in the context of voting theory. The typical setting for our work is an "election": an election consists of a set of voters or agents, a set of alternatives, and a voting rule. The vote of any agent can be thought of as a ranking (more precisely, a complete order) of the set of alternatives. A voting profile comprises the collection of votes of all the agents. Finally, a voting rule is a mapping that takes as input a voting profile and outputs an alternative, which is called the "winner" or "outcome" of the election. Our contributions in this thesis can be categorized into three parts, described below.

Part I: Preference Elicitation. In the first part of the thesis, we study the problem of eliciting the preferences of a set of voters by asking a small number of comparison queries (for example, asking which of two given alternatives a voter prefers) for various interesting domains of preferences. We begin by considering the domain of single-peaked preferences on trees in Chapter 3. This domain is a significant generalization of the classical, well-studied domain of single-peaked preferences. The domain of single-peaked preferences and its generalizations are hugely popular among political and social scientists. We show tight dependencies between the query complexity of preference elicitation and various parameters of the single-peaked tree, for example, the number of leaves, the diameter, the pathwidth, and the maximum degree of a node. We next consider preference elicitation for the domain of single-crossing preference profiles in Chapter 4. This domain has also been studied extensively by political scientists, social choice theorists, and computer scientists. We establish that the query complexity of preference elicitation in this domain crucially depends on how the votes are accessed and on whether or not a single-crossing ordering is known a priori.

Part II: Winner Determination. In the second part of the thesis, we study the computational complexity of several important problems related to determining the winner of an election. We begin with the following problem: given an election, predict its winner under some fixed voting rule by sampling as few votes as possible. We establish optimal or almost optimal bounds on the number of votes one needs to sample for many commonly used voting rules when the margin of victory is at least εn (where n is the number of voters and ε is a parameter). We next study efficient sampling-based algorithms for estimating the margin of victory of a given election for many common voting rules. The margin of victory of an election is a useful measure that captures the robustness of an election outcome. These two works are presented in Chapter 5.

In Chapter 6, we design an optimal algorithm for determining the plurality winner of an election when the votes arrive one by one in a streaming fashion. This resolves an intriguing question on finding heavy hitters in a stream of items that had remained open for more than 35 years in the data-stream literature. We also provide near-optimal algorithms for determining the winner of a stream of votes for other popular voting rules, for example, veto, Borda, and maximin. Voters' preferences are often partial orders instead of complete orders; this is known as the incomplete-information setting in computational social choice theory. In this setting, an extensively studied extension of the winner determination problem is the problem of determining possible winners. We study the kernelization complexity (in the framework of parameterized complexity) of the possible winner problem in Chapter 7. We show that, for commonly used voting rules, this problem does not admit kernels of size polynomial in the number of alternatives under a plausible complexity-theoretic assumption. However, we also show that the problem of coalitional manipulation, an important special case of the possible winner problem, admits a kernel whose size is polynomially bounded in the number of alternatives for common voting rules.

Part III: Election Control. In the final part of the thesis, we study the computational complexity of various interesting aspects of strategic behaviour in voting. First, we consider the impact of partial information in the context of strategic manipulation in Chapter 8. We show that lack of complete information makes the computational problem of manipulation intractable for many commonly used voting rules. In Chapter 9, we initiate the study of the computational problem of detecting possible instances of election manipulation. We show that detecting manipulation may be computationally easy under certain scenarios, even when manipulation itself is intractable. The computational problem of bribery is an extensively studied problem in computational social choice theory. We study the computational complexity of bribery when the briber is "frugal" in nature, and show for many common voting rules that the bribery problem remains intractable even then, thereby strengthening the intractability results from the literature. This forms the subject of Chapter 10.
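
For context on the heavy-hitters question resolved in Chapter 6, here is the textbook Misra-Gries baseline for finding frequent items in a stream; this is only the classic starting point, not the thesis's optimal algorithm.

```python
def misra_gries(stream, k):
    """Textbook Misra-Gries sketch: with k-1 counters, every item that
    occurs more than len(stream)/k times survives as a candidate.
    (A second pass over the stream confirms the exact counts.)"""
    counters = {}
    for vote in stream:
        if vote in counters:
            counters[vote] += 1
        elif len(counters) < k - 1:
            counters[vote] = 1
        else:
            # Decrement every counter; evict those that reach zero.
            for c in list(counters):
                counters[c] -= 1
                if counters[c] == 0:
                    del counters[c]
    return counters

votes = ["a", "b", "a", "c", "a", "a", "b"]
print(misra_gries(votes, k=2))  # {'a': 1}: 'a' is the only majority candidate
```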
638

Application of Complexity Measures to Stratospheric Dynamics

Krützmann, Nikolai Christian January 2008 (has links)
This thesis examines the utility of mathematical complexity measures for the analysis of stratospheric dynamics. Through theoretical considerations and tests with artificial data sets, e.g., the iteration of the logistic map, suitable parameters are determined for the application of the statistical entropy measures sample entropy (SE) and Rényi entropy (RE) to methane (a long-lived stratospheric tracer) data from simulations of the SOCOL chemistry-climate model. The SE is shown to be useful for quantifying the variability of recurring patterns in a time series and is able to identify tropical patterns similar to those reported by previous studies of the "tropical pipe" region. However, the SE is found to be unsuitable for use in polar regions, due to the non-stationarity of the methane data at extra-tropical latitudes. It is concluded that the SE cannot be used to analyse climate complexity on a global scale. The focus is turned to the RE, which is a complexity measure of probability distribution functions (PDFs). Using the second order RE and a normalisation factor, zonal PDFs of ten consecutive days of methane data are created with a Bayesian optimal binning technique. From these, the RE is calculated for every day (moving 10-day window). The results indicate that the RE is a promising tool for identifying stratospheric mixing barriers. In Southern Hemisphere winter and early spring, RE produces patterns similar to those found in other studies of stratospheric mixing. High values of RE are found to be indicative of the strong fluctuations in tracer distributions associated with relatively unmixed air in general, and with gradients in the vicinity of mixing barriers, in particular. Lower values suggest more thoroughly mixed air masses. The analysis is extended to eleven years of model data. Realistic inter-annual variability of some of the RE structures is observed, particularly in the Southern Hemisphere. By calculating a climatological mean of the RE for this period, additional mixing patterns are identified in the Northern Hemisphere. The validity of the RE analysis and its interpretation is underlined by showing that qualitatively similar patterns can be seen when using observational satellite data of a different tracer. Compared to previous techniques, the RE has the advantage that it requires significantly less computational effort, as it can be used to derive dynamical information from model or measurement tracer data without relying on any additional input such as wind fields. The results presented in this thesis strongly suggest that the RE is a useful new metric for analysing stratospheric mixing and its variability from climate model data. Furthermore, it is shown that the RE measure is very robust with respect to data gaps, which makes it ideal for application to observations. Hence, using the RE for comparing observations of tracer distributions with those from model simulations potentially presents a novel approach for analysing mixing in the stratosphere.
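
As a minimal sketch of the quantity involved, the second-order Rényi entropy of a binned distribution is H2 = -log(sum_i p_i^2); the snippet below estimates it from a sample and normalises by the maximum value log(bins). It assumes a fixed bin count for simplicity, rather than the Bayesian optimal binning employed in the thesis.

```python
import numpy as np

def renyi_entropy_2(samples, bins=64):
    """Normalised second-order Renyi entropy, H2 = -log(sum p_i^2).

    Uses a fixed number of histogram bins; the thesis instead applies
    a Bayesian optimal binning technique. Dividing by log(bins) caps
    the result at 1 (attained by a uniform distribution)."""
    hist, _ = np.histogram(samples, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.log(np.sum(p ** 2)) / np.log(bins)

rng = np.random.default_rng(1)
print(renyi_entropy_2(rng.normal(size=10_000)))  # spread-out sample: ~0.8
print(renyi_entropy_2(np.full(10_000, 0.5)))     # concentrated sample: ~0
```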
639

Computational Intelligence and Complexity Measures for Chaotic Information Processing

Arasteh, Davoud 16 May 2008 (has links)
This dissertation investigates the application of computational intelligence methods in the analysis of nonlinear chaotic systems in the framework of many known and newly designed complex systems. Parallel comparisons are made between these methods. This provides insight into the difficult challenges facing nonlinear systems characterization and aids in developing a generalized algorithm for computing algorithmic complexity measures, Lyapunov exponents, information dimension and topological entropy. These metrics are implemented to characterize the dynamic patterns of discrete and continuous systems, and they make it possible to distinguish order from disorder in these systems. Steps required for computing Lyapunov exponents with a reorthonormalization method and a group theory approach are formalized. Procedures for implementing computational algorithms are designed and numerical results for each system are presented. The advance-time sampling technique is designed to overcome the scarcity of phase space samples and the buffer overflow problem in algorithmic complexity measure estimation in slow-dynamics feedback-controlled systems. It is proved analytically and tested numerically that for a quasiperiodic system like a Fibonacci map, complexity grows logarithmically with the evolutionary length of the data block. It is concluded that a normalized algorithmic complexity measure can be used as a system classifier. This quantity turns out to be one for random sequences and a non-zero value less than one for chaotic sequences. For periodic and quasi-periodic responses, as data strings grow, their normalized complexity approaches zero, while a faster decreasing rate is observed for periodic responses. Algorithmic complexity analysis is performed on a class of certain rate convolutional encoders. The degree of diffusion in random-like patterns is measured. Simulation evidence indicates that the algorithmic complexity associated with a particular class of 1/n-rate codes increases with the encoder constraint length. This occurs in parallel with the increase of the error-correcting capacity of the decoder. Comparing groups of rate-1/n convolutional encoders, it is observed that as the encoder rate decreases from 1/2 to 1/7, the encoded data sequence manifests smaller algorithmic complexity with a larger free distance value.
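
The normalized algorithmic complexity described above is commonly estimated with a Lempel-Ziv phrase count. The sketch below is a generic implementation of that idea, assuming a binary alphabet and LZ76 parsing, not the dissertation's exact procedure; for long sequences it scores random strings near one and periodic strings near zero, matching the classifier behavior described in the abstract.

```python
import math
import random

def lz76_complexity(s):
    """Lempel-Ziv (1976) phrase count of a string."""
    i, c, n = 0, 0, len(s)
    while i < n:
        k = 1
        # Grow the phrase while it already occurs earlier in the sequence.
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1
        i += k
    return c

def normalized_lz(s, alphabet=2):
    """Normalize by n / log_a(n), the asymptotic phrase count of a random
    string, so that random data scores ~1 and periodic data near 0."""
    n = len(s)
    return lz76_complexity(s) * math.log(n, alphabet) / n

random.seed(0)
rand = "".join(random.choice("01") for _ in range(10_000))
print(normalized_lz(rand))          # close to 1
print(normalized_lz("01" * 5_000))  # close to 0
```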
640

Optimization in Graphs under Degree Constraints. Application to Telecommunication Networks

Sau, Ignasi 16 October 2009 (has links) (PDF)
The first part of this thesis deals with traffic grooming in telecommunication networks. Traffic grooming refers to the aggregation of low-rate flows into conduits of higher rate. However, each insertion or extraction of traffic on a wavelength requires placing an add-drop multiplexer (ADM) at the network node, and one ADM is needed for each wavelength used at the node, which represents a significant equipment cost. The objectives of traffic grooming are, on the one hand, the efficient sharing of bandwidth and, on the other hand, the reduction of the cost of routing equipment. We present inapproximability results, approximation algorithms, a new model that allows the network to route any request graph of bounded degree, and optimal solutions for two scenarios with all-to-all traffic: the bidirectional ring and the unidirectional ring with a grooming factor that changes dynamically. The second part of the thesis deals with problems of finding subgraphs under degree constraints. This class of problems is more general than traffic grooming, which is a particular case. The goal is to find subgraphs of a given graph satisfying constraints on the degree, while optimizing some graph parameter (very often, the number of vertices or edges). We present approximation algorithms, inapproximability results, studies of parameterized complexity, exact algorithms for planar graphs, and a general methodology that efficiently solves this class of problems (and, more generally, the class of problems whose solutions can be encoded by a partition of a subset of the vertices) for graphs embedded in a surface. Finally, several appendices present results on related problems.
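
To make the equipment-cost motivation concrete, the toy count below tallies ADMs under two hypothetical assignments of four unit requests; the encoding and the numbers are illustrative only, not results from the thesis.

```python
# Each node needs one ADM per wavelength on which it adds/drops traffic;
# the grooming factor C is how many unit requests fit on one wavelength.
def adm_count(assignment):
    """assignment: list of (wavelength, endpoints) pairs, one per request."""
    needed = set()
    for wavelength, endpoints in assignment:
        for node in endpoints:
            needed.add((node, wavelength))  # one ADM per (node, wavelength)
    return len(needed)

# Four unit requests between nodes 0 and 1, ungroomed (4 wavelengths)...
ungroomed = [(w, (0, 1)) for w in range(4)]
# ...versus the same requests groomed onto one wavelength with C >= 4.
groomed = [(0, (0, 1))] * 4
print(adm_count(ungroomed), adm_count(groomed))  # 8 2
```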
