621

On the parameterized complexity of finding short winning strategies in combinatorial games

Scott, Allan Edward Jolicoeur 29 April 2010 (has links)
A combinatorial game is a game in which all players have perfect information and there is no element of chance; some well-known examples include Othello, checkers, and chess. When people play combinatorial games they develop strategies, each of which can be viewed as a function that takes a game position as input and returns a move to make from that position. A strategy is winning if it guarantees the player victory despite whatever legal moves any opponent may make in response. The classical complexity of deciding whether a winning strategy exists for a given position in some combinatorial game has been well-studied both in general and for many specific combinatorial games. The vast majority of these problems are, depending on the specific properties of the game or class of games being studied, complete for either PSPACE or EXP. In the parameterized complexity setting, Downey and Fellows initiated a study of "short" (or k-move) winning strategy problems. This can be seen as a generalization of "mate-in-k" chess problems, in which the goal is to find a strategy which checkmates your opponent within k moves regardless of how they respond. In their monograph on parameterized complexity, Downey and Fellows suggested that AW[*] was the "natural home" of short winning strategy problems, but there has been little work in this field since then. In this thesis, we study the parameterized complexity of finding short winning strategies in combinatorial games. We consider both the general and several specific cases. In the general case we show that many short games are as hard classically as their original variants, and that finding a short winning strategy is hard for AW[P] when the rules are implemented as succinct circuits. For specific short games, we show that endgame problems for checkers and Othello are in FPT, that alternating hitting set, Hex, and the non-endgame problem for Othello are in AW[*], and that short chess is AW[*]-complete. We also consider pursuit-evasion parameterized by the number of cops. We show that two variants of pursuit-evasion are AW[*]-hard, and that the short versions of these problems are AW[*]-complete.
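As a rough illustration of the "mate-in-k" flavour of these problems, the following minimal Python sketch (not taken from the thesis; the generic game interface and the toy take-away game are invented for illustration) decides whether the player to move can force a win within k of their own moves. For a bounded branching factor b, the search visits on the order of b^(2k) positions, so its running time grows exponentially only in the parameter k.

# Minimal sketch (not from the thesis): "short winning strategy" in the
# mate-in-k style. The game interface below is hypothetical.

def has_short_win(position, k, game):
    """True if the player to move (player 0) can force a win using at most
    k of their own moves, no matter how the opponent (player 1) replies."""
    if k == 0:
        return False
    for move in game.legal_moves(position, player=0):
        nxt = game.apply_move(position, move, player=0)
        if game.is_win(nxt, player=0):
            return True                      # this move wins immediately
        replies = game.legal_moves(nxt, player=1)
        # every opponent reply must still leave us a win within k-1 moves;
        # positions with no reply are treated as non-wins (conventions vary by game)
        if replies and all(
            has_short_win(game.apply_move(nxt, r, player=1), k - 1, game)
            for r in replies
        ):
            return True
    return False

class TakeAwayGame:
    """Toy combinatorial game: remove 1 or 2 tokens; taking the last token wins."""
    def legal_moves(self, pos, player):
        return [m for m in (1, 2) if m <= pos]
    def apply_move(self, pos, move, player):
        return pos - move
    def is_win(self, pos, player):
        return pos == 0          # the player who just moved took the last token

print(has_short_win(4, 2, TakeAwayGame()))   # True: 4 tokens can be won within 2 moves
print(has_short_win(3, 5, TakeAwayGame()))   # False: 3 tokens is a losing position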
622

A meta-analysis study of project and programme management complexity in the oil and gas sector of the Middle East and North Africa region

Ziadat, Wael January 2018 (has links)
Projects and programmes are inherently complex; the interaction of people, systems, processes and data within a dynamic environment creates an intricate network of agents whose behaviour can be unpredictable and unexpected. The management of this complexity is ordinarily concerned with the implementation of tools and techniques to ensure that projects are completed within the desired cost and time, at the agreed level of performance and quality – this is often referred to as the 'iron triangle'. However, the impact of a dynamic external environment on the 'soft' boundaries of the project domain can make it extremely difficult to forecast or predict outcomes and system behaviours. This thesis contends that there is a clear desideratum for a new paradigm in project management practice and research that moves beyond the traditionalist (reductionist) approach to one that embraces, rather than attempts to simplify, complexity. The research described in this thesis seeks to uncover the characteristics of complexity in the context of projects and programmes, in an attempt to establish whether complexity is a factor in the determination of 'valuable' outcomes. Subsequently, and through the theoretical lens of complexity theory, this research seeks to highlight the importance of our understanding and treatment of complexity in the execution and management of projects and programmes. The research further seeks to demonstrate how complexity thinking may inform a more sophisticated understanding of how projects, programmes and portfolios are delivered successfully (Ziadat, 2017). The context of the research is the oil and gas (O & G) engineering sector in the Middle East and North Africa (MENA) region. A two-stage qualitative and quantitative methodology is applied, based on deductive reasoning. The first stage involves the development of a questionnaire and a series of unstructured interviews to gain an understanding of the practical considerations that emerge from the literature review. The second stage involves the application of meta-analysis to study the correlations between the complexity factors identified in the first stage, assessing heterogeneity, identifying patterns, and using sensitivity analysis to reach robust conclusions. The thesis proposes a new model of complexity factors for oil & gas engineering projects in the MENA region. The model is designed to facilitate analysis of the project complexity landscape and to define requirements for oil & gas organisations involved in the delivery of projects and programmes to cope with different complexity factors within and across the MENA region. The outcomes include a substantial relationship between technical and health, safety & environment complexity factors and project performance despite the mediation of project management complexity factors, while organisational complexity factors become significant when project management complexity factors are considered as a mediator in the model (Ziadat, 2016).
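For readers unfamiliar with the meta-analytic step described above, the following schematic Python sketch (not the author's actual procedure; the correlations and sample sizes are placeholders) shows how per-study correlations between a complexity factor and project performance could be pooled with a random-effects model and checked for heterogeneity.

# Schematic sketch (placeholder numbers, not data from this thesis): pooling
# per-study correlations with a DerSimonian-Laird random-effects model.
import numpy as np

r = np.array([0.42, 0.35, 0.51, 0.28])       # hypothetical per-study correlations
n = np.array([60, 85, 40, 120])              # hypothetical per-study sample sizes

z = np.arctanh(r)                            # Fisher z transform
v = 1.0 / (n - 3)                            # within-study variance of z
w = 1.0 / v                                  # fixed-effect weights

z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)           # heterogeneity statistic
df = len(r) - 1
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)                # between-study variance

w_star = 1.0 / (v + tau2)                    # random-effects weights
r_pooled = np.tanh(np.sum(w_star * z) / np.sum(w_star))
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

print(f"pooled r = {r_pooled:.3f}, Q = {Q:.2f}, I^2 = {I2:.1f}%")

A sensitivity analysis in this framework typically repeats the pooling while leaving out one study at a time and checks whether the pooled estimate remains stable.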
623

Laying a smoke screen: Ambiguity and neutralization as strategic responses to intra-institutional complexity

Meyer, Renate, Höllerer, Markus January 2016 (has links) (PDF)
Our research contributes to knowledge on strategic organizational responses by addressing a specific type of institutional complexity that has, to date, been rather neglected in scholarly inquiry: conflicting institutional demands that arise within the same institutional order. We suggest referring to such type of complexity as "intra-institutional" - as opposed to "inter-institutional." Empirically, we examine the consecutive spread of two management concepts - shareholder value and corporate social responsibility - among Austrian listed corporations around the turn of the millennium. Our work presents evidence that in institutionally complex situations, the concepts used by organizations to respond to competing demands and belief systems are interlinked and coupled through multiwave diffusion. We point to the open, chameleon-like character of some concepts that makes them particularly attractive for discursive adoption in such situations and conclude that organizations regularly respond to institutional complexity by resorting to discursive neutralization techniques and strategically producing ambiguity. (authors' abstract)
624

Assessing Linguistic Proficiency - The Issue of Syntactic Complexity

Rönnkvist, Patrik January 2021 (has links)
This study investigates the syntactic complexity of the example texts used as guides for assessment in the national tests of the Swedish upper secondary school courses English 5 and English 6. It is guided by two research questions: (1) Is there a progression of increased complexity between the grades assigned to the example texts, and, if so, is any specific measure of syntactic complexity more strongly linked to a higher grade than the rest? (2) Is there a progression of increased complexity between the two courses, and, if so, how does this progression manifest itself? A set of 14 quantitative measures of syntactic complexity as identified by the L2 Syntactic Complexity Analyzer (L2SCA) are examined to answer these questions. The majority of the differences between the grades and/or courses represented are shown to be statistically insignificant, and the few instances of statistical significance likely occurred either due to a small sample size or due to a questionable tendency of L2SCA when dealing with run-on sentences. In the end, syntactic complexity as expressed through the 14 measures seems to be a poor indicator of why a text received a certain grade in either of the two represented courses.
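To make the kind of comparison concrete, the sketch below (with invented example texts and grouping; it uses neither the study's data nor L2SCA itself) computes one simple L2SCA-style measure, mean length of sentence, for each text and tests the two grade groups for a significant difference.

# Sketch with invented data (not the study's texts or the L2SCA tool):
# compute mean length of sentence (MLS) per text and compare grade groups.
import re
from scipy import stats

def mean_length_of_sentence(text: str) -> float:
    """Average number of words per sentence (a crude approximation of MLS)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return sum(lengths) / len(lengths) if lengths else 0.0

# Hypothetical example texts grouped by assigned grade.
grade_c_texts = [
    "I like music. It is fun. We play a lot.",
    "My town is small. There is a lake. I swim there.",
    "School is hard. I study every day. Then I rest.",
]
grade_a_texts = [
    "Although the committee hesitated, it eventually endorsed the reform that researchers had proposed.",
    "Because the evidence remained ambiguous, the board postponed its decision until further studies were completed.",
    "While many commentators disagreed, the policy was implemented after a lengthy and contentious public debate.",
]

mls_c = [mean_length_of_sentence(t) for t in grade_c_texts]
mls_a = [mean_length_of_sentence(t) for t in grade_a_texts]

# Welch's t-test; with realistic sample sizes this is how (in)significance
# between grade groups would be assessed for each measure.
t, p = stats.ttest_ind(mls_a, mls_c, equal_var=False)
print(f"MLS (grade A) = {mls_a}, MLS (grade C) = {mls_c}, t = {t:.2f}, p = {p:.3f}")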
625

Complexity matching processes during the coupling of biological systems : application to rehabilitation in elderly / Processus d’appariement des complexités lors du couplage de deux systèmes biologiques : application à la rééducation de la marche chez les personnes âgées

Al Murad, Zainy Mshco Hajy 18 February 2019 (has links)
Several theoretical frameworks have attempted to account for interpersonal synchronization processes.
Cognitive theories suggest that synchronization is achieved through discrete and mutual corrections of asynchronies between the two partners. The dynamic theories are based on the assumption of a continuous coupling between the two systems, conceived as self-sustained oscillators. Finally, the complexity matching model is based on the assumption of a multi-scale coordination between the two interacting systems. As a first step, we develop statistical tests in order to identify, in experimental data, the typical signatures of these three modes of coordination. In particular, we propose a multifractal signature, based on the analysis of correlations between the multifractal spectra characterizing the series produced by the two interacting systems. We also develop a windowed cross-correlation analysis, which aims at revealing the nature of the local synchronization processes. These studies allow us to revisit a number of previous works. We show that while the synchronization of discrete tasks such as tapping indeed relies on discrete correction of asynchronies, the synchronization of continuous tasks such as pendulum oscillations is essentially based on the same principles of discrete correction, and not on a continuous coupling of effectors. Our results also indicate that synchronization could be sustained by hybrid mechanisms mixing, notably, asynchrony correction and complexity matching. Finally, we highlight that synchronized walking is based on a dominant effect of complexity matching, especially when partners are closely coupled (arm-in-arm walking). We propose in a second step to exploit this result to test the possibility of a restoration of complexity in the elderly. Aging has indeed been characterized as a process of gradual loss of complexity, and this effect has been particularly documented in the field of locomotion. In particular, it has been shown that the loss of complexity correlates in older people with the propensity to fall. Complexity matching theory assumes that two interacting systems tend to align their complexity levels. It also assumes that when two systems of different levels of complexity interact, the more complex system tends to attract the less complex, causing an increase in complexity in the second. We show, in a protocol in which older people are invited to walk arm-in-arm with a younger companion, that synchronization between the two partners is achieved through a complexity matching effect, and that prolonged training in such synchronized walking allows a restoration of the complexity of locomotion in the elderly. This effect persists during a post-test conducted two weeks after the end of the training sessions. This result, in addition to reinforcing one of the essential aspects of the theory of complexity matching, opens new avenues of research for the design of rehabilitation and fall prevention strategies.
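A compact Python sketch of the windowed cross-correlation idea mentioned above (the synthetic series and window settings are assumptions, not the thesis's implementation):

# Compact sketch (assumptions, not the thesis code): windowed cross-correlation
# between two interpersonal time series (e.g. stride intervals of two walkers),
# showing how local synchronization evolves across time windows and lags.
import numpy as np

def windowed_cross_correlation(x, y, win=30, max_lag=5, step=10):
    """Return a (n_windows, 2*max_lag+1) array of Pearson correlations,
    one row per window, one column per lag of y relative to x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    rows = []
    for start in range(0, len(x) - win - 2 * max_lag, step):
        row = []
        for lag in range(-max_lag, max_lag + 1):
            xs = x[start + max_lag : start + max_lag + win]
            ys = y[start + max_lag + lag : start + max_lag + lag + win]
            row.append(np.corrcoef(xs, ys)[0, 1])
        rows.append(row)
    return np.array(rows)

# Synthetic example: two weakly coupled noisy series standing in for two walkers.
rng = np.random.default_rng(0)
leader = 1.0 + 0.01 * np.cumsum(rng.normal(0, 1, 500))
follower = leader + rng.normal(0, 0.02, 500)
wcc = windowed_cross_correlation(leader, follower)
print(wcc.shape, wcc.max())                  # one row per window, one column per lag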
626

Analysis of impact of process complexity on unbalanced work in assembly process and methods to reduce it : Project undertaken in Electrolux AB Mariestad, under guidance of SWEREA IVF AB

Lokhande, Kushal, Gopalakrishnan, Maheshwaran January 2012 (has links)
No description available.
627

Improving the User Experience in Data Visualization Web Applications

Granhof, Alexander, Eriksson, Jakob January 2021 (has links)
This paper is a literature study with an additional empirical approach to researching how to improve user experience in data visualization web applications. The research has been conducted in collaboration with Caretia AB to improve their current data visualization tool. It reviews previous research on UI design, user experience, visual complexity and user interaction in an attempt to discover which areas of design and intuitiveness improve the user experience in these kinds of tools. The findings were then tested together with Caretia through a proof-of-concept prototype application implemented with those findings. The conclusion is that mapping ontology groups and prior experience, as well as reducing visual overload, are effective ways of improving intuitiveness and user experience.
628

Designing Superior Evolutionary Algorithms via Insights From Black-Box Complexity Theory / Conception de meilleurs algorithmes évolutionnaires grâce à la théorie de la complexité boîte noire

Yang, Jing 04 September 2018 (has links)
It has been observed that the runtime of randomized search heuristics depends on one or more parameters. A number of results show an advantage of dynamic parameter settings, that is, the parameters of the algorithm are changed during its execution.
In this work, we prove that the unary unbiased black-box complexity of the OneMax benchmark function class is $n \ln(n) - cn \pm o(n)$ for a constant $c$ which is between $0.2539$ and $0.2665$. This runtime can be achieved with a simple (1+1)-type algorithm using a fitness-dependent mutation strength. When translated into the fixed-budget perspective, our algorithm finds solutions which are roughly 13% closer to the optimum than those of the best previously known algorithms. Based on the analyzed optimal mutation strength for OneMax, we show that a self-adjusting choice of the number of bits to be flipped attains the same runtime (apart from $o(n)$ lower-order terms) and the same (asymptotic) 13% fitness-distance improvement over RLS. The adjusting mechanism is to adaptively learn the currently optimal mutation strength from previous iterations. This aims both at exploiting that generally different problems may need different mutation strengths and that for a fixed problem different strengths may become optimal in different stages of the optimization process. We then extend our self-adjusting strategy to population-based evolutionary algorithms in discrete search spaces. Roughly speaking, it consists of creating half the offspring with a mutation rate that is twice the current mutation rate and the other half with half the current rate. The mutation rate is then updated to the rate used in that subpopulation which contains the best offspring. We analyze how the $(1+\lambda)$ evolutionary algorithm with this self-adjusting mutation rate optimizes the OneMax test function. We prove that this dynamic version of the $(1+\lambda)$ EA finds the optimum in an expected optimization time (number of fitness evaluations) of $O(n\lambda/\log\lambda + n\log n)$. This time is asymptotically smaller than the optimization time of the classic $(1+\lambda)$ EA. Previous work shows that this performance is best-possible among all $\lambda$-parallel mutation-based unbiased black-box algorithms. We also propose and analyze a self-adaptive version of the $(1,\lambda)$ evolutionary algorithm in which the current mutation rate is part of the individual and thus also subject to mutation. A rigorous runtime analysis on the OneMax benchmark function reveals that a simple local mutation scheme for the rate leads to an expected optimization time of the best possible $O(n\lambda/\log\lambda + n\log n)$. Our result shows that self-adaptation in evolutionary computation can find complex optimal parameter settings on the fly. At the same time, it proves that a relatively complicated self-adjusting scheme for the mutation rate can be replaced by our simple endogenous scheme.
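A small Python sketch of the two-rate self-adjusting scheme described above, applied to OneMax (a simplified reading with an assumed initial rate, rate bounds and tie-breaking, not the thesis's exact algorithm):

# Simplified sketch of the described scheme (assumed rate bounds and tie-breaking):
# the (1+lambda) EA on OneMax with a two-rate self-adjusting mutation rate.
# Half the offspring use twice the current rate, half use half of it;
# the rate is then set to the rate used by the best offspring.
import random

def onemax(x):
    return sum(x)

def one_plus_lambda_self_adjusting(n=100, lam=10, max_evals=200_000):
    parent = [random.randint(0, 1) for _ in range(n)]
    rate = 2.0 / n                                    # initial mutation rate (assumption)
    evals = 0
    while onemax(parent) < n and evals < max_evals:
        offspring = []
        for i in range(lam):
            r = rate * 2 if i < lam // 2 else rate / 2
            r = min(max(r, 1.0 / n), 0.5)             # keep the rate in a sane range
            child = [1 - b if random.random() < r else b for b in parent]
            offspring.append((onemax(child), r, child))
            evals += 1
        best_fit, best_rate, best_child = max(offspring, key=lambda t: t[0])
        if best_fit >= onemax(parent):                # elitist (1+lambda) acceptance
            parent = best_child
        rate = best_rate                              # adopt the winning subpopulation's rate
    return evals

random.seed(1)
print("fitness evaluations:", one_plus_lambda_self_adjusting())
# Expected optimization time on OneMax: O(n*lambda/log(lambda) + n*log(n)) evaluations.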
629

Complexity, Fun, and Robots

Warmke, Daniel A. 23 September 2019 (has links)
No description available.
630

Automotive design aesthetics: Harmony and its influence in semantic perception

Islas Munoz, Juan 15 October 2013 (has links)
No description available.
