41

Complexity of Constraint Satisfaction Problems for Unions of Theories

Greiner, Johannes 11 January 2022 (has links)
Constraint Satisfaction Problems (CSPs) are a class of decision problems where one usually fixes a structure A and seeks to decide whether a given conjunction of atomic formulas is satisfiable in A. It has been shown by Bodirsky and Grohe that every computational decision problem is equivalent to some CSP via a polynomial-time Turing reduction. For structures A with finite domain, Zhuk and Bulatov independently proved an algebraic criterion that classifies when the CSP of A is in P and when it is NP-hard. For some classes of structures with infinite domain, similar P vs. NP-hard dichotomies exist. This thesis continues the latter line of research for CSPs of first-order theories. In this version of CSPs, a theory T is fixed and one seeks to decide whether a given conjunction of atomic formulas is satisfiable in some model of T. Assuming that the CSPs of theories T1 and T2 are polynomial-time tractable, we prove necessary and sufficient conditions for polynomial-time tractability of the union of T1 and T2. For some classes of theories, P vs. NP-hard dichotomies are proven. To achieve this, various 'combinations' of structures are examined, a technique called 'sampling' is generalized to theories, and clones of polymorphisms of temporal structures are examined in detail.
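The CSP decision problem described above can be made concrete with a small sketch (ours, not from the thesis; the structure, relation names, and brute-force check are illustrative assumptions):

```python
# Illustrative sketch: CSP(A) for a finite structure A.
# A structure is given by a domain and an interpretation of each relation symbol;
# an instance is a conjunction of atomic formulas over variables.
from itertools import product

def csp_satisfiable(domain, relations, atoms):
    """Brute-force check whether the conjunction of atoms holds in the structure.

    domain    -- iterable of domain elements
    relations -- dict mapping relation symbol -> set of allowed tuples
    atoms     -- list of (relation symbol, tuple of variable names)
    """
    variables = sorted({v for _, args in atoms for v in args})
    for values in product(domain, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(tuple(assignment[v] for v in args) in relations[rel]
               for rel, args in atoms):
            return True
    return False

# Example instance: 2-colourability of a triangle over the structure ({0,1}; !=),
# which is correctly rejected.
neq = {(0, 1), (1, 0)}
print(csp_satisfiable({0, 1}, {"neq": neq},
                      [("neq", ("x", "y")), ("neq", ("y", "z")), ("neq", ("x", "z"))]))
```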
42

Contributions to the economic analysis of even-aged silviculture: From simple models to complex analyses

Halbritter, Andreas 19 August 2022 (has links)
In managed forests, the enormous complexity of an ecological system meets a vast range of economic and other impact factors. Determining, analyzing, and understanding economically optimal stand management is therefore a task that has kept forest economists occupied for the past 200 years. The approach followed since the days of Martin FAUSTMANN is the analysis of models that describe rather specific management scenarios using a set of clearly defined model assumptions. Unfortunately, the applicability of the findings to more general scenarios is limited. On the other hand, the possibility of analyzing general management environments with single models is also limited by increasing complexity. Thus, a holistic understanding of optimal forest management is still missing. This statement also holds for the extensive field of optimal even-aged timber production, which essentially consists of only three main components: planting, thinning, and final harvest. This dissertation therefore aims to contribute to furthering the general understanding of even-aged forest management. To achieve this goal, three steps were taken. First, a qualitative analysis of a combined management plan including decisions on all three basic components is presented, based on HALBRITTER and DEEGEN (2015). It provides a discussion of the direct and indirect dependencies between the decision variables of a rotation in a rather classical management environment. Second, three studies are presented which relax some of the classical model assumptions and extend the existing knowledge on even-aged forestry to relevant but more complex management questions. HALBRITTER (2015) includes natural regeneration and a shelter period in an even-aged system and explores the border between even-aged and uneven-aged management; the influence of natural regeneration and the impact of several age classes are studied. HALBRITTER (2020) drops the assumption of stand homogeneity and investigates stand management under heterogeneous tree growth, in which, for example, different social classes of trees are maintained. Lastly, HALBRITTER et al. (2020) extend the classical deterministic management environment in the direction of density-dependent hazard risk. This adds a further aspect to the thinning and rotation decisions because, in this scenario, the probability of stand destruction can be controlled by thinning. As a third step, the studies above were embedded in a patchwork, a conglomeration of models that are connected and validated by overlapping scopes. Using this approach, a wide range of different management scenarios can be covered by rather simple models. Thus, the complexity of the analysis decreases compared to single models with a more generally applicable framework, and the problem of model complexity is mitigated. In addition, the inclusion of reference models with a particular focus on the management components stand establishment, thinning, or rotation allows for a clear identification of the relationship between optimal stand management and the characteristics of a scenario. Applied to the qualitative analysis of the four studies above, the approach yields insights which contribute to a better understanding of even-aged forest management.

Table of contents:
1. Introduction
2. The FAUSTMANN Framework: 2.1 Model Definition; 2.2 The FAUSTMANN Model; 2.3 Assumptions; 2.4 Basic Applications (2.4.1 The Rotation Model; 2.4.2 The Thinning Model; 2.4.3 The Planting Model; 2.4.4 The Uneven-aged Model)
3. Problem
4. Methodology
5. The Combined Model: 5.1 Model; 5.2 Optimal Management; 5.3 Impact of Timber Price and Interest Rate; 5.4 Discussion in Comparison to the Basic FAUSTMANN Applications
6. Extensions: 6.1 Uneven-Aged Extension: The Double-Cohort Model (6.1.1 Even-Aged and Uneven-Aged Stands; 6.1.2 Model; 6.1.3 Optimal Management; 6.1.4 Impact of Timber Price and Interest Rate; 6.1.5 Discussion in Comparison to the Basic FAUSTMANN Applications); 6.2 Heterogeneous Extension: The Heterogeneous Stand Model (6.2.1 Homogeneous and Heterogeneous Stands; 6.2.2 Model; 6.2.3 Optimal Management; 6.2.4 Impact of Timber Price and Interest Rate; 6.2.5 Discussion in Comparison to the Basic FAUSTMANN Applications); 6.3 Stochastic Extension: The Natural Risk Model (6.3.1 Deterministic and Stochastic Scenarios; 6.3.2 Model; 6.3.3 Optimal Management; 6.3.4 Impact of Timber Price and Interest Rate; 6.3.5 Discussion in Comparison to the Basic FAUSTMANN Applications)
7. Conclusions: 7.1 Optimal Management Strategy (7.1.1 Optimal Planting; 7.1.2 Optimal Thinning; 7.1.3 Optimal Rotation); 7.2 The Patchwork Approach (7.2.1 Applicability of the Patchwork Approach; 7.2.2 Limitations of the Patchwork Approach; 7.2.3 Comparison to the Holistic Approach)
8. Summary
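For orientation, the classical Faustmann land expectation value that underlies the framework named above can be stated as follows (textbook form; the notation is ours and is not taken from the dissertation):

```latex
% Textbook Faustmann land expectation value (our notation, not the dissertation's):
% p = timber price, V(T) = merchantable stand volume at rotation age T,
% c = regeneration (planting) cost, r = interest rate; an infinite chain of
% identical rotations is assumed.
\[
  \mathrm{LEV}(T) \;=\; \frac{p\,V(T)\,e^{-rT} - c}{1 - e^{-rT}},
  \qquad
  T^{*} \;=\; \arg\max_{T > 0} \, \mathrm{LEV}(T).
\]
```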
43

Two-Stage Vehicle Routing Problems with Profits and Buffers: Analysis and Metaheuristic Optimization Algorithms

Le, Hoang Thanh 09 June 2023 (has links)
This thesis considers the Two-Stage Vehicle Routing Problem (VRP) with Profits and Buffers, which generalizes various optimization problems that are relevant for practical applications, such as the Two-Machine Flow Shop with Buffers and the Orienteering Problem. Two optimization problems are considered for the Two-Stage VRP with Profits and Buffers, namely the minimization of total time while respecting a profit constraint and the maximization of total profit under a budget constraint. The former generalizes the makespan minimization problem for the Two-Machine Flow Shop with Buffers, whereas the latter is comparable to the problem of maximizing score in the Orienteering Problem. For the three problems, a theoretical analysis is performed regarding computational complexity, the existence of optimal permutation schedules (where all vehicles traverse the same nodes in the same order), and potential gaps in attainable solution quality between permutation and non-permutation schedules. The obtained theoretical results are visualized in a table that gives an overview of various subproblems belonging to the Two-Stage VRP with Profits and Buffers, their theoretical properties, and how they are connected. For the Two-Machine Flow Shop with Buffers and the Orienteering Problem, two metaheuristics, 2BF-ILS and VNSOP, are presented that obtain favorable results in computational experiments when compared to other state-of-the-art algorithms. For the Two-Stage VRP with Profits and Buffers, an algorithmic framework for Iterative Search Algorithms with Variable Neighborhoods (ISAVaN) is proposed that generalizes aspects of both 2BF-ILS and VNSOP. Various algorithms derived from that framework are evaluated in an experimental study. The evaluation methodology used for all computational experiments in this thesis takes performance during the run time into account and demonstrates that algorithms for structurally different problems, all encompassed by the Two-Stage VRP with Profits and Buffers, can be evaluated with similar methods. The results show that the most suitable choice of components in these algorithms depends on the properties of the problem and the considered evaluation criteria. However, a number of similarities to algorithms that perform well for the Two-Machine Flow Shop with Buffers and the Orienteering Problem can be identified. The framework unifies these characteristics, providing a spectrum of algorithms that can be adapted to the specifics of the considered Vehicle Routing Problem.

Table of contents:
1 Introduction
2 Background: 2.1 Problem Motivation; 2.2 Formal Definition of the Two-Stage VRP with Profits and Buffers; 2.3 Review of Literature on Related Vehicle Routing Problems (2.3.1 Two-Stage Vehicle Routing Problems; 2.3.2 Vehicle Routing Problems with Profits; 2.3.3 Vehicle Routing Problems with Capacity- or Resource-based Restrictions); 2.4 Preliminary Remarks on Subsequent Chapters
3 The Two-Machine Flow Shop Problem with Buffers: 3.1 Review of Literature on Flow Shop Problems with Buffers (3.1.1 Algorithms and Metaheuristics for Flow Shops with Buffers; 3.1.2 Two-Machine Flow Shop Problems with Buffers; 3.1.3 Blocking Flow Shops; 3.1.4 Non-Permutation Schedules; 3.1.5 Other Extensions and Variations of Flow Shop Problems); 3.2 Theoretical Properties (3.2.1 Computational Complexity; 3.2.2 The Existence of Optimal Permutation Schedules; 3.2.3 The Gap Between Permutation Schedules and Non-Permutation Schedules); 3.3 A Modification of the NEH Heuristic; 3.4 An Iterated Local Search for the Two-Machine Flow Shop Problem with Buffers; 3.5 Computational Evaluation (3.5.1 Algorithms for Comparison; 3.5.2 Generation of Problem Instances; 3.5.3 Parameter Values; 3.5.4 Comparison of 2BF-ILS with other Metaheuristics; 3.5.5 Comparison of 2BF-OPT with NEH); 3.6 Summary
4 The Orienteering Problem: 4.1 Review of Literature on Orienteering Problems; 4.2 Theoretical Properties; 4.3 A Variable Neighborhood Search for the Orienteering Problem; 4.4 Computational Evaluation (4.4.1 Measurement of Algorithm Performance; 4.4.2 Choice of Algorithms for Comparison; 4.4.3 Problem Instances; 4.4.4 Parameter Values; 4.4.5 Experimental Setup; 4.4.6 Comparison of VNSOP with other Metaheuristics); 4.5 Summary
5 The Two-Stage Vehicle Routing Problem with Profits and Buffers: 5.1 Theoretical Properties of the Two-Stage VRP with Profits and Buffers (5.1.1 Computational Complexity of the General Problem; 5.1.2 Existence of Permutation Schedules in the Set of Optimal Solutions; 5.1.3 The Gap Between Permutation Schedules and Non-Permutation Schedules; 5.1.4 Remarks on Restricted Cases; 5.1.5 Overview of Theoretical Results); 5.2 A Metaheuristic Framework for the Two-Stage VRP with Profits and Buffers; 5.3 Experimental Results (5.3.1 Problem Instances; 5.3.2 Experimental Results for O_{max R, Cmax≤B}; 5.3.3 Experimental Results for O_{min Cmax, R≥Q}); 5.4 Summary
Bibliography; List of Figures; List of Tables; List of Algorithms
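As an illustration of the profit-maximization variant described in the abstract above, the following sketch (ours; a brute-force baseline, not the VNSOP or ISAVaN algorithms of the thesis) evaluates the Orienteering Problem objective of maximizing collected score under a travel budget:

```python
# Illustrative sketch: brute-force Orienteering Problem,
# maximising collected score under a travel budget.
from itertools import permutations

def orienteering_brute_force(dist, score, start, end, budget):
    """dist[i][j] = travel time, score[i] = profit of node i.
    Returns (best_score, best_route) over all routes start -> ... -> end."""
    nodes = [v for v in range(len(dist)) if v not in (start, end)]
    best = (0.0, (start, end))
    for r in range(len(nodes) + 1):
        for subset in permutations(nodes, r):
            route = (start, *subset, end)
            length = sum(dist[a][b] for a, b in zip(route, route[1:]))
            total = sum(score[v] for v in route)
            if length <= budget and total > best[0]:
                best = (total, route)
    return best

# Tiny example with 4 nodes (0 = start, 3 = end).
dist = [[0, 2, 4, 6], [2, 0, 3, 5], [4, 3, 0, 2], [6, 5, 2, 0]]
score = [0, 4, 7, 0]
print(orienteering_brute_force(dist, score, start=0, end=3, budget=10))
```

Such exhaustive enumeration is only feasible for very small instances, which is why metaheuristics such as those studied in the thesis are used in practice.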
44

Fine-Grained Parameterized Algorithms on Width Parameters and Beyond

Hegerfeld, Falko 25 October 2023 (has links)
The question at the heart of parameterized complexity is how input structure governs the complexity of a problem. We investigate this question from a fine-grained perspective and study problem-parameter combinations with single-exponential running time, i.e., time a^k n^c, where n is the input size, k the parameter value, and a and c are positive constants. Our goal is to determine the optimal base a for a given combination. For many connectivity problems such as Connected Vertex Cover or Connected Dominating Set, the optimal base is known relative to treewidth. Treewidth belongs to the class of width parameters, which naturally admit dynamic programming algorithms. In the first part of this thesis, we study how the optimal base changes for these connectivity problems when going to more expressive width parameters. We provide new parameterized dynamic programming algorithms and (conditional) lower bounds to determine the optimal base; in particular, we obtain for the parameter sequence treewidth, modular-treewidth, clique-width that the optimal base for Connected Vertex Cover starts at 3, increases to 5, and then to 6, whereas the optimal base for Connected Dominating Set starts at 4, stays at 4, and then increases to 5. In the second part, we go beyond width parameters and study more restrictive parameterizations like depth parameters and modulators. For treedepth, we design space-efficient branching algorithms. The lower bound techniques for width parameterizations do not carry over to these more restrictive parameterizations and, as a result, only a few optimal bases are known. To remedy this, we study standard vertex-deletion problems. In particular, we show that the optimal base of Odd Cycle Transversal parameterized by a modulator to treewidth 2 is 3. Additionally, we show that similar lower bounds can be obtained in the realm of dense graphs by considering modulators consisting of so-called twinclasses.
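The running-time notion used throughout this abstract can be pinned down as follows (our notation, added only for readability and not quoted from the thesis):

```latex
% Single-exponential running time for input size n and parameter value k:
\[
  T(n,k) \;=\; a^{k} \cdot n^{c}, \qquad a > 1,\; c > 0 \text{ constants.}
\]
% The "optimal base" a* of a problem-parameter combination is the smallest
% attainable base: an algorithm with running time a^{k} n^{O(1)} exists for every
% a > a*, while no algorithm with running time (a* - \varepsilon)^{k} n^{O(1)}
% exists for any \varepsilon > 0 under a suitable conjecture (lower bounds of
% this kind are typically conditional, e.g. on SETH-type assumptions).
```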
45

Healthy food choice

Mata, Jutta 12 February 2008 (has links)
This dissertation focuses on food-related decision making, in particular, how food-related environments and cognition interact to determine people's food choices. The first manuscript, “When Diets Last: Lower Cognitive Complexity Increases Diet Adherence,” investigates the role of cognitive complexity in diet adherence. Can weight loss diets fail because they are too complicated from a cognitive point of view, meaning that dieters are not able to recall or process the diet rules? The impact of excessive cognitive demands on diet adherence was investigated with 1,136 dieters in a longitudinal online questionnaire. We measured perceived rule complexity, controlling for other factors known to influence adherence. Previous diet behavior, self-efficacy, planning, and perceived rule complexity predicted an increased risk of quitting the diet prematurely, with self-efficacy and diet complexity being the strongest factors. The second manuscript, “Meat Label Design: Effects on Stage Progression, Risk Perception, and Product Evaluation,” presents two studies which tested the impact of health-related meat labels on product evaluation and intention. Specifically, the studies examined how the informational content and the context (separate vs. conjoint evaluation) in which labels are assessed influence the evaluation of meat products. The results showed that conjoint assessment of labels can lead to contrary product rankings compared to separate evaluations. Moreover, the results suggest that being exposed to food labels containing specific health-relevant information can increase the motivation to consider health aspects in consumers without a previous intention to do so. The third manuscript, “Predicting Children’s Meal Preferences: How Much Do Parents Know?”, investigated prediction behavior concerning other people’s food choices. In particular, it asked how accurately parents predict their children’s meal choices and which cues they use. Overall, parents’ prediction accuracy matched the stability of children’s meal choices, implying that accuracy was as high as could be expected. The results suggest parents were able to achieve high predictive accuracy by using specific knowledge about their child’s likes and by projecting their own preferences.
46

From Worst-Case to Average-Case Efficiency – Approximating Combinatorial Optimization Problems

Plociennik, Kai 18 February 2011 (has links) (PDF)
In theoretical computer science, various notions of efficiency are used for algorithms. The most commonly used notion is worst-case efficiency, which is defined by requiring polynomial worst-case running time. Another commonly used notion is average-case efficiency for random inputs, which is roughly defined as having polynomial expected running time with respect to the random inputs. Depending on the actual notion of efficiency one uses, the approximability of a combinatorial optimization problem can be very different. In this dissertation, the approximability of three classical combinatorial optimization problems, namely Independent Set, Coloring, and Shortest Common Superstring, is investigated for different notions of efficiency. For the three problems, approximation algorithms are given which guarantee approximation ratios that are unachievable by worst-case efficient algorithms under reasonable complexity-theoretic assumptions. The algorithms achieve polynomial expected running time for different models of random inputs. On the one hand, classical average-case analyses are performed, using totally random input models as the source of random inputs. On the other hand, probabilistic analyses are performed, using semi-random input models inspired by the so-called smoothed analysis of algorithms. Finally, the expected performance of well-known greedy algorithms for random inputs from the considered models is investigated, as is the expected behavior of some properties of the random inputs themselves.
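As a minimal illustration of the kind of greedy algorithm whose expected performance is studied on random inputs (the input model and greedy rule below are our choices and not necessarily those of the dissertation):

```python
# Illustrative sketch: the minimum-degree greedy for Independent Set,
# run on an Erdos-Renyi random graph G(n, p).
import random

def random_graph(n, p, seed=0):
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def greedy_independent_set(adj):
    """Repeatedly pick a vertex of minimum remaining degree, keep it,
    and delete it together with its neighbours."""
    remaining = {v: set(nbrs) for v, nbrs in adj.items()}
    independent = []
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u]))
        independent.append(v)
        removed = remaining.pop(v) | {v}
        for u in list(remaining):
            if u in removed:
                remaining.pop(u, None)
            else:
                remaining[u] -= removed
    return independent

adj = random_graph(n=200, p=0.05)
print(len(greedy_independent_set(adj)))
```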
47

Implications of eigenvector localization for dynamics on complex networks

Aufderheide, Helge E. 19 September 2014 (has links) (PDF)
In large and complex systems, failures can have dramatic consequences, such as blackouts, pandemics, or the loss of entire classes of an ecosystem. Nevertheless, it is a centuries-old intuition that by using networks to capture the core of the complexity of such systems, one might understand in which part of a system a phenomenon originates. I investigate this intuition using spectral methods to decouple the dynamics of complex systems near stationary states into independent dynamical modes. In this description, phenomena are tied to a specific part of a system through localized eigenvectors, which have large amplitudes only on a few nodes of the system's network. Studying the occurrence of localized eigenvectors, I find that such localization occurs exactly for a few small network structures, and approximately for the dynamical modes associated with the most prominent failures in complex systems. My findings confirm that understanding the functioning of complex systems generally requires treating them as complex entities rather than as collections of interwoven small parts. The only exceptions are the few structures carrying exact localization, whose functioning is tied to the meso-scale, between the size of individual elements and the size of the global network. However, while understanding the functioning of a complex system is hampered by the necessary global analysis, the prominent failures, due to their localization, allow an understanding on a manageable local scale. Intriguingly, food webs might exploit this localization of failures to stabilize themselves by causing the break-off of small problematic parts, whereas typical attempts to optimize technological systems for stability lead to delocalization and large-scale failures. Thus, this thesis provides insights into the interplay of complexity and localization, which is paramount to ascertaining the functioning of the ever-growing networks on which we humans depend.
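A standard way to quantify the eigenvector localization discussed above is the inverse participation ratio; the following sketch (a toy random matrix of our choosing, not the food-web models of the thesis) shows the computation:

```python
# Illustrative sketch: how localized is each dynamical mode?
# IPR(v) = sum_i v_i^4 / (sum_i v_i^2)^2; IPR near 1 means the mode lives on
# essentially one node, IPR near 1/n means it is spread over the whole network.
import numpy as np

def inverse_participation_ratio(v):
    v = np.asarray(v, dtype=float)
    return np.sum(v**4) / np.sum(v**2)**2

rng = np.random.default_rng(0)
n = 50
# Toy "Jacobian": sparse random coupling with a stabilising diagonal.
J = (rng.random((n, n)) < 0.05) * rng.normal(size=(n, n))
np.fill_diagonal(J, -1.0)

eigenvalues, eigenvectors = np.linalg.eig(J)
ipr = np.array([inverse_participation_ratio(np.abs(eigenvectors[:, i]))
                for i in range(n)])
# Modes closest to instability (largest real part) and their localization:
order = np.argsort(eigenvalues.real)[::-1]
for i in order[:5]:
    print(f"Re(lambda) = {eigenvalues.real[i]: .3f}   IPR = {ipr[i]:.3f}")
```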
48

Models of Discrete-Time Stochastic Processes and Associated Complexity Measures / Modelle stochastischer Prozesse in diskreter Zeit und zugehörige Komplexitätsmaße

Löhr, Wolfgang 24 June 2010 (has links) (PDF)
Many complexity measures are defined as the size of a minimal representation in a specific model class. One such complexity measure, which is important because it is widely applied, is statistical complexity. It is defined for discrete-time, stationary stochastic processes within a theory called computational mechanics. Here, a mathematically rigorous, more general version of this theory is presented, and abstract properties of statistical complexity as a function on the space of processes are investigated. In particular, weak-* lower semi-continuity and concavity are shown, and it is argued that these properties should be shared by all sensible complexity measures. Furthermore, a formula for the ergodic decomposition is obtained. The same results are also proven for two other complexity measures that are defined by different model classes, namely process dimension and generative complexity. These two quantities, and also the information theoretic complexity measure called excess entropy, are related to statistical complexity, and this relation is discussed here. It is also shown that computational mechanics can be reformulated in terms of Frank Knight's prediction process, which is of both conceptual and technical interest. In particular, it allows for a unified treatment of different processes and facilitates topological considerations. Continuity of the Markov transition kernel of a discrete version of the prediction process is obtained as a new result.
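For a concrete, finite special case of the statistical complexity discussed above (the hand-built epsilon-machine below is a textbook example, not the measure-theoretic construction of the thesis), one can compute the Shannon entropy of the stationary distribution over causal states:

```python
# Illustrative sketch: statistical complexity as the entropy (in bits) of the
# stationary distribution over the causal states of a hand-built epsilon-machine.
# Example: the Golden Mean process (binary, no two consecutive 1s).
import numpy as np

def statistical_complexity(transition_matrix):
    """Entropy of the stationary distribution of a stochastic matrix."""
    T = np.asarray(transition_matrix, dtype=float)
    eigenvalues, eigenvectors = np.linalg.eig(T.T)
    i = np.argmin(np.abs(eigenvalues - 1.0))
    pi = np.real(eigenvectors[:, i])
    pi = pi / pi.sum()
    pi = pi[pi > 0]
    return float(-np.sum(pi * np.log2(pi)))

# Causal states A, B of the Golden Mean process:
# from A: emit 0 (stay in A) w.p. 1/2, emit 1 (go to B) w.p. 1/2; from B: emit 0, go to A.
T = np.array([[0.5, 0.5],
              [1.0, 0.0]])
print(statistical_complexity(T))   # ~0.918 bits = H(2/3, 1/3)
```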
49

Chemical complexity of odors increases reliability of olfactory threshold testing

Oleszkiewicz, Anna, Pellegrino, Robert, Pusch, Katharina, Margot, Celine, Hummel, Thomas 17 July 2017 (has links)
Assessment of odor thresholds is a widely recognized method of measuring olfactory abilities in humans. To date, no attempts have been made to assess whether the chemical complexity of the odors used can produce more reliable results. To this end, we performed two studies with a repeated-measures design involving 121 healthy volunteers (age 19–62 years). In Study 1, we compared thresholds obtained from a test based on a single odor presented in a pen-like odor-dispensing device with thresholds based on three-odor and six-odor mixtures presented in glass containers. In Study 2, we compared stimuli consisting of one and three odors, both presented in glass containers. In both studies, measurements were performed twice, separated by at least three days. Results indicate that the multiple-odor mixtures produced more reliable threshold scores than thresholds based on a single substance.
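The reliability referred to above is test-retest reliability across the two sessions; a minimal sketch of one common way to quantify it (the statistic choice and the data are ours, not the study's analysis code):

```python
# Minimal sketch: test-retest reliability of threshold scores as the Pearson
# correlation between the two measurement sessions.
import numpy as np

def test_retest_reliability(session1, session2):
    """Pearson r between threshold scores from two sessions."""
    return float(np.corrcoef(session1, session2)[0, 1])

# Hypothetical threshold scores (dilution steps) for ten participants.
single_odor_s1 = [6.0, 8.5, 7.0, 5.5, 9.0, 6.5, 7.5, 8.0, 5.0, 7.0]
single_odor_s2 = [5.0, 9.0, 6.0, 6.5, 8.0, 7.5, 6.5, 8.5, 6.0, 6.0]
print(test_retest_reliability(single_odor_s1, single_odor_s2))
```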
