381

Scheduling Pipelined Applications: Models, Algorithms and Complexity

Benoit, Anne 08 July 2009 (has links) (PDF)
In this document, I explore the problem of scheduling pipelined applications onto large-scale distributed platforms in order to optimize several criteria. Particular attention is given to throughput maximization (i.e., the number of data sets that can be processed per time unit), latency minimization (i.e., the time required to process one data set entirely), and failure probability minimization. First, I precisely define the models and the scheduling problems, and exhibit surprising results, such as the difficulty of computing the optimal throughput and/or latency achievable for a given mapping. In particular, I detail the importance of the communication models, which induce quite different levels of difficulty. Second, I give an overview of complexity results for various cases, both for mono-criterion and for bi-criteria optimization problems, and illustrate the impact of the models on problem complexity. Finally, I show some extensions of this work to different applicative contexts and to dynamic platforms.
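As a point of reference for the simplest setting the abstract alludes to, the sketch below evaluates the throughput and latency of a hypothetical interval mapping of a linear pipeline onto processors, ignoring communication costs entirely; it is not the model analyzed in the thesis, where the communication model is precisely what makes such evaluation hard.

```python
# Minimal sketch (assumptions: linear chain, interval mapping, no communication
# costs). Stage weights, processor speeds and the mapping are hypothetical inputs.

def evaluate_interval_mapping(stage_weights, proc_speeds, mapping):
    """mapping[i] = processor assigned to stage i; stages mapped to the same
    processor are assumed to be consecutive (an interval mapping)."""
    loads = [0.0] * len(proc_speeds)
    for w, p in zip(stage_weights, mapping):
        loads[p] += w / proc_speeds[p]   # time processor p spends per data set
    period = max(loads)                  # one data set leaves every `period` time units
    latency = sum(loads)                 # time for one data set to traverse the chain
    return 1.0 / period, latency         # (throughput, latency)

# Example: 4 stages on 2 identical processors, stages 0-1 on proc 0, 2-3 on proc 1.
throughput, latency = evaluate_interval_mapping([2, 3, 1, 4], [1.0, 1.0], [0, 0, 1, 1])
print(throughput, latency)   # 0.2 data sets per time unit, latency 10
```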
382

ERP Systems - Fully Integrated Solution or a Transactional Platform? / ERP system - Fullt integrerad lösning eller en plattform för transaktioner?

Sandberg, Johan January 2008 (has links)
This paper addresses the question of how to make use of Enterprise Resource Planning (ERP) systems in process-industry companies where there is a pervasive need for process standardization. ERP systems have the potential to contribute to standardization and integration of organizational data through an off-the-shelf solution. In practice, the results of ERP system implementations have varied greatly. Considering their implications for business processes and the complexity of the systems, this should not come as a surprise. ERP systems imply not only standardization of data but also standardization of key processes in the company. The consequences for an individual organization are therefore hard to predict. Making strategic choices between different degrees of in-house developed systems, integration of solutions from many different suppliers, or relying solely on ERP consultants and their proposed implementations can be a troublesome balancing act. This paper describes a case study of the Swedish dairy company Norrmejerier and its implementation of the ERP system IFS, analyzed from the perspective of complex systems and standardization. The use of IFS at Norrmejerier can be characterized as a loosely coupled integration with the ERP system as a central integration facilitator. This solution allowed the company to reap the benefits of standardization and to meet its need for special functionality, while limiting unexpected negative consequences such as decreased activity support and increased complexity. The key contributions of this paper are, first, that it shows how ERP systems can contribute to standardization and integration efforts in IT environments with particular demands on functionality, and second, that it demonstrates how negative side effects of ERP system implementation can be managed and limited.
383

Computational Experience and the Explanatory Value of Condition Numbers for Linear Optimization

Ordóñez, Fernando, Freund, Robert M. 25 September 2003 (has links)
The goal of this paper is to develop some computational experience and test the practical relevance of the theory of condition numbers C(d) for linear optimization, as applied to problem instances that one might encounter in practice. We used the NETLIB suite of linear optimization problems as a test bed for condition number computation and analysis. Our computational results indicate that 72% of the NETLIB suite problem instances are ill-conditioned. However, after pre-processing heuristics are applied, only 19% of the post-processed problem instances are ill-conditioned, and log C(d) of the finitely-conditioned post-processed problems is fairly nicely distributed. We also show that the number of IPM iterations needed to solve the problems in the NETLIB suite varies roughly linearly (and monotonically) with log C(d) of the post-processed problem instances. Empirical evidence yields a positive linear relationship between IPM iterations and log C(d) for the post-processed problem instances, significant at the 95% confidence level. Furthermore, 42% of the variation in IPM iterations among the NETLIB suite problem instances is accounted for by log C(d) of the problem instances after pre-processing.
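The reported relationship between IPM iterations and log C(d) is an ordinary linear regression; the sketch below shows the kind of fit involved. The input arrays are placeholders to be filled with NETLIB measurements; no values from the paper are reproduced here.

```python
# Hedged sketch: least-squares fit of IPM iteration counts against log C(d).
import numpy as np

def fit_iterations_vs_condition(log_cond, ipm_iters):
    """Return (slope, intercept, R^2) of ipm_iters ~ slope * log_cond + intercept."""
    log_cond = np.asarray(log_cond, dtype=float)
    ipm_iters = np.asarray(ipm_iters, dtype=float)
    slope, intercept = np.polyfit(log_cond, ipm_iters, deg=1)
    predicted = slope * log_cond + intercept
    ss_res = np.sum((ipm_iters - predicted) ** 2)
    ss_tot = np.sum((ipm_iters - ipm_iters.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot
```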
384

A Potential Reduction Algorithm With User-Specified Phase I - Phase II Balance, for Solving a Linear Program from an Infeasible Warm Start

Freund, Robert M. 10 1900 (has links)
This paper develops a potential reduction algorithm for solving a linear-programming problem directly from a "warm start" initial point that is neither feasible nor optimal. The algorithm is of an "interior point" variety that seeks to reduce a single potential function which simultaneously coerces feasibility improvement (Phase I) and objective value improvement (Phase II). The key feature of the algorithm is the ability to specify beforehand the desired balance between infeasibility and nonoptimality in the following sense. Given a prespecified balancing parameter β > 0, the algorithm maintains the following Phase I - Phase II "β-balancing constraint" throughout: (c^T x − z*) ≤ β·ξ(x), where c^T x is the objective function, z* is the (unknown) optimal objective value of the linear program, and ξ(x) measures the infeasibility of the current iterate x. This balancing constraint can be used either to emphasize rapid attainment of feasibility (set β large) at the possible expense of good objective function values, or to emphasize rapid attainment of good objective values (set β small) at the possible expense of a larger infeasibility gap. The algorithm exhibits the following advantageous features: (i) the iterate solutions monotonically decrease the infeasibility measure, (ii) the iterate solutions satisfy the β-balancing constraint, (iii) the iterate solutions achieve constant improvement in both Phase I and Phase II in O(n) iterations, (iv) there is always a possibility of finite termination of the Phase I problem, and (v) the algorithm is amenable to acceleration via linesearch of the potential function.
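To illustrate the role of the balancing parameter, here is a small check of the β-balancing constraint using the notation reconstructed above (writing the infeasibility measure as ξ(x) is an assumption, not necessarily the paper's exact symbol); all numbers are hypothetical.

```python
# Hedged illustration: the iterate x must satisfy  c^T x - z_star <= beta * xi(x).
# Large beta tolerates poor objective values while feasibility is pursued;
# small beta forces near-optimal objective values early.

def satisfies_balance(obj_value, z_star, infeasibility, beta):
    """Check the Phase I - Phase II beta-balancing constraint."""
    return obj_value - z_star <= beta * infeasibility

# Hypothetical iterate: objective 120, optimum 100, infeasibility measure 5.
print(satisfies_balance(120, 100, 5, beta=10.0))  # True: large beta emphasizes feasibility
print(satisfies_balance(120, 100, 5, beta=1.0))   # False: small beta demands a better objective
```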
385

GPSG-Recognition is NP-Hard

Ristad, Eric Sven 01 March 1985 (has links)
Proponents of generalized phrase structure grammar (GPSG) cite its weak context-free generative power as proof of the computational tractability of GPSG-Recognition. Since context-free languages (CFLs) can be parsed in time proportional to the cube of the sentence length, and GPSGs only generate CFLs, it seems plausible that GPSGs can also be parsed in cubic time. This longstanding, widely assumed GPSG "efficient parsability" result is misleading: parsing the sentences of an arbitrary GPSG is likely to be intractable, because a reduction from 3SAT proves that the universal recognition problem for the GPSGs of Gazdar (1981) is NP-hard. Crucially, the time to parse a sentence of a CFL can be the product of sentence length cubed and context-free grammar size squared, and a GPSG grammar can result in an exponentially large set of derived context-free rules. A central object in the 1981 GPSG theory, the metarule, inherently results in an intractable parsing problem, even when severely constrained. The implications for linguistics and natural language parsing are discussed.
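The exponential blowup of derived context-free rules can be seen with a toy rule schema (this is not Gazdar's metarule formalism, just an illustration of the combinatorics): instantiating one schema over all subsets of n binary features already yields 2^n concrete rules.

```python
# Toy sketch of feature-instantiation blowup; category names are made up.
from itertools import combinations

def instantiate_schema(features):
    """Expand one schematic rule over every subset of `features`."""
    rules = []
    for k in range(len(features) + 1):
        for subset in combinations(features, k):
            cat = "S[" + ",".join(subset) + "]"
            rules.append((cat, ("NP[" + ",".join(subset) + "]", "VP")))
    return rules

print(len(instantiate_schema(["wh", "neg", "fin", "inv"])))  # 16 rules from one schema
```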
386

A Comparative Analysis of Reinforcement Learning Methods

Mataric, Maja 01 October 1991 (has links)
This paper analyzes the suitability of reinforcement learning (RL) for both programming and adapting situated agents. We discuss two RL algorithms: Q-learning and the Bucket Brigade. We introduce a special case of the Bucket Brigade, and analyze and compare its performance to Q-learning in a number of experiments. Next we discuss the key problems of RL: time and space complexity, input generalization, sensitivity to parameter values, and selection of the reinforcement function. We address the tradeoffs between built-in and learned knowledge and the number of training examples required by a learning algorithm. Finally, we suggest directions for future research.
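For readers unfamiliar with the first of the two algorithms, here is a minimal tabular Q-learning update; the learning rate, discount factor and table layout are generic textbook choices, not the paper's experimental settings.

```python
# Minimal one-step tabular Q-learning sketch (generic, not the paper's setup).
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

Q = defaultdict(float)
actions = ["left", "right"]
q_update(Q, state=0, action="right", reward=1.0, next_state=1, actions=actions)
print(Q[(0, "right")])  # 0.1
```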
387

Indexing for Visual Recognition from a Large Model Base

Breuel, Thomas M. 01 August 1990 (has links)
This paper describes a new approach to the model base indexing stage of visual object recognition. Fast model base indexing of 3D objects is achieved by accessing a database of encoded 2D views of the objects using a fast 2D matching algorithm. The algorithm is specifically intended as a plausible solution for the problem of indexing into very large model bases that general purpose vision systems and robots will have to deal with in the future. Other properties that make the indexing algorithm attractive are that it can take advantage of most geometric and non-geometric properties of features without modification, and that it addresses the incremental model acquisition problem for 3D objects.
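A generic sketch of the idea of indexing encoded 2D views (not Breuel's actual matching algorithm): views are keyed by quantized feature tuples in a hash table, so that a group of image features retrieves candidate models directly; feature vectors and models below are hypothetical.

```python
# Hedged sketch of view-based model indexing via a hash table of quantized features.

def quantize(features, step=0.25):
    return tuple(round(f / step) for f in features)

def build_index(model_views):
    """model_views: {model_name: [feature_vector, ...]} -> inverted index."""
    index = {}
    for model, views in model_views.items():
        for view in views:
            index.setdefault(quantize(view), set()).add(model)
    return index

def candidate_models(index, image_features):
    return index.get(quantize(image_features), set())

index = build_index({"mug": [[0.1, 0.9], [0.2, 0.8]], "stapler": [[0.7, 0.3]]})
print(candidate_models(index, [0.12, 0.88]))  # {'mug'}
```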
388

Simplifying transformations for type-alpha certificates

Arkoudas, Konstantine 13 November 2001 (has links)
This paper presents an algorithm for simplifying NDL deductions. An array of simplifying transformations is rigorously defined. They are shown to be terminating and to respect the formal semantics of the language. We also show that the transformations never increase the size or complexity of a deduction---in the worst case, they produce deductions of the same size and complexity as the original. We present several examples of proofs containing various types of "detours", and explain how our procedure eliminates them, resulting in smaller and cleaner deductions. All of the given transformations are fully implemented in SML-NJ. The complete code listing is presented, along with explanatory comments. Finally, although the transformations given here are defined for NDL, we point out that they can be applied to any type-alpha DPL that satisfies a few simple conditions.
389

Complexity of Human Language Comprehension

Ristad, Eric Sven 01 December 1988 (has links)
The goal of this article is to reveal the computational structure of modern principle-and-parameter (Chomskian) linguistic theories: what computational problems do these informal theories pose, and what is the underlying structure of those computations? To do this, I analyze the computational complexity of human language comprehension: what linguistic representation is assigned to a given sound? This problem is factored into smaller, interrelated (but independently statable) problems. For example, in order to understand a given sound, the listener must assign a phonetic form to the sound; determine the morphemes that compose the words in the sound; and calculate the linguistic antecedent of every pronoun in the utterance. I prove that these and other subproblems are all NP-hard, and that language comprehension is itself PSPACE-hard.
390

Hard and soft conditions on the faculty of language : constituting parametric variation

Zeijlstra, Hedde January 2009 (has links)
In this paper I argue that both parametric variation and the alleged differences between languages in terms of their internal complexity follow straightforwardly from the Strongest Minimalist Thesis, which takes the Faculty of Language (FL) to be an optimal solution to conditions that neighboring mental modules impose on it. I argue that hard conditions, such as legibility at the linguistic interfaces, invoke simplicity metrics that, given that they stem from different mental modules, are not harmonious. I argue that widely attested expression strategies, such as agreement or movement, are a direct result of conflicting simplicity metrics, and that UG, perceived as a toolbox that shapes natural language, can be taken to consist of a limited number of marking strategies, all resulting from conflicting simplicity metrics. As such, the contents of UG follow from simplicity requirements and therefore no longer necessitate linguistic principles, valued or unvalued, to be innately present. Finally, I show that the SMT does not require that languages themselves be optimal in connecting sound to meaning.
