381 |
Biodiversity: Its Measurement and Metaphysics. Roche, David, January 2001 (has links)
Biodiversity is a concept that plays a key role in both scientific theories such as the species-area law and conservation politics. Currently, however, little agreement exists on how biodiversity should be defined, let alone measured. This has led to suggestions that biodiversity is not a metaphysically robust concept, with major implications for its usefulness in formulating scientific theories and making conservation decisions. A general discussion of biodiversity is presented, highlighting its application both in scientific and conservation contexts, its relationship with environmental ethics, and existing approaches to its measurement. To overcome the limitations of existing biodiversity concepts, a new concept of biocomplexity is proposed. This concept equates the biodiversity of any biological system with its effective complexity. Biocomplexity is shown to be the only feasible measure of biodiversity that captures the essential features desired of a general biodiversity concept. In particular, it is a well-defined, measurable and strongly intrinsic property of any biological system. Finally, the practical application of biocomplexity is discussed.
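For concreteness, here is a minimal Python sketch of one conventional species-level index, Shannon entropy over relative abundances, of the kind the existing measurement approaches discussed above rely on; the species counts are purely hypothetical, and this is not the proposed biocomplexity measure itself.

```python
import math

def shannon_diversity(abundances):
    """Shannon entropy H = -sum(p_i * ln p_i) over relative abundances.

    A conventional species-level diversity index, shown only for contrast
    with the biocomplexity concept proposed in the thesis.
    """
    total = sum(abundances)
    props = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in props)

# Hypothetical community with four species (counts are illustrative only).
print(shannon_diversity([50, 30, 15, 5]))  # roughly 1.14 nats
```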
|
382 |
Quantum complexity, Emergence and Computation by Measurement: On what computers reveal about physical laws, and what physical laws reveal about computers. Mile Gu, Unknown Date (has links)
Any computation is facilitated by some physical process, and the observable quantities of any physical process can be viewed as a computation. These close ties suggest that the study of what universal computers are capable of may lead to additional insight about the physical universe, and vice versa. In this thesis, we explore three lines of research that are linked to this central theme. The first part shows how notions of non-computability and undecidability eventually led to evidence of emergence, the concept that even if a ‘theory of everything’ governing all microscopic interactions were discovered, the understanding of macroscopic order is likely to require additional insights. The second part proposes a physically motivated model of computation that relates quantum complexity, quantum optimal control, and Riemannian geometry, so that insights in any one of these disciplines could also lead to insights in the others. The remainder of this part explores a simple application of these relations. The final part proposes a model of quantum computation that generalizes measurement-based computation to continuous variables. We outline its optical implementation, whereby any computation can be performed by single-mode measurements on a resource state that can be prepared by passing a collection of squeezed states through a beamsplitter network.
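As a purely illustrative companion to the optical implementation mentioned above, the following numerical sketch works in the Gaussian (covariance-matrix) formalism and mixes two squeezed vacuum modes on a balanced beamsplitter, the basic resource-preparation step in continuous-variable schemes; the squeezing parameter, quadrature ordering, and normalization are assumptions, and this is not the thesis's full construction.

```python
import numpy as np

# Gaussian (covariance-matrix) picture, quadrature ordering (x1, p1, x2, p2),
# vacuum variance normalised to 1. The squeezing parameter r is illustrative.
r = 1.0

def squeezed_vacuum(r, antisqueeze_x=False):
    """Covariance matrix of a single squeezed vacuum mode."""
    sx, sp = np.exp(-2 * r), np.exp(2 * r)
    return np.diag([sp, sx]) if antisqueeze_x else np.diag([sx, sp])

# Two modes squeezed along conjugate quadratures.
sigma_in = np.block([
    [squeezed_vacuum(r), np.zeros((2, 2))],
    [np.zeros((2, 2)), squeezed_vacuum(r, antisqueeze_x=True)],
])

# Symplectic matrix of a balanced (50:50) beamsplitter acting on both quadratures.
t = 1 / np.sqrt(2)
B = np.array([[t, 0, t, 0],
              [0, t, 0, t],
              [-t, 0, t, 0],
              [0, -t, 0, t]])

sigma_out = B @ sigma_in @ B.T

# For large r the output approximates an EPR-correlated pair: the variances of
# (x1 - x2) and (p1 + p2) both shrink as exp(-2r).
var_x_minus = sigma_out[0, 0] + sigma_out[2, 2] - 2 * sigma_out[0, 2]
var_p_plus = sigma_out[1, 1] + sigma_out[3, 3] + 2 * sigma_out[1, 3]
print(var_x_minus, var_p_plus)
```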
|
383 |
Scheduling Pipelined Applications: Models, Algorithms and Complexity. Benoit, Anne, 08 July 2009 (has links) (PDF)
In this document, I explore the problem of scheduling pipelined applications onto large-scale distributed platforms in order to optimize several criteria. Particular attention is given to throughput maximization (i.e., the number of data sets that can be processed per time unit), latency minimization (i.e., the time required to process one data set entirely), and failure probability minimization. First, I precisely define the models and the scheduling problems, and exhibit surprising results, such as the difficulty of computing the optimal throughput and/or latency that can be obtained for a given mapping. In particular, I detail the importance of the communication models, which induce quite different levels of difficulty. Second, I give an overview of complexity results for various cases, both for mono-criterion and for bi-criteria optimization problems, and illustrate the impact of the models on the problem complexity. Finally, I show some extensions of this work to different applicative contexts and to dynamic platforms.
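As a toy illustration of the two main criteria, the sketch below computes throughput and latency for a linear pipeline under a deliberately simplified model (one stage per processor, communication costs folded into stage times); the document's actual models, as noted, depend heavily on the communication model and are richer than this.

```python
def period_and_latency(stage_times):
    """Throughput and latency of a linear pipeline under a simplified model.

    Assumes one stage per processor and communication costs already folded
    into the stage times (the document's models are richer than this).
    """
    period = max(stage_times)      # one data set leaves every `period` time units
    throughput = 1.0 / period
    latency = sum(stage_times)     # time for a single data set to traverse all stages
    return throughput, latency

# Hypothetical 4-stage mapping (times are illustrative only).
print(period_and_latency([3.0, 5.0, 2.0, 4.0]))  # (0.2, 14.0)
```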
|
384 |
ERP Systems - Fully Integrated Solution or a Transactional Platform? / ERP system - Fullt integrerad lösning eller en plattform för transaktioner? Sandberg, Johan, January 2008 (has links)
This paper addresses the question of how to make use of Enterprise Resource Planning (ERP) systems in companies in the process industry, where there is a pervasive need for process standardization. ERP systems have the potential to contribute to the standardization and integration of organizational data through an off-the-shelf solution. In practice, the results of ERP system implementations have varied greatly. Considering their implications for business processes and the complexity of the systems, this should not come as a surprise. ERP systems imply not only standardization of data but also standardization of key processes in the company. The consequences for the individual organization are therefore hard to predict. Making strategic choices between different degrees of in-house developed systems, integration of solutions from many different suppliers, or relying solely on the ERP system consultants and their proposed implementation, can be a troublesome balancing act. This paper describes a case study of the Swedish dairy company Norrmejerier and the implementation of the ERP system IFS, analyzed from a perspective of complex systems and standardization. The use of IFS at Norrmejerier can be characterized as a loosely coupled integration with the ERP system as a central integration facilitator. This solution allowed the company to reap the benefits of standardization and meet its need for special functionality, while limiting negative unexpected consequences such as decreased activity support and increased complexity. The key contributions of this paper are, first, that it shows how ERP systems can contribute to standardization and integration efforts in IT environments with particular demands on functionality, and second, that it demonstrates how negative side effects related to ERP system implementation can be managed and limited.
|
385 |
Computational Experience and the Explanatory Value of Condition Numbers for Linear Optimization. Ordónez, Fernando; Freund, Robert M., 25 September 2003 (has links)
The goal of this paper is to develop some computational experience and test the practical relevance of the theory of condition numbers C(d) for linear optimization, as applied to problem instances that one might encounter in practice. We used the NETLIB suite of linear optimization problems as a test bed for condition number computation and analysis. Our computational results indicate that 72% of the NETLIB suite problem instances are ill-conditioned. However, after pre-processing heuristics are applied, only 19% of the post-processed problem instances are ill-conditioned, and log C(d) of the finitely-conditioned post-processed problems is fairly nicely distributed. We also show that the number of IPM iterations needed to solve the problems in the NETLIB suite varies roughly linearly (and monotonically) with log C(d) of the post-processed problem instances. Empirical evidence yields a positive linear relationship between IPM iterations and log C(d) for the post-processed problem instances, significant at the 95% confidence level. Furthermore, 42% of the variation in IPM iterations among the NETLIB suite problem instances is accounted for by log C(d) of the problem instances after pre-processing.
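A sketch of the kind of least-squares analysis described, using placeholder numbers rather than the actual NETLIB measurements (which are reported only in the paper):

```python
import numpy as np

# Placeholder data standing in for (log C(d), IPM iteration count) pairs of the
# post-processed NETLIB instances; the real values are in the paper, not here.
log_cond = np.array([2.1, 3.4, 4.0, 4.8, 5.5, 6.2, 7.1])
ipm_iters = np.array([14.0, 17.0, 19.0, 22.0, 24.0, 27.0, 30.0])

# Ordinary least-squares fit: iterations ~ slope * log C(d) + intercept.
A = np.vstack([log_cond, np.ones_like(log_cond)]).T
(slope, intercept), residuals, *_ = np.linalg.lstsq(A, ipm_iters, rcond=None)

ss_res = residuals[0] if residuals.size else 0.0
ss_tot = np.sum((ipm_iters - ipm_iters.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot   # fraction of variation explained, cf. the 42% reported
print(slope, intercept, r_squared)
```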
|
386 |
A Potential Reduction Algorithm With User-Specified Phase I - Phase II Balance, for Solving a Linear Program from an Infeasible Warm Start. Freund, Robert M., 10 1900 (has links)
This paper develops a potential reduction algorithm for solving a linear-programming problem directly from a "warm start" initial point that is neither feasible nor optimal. The algorithm is of an "interior point" variety that seeks to reduce a single potential function which simultaneously coerces feasibility improvement (Phase I) and objective value improvement (Phase II). The key feature of the algorithm is the ability to specify beforehand the desired balance between infeasibility and nonoptimality, in the following sense. Given a prespecified balancing parameter β > 0, the algorithm maintains the following Phase I - Phase II "β-balancing constraint" throughout: (cᵀx − z*) ≤ β ξᵀx, where cᵀx is the objective function, z* is the (unknown) optimal objective value of the linear program, and ξᵀx measures the infeasibility of the current iterate x. This balancing constraint can be used either to emphasize rapid attainment of feasibility (set β large) at the possible expense of good objective function values, or to emphasize rapid attainment of good objective values (set β small) at the possible expense of a larger infeasibility gap. The algorithm exhibits the following advantageous features: (i) the iterate solutions monotonically decrease the infeasibility measure, (ii) the iterate solutions satisfy the β-balancing constraint, (iii) the iterate solutions achieve constant improvement in both Phase I and Phase II in O(n) iterations, (iv) there is always a possibility of finite termination of the Phase I problem, and (v) the algorithm is amenable to acceleration via linesearch of the potential function.
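A minimal sketch of checking the β-balancing constraint for a candidate iterate; for illustration the optimal value z* is passed in directly and a simple L1 residual is used as the infeasibility measure, both of which are assumptions rather than the paper's construction.

```python
import numpy as np

def beta_balanced(c, x, z_star, infeasibility, beta):
    """Check the Phase I - Phase II balancing constraint
    (c^T x - z*) <= beta * infeasibility(x).

    For illustration z* is passed in directly; in the algorithm itself the
    optimal value is unknown and is handled through the potential function.
    """
    return float(c @ x) - z_star <= beta * infeasibility(x)

# Hypothetical LP data: min c^T x s.t. A x = b, x >= 0 (values illustrative only).
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([1.0, 1.0])
x = np.array([1.5, 1.0])                         # a candidate warm-start iterate
infeas = lambda x: np.linalg.norm(A @ x - b, 1)  # one simple infeasibility measure

print(beta_balanced(c, x, z_star=2.0, infeasibility=infeas, beta=10.0))  # True
```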
|
387 |
GPSG-Recognition is NP-Hard. Ristad, Eric Sven, 01 March 1985 (has links)
Proponents of generalized phrase structure grammar (GPSG) cite its weak context-free generative power as proof of the computational tractability of GPSG-Recognition. Since context-free languages (CFLs) can be parsed in time proportional to the cube of the sentence length, and GPSGs only generate CFLs, it seems plausible that GPSGs can also be parsed in cubic time. This longstanding, widely assumed GPSG "efficient parsability" result is misleading: parsing the sentences of an arbitrary GPSG is likely to be intractable, because a reduction from 3SAT proves that the universal recognition problem for the GPSGs of Gazdar (1981) is NP-hard. Crucially, the time to parse a sentence of a CFL can be the product of sentence length cubed and context-free grammar size squared, and a GPSG grammar can result in an exponentially large set of derived context-free rules. A central object in the 1981 GPSG theory, the metarule, inherently results in an intractable parsing problem, even when severely constrained. The implications for linguistics and natural language parsing are discussed.
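A schematic sketch of the combinatorial point (not Ristad's 3SAT reduction): a single rule schema over k binary features already expands to 2^k derived context-free rules, so grammar size enters the parsing cost in a way the cubic-time folklore ignores. The feature names below are hypothetical.

```python
from itertools import product

def expand_rule(base_rule, binary_features):
    """Expand one schematic rule over all assignments to its binary features.

    Illustrates (schematically, not via Ristad's reduction) why the set of
    derived context-free rules can grow exponentially in the grammar size.
    """
    expanded = []
    for assignment in product((False, True), repeat=len(binary_features)):
        values = dict(zip(binary_features, assignment))
        expanded.append((base_rule, frozenset(f for f, v in values.items() if v)))
    return expanded

features = ["PLU", "CASE", "WH", "INV", "SLASH"]   # hypothetical feature names
rules = expand_rule("VP -> V NP", features)
print(len(rules))   # 2**5 = 32 derived rules from a single schematic rule
```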
|
388 |
A Comparative Analysis of Reinforcement Learning Methods. Mataric, Maja, 01 October 1991 (has links)
This paper analyzes the suitability of reinforcement learning (RL) for both programming and adapting situated agents. We discuss two RL algorithms: Q-learning and the Bucket Brigade. We introduce a special case of the Bucket Brigade, and analyze and compare its performance to Q-learning in a number of experiments. Next we discuss the key problems of RL: time and space complexity, input generalization, sensitivity to parameter values, and selection of the reinforcement function. We address the tradeoff between built-in and learned knowledge, and the number of training examples required by a learning algorithm. Finally, we suggest directions for future research.
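For reference, a minimal sketch of the tabular Q-learning update, one of the two algorithms compared; the hyperparameters and toy interface are illustrative, not those used in the paper's experiments.

```python
import random
from collections import defaultdict

# Tabular Q-learning update; hyperparameters are illustrative only.
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = defaultdict(float)   # maps (state, action) -> estimated value

def choose_action(state, actions):
    """Epsilon-greedy action selection."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, actions):
    """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One illustrative update on a toy transition.
q_update("s0", "a1", reward=1.0, next_state="s1", actions=["a0", "a1"])
print(Q[("s0", "a1")])   # 0.1
```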
|
389 |
Indexing for Visual Recognition from a Large Model Base. Breuel, Thomas M., 01 August 1990 (has links)
This paper describes a new approach to the model base indexing stage of visual object recognition. Fast model base indexing of 3D objects is achieved by accessing a database of encoded 2D views of the objects using a fast 2D matching algorithm. The algorithm is specifically intended as a plausible solution for the problem of indexing into very large model bases that general purpose vision systems and robots will have to deal with in the future. Other properties that make the indexing algorithm attractive are that it can take advantage of most geometric and non-geometric properties of features without modification, and that it addresses the incremental model acquisition problem for 3D objects.
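A toy sketch of the general indexing idea, storing encoded 2D views under quantized feature keys and retrieving candidates by voting; this is meant only to illustrate view-based indexing and is not the paper's matching algorithm.

```python
from collections import defaultdict

def quantize(feature, cell=0.25):
    """Coarsely quantize a 2D feature coordinate so nearby views share index keys."""
    return (round(feature[0] / cell), round(feature[1] / cell))

class ViewIndex:
    """Toy index over encoded 2D views of 3D models (not the paper's algorithm)."""

    def __init__(self):
        self.table = defaultdict(set)   # quantized feature -> {(model, view)}

    def add_view(self, model_id, view_id, features):
        for f in features:
            self.table[quantize(f)].add((model_id, view_id))

    def query(self, image_features):
        """Vote for stored views that share quantized features with the image."""
        votes = defaultdict(int)
        for f in image_features:
            for entry in self.table[quantize(f)]:
                votes[entry] += 1
        return sorted(votes.items(), key=lambda kv: -kv[1])

index = ViewIndex()
index.add_view("mug", 0, [(0.1, 0.2), (0.5, 0.5), (0.9, 0.1)])
index.add_view("phone", 0, [(0.4, 0.4), (0.8, 0.8)])
print(index.query([(0.12, 0.21), (0.52, 0.48)]))  # the "mug" view ranks first
```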
|
390 |
Simplifying transformations for type-alpha certificates. Arkoudas, Konstantine, 13 November 2001 (has links)
This paper presents an algorithm for simplifying NDL deductions. An array of simplifying transformations is rigorously defined. The transformations are shown to be terminating and to respect the formal semantics of the language. We also show that the transformations never increase the size or complexity of a deduction; in the worst case, they produce deductions of the same size and complexity as the original. We present several examples of proofs containing various types of "detours", and explain how our procedure eliminates them, resulting in smaller and cleaner deductions. All of the given transformations are fully implemented in SML-NJ. The complete code listing is presented, along with explanatory comments. Finally, although the transformations given here are defined for NDL, we point out that they can be applied to any type-alpha DPL that satisfies a few simple conditions.
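A toy term-rewriting sketch (in Python, whereas the paper's implementation is in SML-NJ) of the detour-elimination idea on a miniature deduction representation; NDL and the actual transformations are richer, so this only mirrors the terminating, non-size-increasing flavor of the procedure.

```python
# Toy deduction terms: ("claim", p), ("both", d1, d2), ("left", d), ("right", d).
# A "detour" pairs an introduction with an immediate elimination; removing it
# never enlarges the deduction. This mimics the idea only, not NDL itself.

def simplify(d):
    """Apply detour-eliminating rewrites bottom-up until a fixed point."""
    tag = d[0]
    if tag == "claim":
        return d
    if tag == "both":
        return ("both", simplify(d[1]), simplify(d[2]))
    if tag in ("left", "right"):
        sub = simplify(d[1])
        if sub[0] == "both":                      # e.g. left(both(d1, d2)) -> d1
            return sub[1] if tag == "left" else sub[2]
        return (tag, sub)
    raise ValueError(f"unknown deduction form: {tag}")

d = ("left", ("both", ("claim", "p"),
              ("right", ("both", ("claim", "q"), ("claim", "r")))))
print(simplify(d))   # ("claim", "p"): the detours are gone and the result is smaller
```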
|