301

Polyhedral structure of the K-median problem

Zhao, Wenhui, January 2007 (has links)
Thesis (Ph. D.)--Ohio State University, 2007. / Title from first page of PDF file. Includes bibliographical references (p. 119-120).
302

Optimizing steering heuristics for clustered microarchitectures

LaDuca, Robert James. January 2006 (has links)
Thesis (M.S.)--State University of New York at Binghamton, Watson School of Engineering and Applied Science (Computer Science), 2006. / Includes bibliographical references.
303

Adaptive Behaviour Based Robotics using On-Board Genetic Programming

Kofod-Petersen, Anders January 2002 (has links)
This thesis investigates the use of Genetic Programming (GP) to evolve controllers for an autonomous robot.

GP is a type of Genetic Algorithm (GA) based on the Darwinian ideas of natural selection and genetic recombination, where the individuals are most often represented as tree structures. GP evolves a population of candidate solutions over many generations to solve problems.

The most common approach used today to develop controllers for autonomous robots is to employ a GA to evolve an Artificial Neural Network (ANN). This approach is most often used in simulation only, or in conjunction with online evolution, where simulation still covers the largest part of the process.

GP has been largely neglected in Behaviour Based Robotics (BBR). This is primarily due to the problem of speed, which is the biggest curse of any standard GP. The main contribution of this thesis is the approach of using a linear representation of GP in online evolution, and establishing whether or not GP is feasible in this situation. Since this is not a comparison with other methods, only a demonstration of the possibilities of GP, there is no need to test the particular test cases against other methods.

The work in this thesis builds upon the work of Wolfgang Banzhaf and Peter Nordin, and a comparison with their work is therefore included.
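As a purely illustrative aside, a linear GP loop of the kind the abstract refers to (in the style of Banzhaf and Nordin, but applied to a toy symbolic-regression task rather than an on-board robot controller, and with all parameters invented here) might be sketched in Python as follows:

    # Minimal linear-GP loop (toy task, hypothetical parameters): programs
    # are flat lists of register instructions, evolved to approximate
    # f(x) = x*x + x over a small range of inputs.
    import random

    OPS = {"add": lambda a, b: a + b,
           "sub": lambda a, b: a - b,
           "mul": lambda a, b: a * b}
    N_REGS, PROG_LEN, POP_SIZE, GENERATIONS = 4, 8, 60, 40

    def random_instr():
        return (random.choice(list(OPS)), random.randrange(N_REGS),
                random.randrange(N_REGS), random.randrange(N_REGS))

    def run(program, x):
        regs = [float(x)] + [1.0] * (N_REGS - 1)   # r0 holds the input
        for op, dst, a, b in program:
            regs[dst] = OPS[op](regs[a], regs[b])
        return regs[0]                             # r0 is the output

    def fitness(program):                          # lower is better
        return sum(abs(run(program, x) - (x * x + x)) for x in range(-5, 6))

    def crossover(p1, p2):
        cut = random.randrange(1, PROG_LEN)
        return p1[:cut] + p2[cut:]

    def mutate(program):
        program = list(program)
        program[random.randrange(PROG_LEN)] = random_instr()
        return program

    population = [[random_instr() for _ in range(PROG_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness)
        parents = population[:POP_SIZE // 2]       # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    print("best error:", fitness(min(population, key=fitness)))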
304

Gruppuppgifter: attityder, lärande och Extreme Programming / Group assignments: attitudes, learning and Extreme Programming

Elvheim, Martin January 2003 (has links)
No description available.
305

How does refactoring affect performance? / Refaktoreringens påverkan på prestanda

Högberg, Jonas January 2010 (has links)
The most prominent field in software development over the recent decade has been Agile Development. In Agile Development, the construction of the software is an iterative process carried out in close contact with the customer. One of the most well-known agile methods is Extreme Programming, which suggests a number of practices for developing software. One of those practices is Test-Driven Development: writing the test code before writing the actual code. This means the code can be tested as soon as it is finished, which creates an opportunity to change the design of the code, test it again with the test code, and discover whether any functionality has been lost. The purpose of refactoring is to improve the design of existing code. How refactoring affects performance is not widely discussed, and this thesis therefore examines that question.

Code examples with and without refactoring principles have been tested. The investigation is divided into two parts: part one tests individual refactoring principles and part two tests a test application. There are many opinions on how to interpret the results of a performance test. After an extensive investigation, the arithmetic mean was chosen, mainly because it reflects the total runtime of a series of executions. To test the hypothesis that total execution time changes with refactoring, Student's t-test was used; it was chosen because it can be applied even when the variance is unknown.

The results were clear: the arithmetic mean increased for five out of six refactoring principles. The execution time of the test application also increased, but only by 4 %. One reason for the small increase is that it is not possible to go from a non-refactored application to a fully refactored one. Another is that the application was developed with the Swedish Rail Administration's framework, which was itself not refactored. The conclusion of this thesis is that one should be careful when refactoring the parts of the code that are executed the most. One should keep the "90-10 rule" in mind: 90 % of the execution time is spent in 10 % of the code. Another important aspect is that an existing framework is very often used; if only the new code is refactored and not the framework, only a subset of the code is refactored. The application therefore never gets fully refactored, and the effects of the refactoring are diminished.
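As a minimal sketch of the comparison the abstract describes, assuming hypothetical timing data and using SciPy's two-sample t-test rather than anything taken from the thesis itself:

    # Minimal sketch of the comparison described above: two hypothetical
    # sets of execution times (original vs. refactored code), compared on
    # their arithmetic means and with Student's two-sample t-test.
    from statistics import mean
    from scipy import stats

    original_runs_s   = [1.02, 0.98, 1.05, 1.01, 0.99, 1.03]   # hypothetical timings (seconds)
    refactored_runs_s = [1.08, 1.06, 1.09, 1.05, 1.07, 1.10]   # hypothetical timings (seconds)

    print("mean original  :", mean(original_runs_s))
    print("mean refactored:", mean(refactored_runs_s))

    # equal_var=True gives the classic Student's t-test; a small p-value
    # suggests the mean execution time really did change with refactoring.
    t_stat, p_value = stats.ttest_ind(original_runs_s, refactored_runs_s, equal_var=True)
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")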
306

Visual Compositional-Relational Programming

Zetterström, Andreas January 2010 (has links)
In an ever faster changing environment, software developers not only need agile methods, but also agile programming paradigms and tools. A paradigm shift towards declarative programming has begun; a clear indication of this is Microsoft's substantial investment in functional programming. Moreover, several attempts have been made to enable visual programming. We believe that software development is ready for a new paradigm which goes beyond any existing declarative paradigm: visual compositional-relational programming. Compositional-relational programming (CRP) is a purely declarative paradigm -- making it suitable for a visual representation. All procedural aspects -- including the increasingly important issue of parallelization -- are removed from the programmer's consideration and handled in the underlying implementation. The foundation for CRP is a theory of higher-order combinatory logic programming developed by Hamfelt and Nilsson in the 1990s. This thesis proposes a model for visualizing compositional-relational programming. We show that the diagrams are isomorphic with the programs represented in textual form. Furthermore, we show that the model can be used to automatically generate code from diagrams, thus paving the way for a visual integrated development environment for CRP, where programming is performed by combining visual objects in a drag-and-drop fashion. At present, we implement CRP using Prolog. However, in the future we foresee an implementation directly on one of the major object-oriented frameworks, e.g. the .NET platform, with the aim to finally launch relational programming into large-scale systems development.
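As a loose analogy only, and not the authors' Prolog-based system, the compositional idea of building new relations by combining existing ones (rather than writing new control flow) can be sketched in Python with relations modeled as functions that yield their possible outputs:

    # Loose Python analogy of composing relations instead of writing
    # control flow (the thesis implements CRP in Prolog, not Python):
    # a relation maps an input to an iterator of possible outputs, and
    # compose() builds a new relation from two existing ones.
    def compose(r, s):
        """x relates to z via compose(r, s) iff r relates x to some y and s relates y to z."""
        def composed(x):
            for y in r(x):
                for z in s(y):
                    yield z
        return composed

    # A toy relation over hypothetical data (not from the thesis).
    def parent(x):
        edges = {"alice": ["bob"], "bob": ["carol", "dan"]}
        return iter(edges.get(x, []))

    grandparent = compose(parent, parent)   # defined by combination, not by new loops
    print(list(grandparent("alice")))       # -> ['carol', 'dan']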
307

Controller programming with CoDeSys for an automated timber sorting system

Breitholtz, Nils January 2010 (has links)
This report describes the development of a weight measurement application and transducer positioning for the A Sort prototype, which has been developed for automatic grading and sorting of timber. The prototype consists of a transportation system with hydraulic and electrical motors, a measurement system with laser scanners and acoustic measurement equipment, and a control program for the automated process written in CoDeSys. The objective was to integrate these parts into an automatic system process controlling a prototype designed for acoustic measurement of logs. The devices were installed and configured to communicate via an existing fieldbus line using CANopen as the communication protocol. A control program was made for each task and implemented in the control process for the automatic measurement of logs. Two load cells were installed beneath a moving tilt, and the measurement equipment was tested and calibrated using three different logs of known weight. The testing showed that the construction needs to be modified in order to achieve higher accuracy. Photo cells were installed on the measurement frames, and a program was written to make the acoustic measurement of the logs work properly.
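As a small illustration of the calibration step, with entirely made-up readings and weights rather than the project's data, a least-squares fit of load-cell readings to known log weights might look like this:

    # Hedged sketch of the calibration step described above, with made-up
    # numbers: fit raw load-cell readings against logs of known weight
    # using a least-squares line, then convert new readings to weights.
    import numpy as np

    raw_readings = np.array([1020.0, 1850.0, 2710.0])    # hypothetical summed ADC counts
    known_weights_kg = np.array([55.0, 102.0, 151.0])    # hypothetical log weights

    gain, offset = np.polyfit(raw_readings, known_weights_kg, 1)   # weight ~ gain*reading + offset

    def reading_to_weight(reading):
        """Convert a raw load-cell reading to an estimated weight in kg."""
        return gain * reading + offset

    print(round(reading_to_weight(2000.0), 1))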
308

Towards architecture-adaptable parallel programming

Kumaran, Santhosh 26 July 1996 (has links)
There is a software gap in parallel processing. The short lifespan and small installation base of parallel architectures have made it economically infeasible to develop platform-specific parallel programming environments that deliver performance and programmability. One obvious solution is to build architecture-independent programming environments. But architecture independence usually comes at the expense of performance, since the most efficient parallel algorithm for solving a problem often depends on the target platform. Thus, unless a parallel programming system has the ability to adapt the algorithm to the architecture, it will not be effectively machine-independent. This research develops a new methodology for architecture-adaptable parallel programming. The methodology is built on three key ideas: (1) the use of a database of parameterized algorithmic templates to represent computable functions; (2) frame-based representation of processing environments; and (3) the use of an analytical performance prediction tool for automatic algorithm design. This methodology pursues a problem-oriented approach to parallel processing as opposed to the traditional algorithm-oriented approach. This enables the development of software environments with a high level of abstraction. The users state the problem to be solved using a high-level notation; they are freed from the esoteric tasks of parallel algorithm design and implementation. This methodology has been validated in the form of a prototype system capable of automatically generating an efficient parallel program when presented with a well-defined problem and the description of a target platform. The use of object technology has made the system easily extensible. The templates are designed using a parallel adaptation of the well-known divide-and-conquer paradigm. The prototype system has been used to solve several numerical problems efficiently on a wide spectrum of architectures. The target platforms include multicomputers (Thinking Machines CM-5 and Meiko CS-2), networks of workstations (IBM RS/6000s connected by FDDI), multiprocessors (Sequent Symmetry, SGI Power Challenge, and Sun SPARCServer), and a hierarchical system consisting of a cluster of multiprocessors on Myrinet. / Graduation date: 1997
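As a highly simplified sketch of the selection idea, with hypothetical template names and cost models rather than the thesis prototype's actual templates, predicted-runtime-based selection might look like this:

    # Highly simplified sketch of the selection idea (hypothetical names
    # and cost models, not the thesis prototype): each algorithmic
    # template predicts its runtime from a machine description, and the
    # template with the lowest predicted runtime is chosen.
    import math
    from dataclasses import dataclass

    @dataclass
    class Machine:                  # frame-like description of a target platform
        procs: int
        flops_per_sec: float        # per-processor compute rate
        latency_s: float            # per-message communication latency

    def divide_and_conquer_cost(n, m):
        # work spread over the processors plus a log-depth combine phase
        return n / (m.procs * m.flops_per_sec) + math.log2(max(m.procs, 2)) * m.latency_s

    def sequential_cost(n, m):
        return n / m.flops_per_sec

    TEMPLATES = {"divide_and_conquer": divide_and_conquer_cost,
                 "sequential": sequential_cost}

    def select_template(n, machine):
        """Return the template with the lowest predicted runtime for this problem size."""
        return min(TEMPLATES, key=lambda name: TEMPLATES[name](n, machine))

    cluster = Machine(procs=32, flops_per_sec=1e8, latency_s=1e-4)
    print(select_template(10_000_000, cluster))   # -> divide_and_conquer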
309

Analytical performance prediction of data-parallel programs

Clement, Mark J. 25 July 1994 (has links)
Graduation date: 1995
310

Dynamic Pricing in a Competitive Environment

Perakis, Georgia, Sood, Anshul 01 1900 (has links)
We present a dynamic optimization approach for perishable products in a competitive and dynamically changing market. We build a general optimization framework that ties together the competitive and the dynamic nature of pricing. This approach also allows differential pricing for large customers as well as demand learning for the seller. We analyze special cases of the model and illustrate the policies numerically. / Singapore-MIT Alliance (SMA)
