1

A PROLOG based framework for combined simulation / knowledge based systems

Farimani-Toroghi, Seyed Nasser January 1994 (has links)
No description available.
2

The application of message passing to concurrent programming

Harland, David M. January 1981 (has links)
The development of concurrency in computer systems will be critically reviewed and an alternative strategy proposed. This is a programming language designed along semantic principles, and it is based upon the treatment of concurrent processes as values within that language's universe of discourse. An asynchronous polymorphic message system is provided to enable co-existent processes to communicate freely. This is presented as a fundamental language construct, and it is completely general purpose, as all values, however complex, can be passed as messages. Various operations are also built into the language so as to permit processes to discover and examine one another. These permit the development of robust systems, where localised failures can be detected, and action can be taken to recover. The orthogonality of the design is discussed and its implementation in terms of an incremental compiler and abstract machine interpreter is outlined in some detail. This thesis hopes to demonstrate that message-oriented communication in a highly parallel system of processes is not only a natural form of expression, but is eminently practical, so long as the entities performing the communication are values in the language.
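As a rough illustration of the central idea (a sketch in Python rather than the thesis's own language, with every name invented for the example): processes co-exist, communicate through an asynchronous message system, and any value, however complex, can be passed as a message.

```python
import threading, queue

# Illustrative sketch: asynchronous, polymorphic message passing between
# co-existent processes. Any value -- numbers, lists, even functions --
# can be sent, mirroring the "messages are ordinary values" idea.

def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    while True:
        msg = inbox.get()          # receive the next message
        if msg is None:            # sentinel: shut the process down
            break
        if callable(msg):          # polymorphic: a function is a valid message
            outbox.put(msg(21))
        else:
            outbox.put(("echo", msg))

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

inbox.put([1, 2, 3])               # a structured value as a message
inbox.put(lambda x: x * 2)         # even code can be passed
print(outbox.get())                # ('echo', [1, 2, 3])
print(outbox.get())                # 42
inbox.put(None)
t.join()
```

Because the queue is unbounded, a send never blocks, which is the asynchronous property the abstract describes.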
3

Automatic handling in production processes

Jutla, H. S. January 1984 (has links)
No description available.
4

Možnosti rozvoje algoritmického myšlení s využitím mobilních dotykových zařízení / Possibilities of developing algorithmic thinking with mobile devices

Kozub, Martin January 2019 (has links)
This thesis explores the development of algorithmic thinking at the high-school level. The introduction examines algorithmic thinking as well as current practices in teaching it. A closer look at programming languages, programming methods, and tools that can be used to develop algorithmic thinking (such as robotic kits, personal computers, and mobile devices with touch screens) follows. After analysing the general properties of these tools, one mobile device is selected and its programming possibilities are explored in the empirical part of this paper. As part of the selected proactive action research, a set of activities as well as an overall process approach is first designed, followed by their practical verification at a high school. The final results are summarised as suggestions for improving further teaching.
5

Optimization techniques in data mining with applications to biomedical and psychophysiological data sets

Yu, Zhaohan 01 May 2009 (has links)
Our research consists of two main parts. First, we apply a p-norm error measure instead of the 1-norm measure in linear programming discrimination, which generates a linear hyperplane to classify two data sets. With the p-norm error measure, the errors generated by the classifier are not treated equally but are biased: for p > 1, the bigger an error is, the more weight it obtains in the objective function. Second, an investigation is conducted on a psychophysiological data set. Various methods are tested on this multi-dimensional time-series data set, from the linear programming method to the neural network method. With the help of the DFT, the data can be transformed from the time domain to the frequency domain, in which the data set exhibits interesting patterns.
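To make the role of p concrete, here is a minimal sketch (illustrative only, not the thesis's code or data) of fitting a separating hyperplane under a p-norm error measure with plain gradient descent; for p = 1 all violations count equally, while for p > 1 larger violations dominate the objective.

```python
import numpy as np

def fit_hyperplane(X, y, p=2.0, lr=0.01, steps=2000):
    """Fit w, b by minimizing sum(max(0, 1 - y*(X@w + b)) ** p), y in {-1,+1}."""
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=X.shape[1]), 0.0
    for _ in range(steps):
        viol = np.maximum(0.0, 1.0 - y * (X @ w + b))    # per-point error
        # d(viol**p)/d(margin) = p * viol**(p-1), only where a violation exists
        g = -p * np.where(viol > 0, viol ** (p - 1), 0.0) * y
        w -= lr * (g[:, None] * X).mean(axis=0)
        b -= lr * g.mean()
    return w, b

# Two toy point clouds standing in for the two data sets to discriminate.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(+2.0, 1.0, (50, 2)), rng.normal(-2.0, 1.0, (50, 2))])
y = np.array([1.0] * 50 + [-1.0] * 50)
w, b = fit_hyperplane(X, y, p=2.0)       # try p=1.0 vs p=3.0 to see the bias
print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```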
6

Static guarantees for coordinated components : a statically typed composition model for stream-processing networks

Penczek, Frank January 2012 (has links)
Does your program do what it is supposed to be doing? Without running the program, providing an answer to this question is much harder if the language does not support static type checking. Of course, even if compile-time checks are in place, only certain errors will be detected: compilers can only second-guess the programmer's intention. But type-based techniques go a long way in assisting programmers to detect errors in their computations earlier on. The question of whether a program behaves correctly is even harder to answer if the program consists of several parts that execute concurrently and need to communicate with each other. Compilers of standard programming languages are typically unable to infer information about how the parts of a concurrent program interact with each other, especially where explicit threading or message passing techniques are used. Hence, correctness guarantees are often conspicuously absent. Concurrency management in an application is a complex problem. However, it is largely orthogonal to the actual computational functionality that a program realises. Because of this orthogonality, the problem can be considered in isolation. The largest possible separation between concurrency and functionality is achieved if a dedicated language is used for concurrency management, i.e. an additional program manages the concurrent execution and interaction of the computational tasks of the original program. Such an approach not only helps programmers to focus on the core functionality and on the exploitation of concurrency independently, it also allows for a specialised analysis mechanism geared towards concurrency-related properties. This dissertation shows how an approach that completely decouples coordination from computation is a very supportive substrate for inferring static guarantees of the correctness of concurrent programs. Programs are described as streaming networks connecting independent components that implement the computations of the program, where the network describes the dependencies and interactions between components. A coordination program only requires an abstract notion of computation inside the components and may therefore be used as a generic and reusable design pattern for coordination. A type-based inference and checking mechanism analyses such streaming networks and provides comprehensive guarantees of the consistency and behaviour of coordination programs. Concrete implementations of components are deliberately left outside the scope of coordination programs: components may be implemented in an external language, for example C, to provide the desired computational functionality. Based on this separation, a concise semantic framework allows for step-wise interpretation of coordination programs without requiring concrete implementations of their components. The framework also provides clear guidance for the implementation of the language. One such implementation is presented, and hands-on examples demonstrate how the language is used in practice.
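The flavour of a purely coordination-level check can be suggested with a small hypothetical sketch (all names are invented; the dissertation's actual type system is considerably richer): components declare only the record fields they consume and produce, and the wiring of a streaming network is validated without inspecting any component's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    consumes: frozenset  # record fields the component expects on its input
    produces: frozenset  # record fields it adds to its output

def check_pipeline(components, source_fields):
    """Statically verify the wiring: every stage must be able to consume
    the record fields available at its input, without ever looking at
    the stages' implementations (which could be written in C)."""
    available = set(source_fields)
    for stage in components:
        missing = stage.consumes - available
        if missing:
            raise TypeError(f"{stage.name}: missing input fields {missing}")
        available |= stage.produces   # fields flow through and accumulate
    return available

pipeline = [
    Component("split",  frozenset({"image"}),          frozenset({"tile"})),
    Component("detect", frozenset({"tile"}),           frozenset({"edges"})),
    Component("merge",  frozenset({"edges", "image"}), frozenset({"result"})),
]
print(check_pipeline(pipeline, {"image"}))  # fields leaving the network
```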
7

Optimalizace rozvrhu na vysoké škole / University Schedule Optimisation

Skrbková, Tereza January 2009 (has links)
Scheduling is a practical optimisation problem that can be solved by means of integer or binary programming methods. This paper focuses on university scheduling, in particular the schedule of the University of Economics in Prague; it is, however, possible to apply the results to the schedules of other universities. We begin with the basics of linear programming, focusing on integer and binary programming as well as selected methods used to solve these problems. We then construct an optimisation model for the schedule of a subset of subjects based on real requirements (2009 data) and compare the results with the actual schedule of the University of Economics in Prague. In conclusion we discuss some of the assumptions made during model development. The model is then generalised to cover the entire set of subjects at the university. To convert the solver output into a more legible format, we include two MS Excel macros as part of this paper.
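A toy version of such a binary programming model, assuming SciPy's MILP solver rather than the software and data used in the paper, might assign classes to timeslots with binary variables x[c, t] under assignment and teacher-conflict constraints.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Illustrative binary programming scheduling model (invented toy data):
# x[c, t] = 1 iff class c is taught in timeslot t.
classes = ["Math", "Stats", "Prog"]       # Math and Stats share a teacher
slots = ["Mon 9:00", "Mon 11:00"]
penalty = np.array([[0, 1],               # preference cost of each (c, t) pair
                    [0, 1],
                    [1, 0]], dtype=float)

n_c, n_t = len(classes), len(slots)
c = penalty.ravel()                       # objective: minimize total penalty

# Each class must be assigned exactly one timeslot.
A_assign = np.kron(np.eye(n_c), np.ones(n_t))
assign = LinearConstraint(A_assign, lb=1, ub=1)

# Shared-teacher conflict: Math (0) and Stats (1) cannot share a slot.
A_conf = np.zeros((n_t, n_c * n_t))
for t in range(n_t):
    A_conf[t, 0 * n_t + t] = 1
    A_conf[t, 1 * n_t + t] = 1
conflict = LinearConstraint(A_conf, ub=1)

res = milp(c, constraints=[assign, conflict],
           integrality=np.ones_like(c), bounds=Bounds(0, 1))
assert res.success
x = res.x.reshape(n_c, n_t).round()
for i, cls in enumerate(classes):
    print(cls, "->", slots[int(x[i].argmax())])
```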
8

Advances in Answer Set Planning

Polleres, Axel 27 August 2003 (has links) (PDF)
Planning has been a challenging research area since the early days of Artificial Intelligence. The planning problem is the task of finding a sequence of actions leading an agent from a given initial state to a desired goal state. Whereas classical planning adopts restrictive assumptions such as complete knowledge about the initial state and deterministic action effects, in real-world scenarios we often have to face incomplete knowledge and non-determinism. Classical planning languages and algorithms do not take these facts into account, so there is a strong need for formal languages describing such non-classical planning problems on the one hand, and for (declarative) methods for solving these problems on the other. In this thesis, we present the action language Kc, which is based on flexible action languages from the knowledge representation community and extends them with useful concepts from logic programming. We define two basic semantics for this language, which reflect optimistic and secure (i.e. sceptical) plans in the presence of incomplete information or non-determinism. These basic semantics are furthermore extended to planning with action costs, where each action can have an assigned cost value. Here, we address optimal plans as well as plans that stay within a certain overall cost limit. Next, we develop efficient (i.e. polynomial) transformations from planning problems described in our language Kc to disjunctive logic programs, which are then evaluated under the so-called Answer Set Semantics. In this context, we introduce a general new method for problem solving in Answer Set Programming (ASP) which takes the genuine "guess and check" paradigm of ASP into account and allows us to integrate separate "guess" and "check" programs into a single logic program. Based on these methods, we have implemented the planning system DLVK. We discuss problem solving and knowledge representation in Kc using DLVK by means of several examples. The proposed methods and the DLVK system are also evaluated experimentally and compared against related approaches. Finally, we present a practical application scenario from the area of design and monitoring of multi-agent systems. As we will see, this monitoring approach is not restricted to our particular formalism. / Austrian Science Funds (FWF)
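The "guess and check" paradigm can be illustrated in a deliberately language-neutral way (this sketch is neither the Kc language nor the DLVK system): guess candidate action sequences, then check them against the goal; under nondeterministic effects, an optimistic plan needs one successful outcome, whereas a secure plan must succeed under all of them.

```python
from itertools import product

# Toy domain: turn a lamp on. "flick" is reliable; "kick" may or may not work.
actions = {
    "kick":  lambda on: [on, not on],   # nondeterministic effect
    "flick": lambda on: [not on],       # deterministic effect
}

def outcomes(state, plan):
    """All final states reachable by executing `plan` from `state`."""
    states = {state}
    for act in plan:
        states = {s2 for s in states for s2 in actions[act](s)}
    return states

def find_plan(init, goal, horizon, secure):
    for n in range(horizon + 1):
        for plan in product(actions, repeat=n):            # the "guess"
            finals = outcomes(init, plan)                  # the "check"
            ok = all(map(goal, finals)) if secure else any(map(goal, finals))
            if ok:
                return plan
    return None

goal = lambda lamp_on: lamp_on
print(find_plan(False, goal, horizon=2, secure=False))  # ('kick',) -- optimistic
print(find_plan(False, goal, horizon=2, secure=True))   # ('flick',) -- secure
```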
9

Optimization of checkpointing and execution model for an implementation of OpenMP on distributed memory architectures / Optimisation des checkpoints et du modèle d'exécution pour une implémentation de OpenMP sur architectures à mémoire distribuée

Tran, Van Long 14 November 2018 (has links)
OpenMP and MPI have become the standard tools for developing parallel programs on shared-memory and distributed-memory architectures respectively. Compared to MPI, OpenMP is easier to use, as it can automatically execute code in parallel and synchronize results using its directives, clauses, and runtime functions, whereas MPI requires programmers to do all of this work manually. Therefore, efforts have been made to port OpenMP onto distributed-memory architectures. However, excluding CAPE, no solution has successfully met both of the following requirements: 1) to be fully compliant with the OpenMP standard and 2) to deliver high performance. CAPE stands for Checkpointing-Aided Parallel Execution. It is a framework that automatically translates OpenMP programs and provides runtime functions to execute them on distributed-memory architectures based on checkpointing techniques. To execute an OpenMP program on a distributed-memory system, CAPE uses a set of templates to translate the OpenMP source code into CAPE source code, which is then compiled by a standard C/C++ compiler and can be executed on distributed-memory systems with the support of the CAPE framework. Basically, the idea of CAPE is the following: the program first runs on a set of nodes of the system, each node executing as a process. Whenever the program meets a parallel section, the master distributes the jobs to the slave processes using Discontinuous Incremental Checkpoints (DICKPT). After sending the checkpoints, the master waits for the results returned by the slaves. The next step on the master is the reception and merging of the resulting checkpoints before injecting them into its memory. The slave nodes receive their respective checkpoints, inject them into their memory, and compute their assigned share of the job. The result is then sent back to the master using DICKPT. At the end of the parallel region, the master sends the resulting checkpoint to every slave to synchronize the memory space of the program as a whole. In some experiments, CAPE has shown very high performance on distributed-memory systems and is a viable solution that is fully compatible with OpenMP. However, CAPE is still at the development stage: its checkpointing mechanism and execution model need to be optimized in order to improve its performance, capabilities, and reliability. This thesis presents the approaches proposed to optimize and improve checkpointing, to design and implement a new execution model, and to improve the capabilities of CAPE. First, we propose arithmetics on checkpoints, which models the structure of checkpoint data and its operations. This modeling contributes to optimizing checkpoint size and reducing merge time, as well as improving checkpoint capability. Second, we developed TICKPT (Time-stamp Incremental Checkpointing) as an instance of arithmetics on checkpoints. TICKPT is an improvement of DICKPT: it adds a timestamp to checkpoints to identify their order. Analysis and comparative experiments show that TICKPT not only produces smaller checkpoints but also has less impact on the performance of programs that use checkpointing. Third, we designed and implemented a new execution model and new prototypes for CAPE based on TICKPT. The new execution model allows CAPE to use resources efficiently, avoid the risk of bottlenecks, and overcome the requirement of matching Bernstein's conditions. As a result, these approaches improve the performance, capabilities, and reliability of CAPE. Fourth, OpenMP data-sharing attributes are implemented in CAPE based on arithmetics on checkpoints and TICKPT. This also demonstrates the soundness of the direction we have taken and makes CAPE more complete.
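As a rough sketch of what timestamped incremental checkpoints and a merge operation over them might look like (the representation below is invented for illustration and is not CAPE's actual checkpoint format), consider checkpoints that map modified memory locations to timestamped values, where merging keeps the most recent write per location.

```python
# Invented illustration of "arithmetics on checkpoints" with timestamps:
# a checkpoint maps modified memory locations to (timestamp, value).

def merge(ckpt_a, ckpt_b):
    """Union of two incremental checkpoints; later timestamps win."""
    merged = dict(ckpt_a)
    for addr, (ts, val) in ckpt_b.items():
        if addr not in merged or ts > merged[addr][0]:
            merged[addr] = (ts, val)
    return merged

def inject(memory, ckpt):
    """Apply a checkpoint's writes to a process image (here, a dict)."""
    for addr, (_, val) in ckpt.items():
        memory[addr] = val
    return memory

# Two slaves report the regions they modified, tagged with timestamps.
slave1 = {0x10: (1, 42), 0x18: (3, 7)}
slave2 = {0x10: (2, 99)}                 # later write to the same address
master = merge(slave1, slave2)
print(inject({}, master))                # {16: 99, 24: 7}
```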
