121

Learning computer architecture concepts through interactive model building

Lees, Brian January 1987 (has links)
No description available.
122

Towards a general temporal theory

Ma, Jixin January 1994 (has links)
The research work presented herein addresses time representation and temporal reasoning in the domain of artificial intelligence. A general temporal theory, as an extension of Allen and Hayes's, Galton's and Vilain's theories, is proposed which treats both time intervals and time points on an equal footing; that is, both intervals and points are taken as primitive time elements in the theory. This means that neither do intervals have to be constructed out of points, nor do points have to be created as some limiting construction of intervals. This approach differs from that of Ladkin, of Van Beek, of Dechter, Meiri and Pearl, and of Maiocchi, which is either to construct intervals out of points or to treat points and intervals separately. The theory is presented in terms of a series of axioms which characterise a single temporal relation, "meets", over time elements. The axiomatisation allows non-linear time structures such as branching time and parallel time, and additional axioms specifying the linearity and density of time are also presented. A formal characterisation of the open and closed nature of primitive intervals, which has been a problematic question of time representation in artificial intelligence, is provided in terms of the "meets" relation. It is shown to be consistent with the conventional definitions of open/closed intervals which are constructed out of points. It is also shown that this general theory is powerful enough to subsume some representative temporal theories, such as Allen and Hayes's interval-based theory, Bruce's and McDermott's point-based theories, and the interval and point based theories of Vilain and of Galton. A finite time network based on the theory is also addressed, where a consistency checker in two different forms is provided for cases with, and without, duration reasoning, respectively. Utilising the time axiomatisation, the syntax and semantics of a temporal logic for reasoning about propositions whose truth values are associated with particular intervals/points are explicitly defined. It is shown that the logic is more expressive than that of some existing systems, such as Allen's interval-based logic, the revised theory proposed by Galton, Shoham's point-based interval logic, and Haugh's MTA-based logic; and the corresponding problems with these systems are satisfactorily solved. Finally, as an application of the temporal theory, a new architecture for a temporal database system is proposed which allows the expression of relative temporal knowledge of data transaction and data validity times. A general retrieval mechanism is presented for a database with a purely qualitative temporal component which allows queries with temporal constraints in terms of any logical combination of Allen's temporal relations. To reduce the computational complexity of the consistency checking algorithm when quantitative time duration knowledge is added, a class of databases, termed time-limited databases, is introduced. This class allows absolute-time-stamped and relative time information in a form which is suitable for many practical applications, where qualitative temporal information is only occasionally needed, and the efficient retrieval mechanisms for absolute-time-stamped databases may be adopted.
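As an illustration only (not drawn from the thesis itself), the following minimal Python sketch conveys the flavour of treating points and intervals as equally primitive time elements connected by a single "meets" relation; the class names and the derived "before" check are assumptions made for the example, not the thesis's formalism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TimeElement:
    name: str
    is_point: bool  # points and intervals are equally primitive; neither is built from the other

meets: set[tuple[str, str]] = set()

def assert_meets(a: TimeElement, b: TimeElement) -> None:
    """Record the single primitive relation: a "meets" b (a ends exactly where b begins)."""
    meets.add((a.name, b.name))

def before(a: TimeElement, b: TimeElement) -> bool:
    """A derived relation: a is before b if some time element fits between them."""
    names = {n for pair in meets for n in pair}
    return any((a.name, x) in meets and (x, b.name) in meets for x in names)

# Example: the interval "morning" meets the point "noon", which meets the interval
# "afternoon", so morning is before afternoon, without ever constructing intervals
# out of points or points out of intervals.
morning = TimeElement("morning", is_point=False)
noon = TimeElement("noon", is_point=True)
afternoon = TimeElement("afternoon", is_point=False)
assert_meets(morning, noon)
assert_meets(noon, afternoon)
print(before(morning, afternoon))  # True
```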
123

Intelligent monitoring of business processes using case-based reasoning

Kapetanakis, Stylianos January 2012 (has links)
The work in this thesis presents an approach towards the effective monitoring of business processes using Case-Based Reasoning (CBR). The rationale behind this research was that business processes constitute a fundamental concept of the modern world and there is a constantly emerging need for their efficient control. They can be represented efficiently but are not necessarily monitored and diagnosed effectively via an appropriate platform. Motivated by this observation, the research investigated the extent to which workflows can be monitored, diagnosed and explained efficiently. Workflows and their effective representation in terms of CBR were investigated, as well as how similarity measures among them could be established appropriately. The presentation and explanation of monitoring results to users were also examined, along with the question of what software architecture is appropriate for monitoring workflow executions. Throughout this research, several sets of experiments were conducted using existing enterprise systems that are coordinated via a predefined workflow business process. Past data produced over several years were used for these experiments. Based on these data, the necessary knowledge repositories were built and then used to evaluate the suggested approach to the effective monitoring and diagnosis of business processes. The results show the extent to which a business process can be monitored and diagnosed effectively. They also provide hints on possible changes that would maximise the accuracy of the actual monitoring, diagnosis and explanation. Moreover, the presented approach can be generalised and extended to enterprise systems that share, as common characteristics, a possible workflow representation and the presence of uncertainty. Further work motivated by this thesis could investigate how the acquired knowledge can be transferred across workflow systems and benefit large-scale, multidimensional enterprises. Additionally, temporal uncertainty could be investigated further, with a view to addressing it during reasoning. Finally, the provenance of cases and their solutions could be explored further, identifying correlations with the reasoning process.
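For readers unfamiliar with CBR, here is a minimal sketch of case-based retrieval over workflow event sequences; the case structure and the sequence-ratio similarity measure are assumptions for illustration, not the similarity measures developed in the thesis.

```python
from difflib import SequenceMatcher

# Each case pairs an observed sequence of workflow events with a known diagnosis.
case_base = [
    (["submit", "review", "approve", "archive"], "normal execution"),
    (["submit", "review", "review", "escalate"], "stalled review"),
    (["submit", "approve"], "review step skipped"),
]

def similarity(a: list[str], b: list[str]) -> float:
    """Similarity between two event sequences (here simply a matching-blocks ratio)."""
    return SequenceMatcher(None, a, b).ratio()

def diagnose(observed: list[str]) -> tuple[str, float]:
    """Retrieve the most similar past case and reuse its diagnosis."""
    best_events, best_label = max(case_base, key=lambda case: similarity(observed, case[0]))
    return best_label, similarity(observed, best_events)

print(diagnose(["submit", "review", "review", "review"]))
```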
124

A new framework for supporting and managing multi-disciplinary system-simulation in a PLM environment

Mahler, Michael January 2014 (has links)
In order to keep products and systems attractive to consumers, developers have to do what they can to meet customers' growing requirements. These requirements could be direct demands of customers, but could also be the consequence of other influences such as globalisation, customer fragmentation, product portfolio, regulations and so on. In the manufacturing industry, most companies are able to meet these growing requirements with mechatronic products designed and developed across disciplines, which demands collaboration between those disciplines. For example, generating a virtual prototype of a mechatronic, multi-disciplinary product or system, together with its simulation tools, could require the cooperation of multiple departments within a company or between business partners. In a simulation, a virtual prototype is used for testing a product or a system. This virtual prototype and test approach can be used from the early stages of the development process to the end of the product or system lifecycle. Over the years, different approaches and systems for generating and testing virtual prototypes have been designed and developed, but these systems have not been properly integrated, although some efforts have been made with limited success. Therefore, new technologies, methods and methodologies need to be proposed and developed to achieve this integration. In addition, the use of simulation tools requires special expertise for the generation of simulation models, and the formats of product prototypes and simulation data differ from system to system. This adds to the need for a guideline or framework for integrating multi- and inter-disciplinary product design, simulation software and data management during the entire product lifecycle. The main functionality and metadata structures of the new framework have been identified and optimised. The multi-disciplinary simulation data and their collection processes, the existing PLM (product lifecycle management) software and their applications have been analysed. In addition, the inter-disciplinary collaboration between a variety of simulation software has been analysed and evaluated. The new framework integrates the identified and optimised functionality and metadata structures to support and manage multi- and inter-disciplinary simulation in a PLM system environment. It is believed that this project has made six contributions to new knowledge: (1) the New Conceptual Framework to Enhance the Support and Management of Multi-Disciplinary System-Simulation, (2) the New System-Simulation Oriented and Process Oriented Data Handling Approach, (3) the Enhanced Traceability of System-Simulation to Sources and Represented Products and Functions, (4) the New System-Simulation Derivation Approach, (5) the New Approach for the Synchronisation of System Describing Structures and (6) the Enhanced System-Simulation Result Data Handling Approach. In addition, the new framework would bring significant benefits to each industry it is applied to. They are: (1) more effective re-use of individual simulation models in a system-simulation context, (2) effective pre-definition and preparation of individual simulation models, (3) easily and natively reviewable system-simulation structures in relation to input sources such as products and/or functions, (4) easy, authoring-software-independent updates of system-simulation structures, product structures and function structures, (5) effective, distributed and cohesive post-processing and interpretation of system-simulation results, (6) effective, easy and unique traceability of the data, which means cost reductions in documentation and data security, and (7) greater openness and flexibility in simulation software interactions with the data-holding system. Although the proposed and developed conceptual framework has not been implemented (that would require vast resources), it can be expected that the seven benefits above will lead to significant advances in the simulation of new product design and development over the whole lifecycle, offering enormous practical value to the manufacturing industry. Due to time and resource constraints, as well as the effort that would be involved in implementing the proposed new framework, there are clearly some limitations to this PhD thesis. Five areas have been identified where further work is needed to improve the quality of this project: (1) an expanded range of industrial sectors and product design and development processes, (2) parameter-oriented system and production description in the new framework, (3) improved user interface design for the new framework, (4) automatic generation of simulation processes and (5) enhancement of the individual simulation models.
125

Compiler architecture using a portable intermediate language

Reig Galilea, Fermín Javier January 2002 (has links)
The back end of a compiler performs machine-dependent tasks and low-level optimisations that are laborious to implement and difficult to debug. In addition, in languages that require run-time services such as garbage collection, the back end must interface with the run-time system to provide those services. The net result is that building a compiler back end entails a high implementation cost. In this dissertation I describe reusable code generation infrastructure that enables the construction of a complete programming language implementation (compiler and run-time system) with reduced effort. The infrastructure consists of a portable intermediate language, a compiler for this language and a low-level run-time system. I provide an implementation of this system and show that it can support a variety of source programming languages, that it reduces the overall effort required to implement a programming language, that it can capture and retain information necessary to support run-time services and optimisations, and that it produces efficient code.
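A very small sketch of the idea of a shared, portable intermediate representation follows; the toy IR and stack-machine target are assumptions made purely for illustration and are not the dissertation's actual intermediate language.

```python
from dataclasses import dataclass

@dataclass
class Const:
    value: int

@dataclass
class Add:
    left: object   # Const or Add
    right: object  # Const or Add

def lower(expr) -> list[str]:
    """Lower an IR expression into a toy stack-machine instruction list."""
    if isinstance(expr, Const):
        return [f"push {expr.value}"]
    return lower(expr.left) + lower(expr.right) + ["add"]

# Any source-language front end that can emit this IR reuses the same back end,
# which is where the reduction in implementation effort comes from.
print(lower(Add(Const(2), Add(Const(3), Const(4)))))
# ['push 2', 'push 3', 'push 4', 'add', 'add']
```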
126

Softwarové pirátství / Software piracy

Bárta, Jan January 2011 (has links)
Computer programs have recently become a common part of everyday life in modern society, and life and work are very hard to imagine without them. Infringement of the rights in a computer program, software piracy, is a very specific phenomenon which, in my opinion, deserves thorough treatment. This thesis attempts to supplement the specialised literature with facts, especially on the functioning of the warez scene and its connection with ordinary users of pirated programs, emphasising individual piracy methods, the structure of the scene and a legal analysis of the important aspects of this phenomenon. The thesis consists of nine chapters, which are further divided according to the individual topics. Chapter One is an introduction to the topic and outlines the basic problems of software piracy. Chapter Two deals with the computer program and software. The first part distinguishes the terms software and computer program, the second part presents the basic sources of legal regulation of computer programs, the third part characterises the computer program as a special type of authorial work, the fourth part describes contractual and non-contractual use of computer programs, the fifth part explains digital rights management and the sixth and final part analyses the legal protection of computer...
127

Active database behaviour : the REFLEX approach

Naqvi, Waseem Hadder January 1995 (has links)
Modern-day and new-generation applications have more demanding requirements than traditional database management systems (DBMSs) are able to support. Two of these requirements, timely responses to changes of database state and application domain knowledge stored within the database, are embodied within active database technology. Currently, there are a number of research prototype active database systems throughout the world. In order for an organisation to use any such prototype system, it may have to forsake existing products and resources and embark on substantial reinvestment in new database products and associated resources, together with retraining costs. This approach would clearly be unfavourable, as it is expensive in terms of both time and money. A more suitable approach would be to allow active behaviour to be added onto existing systems. This scenario is addressed within this research. It investigates how active behaviour can best be augmented onto existing DBMSs, so as to preserve the investment in an organisation's resources, by examining the following issues: (i) what form the knowledge model should take; (ii) whether rules and events should be modelled as first-class objects; (iii) how triggering events will be specified; and (iv) how the user will interact with the system. Various design decisions were taken and investigated through the implementation of a series of working prototypes on the ONTOS DBMS platform. The resultant REFLEX model was successfully ported and adapted to a second platform, POET. The porting process uncovered some interesting issues regarding preconceived ideas about the portability of open systems.
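The following minimal sketch illustrates the general idea of layering event-condition-action behaviour on top of an existing, passive store; the rule structure and API here are assumptions for illustration, not the REFLEX design itself.

```python
from typing import Callable

database = {"stock_level": 120}

# Each rule is (event name, condition over the database state, action to run).
Rule = tuple[str, Callable[[dict], bool], Callable[[dict], None]]
rules: list[Rule] = []

def on(event: str, condition: Callable[[dict], bool], action: Callable[[dict], None]) -> None:
    """Register an event-condition-action rule."""
    rules.append((event, condition, action))

def update(key: str, value) -> None:
    """Wrap ordinary updates so rules fire on state changes, leaving the
    underlying (passive) store itself unmodified."""
    database[key] = value
    for event, condition, action in rules:
        if event == f"update:{key}" and condition(database):
            action(database)

on("update:stock_level",
   lambda db: db["stock_level"] < 50,
   lambda db: print("reorder triggered at stock level", db["stock_level"]))

update("stock_level", 30)  # fires the rule
```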
128

Reducing deadline miss rate for grid workloads running in virtual machines : a deadline-aware and adaptive approach

Khalid, Omer January 2011 (has links)
This thesis explores three major areas of research: integration of virtualization into scientific grid infrastructures, evaluation of the virtualization overhead on the performance of HPC grid jobs, and optimization of job execution times to increase throughput by reducing the job deadline miss rate. Integrating virtualization into the grid, to deploy on-demand virtual machines for jobs in a way that is transparent to end users and has minimum impact on the existing system, poses a significant challenge. This involves creating virtual machines, decompressing the operating system image, adapting the virtual environment to satisfy the software requirements of the job, constantly updating the job state once it is running without modifying the batch system or existing grid middleware, and finally bringing the host machine back to a consistent state. To facilitate this research, an existing, in-production pilot job framework was modified to deploy virtual machines on demand on the grid, using the virtualization administrative domain to handle all I/O and increase network throughput. This approach limits the change impact on the existing grid infrastructure while leveraging the execution and performance isolation capabilities of virtualization for job execution. This work led to an evaluation of the various scheduling strategies used by the Xen hypervisor, measuring the sensitivity of job performance to the amount of CPU and memory allocated under various configurations. However, virtualization overhead is also a critical factor in determining job execution times. Grid jobs have a diverse set of requirements for machine resources such as CPU, memory and network, and have inter-dependencies on other jobs in meeting their deadlines, since the input of one job can be the output of a previous job. A novel resource provisioning model was devised to decrease the impact of virtualization overhead on job execution. Finally, dynamic deadline-aware optimization algorithms were introduced, using exponential smoothing and rate limiting to predict job failure rates based on static and dynamic virtualization overhead. Statistical techniques were also integrated into the optimization algorithm to flag jobs that are at risk of missing their deadlines and to take preventive action to increase overall job throughput.
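A brief sketch of how exponential smoothing can be used to flag deadline risk follows; the smoothing factor, the risk rule and the figures are assumptions made for illustration, not the thesis's actual algorithm.

```python
def smooth(history: list[float], alpha: float = 0.3) -> float:
    """Exponentially smoothed estimate of per-job virtualization overhead (seconds)."""
    estimate = history[0]
    for observed in history[1:]:
        estimate = alpha * observed + (1 - alpha) * estimate
    return estimate

def at_risk(remaining_work: float, deadline: float, overhead_history: list[float]) -> bool:
    """Flag a job whose predicted completion time exceeds its deadline."""
    predicted = remaining_work + smooth(overhead_history)
    return predicted > deadline

# Example: 400 s of work remaining, rising overhead observations, 440 s deadline.
print(at_risk(remaining_work=400.0, deadline=440.0,
              overhead_history=[35.0, 42.0, 55.0, 61.0]))  # True: ~448 s predicted
```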
129

Model updating of modal parameters from experimental data and applications in aerospace

Keye, Stefan January 2003 (has links)
The research in this thesis is associated with different aspects of experimental analyses of structural dynamic systems and the correction of the corresponding mathematical models using the results of experimental investigations as a reference. A comprehensive finite-element model updating software technology is assembled and various novel features are implemented. The software technology is integrated into an experimental test facility for structural dynamic identification and used in a number of real-life aerospace applications which illustrate the advantages of the new features. To improve the quality of the experimental reference data, a novel non-iterative method for the computation of optimised multi-point excitation force vectors for Phase Resonance Testing is introduced. The method is unique in that it is based entirely on experimental data, makes it possible to determine both the locations and the force components resulting in the highest phase purity, and enables prediction of the corresponding mode indicator. A minimisation criterion for the real-part response of the test structure with respect to the total response is utilised and, unlike other methods, no further information such as a mass matrix from a finite-element model or assumptions on the structure's damping characteristics is required. Performance in comparison to existing methods is assessed in a numerical study using an analytical eleven-degrees-of-freedom model. Successful applications to a simple laboratory satellite structure and under realistic test conditions during the Ground Vibration Test on the European Space Agency's Polar Platform are described. Considerable improvements are achieved with respect to the phase purity of the identified mode shapes as compared to other methods or manual tuning strategies, as well as the time and effort involved in the application during Ground Vibration Testing. Various aspects regarding the application of iterative model updating methods to aerospace-related test structures and live experimental data are discussed. A new iterative correction parameter selection technique enabling the creation of a physically correct updated analytical model and a novel approach for the correction of structural components with viscous material properties are proposed. A finite-element model of the GARTEUR SM-AG19 laboratory test structure is updated using experimental modal data from a Ground Vibration Test. In order to assess the accuracy and physical consistency of the updated model, a novel approach is applied where only a fraction of the mode shapes and natural frequencies from the experimental database is used in the model correction process, and analytical and experimental modal data beyond the range utilised for updating are correlated. To evaluate the influence of experimental errors on the accuracy of finite-element model corrections, a numerical simulation procedure is developed. The effects of measurement uncertainties on the substructure correction factors, natural frequency deviations, and mode shape correlation are investigated using simulated experimental modal data. Various numerical models are generated to study the effects of modelling error magnitudes and locations. As a result, the correction parameter uncertainty increases with the magnitude of the experimental errors and decreases with the number of modes involved in the updating process. Frequency errors, however, since they are not averaged during updating, must be measured with an adequately high precision.
Next, the updating procedure is applied to an authentic industrial aerospace structure. The finite-element model of the EC 135 helicopter is utilised and a novel technique for the parameterisation of substructures with non-isotropic material properties is suggested. Experimental modal parameters are extracted from frequency responses recorded during a Shake Test on the EC 135-S01 prototype. In this test case, the correction process involves the handling of a high degree of modal and spatial incompleteness in the experimental reference data. Accordingly, new effective strategies for the selection of updating parameters are developed which are both physically significant and sufficiently sensitive with respect to the analytical modal parameters. Finally, possible advantages of model updating in association with a model-based method for the identification and localisation of structural damage are investigated. A new technique for identifying and locating delamination damage in carbon-fibre-reinforced polymers is introduced. The method is based on correlating damage-induced modal damping variations of an elasto-mechanical structure with the corresponding data from a numerical model in order to derive information on the damage location. Using a numerical model enables damage in a three-dimensional structure to be located from experimental data obtained with only a single response sensor. To acquire sufficiently accurate experimental data, a novel criterion for the determination of the most appropriate actuator and sensor positions and a polynomial curve fitting technique are suggested. It is shown that, in order to achieve good location precision, the numerical model must retain a high degree of accuracy and physical consistency.
130

A programming system for end-user functional programming

Alam, Abu S. January 2015 (has links)
This research involves the construction of a programming system, HASKEU, to support end-user programming in a purely functional programming language. An end-user programmer is someone who may program a computer to get their job done, but has no interest in becoming a computer programmer. A purely functional programming language is one that does not require the expression of statement sequencing or variable updating. The end-user is offered two views of their functional program. The primary view is a visual one, in which the program is presented as a collection of boxes (representing processes) and lines (representing data flow). The secondary view is a textual one, in which the program is presented as a collection of written function definitions. It is expected that the end-user programmer will begin with the visual view, perhaps later moving on to the textual view. The task of the programming system is to ensure that the visual and textual views are kept consistent as the program is constructed. The foundation of the programming system is an implementation of the Model-View-Controller (MVC) design pattern as a reactive program using the elegant Functional Reactive Programming (FRP) framework. Human-Computer Interaction (HCI) principles and methods are considered in all design decisions. A usability study was conducted to find out the effectiveness of the new system.
