  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Distributed Calculation Object Models

Imran, Syed 15 August 1996 (has links)
A Calculation Object Model (COM) is built from calculation objects. It can be implemented in different programming languages; in our case we used Java on the Eclipse platform. The calculation objects reside on the server side and, when invoked, obtain values from clients and perform the required calculations. The values (data) these COMs receive are stored in an XML-based database of the different Calculation Objects (COs), each of which acts as a client when passing values forward and as a server when receiving values from other objects. For storage, the traditional choice is a relational database (RDBMS), but because of its drawbacks an XML database was preferred. A COM is a semantic description for calculating the costs of things in complex scenarios involving many Calculation Objects. In a relational database everything looks like a table of rows and columns, but in reality data is often more complicated than a purely tabular format allows. Data that needs to be stored does not always exist in tabular form and benefits from tools that fit its natural structure more closely. Traditional databases, and SQL databases in particular, have been so successful that they have largely eliminated the competition. In fact, relational databases fit many problems very well, but they do not fit the data held in eXtensible Markup Language (XML) documents. A great deal of data is now encoded in XML documents, with more created every day, which motivates the search for something better.
2

The object-oriented composition of DES simulation software from prefabricated components developed within different programming environments

Carvalho, Maps 05 January 2007 (has links)
Developers of simulation software have responded to the increasing commercial demand for customised solutions by adding new features and tools to their existing simulation packages. This has led to huge, monolithic applications whose functionality is constantly extended by the addition of wizards, templates and add-ons in a ‘generalising-customising-generalising’ development cycle. This approach has been successful so far, but customising much of the contemporary simulation software is increasingly difficult. An alternative approach is to compose simulation packages from prefabricated components that users may select, modify and assemble so as to acquire the functionality that suits each simulation model. This strategy requires component-based paradigms and integration mechanisms that support the straightforward composition of components regardless of their development and deployment contexts. This research exploits Microsoft’s .NET integration philosophy to investigate how discrete event simulation (DES) software could pursue a component-based approach that integrates components sourced within distinct contexts. The thesis describes the DotNetSim project, which explores the composition of DES applications from components developed within different Microsoft packages in different programming languages. This is done by prototyping DES software across the entire requirements of a simulation application package. The DotNetSim prototype consists of a Visio-based DES modelling environment which integrates with a .NET simulation engine which, in turn, integrates with an Excel-based output analysis environment. The graphical modelling environment emulates Schruben’s Event Graph methodology for simulation modelling. Visio is extended by a number of VBA programs to link together different Microsoft applications in order to capture the models’ application logic and dynamics. The simulation engine consists of a number of C# and VB.NET components that implement an event-based simulation executive. It reads the model’s logic and dynamics by instantiating the graphical modelling environment, runs the event-based simulation and returns the simulation results to Excel for analysis. The output analysis environment is a template that illustrates the specialisation of the generic data analysis and reporting capabilities of Excel to serve the simulation analysis. The components interact directly by instantiating one another’s objects. These three coarse-grained components could be substituted by others that deliver the same functionality, though with different internal operations. With further work, these components could be deployed as web services to which the model’s logic is remotely input.
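To make the idea of an event-based simulation executive concrete, the following is a minimal C# sketch, assuming a time-ordered event list dispatched in timestamp order; it is an illustration only, not DotNetSim code, and every class and member name in it is hypothetical.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical sketch of an event-based simulation executive: events sit in a
    // time-ordered list and are dispatched in timestamp order; handlers may
    // schedule further events.
    class SimulationEngine
    {
        private readonly SortedDictionary<double, Queue<Action>> eventList = new();
        public double Clock { get; private set; }

        // Schedule an event handler to fire at an absolute simulation time.
        public void Schedule(double time, Action handler)
        {
            if (!eventList.TryGetValue(time, out var bucket))
                eventList[time] = bucket = new Queue<Action>();
            bucket.Enqueue(handler);
        }

        // Advance the clock to each pending event in turn until the list is
        // exhausted or the end time is reached.
        public void Run(double endTime)
        {
            while (eventList.Count > 0)
            {
                double next = eventList.Keys.First();   // earliest scheduled time
                if (next > endTime) break;
                Clock = next;
                var bucket = eventList[next];
                eventList.Remove(next);
                while (bucket.Count > 0) bucket.Dequeue()();   // may schedule more events
            }
        }
    }

A model component would call something like engine.Schedule(engine.Clock + interArrivalTime, HandleArrival) from inside its own handlers; the DotNetSim engine described above additionally reads the model structure by instantiating the Visio-based modelling environment and returns its results to Excel.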
3

Exploring the memorability of multiple recognition-based graphical passwords and their resistance to guessability attacks

Chowdhury, Soumyadeb January 2015 (has links)
Most users find it difficult to remember traditional text-based passwords. In order to cope with multiple passwords, users tend to adopt unsafe mechanisms like writing down the passwords or sharing them with others. Recognition-based graphical authentication systems (RBGSs) have been proposed as one potential solution to minimize the above problems. However, most prior work in the field of RBGSs makes the unrealistic assumption of studying a single password. It is also an untested assumption that RBGS passwords are resistant to being written down or verbally communicated. The main aim of the research reported in this thesis is to examine the memorability of multiple image passwords and their guessability using written descriptions (provided by the respective account holders). In this context, the thesis presents four user studies. The first user study (US1) examined the usability of multiple RBGS passwords with four different image types: Mikon, doodle, art and everyday objects (e.g. images of food, buildings, sports etc.). The results obtained in US1 demonstrated that subjects found it difficult to remember four RBGS passwords (of the same image type) and that the memorability of the passwords deteriorated over time. The results of another usability study (US2) conducted using the same four image types (as in US1) demonstrated that the memorability of multiple RBGS passwords created using a mnemonic strategy does not improve, even when compared with the existing multiple-password studies and US1. In the context of guessability, a user study (GS1) examined the guessability of RBGS passwords (created in US1), using the textual descriptions given by the respective account holders. Another study (GS2) examined the guessability of RBGS passwords (created in US2), using descriptions given by the respective account holders. The results obtained from both studies showed that RBGS passwords can be guessed using the password descriptions in the experimental set-up used. Additionally, this thesis presents a novel Passhint authentication system (PHAS). The results of a usability study (US3) demonstrated that the memorability of multiple PHAS passwords is better than in existing graphical authentication systems (GASs). Although the registration time is high, authentication time for successful attempts is either equivalent to or less than the time reported for previous GASs. The guessability study (GS3) showed that the art passwords are the least guessable, followed by Mikon, doodle and objects in that order. This thesis offers these initial studies as a proof of principle to conduct large scale field studies in the future with PHAS. Based on the review of the existing literature, this thesis identifies the need for a general set of principles to design usability experiments that would allow systematic evaluation and comparison of different authentication systems. From the empirical studies (US1, US2 and US3) reported in this thesis, we found that multiple RBGS passwords are difficult to remember, and that the memorability of such passwords can be increased using the novel PHAS. We also recommend using art images as the passwords in PHAS, because they were found to be the least guessable using the written descriptions in the empirical studies (GS1, GS2 and GS3) reported in this thesis.
4

Investigation into an improved modular rule-based testing framework for business rules

Wetherall, Jodie January 2010 (has links)
Rule testing in scheduling applications is a complex and potentially costly business problem. This thesis reports the outcome of research undertaken to develop a system to describe and test scheduling rules against a set of scheduling data. The overall intention of the research was to reduce commercial scheduling costs by minimizing human domain expert interaction within the scheduling process. The research was initiated following a consultancy project to develop a system to test driver schedules against the legal driving rules in force in the UK and the EU. One of the greatest challenges faced was interpreting the driving rules and translating them into the chosen programming language; this part of the project required considerable programming, testing and debugging effort. A potential problem then arises if the Department of Transport or the European Union alters the driving rules: considerable software development is likely to be required to support the new rule set. The approach considered takes into account the need for a modular software component that can be used not only in transport scheduling systems that check legal driving rules but can also be integrated into other systems that need to test temporal rules. The integration of the rule testing component into existing systems is key to making the proposed solution reusable. The research outcome proposes an alternative approach to rule definition, similar to that of RuleML, but with the addition of rule metadata to provide the ability to describe rules of a temporal nature. The rules can be serialised and deserialised between XML (eXtensible Markup Language) and objects within an object-oriented environment (in this case .NET with C#), to provide a means of transmitting the rules over a communication infrastructure. The rule objects can then be compiled into an executable software library, allowing the rules to be tested more rapidly than traditional interpreted rules. Additional support functionality is also defined to provide a means of effectively integrating the rule testing engine into existing applications. Following the construction of a rule testing engine designed to meet the given requirements, a series of tests was undertaken to determine the effectiveness of the proposed approach. This led to the implementation of improvements in the caching of constructed work plans to further improve performance. Tests were also carried out into the application of the proposed solution within alternative scheduling domains and to analyse the difference in computational performance and memory usage across system architectures, software frameworks and operating systems, with the support of Mono. Future work following on from this thesis is likely to include the development of graphical design tools for creating the rules, improvements in the work plan construction algorithm, parallelisation of elements of the process to take better advantage of multi-core processors, and off-loading of the rule testing process onto dedicated or generic computational processors.
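The round trip between XML and rule objects described above can be pictured with the standard .NET XmlSerializer. The C# sketch below is a hedged illustration only: the Rule type, its properties and the resulting XML shape are hypothetical and are not the rule metadata defined in the thesis.

    using System;
    using System.IO;
    using System.Xml.Serialization;

    // Hypothetical rule type for illustration; the thesis defines its own
    // metadata for describing temporal rules.
    public class Rule
    {
        public string Name { get; set; }
        public string Expression { get; set; }      // e.g. a temporal condition
        public int MaxDrivingMinutes { get; set; }  // illustrative temporal limit
    }

    public static class RuleXml
    {
        // Serialise a rule object to XML so it can be stored or transmitted.
        public static string ToXml(Rule rule)
        {
            var serializer = new XmlSerializer(typeof(Rule));
            using var writer = new StringWriter();
            serializer.Serialize(writer, rule);
            return writer.ToString();
        }

        // Deserialise XML back into a rule object ready for testing.
        public static Rule FromXml(string xml)
        {
            var serializer = new XmlSerializer(typeof(Rule));
            using var reader = new StringReader(xml);
            return (Rule)serializer.Deserialize(reader);
        }
    }

In the approach described above, the deserialised rule objects are then compiled into an executable library rather than interpreted, which is where the reported speed advantage over traditional interpreted rules comes from.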
5

Open or closed? : the politics of software licensing in Argentina and Brazil

Jones, Ivor January 2015 (has links)
Whether software is licensed under terms which ‘close off’ or make accessible the underlying code that comprises software holds profound implications for development due to the centrality of this good to contemporary life. In the 2000s, developing countries adopted policies to promote free and open source software (F/OSS) for reasons of technological autonomy and to reduce spending on royalties for foreign-produced proprietary software. However, the adoption of such policies varied across countries. Focusing upon Argentina and Brazil, two countries that reflect contrasting policy outcomes in promoting F/OSS, I explain why and how different policies came to be adopted by analysing the way in which institutions and patterns of association affected the lobbying power of advocates and opponents of F/OSS promotion. Advocates are generally weak actors, yet they might strengthen their lobbying power through embeddedness within political and state institutions which offer opportunities to mobilise resources and forge ties with political decision-makers. Opponents are generally strong business actors, yet their lobbying power may be attenuated by weak concentration in business association, reducing their capacity to mobilise and coordinate support. In Argentina, where F/OSS advocates’ institutional embeddedness was weak and concentration in business association was strong, the government was prevented from promoting F/OSS, despite signs that it wished to do so. In Brazil, where F/OSS advocates’ institutional embeddedness was strong and concentration in business association was weak, the government promoted F/OSS despite vociferous opposition from amongst the largest firms in the world.
6

Scalable audio processing across heterogeneous distributed resources : an investigation into distributed audio processing for Music Information Retrieval

Al-Shakarchi, Ahmad January 2013 (has links)
Audio analysis algorithms and frameworks for Music Information Retrieval (MIR) are expanding rapidly, providing new ways to discover non-trivial information from audio sources, beyond that which can be ascertained from unreliable metadata such as ID3 tags. MIR is a broad field and many aspects of the algorithms and analysis components that are used are more accurate given a larger dataset for analysis, and often require extensive computational resources. This thesis investigates whether, through the use of modern distributed computing techniques, it is possible to design an MIR system that is scalable as the number of participants increases, which adheres to copyright laws and restrictions, whilst at the same time enabling access to a global database of music for MIR applications and research. A scalable platform for MIR analysis would be of benefit to the MIR and scientific community as a whole. A distributed MIR platform that encompasses the creation of MIR algorithms and workflows, their distribution, results collection and analysis, is presented in this thesis. The framework, called DART - Distributed Audio Retrieval using Triana - is designed to facilitate the submission of MIR algorithms and computational tasks against either remotely held music and audio content, or audio provided and distributed by the MIR researcher. Initially a detailed distributed DART architecture is presented, along with simulations to evaluate the validity and scalability of the architecture. The idea of a parameter sweep experiment to find the optimal parameters of the Sub-Harmonic Summation (SHS) algorithm is presented, in order to test the platform and use it to perform useful and real-world experiments that contribute new knowledge to the field. DART is tested on various pre-existing distributed computing platforms and the feasibility of creating a scalable infrastructure for workflow distribution is investigated throughout the thesis, along with the different workflow distribution platforms that could be integrated into the system. The DART parameter sweep experiments begin on a small scale, working up towards the goal of running experiments on thousands of nodes, in order to truly evaluate the scalability of the DART system. The result of this research is a functional and scalable distributed MIR research platform that is capable of performing real world MIR analysis, as demonstrated by the successful completion of several large scale SHS parameter sweep experiments across a variety of different input data - using various distribution methods - and through finding the optimal parameters of the implemented SHS algorithm. DART is shown to be highly adaptable both in terms of the distributed MIR analysis algorithm and the distribution method used.
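For readers unfamiliar with Sub-Harmonic Summation, the following C# sketch shows the general shape of the algorithm (in the style of Hermes' SHS: score each candidate fundamental by summing compressed harmonic magnitudes and keep the best). It is a hedged illustration, not the DART implementation; the parameter names and default values are simply the kind of settings a parameter sweep would vary.

    using System;

    // Hypothetical SHS-style pitch estimator, shown for illustration only.
    static class SubHarmonicSummation
    {
        // magnitudeSpectrum is assumed to hold fftSize/2 + 1 magnitude bins.
        public static double EstimatePitch(double[] magnitudeSpectrum, double sampleRate,
                                           double minF0 = 50, double maxF0 = 500,
                                           int harmonics = 15, double compression = 0.84)
        {
            int fftSize = (magnitudeSpectrum.Length - 1) * 2;
            double bestF0 = minF0, bestScore = double.MinValue;

            // Score every candidate fundamental by summing weighted harmonic magnitudes.
            for (double f0 = minF0; f0 <= maxF0; f0 += 1.0)
            {
                double score = 0;
                for (int n = 1; n <= harmonics; n++)
                {
                    int bin = (int)Math.Round(n * f0 * fftSize / sampleRate);
                    if (bin >= magnitudeSpectrum.Length) break;
                    score += Math.Pow(compression, n - 1) * magnitudeSpectrum[bin];
                }
                if (score > bestScore) { bestScore = score; bestF0 = f0; }
            }
            return bestF0;
        }
    }

A parameter sweep distributes many runs of a routine like this across worker nodes, each run using a different combination of harmonic count, compression factor and frequency range, and the results are compared to find the best-performing combination.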
7

Performance engineering of hybrid message passing + shared memory programming on multi-core clusters

Chorley, Martin James January 2012 (has links)
The hybrid message passing + shared memory programming model combines two parallel programming styles within the same application in an effort to improve the performance and efficiency of parallel codes on modern multi-core clusters. This thesis presents a performance study of this model as it applies to two Molecular Dynamics (MD) applications. Both a large scale production MD code and a smaller scale example MD code have been adapted from existing message passing versions by adding shared memory parallelism to create hybrid message passing + shared memory applications. The performance of these hybrid applications has been investigated on different multi-core clusters and compared with the original pure message passing codes. This performance analysis reveals that the hybrid message passing + shared memory model provides performance improvements under some conditions, while the pure message passing model provides better performance in others. Typically, when running on small numbers of cores the pure message passing model provides better performance than the hybrid message passing + shared memory model, as hybrid performance suffers due to increased overheads from the use of shared memory constructs. However, when running on large numbers of cores the hybrid model performs better as these shared memory overheads are minimised while the pure message passing code suffers from increased communication overhead. These results depend on the interconnect used. Hybrid message passing + shared memory molecular dynamics codes are shown to exhibit different communication profiles from their pure message passing versions and this is revealed to be a large factor in the performance difference between pure message passing and hybrid message passing + shared memory codes. An extension of this result shows that the choice of interconnection fabric used in a multi-core cluster has a large impact on the performance difference between the pure message passing and the hybrid code. The factors affecting the performance of the applications have been analytically examined in an effort to describe, generalise and predict the performance of both the pure message passing and hybrid message passing + shared memory codes.
8

Web page performance analysis

Chiew, Thiam Kian January 2009 (has links)
Computer systems play an increasingly crucial and ubiquitous role in human endeavour by carrying out or facilitating tasks and providing information and services. How much work these systems can accomplish, within a certain amount of time, using a certain amount of resources, characterises the systems’ performance, which is a major concern when the systems are planned, designed, implemented, deployed, and evolve. As one of the most popular computer systems, the Web is inevitably scrutinised in terms of performance analysis that deals with its speed, capacity, resource utilisation, and availability. Performance analyses for the Web are normally done from the perspective of the Web servers and the underlying network (the Internet). This research, on the other hand, approaches Web performance analysis from the perspective of Web pages. The performance metric of interest here is response time. Response time is studied as an attribute of Web pages, instead of being considered purely a result of network and server conditions. A framework that consists of measurement, modelling, and monitoring (3Ms) of Web pages that revolves around response time is adopted to support the performance analysis activity. The measurement module enables Web page response time to be measured and is used to support the modelling module, which in turn provides references for the monitoring module. The monitoring module estimates response time. The three modules are used in the software development lifecycle to ensure that developed Web pages deliver at worst satisfactory response time (within a maximum acceptable time), or preferably much better response time, thereby maximising the efficiency of the pages. The framework proposes a systematic way to understand response time as it is related to specific characteristics of Web pages and explains how individual Web page response time can be examined and improved.
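As a hedged illustration of the measurement side of such a framework (a simplification, not the thesis' own tooling), the response time of a single page fetch can be captured directly in C#:

    using System;
    using System.Diagnostics;
    using System.Net.Http;
    using System.Threading.Tasks;

    // Hypothetical sketch of a measurement step only: time one fetch of a page's HTML.
    static class PageTimer
    {
        public static async Task<TimeSpan> MeasureResponseTime(string url)
        {
            using var client = new HttpClient();
            var stopwatch = Stopwatch.StartNew();
            string html = await client.GetStringAsync(url);   // download the page body
            stopwatch.Stop();
            Console.WriteLine($"{url}: {html.Length} bytes in {stopwatch.Elapsed.TotalMilliseconds:F0} ms");
            return stopwatch.Elapsed;
        }
    }

A fuller measurement module would also time the page's embedded resources (images, scripts, stylesheets), since these typically account for much of a page's overall response time; treating response time as an attribute of the page, as this work does, makes such page-level characteristics the object of study.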
9

The impact of localized road accident information on road safety awareness

Zheng, Yunan January 2007 (has links)
The World Health Organization (WHO) estimates that road traffic accidents represent the third leading cause of ‘death and disease’ worldwide. Many countries have, therefore, launched safety campaigns that are intended to reduce road traffic accidents by increasing public awareness. In almost every case, however, a reduction in the total number of fatalities has not been matched by a comparable fall in the total frequency of road traffic accidents. Low severity incidents remain a significant problem. One possible explanation is that these road safety campaigns have had less effect than design changes. Active safety devices such as anti-lock braking, and passive measures, such as side impact protection, serve to mitigate the consequences of those accidents that do occur. A number of psychological phenomena, such as attribution error, explain the mixed success of road safety campaigns. Most drivers believe that they are less likely to be involved in an accident than other motorists. Existing road safety campaigns do little to address this problem; they focus on national and regional statistics that often seem remote from the local experiences of road users. Our argument is that localized road accident information would have a greater impact on people’s safety awareness. This thesis, therefore, describes the design and development of a software tool to provide the general public with access to information on the location and circumstances of road accidents in a Scottish city. We also present the results of an evaluation to determine whether the information provided by this software has any impact on individual risk perception. A route planning experiment was also carried out. The results from this experiment provide further positive evidence that road users would take accident information into account if it were available to them.
10

Analysing accident reports using structured and formal methods

Burns, Colin Paul January 2000 (has links)
Formal methods are proposed as a means to improve accident reports, such as the report into the 1996 fire in the Channel Tunnel between the UK and France. The size and complexity of accident reports create difficulties for formal methods, which traditionally suffer from problems of scalability and poor readability. This thesis demonstrates that features of an engineering-style formal modelling process, particularly the structuring of activity and management of information, reduce the impact of these problems and improve the accuracy of formal models of accident reports. This thesis also contributes a detailed analysis of the methodological requirements for constructing accident report models. Structured, methodical construction and mathematical analysis of the models elicits significant problems in the content and argumentation of the reports. Once elicited, these problems can be addressed. This thesis demonstrates the benefits and limitations of taking a wider scope in the modelling process than is commonly adopted for formal accident analysis. We present a deontic action logic as a language for constructing models of accident reports. Deontic action models offer a novel view of the report, which highlights both the expected and actual behaviour in the report, and facilitates examination of the conflict between the two. This thesis contributes an objective analysis of the utility of both deontic and action logic operators to the application of modelling accident reports. A tool is also presented that executes a subset of the logic, including these deontic and action logic operators.
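As a hedged illustration of the kind of operators involved (generic dynamic-deontic notation, not necessarily the syntax defined in the thesis), an obligation can be characterised in terms of what follows when an action is not performed:

    % Generic dynamic-deontic notation, shown only as an illustration; the thesis
    % defines its own deontic action logic.
    %   [\alpha]\,\phi : after action \alpha is performed, \phi holds
    %   V              : a designated "violation" proposition
    O(\alpha) \;\equiv\; [\,\overline{\alpha}\,]\,V
    % i.e. an action \alpha is obligatory exactly when refraining from it
    % (\overline{\alpha}) leads to a violation state.

Modelling an accident report in such a language allows the obliged behaviour and the behaviour the report says actually occurred to be stated side by side, so that conflicts between the two can be examined mechanically.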
