  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Critical investigation of virtual universities : applying the UK structure to Saudi Arabia

Albaqami, Nasser January 2014 (has links)
The purpose of this study was to investigate the feasibility, practicality and desirability of establishing a virtual university (VU) using new technologies in Saudi Arabia and to explore how to apply the existing VU frameworks to the Saudi Arabian education system. This is desirable in order to accommodate the rapid growth in the number of secondary school graduates, and is regarded as one of the most important challenges currently facing Saudi Universities. The study traces the origins of VUs in the UK and Europe, then examines the tools, forums and methods in use, focusing on the main service-oriented architecture and the Simple Object Access Protocol framework. Primary data were gathered by means of two sets of questionnaires, to explore the appetite for a virtual university in Saudi Arabia and to investigate the use of virtual learning in the UK. Three UK universities that strongly promote virtual learning (The Open University, the International Virtual University and Oxford University) were also researched online, providing an additional edge to the wider research on other universities. The investigation was motivated by a desire to produce a model that would widen learning opportunities for those who otherwise have no access to formal education in Saudi Arabia. The result is a virtual university model designed and developed to be a safe and secure Web-based educational system, providing online education for all, regardless of geographical position or time of day. Data were gathered mainly from secondary sources, such as journals, conference reports and books. A literature review critically assessed several technologies and protocols, and a critical comparison of Web services was conducted. Evidence from the questionnaire, the literature review and informal discussions led this researcher to pursue further the concepts of messaging technology and distributed communication, focusing on implementing JMS and a message-passing system. 
As a result, a chat application which utilises the publish-and-subscribe messaging model and a translator are presented and recommended as essential elements in achieving virtualisation in higher education. The thesis proposes a third-generation virtual university utilising cloud computing, offering integrated services to learners and including different types of online learning materials, specialized virtual centres for the development of educational courses, library and administrative functions, an interactive environment and online collaboration.
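As a rough illustration of the publish-and-subscribe messaging model the chat application relies on (this is a toy Python sketch, not the thesis's JMS implementation; the `Broker` class and topic names are invented for the example):

```python
# Minimal publish-and-subscribe broker sketch. Every subscriber to a
# topic receives every message published to that topic, decoupling
# senders from receivers - the property the thesis exploits for
# virtual-classroom chat.

class Broker:
    def __init__(self):
        self.topics = {}  # topic name -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self.topics.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of the topic.
        for callback in self.topics.get(topic, []):
            callback(message)

# Usage: two students subscribed to the same course chat topic.
broker = Broker()
inbox_a, inbox_b = [], []
broker.subscribe("course/cs101", inbox_a.append)
broker.subscribe("course/cs101", inbox_b.append)
broker.publish("course/cs101", "Lecture moved to 10am")
print(inbox_a, inbox_b)  # both inboxes receive the message
```

In a JMS deployment the broker, topics, and durable subscriptions are provided by the messaging middleware rather than written by hand as above.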
12

CMMI-CM compliance checking of formal BPMN models using Maude

El-Saber, Nissreen A. S. January 2015 (has links)
From the perspective of business process improvement models, a business process which is compliant with best practices and standards (e.g. CMMI) is necessary for defining almost all types of contracts and government collaborations. In this thesis, we propose a formal pre-appraisal approach for Capability Maturity Model Integration (CMMI) compliance checking based on a Maude-based formalization of business processes in Business Process Model and Notation (BPMN). The approach can be used to assess the designed business process's compliance with CMMI requirements as a step leading to a full appraisal application. In particular, the BPMN model is mapped into Maude, the CMMI compliance requirements are mapped into Linear Temporal Logic (LTL), and the Maude representation of the model is then model checked against the LTL properties using Maude's LTL model checker. On the process model side, BPMN models may include structural issues that hinder their design. In this thesis, we propose a formal characterization and semantics specification of well-formed BPMN processes using rewriting logic (Maude), with a focus on the semantics of data-based decision gateways and data objects. Our formal specification adheres to the BPMN standard and enables model checking using Maude's LTL model checker. The proposed semantics is formally proved to be sound based on the classical workflow model soundness definition. On the compliance requirements side, the CMMI configuration management process is used as a source of compliance requirements, which are then mapped through compliance patterns into LTL properties. Model checking results of the Maude-based implementation are explained based on a compliance grading scheme. Examples of CMMI configuration management processes are used to illustrate the approach.
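To give a flavour of what checking a compliance pattern means, here is a toy Python sketch of one common pattern, "response" (every change request is eventually followed by an approval), evaluated over finite process traces. The thesis does this with Maude's LTL model checker over a formal BPMN semantics; the event names below are invented for the example:

```python
# Illustrative "response" compliance pattern check over finite traces:
# every occurrence of `trigger` must be followed, later in the trace,
# by an occurrence of `response`.

def satisfies_response(trace, trigger, response):
    for i, event in enumerate(trace):
        if event == trigger and response not in trace[i + 1:]:
            return False
    return True

compliant = ["change_request", "review", "approve", "baseline"]
violating = ["change_request", "review", "baseline"]

print(satisfies_response(compliant, "change_request", "approve"))  # True
print(satisfies_response(violating, "change_request", "approve"))  # False
```

A real model checker evaluates such properties over all reachable behaviours of the process model, not over a single hand-picked trace.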
13

Design-by-contract for software architectures

Poyias, Kyriakos January 2014 (has links)
We propose a design-by-contract (DbC) approach to specify and maintain architectural-level properties of software. Such properties are typically relevant in the design phase of the development cycle but may also impact the execution of systems. We give a formal framework for specifying software architectures (and their refinements) together with contracts that architectural configurations abide by. In our framework, we can specify that if an architecture guarantees a given pre-condition and a refinement rule satisfies a given contract, then the refined architecture will enjoy a given post-condition. Methodologically, we take Architectural Design Rewriting (ADR) as our architectural description language. ADR is a rule-based formal framework for modelling (the evolution of) software architectures. We equip the reconfiguration rules of an ADR architecture with pre- and post-conditions expressed in a simple logic; a pre-condition constrains the applicability of a rule while a post-condition specifies the properties expected of the resulting graphs. We give an algorithm to compute the weakest precondition out of a rule and its post-condition. Furthermore, we propose a monitoring mechanism for recording the evolution of systems after certain computations, maintaining the history in a tree-like structure. The hierarchical nature of ADR allows us to take full advantage of the tree-like structure of the monitoring mechanism. We exploit this mechanism to formally define new rewriting mechanisms for ADR reconfiguration rules. Also, by monitoring the evolution we propose a way of identifying which part of a system has been affected when unexpected run-time behaviours emerge. Moreover, we propose a methodology that allows us to select which rules can be applied at the architectural level to reconfigure a system so as to regain its architectural style when it becomes compromised by unexpected run-time reconfigurations.
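As a very loose illustration of the weakest-precondition idea (for a simple assignment statement, far simpler than the graph-based ADR rules the thesis treats), the classic rule wp(x := e, Q) = Q[e/x] can be sketched by textual substitution. The function and its naive string handling are invented for this example:

```python
# Toy weakest-precondition computation for an assignment `var := expr`
# against a postcondition given as a string: substitute expr for var.
# Real frameworks work on abstract syntax, not raw strings.

def wp_assign(var, expr, postcondition):
    return postcondition.replace(var, f"({expr})")

# wp(x := x + 1, "x <= 10") yields the condition that must hold
# *before* the assignment for the postcondition to hold after it.
print(wp_assign("x", "x + 1", "x <= 10"))  # (x + 1) <= 10
```

The thesis's algorithm plays the analogous role for ADR reconfiguration rules: given a rule and its post-condition, it derives the weakest pre-condition an architecture must satisfy for the refined architecture to enjoy the post-condition.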
14

Production and use of documentation in scientific software development

Pawlik, Aleksandra January 2014 (has links)
Software is becoming ubiquitous in science. The success of the application of scientific software depends on effective communication about what the software does and how it operates. Documentation captures the communication about the software. For that reason, practices around scientific software documentation need to be better understood. This thesis presents four qualitative empirical studies that look in depth at the production and use of documentation of scientific software. Together, the studies provide evidence emphasising the importance of documentation and show the handshake between written documentation and the informal, ephemeral information exchange that happens within the community. Four reasons behind the obstacles to producing effective scientific software documentation are identified: 1) insufficient resources; 2) a lack of incentives for researchers; 3) the influence of the community of practice; 4) the necessity of keeping up with the regular advancements of science. Benefits of the process of producing documentation are also identified: 1) aiding reasoning; 2) supporting reproducibility of science; 3) in certain contexts, expanding the community of users and developers around the software. The latter is investigated through a case study of documentation ‘crowdsourcing’. The research reveals that there is a spectrum of users, with differing needs with respect to documentation. This, in turn, requires different approaches in addressing their needs. The research shows that the view of what constitutes documentation must be broad, in order to recognise how wide a range of resources (e.g., formal documents, email, online fora, comments in the source code) is actually used in communicating knowledge about scientific software. Much of the information about the software resides within the community of practice (and may not be documented).
These observations are of practical use for those producing documentation in different contexts of scientific software development, for example providing guidance about engaging a community in ‘crowdsourcing’ documentation.
15

A new method for identifying weaknesses in, and evaluating enhancements to, object-oriented programming teaching and learning

Allinjawi, Arwa Abdulaziz January 2014 (has links)
Difficulties in learning programming, especially Object-Oriented Programming (OOP), are widespread in Computer Science (CS) departments. Researchers have proposed different approaches to improve the teaching and learning of OOP concepts. One possible method is to engage the students with stimulating 3D visualization environments to reduce the complexity while enhancing understanding of concepts. The visualization environments may improve programmer productivity and achievement of the OOP learning outcomes. In addition, many researchers have presented various assessment methods for diagnosing learning problems to improve the teaching of programming in CS higher education. However, it is still the case that researchers' conclusions are often based on subjective assessments, because CS lacks standard assessment methods for educators to measure their students' learning outcomes. This research presents the incorporation of two assessment approaches, concept-effect propagation and the Handy Instrument for Course Level Assessment (HI-Class), to promote a modified diagnostic inference about students' persistent achievements. The resulting Achievement Degree Analysis (ADA) approach diagnoses the students' problem outcomes and demonstrates its effectiveness within the context of an OOP course by determining which particular OOP concepts were perceived as being particularly difficult to learn. Usage of the ADA method is then demonstrated using a cohort of students from the CPCS203 course at King Abdulaziz University (KAU), Faculty of Computing and Information Technology (FCIT), female section, in Saudi Arabia. It was first used to diagnose the learning achievement of specific concepts. Secondly, it was used to statistically evaluate the effectiveness of the visualization environment, Alice, which has been hypothesized to improve novice programmers' understanding of OOP concepts.
No statistically significant improvement of understanding was detected in this particular context. Reasons for the null result are discussed. The thesis concludes with a discussion of (a) further experiments that may be undertaken to explore the impact of visualization environments, and (b) work that may be undertaken to demonstrate the general applicability of the ADA method.
16

Software requirements change analysis and prediction

McGee, S. E. January 2014 (has links)
Software requirements continue to evolve during application development to meet the changing needs of customers and market demands. Complementing current approaches that constrain the risk that changing requirements pose to project cost, schedule and quality, this research seeks to investigate the efficacy of Bayesian networks to predict levels of volatility early in the project lifecycle. A series of industrial empirical studies is undertaken to explore the causes and consequences of requirements change, the results of which inform prediction feasibility and model construction. Models are then validated using data from four projects in two industrial organisations. Results from the empirical studies indicate that classification of changes according to the source of the change is practical and informative to decisions concerning requirements management and process selection. Changes coming from sources considered external to the project are more expensive and difficult to control by comparison to the more numerous changes that occur as a result of adjustments to product direction, or requirement specification. Although certain requirements are more change prone than others, the relationship between volatility and requirement novelty and complexity is not straightforward. Bayesian network models to predict levels of requirement volatility, constructed based upon these results, perform better than project management estimations of volatility when the models are trained from a project sharing industrial context. This research carries the implication that process selection should be based upon the types of changes likely, and that formal predictive models are a promising alternative to project management estimation when investment in data collection, re-use and learning is supported.
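To illustrate the kind of inference a discrete Bayesian network performs (all structure and probabilities below are hypothetical, invented for this example; the thesis's models are learned from industrial project data and are considerably richer):

```python
# Toy two-node discrete Bayesian network: dominant change source ->
# requirement volatility. Marginal and posterior are computed by
# direct summation over the (tiny) joint distribution.

# Hypothetical prior over the dominant change source.
p_source = {"external": 0.3, "product_direction": 0.7}

# Hypothetical P(volatility level | source) conditional table.
p_vol_given_source = {
    "external":          {"high": 0.6, "low": 0.4},
    "product_direction": {"high": 0.2, "low": 0.8},
}

def p_volatility(level):
    """Marginal P(volatility = level), summing over sources."""
    return sum(p_source[s] * p_vol_given_source[s][level] for s in p_source)

def p_source_given_volatility(source, level):
    """Posterior P(source | volatility = level) via Bayes' rule."""
    return (p_source[source] * p_vol_given_source[source][level]
            / p_volatility(level))

print(round(p_volatility("high"), 2))                        # 0.32
print(round(p_source_given_volatility("external", "high"), 4))  # 0.5625
```

With these made-up numbers, observing high volatility raises the probability that changes are externally sourced from 0.3 to about 0.56, which is the style of early-lifecycle reasoning the predictive models support.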
17

Hybrid architecture for high performance lookup

Perez, Keyssy Guerra January 2015 (has links)
In this research, improved header lookup and flow rule update speeds over conventional lookup algorithms are investigated. A detailed study of several well-known lookup algorithms reveals that performing lookups on individual packet header fields and combining the lookup results achieves high lookup speed. Traditional packet classification solutions are not suitable for the actual network requirements, which are being promoted as the basis for Software-Defined Networking (SDN) and the OpenFlow protocol. The proposed hybrid lookup architecture is comprised of various lookup algorithms, which are selected for packet classification based on the requirements of applications or functions, such as SDN. SDN introduces programmability to the network, with the opportunity to dynamically route traffic based on flow descriptions. Thus, improving the network processing performance with the proposed configurable solution will directly support the proposed capability of programmability in SDN. The hardware implementation of the proposed configurable lookup architecture, which is able to work at line rate, is evaluated for different combinations of the set of lookup algorithms. Recent OpenFlow switches tend to perform packet header lookup through multiple lookup tables to enable new protocols, such as VxLAN. Hence, an extension of the proposed architecture is also developed based on multiple-table lookup. The extended architecture takes advantage of the lookup speed and flexibility of the previous architecture for 15-field lookup at current line rates of 40-100 Gbps.
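One of the per-field lookups such an architecture might combine is longest-prefix match on a destination address. A naive software sketch (illustrative only; the rules and port names are invented, and a hardware line-rate implementation uses TCAMs or tries, not a linear scan):

```python
# Naive longest-prefix-match lookup on an IPv4 destination address.
# Among all rules whose prefix matches, the longest prefix wins.

rules = {
    "10.0.0.0/8":  "forward_port_1",
    "10.1.0.0/16": "forward_port_2",
    "0.0.0.0/0":   "default_drop",   # catch-all rule
}

def ip_to_int(addr):
    a, b, c, d = (int(x) for x in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def lookup(addr):
    """Return the action of the longest matching prefix for addr."""
    best_len, best_action = -1, None
    value = ip_to_int(addr)
    for prefix, action in rules.items():
        net, plen = prefix.split("/")
        plen = int(plen)
        mask = ((0xFFFFFFFF << (32 - plen)) & 0xFFFFFFFF) if plen else 0
        if (value & mask) == (ip_to_int(net) & mask) and plen > best_len:
            best_len, best_action = plen, action
    return best_action

print(lookup("10.1.2.3"))     # forward_port_2 (the /16 beats the /8)
print(lookup("192.168.1.1"))  # default_drop
```

A multi-field classifier runs one such lookup per header field in parallel and then combines the per-field results into a single matching rule.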
18

Confidentiality properties and the B method

Onunkun, T. J. January 2012 (has links)
Programs in the presence of nondeterminism or underspecification may mask the presence of insecure information flow between variables. This may result in the refinement paradox when such programs are refined to a deterministic implementation. Hence nondeterministic programs that satisfy possibilistic security properties like Generalised Noninterference (GNI) may, on refinement, fail corresponding deterministic security properties such as Noninterference (NI). We propose in this thesis an automatable information flow analysis framework to capture information flow between variables and flag flows that breach information flow policies defined as a multi-level secure lattice-based system. We separate the problem of satisfaction of the refinement relation from the problem of preservation of security properties of interest at every refinement step, and focus on the latter problem. We formalise our core analysis on standalone B Machines, develop the proof obligations of the framework, and introduce security conditions that must be satisfied to guarantee secure information flow between the variables within a single B machine (Chapter 3). We show that our analysis is more robust than standard flow-insensitive security type systems like the one developed by Volpano, Smith, and Irvine [76], since our analysis is flow-sensitive, i.e., responsive to information flow. For example, our framework correctly analyses a program whose overall flow is secure as secure, even when some of its subprograms may be insecure, whereas [76] will erroneously classify such programs as insecure, a problem commonly termed false negative. We also show the correctness of our framework in Chapter 3. A natural sequel to our core information flow analysis of standalone B Machines is an extension of the framework to analyse structured B Machines, i.e., information flow arising from the use of B structuring mechanisms such as SEES, INCLUDES, etc. (Chapter 4).
The third major part of the thesis (Chapter 5) involves the analysis of information flow between the variables in a hypothetical case study using the C++ implementation of the information flow analyser formalised in the preceding chapters. We also discuss our intuitions on future extensions.
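The flow-sensitive/flow-insensitive distinction can be illustrated with a toy two-level analysis (a Python sketch invented for this example; the thesis's analysis operates on B Machine substitutions and a full security lattice, not on the list-of-assignments form below):

```python
# Toy flow-sensitive information flow analysis over straight-line
# assignments. Each variable's security level is UPDATED at every
# assignment, so a later low overwrite of a previously high variable
# makes it low again - a flow-insensitive type system would instead
# fix one level per variable for the whole program.

LOW, HIGH = 0, 1

def analyse(program, levels):
    """program: list of (target, [source vars]) assignments.
    Returns the final per-variable levels."""
    levels = dict(levels)
    for target, sources in program:
        # The target's level becomes the join (max) of its sources.
        levels[target] = max((levels[s] for s in sources), default=LOW)
    return levels

initial = {"secret": HIGH, "pub": LOW}
program = [
    ("tmp", ["secret"]),  # tmp becomes HIGH here
    ("tmp", ["pub"]),     # ...but is overwritten with LOW data
    ("out", ["tmp"]),     # so out is LOW: the overall flow is secure
]
final = analyse(program, initial)
print(final["out"])  # 0 (LOW)
```

A flow-insensitive analysis would assign `tmp` the level HIGH for the whole program (because of the first assignment) and reject the final assignment to a low output, even though no secret information actually reaches it.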
19

A comparative study of model transformation approaches through a systematic procedural framework and goal question metrics paradigm

Kolahdouz Rahimi, Shekoufeh January 2013 (has links)
Model Driven Engineering has become a key Software Engineering approach, which aims at improving the cost-effectiveness and reusability of software by capturing the essential semantics of systems in models. By means of model transformations, these models can be analysed, improved, and mapped to executable implementations in a variety of languages and platforms. A large number of different transformation languages and tools, ranging from graph theoretic to relational, hybrid and imperative, exist across the research community. A key problem in the current state of Model Driven Engineering is the lack of guidelines and techniques for measuring or improving transformation quality. This thesis addresses this problem by defining a transformation quality framework based on the ISO/IEC 9126 international software quality standard. The framework is validated on different transformation languages using diverse case studies. The case studies highlight the problems with the specification and design of particular categories of model transformation, and provide challenging examples by which model transformation languages and approaches can be compared. The evaluation procedure provides clear guidelines for the suitability of selected transformation approaches to specific transformation problems by identifying the advantages and disadvantages of each approach.
20

Observations and explanations of characteristic features in the performance profiles of evolutionary algorithms

Oates, Martin J. January 2003 (has links)
No description available.
