1 |
Multimodal e-feedback: an empirical study. Alharbi, Abdulrhman Abdulghani, January 2013
This thesis investigates the applicability of unique combinations of multimodal metaphors to deliver different types of feedback, and evaluates the effect of these combinations on the usability of electronic feedback interfaces and on users' engagement with learning. The empirical research consists of three experimental phases. In the first phase, an initial experiment was carried out with 40 users to explore and compare the usability and user engagement of facially animated expressive avatars with text and natural recorded speech, and of text with graphics metaphors. The second experimental phase involved an experiment conducted with 36 users to investigate user perception of feedback communicated using an avatar with facial expressions and body gestures, and voice expressions of synthesised speech. This experiment also evaluated the role that an avatar could play as a virtual tutor in e-feedback interfaces by comparing the usability and engagement of three modes of interaction: video of a tutor presenting information with facial expressions, synthesised spoken messages supported with text, and avatars with facial expressions and body gestures. The third experimental phase introduced and investigated a novel approach to communicating e-feedback based on the results of the previous experiments. This approach involved speaking avatars that delivered feedback with the aid of earcons, auditory icons, facial expressions and body gestures. The results demonstrated the usefulness and applicability of the tested metaphors for enhancing e-feedback usability and enabling users to engage more fully with the feedback. A set of empirically derived guidelines for the design and use of these metaphors to communicate e-feedback is also introduced and discussed.
|
2 |
Policy based runtime verification of information flow. Sarrab, Mohamed Khalefa, January 2011
Standard security mechanisms such as access control, firewalls and encryption focus only on controlling the release of information; no limits are placed on controlling the propagation of that confidential information. The central problem of protecting the confidentiality of sensitive information begins after access has been granted. The research described in this thesis belongs to the constructive research field, where 'constructive' refers to knowledge contributions developed as a new framework, theory, model or algorithm. The methodology of the proposed approach is made up of eight work packages: one addresses the research background and the project requirements, six are scientific research work packages, and the last concentrates on writing up the thesis. There is currently no monitoring mechanism for controlling information flow during runtime that supports behaviour configurability and user interaction. Configurability is an important requirement because what is considered secure today can be insecure tomorrow. Interaction with users is essential in a flexible and reliable security monitoring mechanism because different users may have different security requirements; interaction with the monitoring mechanism enables the user to change program behaviour or modify the way information flows while the program is executing. One of the motivations for this research is to put the information flow policy in the hands of the end user. The main objective is to develop a usable security mechanism for controlling information flow within a software application during runtime, where usable security means enabling users to manage their systems' security without defining elaborate security rules before starting the application. Security is achieved by an interactive process in which the framework queries the user for security requirements for specific pieces of information made available to the software, and then enforces these requirements on the application using a novel runtime verification technique for tracing information flow. The contributions are as follows. Runtime monitoring: the proposed mechanism ensures that the program execution contains only legal flows, defined in the information flow policy or approved by the user. Runtime management: the behaviour of a program that is about to leak confidential information is altered by the monitor according to the user's decision. User interaction control: user interaction with the monitoring mechanism during runtime enables users to change program behaviour while the program is executing.
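The abstract does not give implementation detail, but the interactive enforcement loop it describes can be sketched as follows. This is a minimal illustration, assuming a simple taint-tracking model in which each value carries a confidentiality label and the monitor queries the user before any flow to a public sink; the names (`Labeled`, `Monitor`, `release`) are hypothetical, not the thesis's API.

```python
# Minimal sketch of an interactive runtime information-flow monitor.
# Assumptions (not from the thesis): values carry a confidentiality label,
# and flows to public sinks are either allowed by policy or referred to the user.
from dataclasses import dataclass

@dataclass
class Labeled:
    value: object
    confidential: bool  # taint label attached to the value

class Monitor:
    def __init__(self, policy_allows, ask_user):
        self.policy_allows = policy_allows  # static policy: sink -> bool
        self.ask_user = ask_user            # runtime query: question -> bool

    def release(self, item: Labeled, sink: str):
        """Gate every flow to a sink; alter behaviour on refusal."""
        if not item.confidential or self.policy_allows(sink):
            return item.value                   # legal flow: pass through
        if self.ask_user(f"Release confidential data to '{sink}'?"):
            return item.value                   # user approved this flow
        return "<redacted>"                     # user refused: alter behaviour

# Usage: the monitor intercepts an attempted write of a tainted value.
monitor = Monitor(policy_allows=lambda sink: sink == "log",
                  ask_user=lambda q: input(q + " [y/n] ") == "y")
secret = Labeled("card=4929...", confidential=True)
print(monitor.release(secret, sink="network"))  # queries the user first
```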
|
3 |
An investigation into a digital forensic model to distinguish between 'insider' and 'outsider'. Al-Morjan, Abdulrazaq Abdulaziz, January 2010
Attackers use computers and networks to facilitate their crimes and hide their identities, creating new challenges for corporate security investigations. There are two main types of attacker: insiders and outsiders. Insiders are trusted users who have been granted authorised access to an organisation's IT resources in order to carry out their job responsibilities, but who deliberately abuse that insider access to contravene the organisation's policies or to commit computer crimes. Outsiders gain insider access to an organisation's IT objects by bypassing security mechanisms, without prior knowledge of the insider's job responsibilities; this is an advanced method of attacking an organisation's resources that prevents the abnormal behaviour typical of an outsider attack from being detected and hides the attacker's identity. For a number of reasons, corporate security investigators face a major challenge in distinguishing between the two types of attack: not only is there no definitive model of digital analysis for making such a distinction, there has to date been no intensive research into methods of doing so. Current investigative approaches attempt the distinction through three flawed criteria: the location from which an attack is launched, whether the attack comes from within the organisation's area of control, and authorised access. The results of such unsound investigations could expose organisations to legal action and negative publicity. To address the issue of the distinction between insider and outsider attacks, this research improves upon the first academic forensic analysis model, the Digital Forensic Research Workshop (DFRWS) model [63]. The outcome of this improvement is a Digital Analysis Model for Distinction between Insider and Outsider Attacks (DAMDIOA), which improves both the analysis investigation process and the decision process. The improvement rests on two types of proposed decision: fixed and tailored. The first is based on a predetermined logical condition, the second on the proportion of suspicious activity; the advantage of the latter is that an organisation can adjust its threshold of tolerance for such activity according to its level of concern about the type of attack involved. This research supports the possibility of distinguishing between insider and outsider attacks through a network simulation in which a number of email attack experiments tested DAMDIOA. When DAMDIOA used predetermined decisions based on legitimate activities, it differentiated the type of attack in seven of the eight experiments conducted, and it was the tailored decisions, with threshold levels Th = 0.2 and 0.3, that conferred the ability to make such distinctions. When legitimate activities, including users' job responsibilities, were compared with the current methods of distinguishing between insider and outsider attacks, the criterion of authorised access failed three times to make the distinction; this method of distinction is useless when a password is blank or shared. Both the location from which an attack was launched and attacks from areas within an organisation's control failed five times to differentiate between such attacks; there are no substantive differences between these methods.
The single instance in which the proposed method failed to make the distinction arose because the number of legitimate activities equalled the number of suspicious ones. DAMDIOA has been used by two organisations to deal with misuse of their computers, in both cases machines located in open areas and weakly protected by easily guessed passwords; IT policy was breached and two accounts were moved from the restricted to the unlimited Internet policy group. The model identified the insiders concerned by reviewing recorded activities and linking them with the insiders' job responsibilities. The model also highlights users' job responsibilities as a valuable source of forensic evidence that may be used to distinguish between insider and outsider attacks. DAMDIOA may help corporate security investigators identify suspects accurately and avoid financial loss for their organisations. The research also recommends improvements to the process by which user activities are collected before an attack takes place, enabling distinctions to be drawn more reliably, and proposes a physical and logical log management system: a centralised database of all employee activities that would reduce organisations' financial expenditure. Future research is suggested to classify legitimate and suspicious activities, evaluate them, identify the important ones and standardise the process of identifying and collecting users' job responsibilities; this work would remove some of the limitations of the proposed model.
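The tailored decision rule can be illustrated with a short sketch. This is an illustration only, assuming the decision compares the proportion of suspicious activities in the investigated account's history against a configurable threshold Th; the function name and the way individual activities are classified are hypothetical.

```python
# Sketch of a DAMDIOA-style tailored decision: classify an attack as insider
# or outsider from the proportion of suspicious activity in the account's
# recorded history. The threshold Th is set by the organisation.

def tailored_decision(activities, job_responsibilities, th=0.2):
    """activities: list of observed actions for the suspect account.
    job_responsibilities: set of actions legitimate for that user's role."""
    suspicious = [a for a in activities if a not in job_responsibilities]
    ratio = len(suspicious) / len(activities)
    # Above the tolerance threshold, the behaviour no longer matches the
    # insider's job responsibilities, suggesting an outsider using the account.
    return ("outsider", ratio) if ratio > th else ("insider", ratio)

history = ["read_email", "print_report", "edit_payroll", "read_email",
           "export_database", "disable_logging"]
role = {"read_email", "print_report", "edit_payroll"}
print(tailored_decision(history, role, th=0.3))  # ('outsider', 0.333...)
```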
|
4 |
Development of a novel toner for electrophotography based additive manufacturing process. Banerjee, Soumya, January 2011
This thesis conducts a feasibility study of producing 3D objects by printing a thermoplastic elastomer using the electrophotography technique and then sintering each deposited layer with an infrared light source. The term Selective laser printing (SLP) has been coined by the author for this new process. The thesis demonstrates the feasibility of developing an experimental thermoplastic toner for use in both mono- and dual-component print engines.
|
5 |
Improving project management planning and control in service operations environment. Al-Kaabi, Mohamed, January 2011
Projects have become the core activity in most companies and organisations, which invest significant amounts of resources in different types of projects, such as building new services and process improvement. This research focuses on the service sector in an attempt to improve project management planning and control activities, and is concerned specifically with improving the planning and control of software development projects. Existing software development models are analysed, their best practices identified, and these practices used to build the model proposed in this research. The research extends existing planning and control approaches by considering uncertainty in customer requirements, resource flexibility and variability in risk levels, and adopts lean principles for planning and controlling software development projects. A novel approach is introduced through the integration of simulation modelling techniques with Taguchi analysis to investigate 'what if' project scenarios; such scenarios reflect the different combinations of factors affecting project completion time and deliverables. In addition, the research adopts the concept of Quality Function Deployment (QFD) to develop an automated Operations Project Management Deployment (OPMD) model. The model operates in an iterative manner, using 'what if' scenario performance outputs to identify constraints that may affect the completion of a certain task or phase; any changes made during the project phases then automatically update the performance metrics for each software development phase. Optimisation routines have also been developed that can be used to provide a management response and to react to different levels of uncertainty. The research therefore provides a comprehensive and visual overview of important project tasks, i.e. progress, scheduled work, different resources, deliverables and completion, making it easier for project members to communicate with each other and reach consensus on goals, status and required changes. Risk is an important aspect included in the model to help avoid failure. The research emphasises customer involvement, top management involvement and team member involvement as among the operational factors that escalate variability levels and affect project completion time and deliverables; commitment from everyone therefore improves the chances of success. Although the role of different project management techniques in implementing projects successfully has been widely established in areas such as the planning and control of time, cost and quality, the distinction between the project and project management remains less than precise, and little has been done to investigate the different levels of uncertainty and risk that may occur during different project phases.
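The integration of simulation with Taguchi-style scenario analysis can be sketched briefly. A minimal illustration, assuming a toy completion-time model and an L4 orthogonal array over three two-level factors (requirements uncertainty, resource flexibility, risk level); the factor effects and numbers are invented for the example, not taken from the thesis.

```python
# Sketch: Monte Carlo simulation of project completion time across the
# scenarios of a Taguchi L4 orthogonal array (three two-level factors).
import random
import statistics

# L4 orthogonal array: each row is one 'what if' scenario (factor levels 0/1).
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
FACTORS = ("requirements_uncertainty", "resource_flexibility", "risk_level")

def simulate_completion(uncert, flex, risk, runs=2000):
    """Toy model: base duration plus penalties for uncertainty and risk,
    mitigated by resource flexibility. Purely illustrative numbers."""
    samples = []
    for _ in range(runs):
        base = random.gauss(100, 5)                  # nominal duration (days)
        base += uncert * random.expovariate(1 / 15)  # rework from unstable requirements
        base += risk * random.expovariate(1 / 10)    # delays from realised risks
        base -= flex * random.uniform(0, 8)          # flexible resources absorb delay
        samples.append(base)
    return statistics.mean(samples)

for levels in L4:
    mean_days = simulate_completion(*levels)
    scenario = ", ".join(f"{f}={v}" for f, v in zip(FACTORS, levels))
    print(f"{scenario}: mean completion {mean_days:.1f} days")
# Taguchi analysis would then average results per factor level to rank
# each factor's main effect on completion time.
```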
|
6 |
Directional routing techniques in VANET. Al-Doori, Moath, January 2011
Vehicle Ad hoc Networks (VANET) emerged as a subset of Mobile Ad hoc Network (MANET) applications and are considered a substantial approach to Intelligent Transportation Systems (ITS). VANETs were introduced to support drivers, improve safety and increase driving comfort, as a step towards constructing a safer, cleaner and more intelligent environment. At the present time vehicles are equipped with a number of sensors and devices, including On Board Units (OBU); this enables vehicles to sense situations affecting other vehicles and manage communications, either by exploiting infrastructure such as the Road Side Unit (RSU), creating a Vehicle to Infrastructure (V2I) pathway, or by interacting directly with other vehicles, creating a Vehicle to Vehicle (V2V) pathway. Owing to the lack of infrastructure and the difficulty of providing comprehensive coverage for all roads, given the high cost of installation, the investigation in this research concentrates on V2V rather than V2I communication. Many challenges have emerged in VANET, encouraging researchers to investigate them. Routing is a critical problem that needs to be tackled in VANET, particularly in sparse environments: an efficient routing mechanism enhances network performance in terms of disseminating messages to a desired destination, balancing the generated packet overhead on the network, and increasing the packet delivery ratio while reducing delay. VANET has some unique characteristics compared to MANET, notably high mobility and movement patterns constrained by roads, which continually create disconnected areas between vehicles and give rise to a Delay Tolerant Network (DTN). This hinders proper application of the multi-hop technique for delivering packets to their desired destinations. The aim of this thesis comprises two main contributions. The first is the development of novel routing protocols for sparse VANET environments that exploit the mobility feature with the aid of on-board devices such as the Global Positioning System (GPS) and Navigation System (NS). This approach exploits knowledge of the Second Heading Direction (SHD), the direction of the next road the vehicle intends to take, in order to increase the packet delivery ratio and improve route stability by reducing route breakage. It comprises two protocols: the first, designed for a highway scenario, selects the next-hop node through a filtration process to forward the packet towards the desired destination; the second was developed for intersection and roundabout scenarios, to deliver the packet to a destination whose location is unknown. The VSHDRP protocol has been formalised and specified using the Calculus of Context-aware Ambients (CCA) in order to evaluate its behaviour, and has been validated using ccaPL. Its performance has also been evaluated using the NS-2 simulator, in comparison with the Greedy Perimeter Stateless Routing (GPSR) protocol, to reveal the protocol's strengths and weaknesses.
The second contribution is a novel approach to broadcasting the HELLO beacon message adaptively, based on the node's circumstances (direction and speed), in order to minimise the broadcast of unnecessary HELLO beacon messages. A novel architecture has been built around the adaptive HELLO beacon message, clarifying how the OBU components interact with the connected sensors to capture any changes in the vehicle's circumstances and determine the appropriate action. The architecture follows the concept of a context-aware system and is divided into three main phases: sensing, processing and acting.
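The adaptive beaconing idea can be sketched as a simple rule. A minimal illustration, assuming the beacon interval shortens when speed or heading changes significantly (so neighbours stay current) and lengthens when the vehicle's state is stable; the thresholds and interval bounds are invented for the example, not taken from the thesis.

```python
# Sketch of adaptive HELLO beaconing: beacon sooner when the vehicle's
# direction or speed changes, less often when its state is stable.

MIN_INTERVAL, MAX_INTERVAL = 0.5, 5.0   # seconds (illustrative bounds)
HEADING_THRESHOLD = 15.0                # degrees
SPEED_THRESHOLD = 5.0                   # m/s

def next_beacon_interval(prev_heading, heading, prev_speed, speed):
    """Return the delay (s) before the next HELLO beacon is broadcast."""
    dh = abs((heading - prev_heading + 180) % 360 - 180)  # smallest heading change
    dv = abs(speed - prev_speed)
    if dh > HEADING_THRESHOLD or dv > SPEED_THRESHOLD:
        return MIN_INTERVAL          # state changed: update neighbours quickly
    # Stable state: back off in proportion to how little the state changed.
    stability = 1 - max(dh / HEADING_THRESHOLD, dv / SPEED_THRESHOLD)
    return MIN_INTERVAL + stability * (MAX_INTERVAL - MIN_INTERVAL)

print(next_beacon_interval(90, 92, 20.0, 20.5))   # stable: long interval
print(next_beacon_interval(90, 130, 20.0, 20.5))  # sharp turn: minimum interval
```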
|
7 |
Investigating the current state of the art on Ethics Reviews of Information and Communications Technology Research in UK Universities. Eke, Damian, January 2012
Information and Communications Technology (ICT) is a concept that represents the convergence of several defining technologies of our time (information technology, computer technology and media technology) and the increasing influence their research and usage have on society today. With such distinctive features as pervasiveness, ubiquity, malleability, interactivity, augmentation, autonomy and virtualisation, ICT research and usage raise an array of ethical and social challenges. However, most contemporary research on ICT ethics concentrates either on the usage of ICT artefacts or on questions of significant theoretical or practical interest, such as ICT implants or eHealth; there has been little research on the ethical and social issues associated with research and development. This research draws attention back to the beginning: research and development. It seeks to find out how ethical issues are considered in ICT research carried out in UK computing departments and faculties, and investigates the current state of the art in ethics reviews of ICT research by establishing their relevance, availability and effectiveness. The choice of the UK is justified by the high-quality research (including European projects) in ICT currently being carried out in many UK computing departments, which also produce engineers for IT companies. Although a quantitative questionnaire was used as part of the data collection, the interpretive and subjective nature of this research is reflected in the use of interviews for the main data collection and dialectical hermeneutics for data analysis. Through a dialectical hermeneutic process, different understandings of the availability, relevance and effectiveness of ethics reviews of ICT research emerged. These understandings revealed the strengths and weaknesses of the current ethics review procedures in UK computing departments, which provided the basis for relevant recommendations to policy makers.
|
8 |
An investigation into the influence of learning styles and other factors affecting students' perception of virtual learning environments. Swesi, Khaled, January 2012
In order to explain and determine the attitudes and factors affecting students' perceptions in adopting and using a Virtual Learning Environment (VLE) as a tool for complementing and supplementing face-to-face learning, this research combined two theoretical models: the Technology Acceptance Model (TAM), one of the more popular acceptance models, and the Learning Style Inventory (LSI). TAM is used to study the problem of low adoption or underutilisation of technology, while the learning styles model was adopted to determine the preferred learning styles of VLE users. The study investigates students at Tripoli University, the main university in Libya, to understand their perceptions of using a VLE with respect to their learning styles. A quantitative descriptive research design was used, with a survey as the primary means of data collection. Empirical data were collected from different departments and schools (n = 302) to examine the impact of the specialisation construct. The study proposed a conceptual model comprising external variables derived from previous research and the core TAM model, combined with learning style as an independent variable, in order to determine the impact of learning styles on students' perceptions of VLE use. A combination of t-tests, ANOVAs, chi-square tests and Pearson's product-moment correlation coefficients was used to analyse the data, together with two regression techniques: single and multiple regression. The quantitative data revealed that, regardless of gender or learning style, the participants had a strong positive behavioural intention to use VLE tools in their existing learning environment. The results implied that gender and learning styles did not play a significant role in determining perceptions and usage of the VLE; the other independent variables, however, had significant effects on the model and contributed to its explanation, with the exception of, for example, the job relevance and complexity factors. An interesting result was that the specialisation construct showed different levels of VLE use depending on the student's specialisation: natural and formal science students showed the most interest in using the new technology. Another interesting outcome was that students' perceived ease of use had a more consistent influence than perceived usefulness in determining VLE usage; this finding is new and inconsistent with most previous research. Although the results show no significant direct impact of learning styles on the research model, they do show that learning styles can play an important moderating role between the belief constructs and the external variables. The coefficients were not the same for each learning style, which may indicate that different learning styles moderate the relationships between the variables in the research model (VLEAM). The highest coefficients belonged to those with the assimilator style, followed by divergers/accommodators and convergers, meaning that assimilators are the best target learners for a VLE. However, female assimilators had a more negative impact on perceived usefulness (PU), meaning that they regard the VLE as less useful. The parameters in the model may be altered for each learning style to obtain the maximum benefit from the model.
From a theoretical and methodological perspective, it was found that TAM, being a simple psychological model, was not sufficient to explain broader systems such as VLEs, and consequently did not fully explain students' perceptions towards use. In the light of the findings, the study suggests that research on technology adoption and acceptance should move from the simple psychological TAM model to a form able to measure information systems with complex functions. The outcomes of the study are beneficial to decision makers at university level when making decisions about technologies that affect the teaching and learning process, and can assist institutional decisions about where to commit resources (technology, money, labour, etc.) to implement and maintain such systems.
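The moderation analysis described above can be sketched in a few lines. A minimal illustration on synthetic data, assuming behavioural intention (BI) is regressed on perceived usefulness (PU) and perceived ease of use (PEOU), with a learning-style dummy moderating the PU path; the variable names follow TAM convention, and the data and effect sizes are fabricated for the example.

```python
# Sketch of a TAM-style moderated regression on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 302                                   # same sample size as the study
df = pd.DataFrame({
    "PU": rng.normal(0, 1, n),            # perceived usefulness
    "PEOU": rng.normal(0, 1, n),          # perceived ease of use
    "assimilator": rng.integers(0, 2, n)  # learning-style dummy (moderator)
})
# Fabricated effects: PEOU dominates, and PU matters more for assimilators.
df["BI"] = (0.2 * df.PU + 0.5 * df.PEOU
            + 0.3 * df.PU * df.assimilator + rng.normal(0, 1, n))

# The interaction term tests whether learning style moderates the PU -> BI path.
model = smf.ols("BI ~ PU + PEOU + PU:assimilator", data=df).fit()
print(model.summary().tables[1])
```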
|
9 |
Reasoning about history based access control policy using past time operators of interval temporal logic. Alsarhani, Sami, January 2014
Interval Temporal Logic (ITL) is a flexible notation for propositional and first-order reasoning about the periods of time found in specifications of hardware and software systems. ITL differs from other temporal logics in that it can deal with both sequential and parallel composition, and it provides powerful and extensible specification and verification methods for reasoning about properties such as safety, time projection and liveness. Most imperative programming constructs can be seen as ITL formulas, and these form the basis of an executable framework called Tempura that is used for the development and testing of ITL specifications. ITL has only future operators, but past operators make specifications that refer to history more succinct; that is, there are classes of properties that can be expressed by much shorter formulas. What is more, statements are easier to express when past operators are included, and using past operators does not increase the complexity of interval temporal logic with respect to formula size or simplicity. This thesis introduces a past-time interval temporal logic in which, instead of the future operators chop, chopstar and skip, we have the past operators past chop, past chopstar and past skip. The syntax and semantics of past-time ITL are given, together with its axiom and proof system. Furthermore, Security Analysis Toolkit for Agents (SANTA) operators such as always-followed-by, and its strong version, are given history-based semantics using past-time operators. To evaluate past-time interval temporal logic, the problem of the specification and verification of history-based access control policies was selected. This problem has already been solved using future-time ITL, but with the drawback that the policy rules are neither succinct nor simple; the use of past-time operators produces simple and succinct policy rules. The verification technique used to prove the safety property of history-based access control policies is adapted for past-time ITL, showing that past-time operators of interval temporal logic can specify and verify a security scenario such as a history-based access control policy.
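To make the operators concrete, here is a small evaluator over finite histories. This is a sketch under stated assumptions, not the thesis's definitions: formulas are evaluated over the accumulated history (a finite sequence of states), and past chop is given the mirror of the usual chop semantics, splitting the history at some point so the first operand holds over the prefix and the second over the suffix ending now, with the two subintervals sharing the split state.

```python
# Sketch: evaluating simple past-time formulas over a finite history of states.
# A history is a non-empty list of states; each state is a set of atoms.
# Assumption: past chop splits the accumulated history at some point k,
# mirroring future chop (the two subintervals share the state at k).

def holds(formula, history):
    op = formula[0]
    if op == "true":
        return True
    if op == "atom":                      # p holds in the final (current) state
        return formula[1] in history[-1]
    if op == "not":
        return not holds(formula[1], history)
    if op == "and":
        return holds(formula[1], history) and holds(formula[2], history)
    if op == "pskip":                     # past skip: a two-state history
        return len(history) == 2
    if op == "pchop":                     # f over the prefix, g over the suffix
        f, g = formula[1], formula[2]
        return any(holds(f, history[:k + 1]) and holds(g, history[k:])
                   for k in range(len(history)))
    raise ValueError(f"unknown operator: {op}")

# History-based access control example: permit 'write' only if an approval
# occurred somewhere in the past ('once approved', encoded with past chop).
policy = ("pchop", ("atom", "approved"), ("true",))

history = [{"login"}, {"approved"}, {"write_request"}]
print(holds(policy, history))   # True: an approval appears in the history
```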
|
10 |
Impact of climate change on extinction risk of montane tree species. Tejedor Garavito, Natalia, January 2014
The potential impacts of climate change on many species worldwide remain unknown, especially in those tropical regions that are centres of endemism and are highly biodiverse. This thesis provides an insight into the extinction risk of selected tree species, using different species distribution modelling techniques and reviewing the current conservation status of montane forests in the Tropical Andes. Starting with a global analysis, the potential impacts of climate change on montane ecoregions are investigated by identifying those most vulnerable to the expected changes in temperature and precipitation under global predictions for different climate change scenarios. The thesis then examines the current and potential threats to biodiversity in the Andean region, identifying those most likely to increase the extinction risk of the species. Using the IUCN Red List Categories and Criteria, selected tree species were assessed to identify their extinction risk. Information on the species' current distributions was collated and used to estimate their potential distributions under climate change with different modelling techniques, and these results were used to reassess the species against the IUCN Red List and establish the changes in Red List category. Lastly, the thesis integrates all the results in a discussion of the implications for conservation, highlighting the overriding importance of including threatened tree species when targeting conservation efforts in the region, while considering the uncertainties that surround predictions under climate change scenarios, the modelling techniques, and the use of the IUCN Red List.
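The reassessment step can be illustrated with a short sketch. A minimal illustration, assuming a species' projected range loss is converted to an IUCN criterion A3(c)-style category using the standard projected-decline thresholds (at least 80% Critically Endangered, at least 50% Endangered, at least 30% Vulnerable); the species names and range figures are invented for the example.

```python
# Sketch: map projected range loss under a climate scenario to an
# IUCN Red List category via the criterion A3(c) decline thresholds.

def red_list_category_a3(current_km2, projected_km2):
    loss = 1 - projected_km2 / current_km2     # projected proportional decline
    if loss >= 0.80:
        return "CR"   # Critically Endangered
    if loss >= 0.50:
        return "EN"   # Endangered
    if loss >= 0.30:
        return "VU"   # Vulnerable
    return "LC/NT"    # below the A3 thresholds

# Invented figures: suitable habitat now vs. modelled for a future scenario.
species = {"Polylepis pauta": (12000, 4800), "Weinmannia sp.": (30000, 24000)}
for name, (now, future) in species.items():
    print(name, red_list_category_a3(now, future))
# Polylepis pauta: 60% loss -> EN; Weinmannia sp.: 20% loss -> LC/NT
```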
|