231
Managing the bazaar: commercialization and peripheral participation in mature, community-led free/open source software projects. Berdou, Evangelia. January 2007.
The thesis investigates two fundamental dynamics of participation and collaboration in mature, community-led Free/Open Source (F/OS) software projects - commercialization and peripheral participation. The aim of the thesis is to examine whether the power relations that underlie the F/OS model of development are indicative of a new form of power relations supported by ICTs. Theoretically, the thesis is located within the Communities of Practice (CoP) literature and it draws upon Michel Foucault's ideas about the historical and relational character of power. It also mobilizes, to a lesser extent, Erving Goffman's notion of 'face-work'. This framework supports a methodology that questions the rationality of how F/OS is organized and examines the relations between employed coders and volunteers, experienced and inexperienced coders, and programmers and non-programmers. The thesis examines discursive and structural dimensions of collaboration and employs quantitative and qualitative methods. Structural characteristics are considered in the light of arguments about embeddedness. The thesis contributes insights into how the gift economy is embedded in the exchange economy and into the role of peripheral contributors. The analysis indicates that community-integrated paid developers play a key role in project development, maintaining the infrastructure aspects of the code base. The analysis suggests that programming and non-programming contributors are distinct in their make-up, priorities and rhythms of participation, and that learning plays an important role in controlling access. The results show that volunteers are important drivers of peripheral activities, such as translation and documentation. The term 'autonomous peripherality' is used to capture the unique characteristics of these activities. These findings support the argument that centrality and peripherality are associated with the division of labour, which, in turn, is associated with employment relations and frameworks of institutional support. The thesis shows how the tensions produced by commercialization and peripheral participation are interwoven with values of meritocracy, with ritual and strategic enactment of the idea of community, and with tools and techniques developed to address the emergence of a set of problems specific to management and governance. These are characterized as 'technologies of communities'. It is argued that the emerging topology of F/OS participation, seen as a 'relational meshwork', is indicative of a redefinition of the relationship between sociality and economic production within mature, community-led F/OS projects.
232
Extension to models of coincident failure in multiversion software. Salako, Kizito Oluwaseun. January 2012.
Fault-tolerant architectures for software-based systems have been used in various practical applications, including flight control systems for commercial airliners (e.g. AIRBUS A340, A310) as part of an aircraft's so-called fly-by-wire flight control system [1], the control systems for autonomous spacecraft (e.g. the Cassini-Huygens Saturn orbiter and probe) [2], rail interlocking systems [3] and nuclear reactor safety systems [4, 5]. The use of diverse, independently developed, functionally equivalent software modules in a fault-tolerant configuration has been advocated as a means of achieving highly reliable systems from relatively less reliable system components [6, 7, 8, 9]. In this regard it had been postulated that [6] "The independence of programming efforts will greatly reduce the probability of identical software faults occurring in two or more versions of the program." Experimental evaluation demonstrated that, despite the independent creation of such versions, positive failure correlation between the versions can be expected in practice [10, 11]. The conceptual models of Eckhardt et al [12] and Littlewood et al [13], referred to as the EL model and LM model respectively, were instrumental in pointing out sources of uncertainty that determine both the size and sign of such failure correlation. In particular, there are two important sources of uncertainty. The first is the process of developing software: given sufficiently complex system requirements, the particular software version that will be produced from such a process is not known with certainty; consequently, complete knowledge of what the failure behaviour of the software will be is also lacking. The second is the occurrence of demands during system operation: during system operation it may not be certain which demand a system will receive next from the environment. To explain failure correlation between multiple software versions the EL model introduced the notion of difficulty: that is, given a demand that could occur during system operation, there is a chance that a given software development team will develop a software component that fails when handling such a demand as part of the system. A demand with an associated high probability of developed software failing to handle it correctly is considered to be a "difficult" demand for a development team; a low probability of failure would suggest an "easy" demand. In the EL model different development teams, even when isolated from each other, are identical in how likely they are to make mistakes while developing their respective software versions. Consequently, despite the teams possibly creating software versions that fail on different demands, in developing their respective versions the teams find the same demands easy, and the same demands difficult. The implication of this is that the versions developed by the teams do not fail independently: if one observes the failure of one team's version, this could indicate that the version failed on a difficult demand, thus increasing one's expectation that the second team's version will also fail on that demand. Succinctly put, due to correlated "difficulties" between the teams across the demands, "independently developed software cannot be expected to fail independently". The LM model takes this idea a step further by illustrating, under rather general practical conditions, that negative failure correlation is also possible; possible, because the teams may be sufficiently diverse in which demands they find "difficult". This in turn implies better reliability than would be expected under naive assumptions of failure independence between software modules built by the respective teams. Although these models provide such insight they also pose questions yet to be answered.
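To make the EL/LM contrast concrete, the core identity can be stated in the standard notation of this literature (a gloss on the cited models, not the thesis's own derivation). Let X be a randomly chosen demand, and let θ_A(x) and θ_B(x) be the probabilities that development processes A and B produce a version that fails on demand x. For independently developed versions:

```latex
P(\text{both versions fail on } X)
  = E\left[\theta_A(X)\,\theta_B(X)\right]
  = E\left[\theta_A(X)\right]E\left[\theta_B(X)\right]
    + \operatorname{Cov}\left(\theta_A(X),\,\theta_B(X)\right)
```

In the EL model both teams share a single difficulty function, θ_A = θ_B = θ, so the covariance term reduces to Var(θ(X)) ≥ 0 and joint failure can never be less likely than the independence assumption predicts; the LM model allows the development processes to differ, so the covariance, and hence the failure correlation, can be negative.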
233
Diagnosing runtime violations of security and dependability properties. Tsigkritis, Theocharis. January 2010.
Monitoring the preservation of security and dependability (S&D) properties of complex software systems is widely accepted as a necessity. Basic monitoring can detect violations but does not always provide sufficient information for deciding what the appropriate response to a violation is. Such decisions often require additional diagnostic information that explains why a violation has occurred and can, therefore, indicate what would be an appropriate response action to it. In this thesis, we describe a diagnostic procedure for generating explanations of violations of S&D properties, developed as an extension of a runtime monitoring framework called EVEREST. The procedure is based on a combination of abductive and evidential reasoning about violations of S&D properties, which are expressed in Event Calculus.
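For orientation, the core Event Calculus axiom on which such property specifications are typically built (stated here in its standard form, not in EVEREST's concrete syntax) is:

```latex
\mathit{HoldsAt}(f, t) \leftarrow
  \mathit{Happens}(e, t_1) \land \mathit{Initiates}(e, f, t_1)
  \land t_1 < t \land \neg\,\mathit{Clipped}(t_1, f, t)
```

That is, a fluent f holds at time t if some earlier event initiated it and no intervening event clipped (terminated) it; monitored S&D properties are formulae over such fluents and events.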
234
A neural-symbolic system for temporal reasoning with application to model verification and learning. Borges, Rafael. January 2012.
The effective integration of knowledge representation, reasoning and learning into a robust computational model is one of the key challenges in Computer Science and Artificial Intelligence. In particular, temporal models have been fundamental in describing the behaviour of computational and neural-symbolic systems. Furthermore, knowledge acquisition of correct descriptions of a desired system's behaviour is a complex task in several domains. Several efforts have been directed towards the development of tools that are capable of learning, describing and evolving software models. This thesis contributes to two major areas of Computer Science, namely Artificial Intelligence (AI) and Software Engineering (SE). From an AI perspective, we present a novel neural-symbolic computational model capable of representing and learning temporal knowledge in recurrent networks. The model works in an integrated fashion: it enables the effective representation of temporal knowledge, the adaptation of temporal models to a set of desirable system properties, and effective learning from examples, which in turn can lead to symbolic temporal knowledge extraction from the corresponding trained neural networks. The model is sound from a theoretical standpoint, and is also tested in a number of case studies. An extension to the framework is shown to tackle aspects of verification and adaptation from the SE perspective. As regards verification, we make use of established model checking techniques, which allow the verification of properties described as temporal models and return counter-examples whenever the properties are not satisfied. Our neural-symbolic framework is then extended to deal with different sources of information. This includes the translation of model descriptions into the neural structure, the evolution of such descriptions by learning from counter-examples, and also the learning of new models from simple observation of their behaviour. In summary, we believe the thesis describes a principled methodology for temporal knowledge representation, learning and extraction, shedding new light on predictive temporal models, not only from a theoretical standpoint but also with respect to a potentially large number of applications in AI, Neural Computation and Software Engineering, where temporal knowledge plays a fundamental role.
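As a toy illustration of the kind of translation involved (our sketch in the spirit of CILP-style neural-symbolic encoding, not the thesis's exact construction), a temporal rule such as "b holds at t+1 if a holds at t" can be realised by a single recurrent threshold unit:

```python
import numpy as np

# State vector [a, b]; the rule "b(t+1) <- a(t)" becomes a recurrent
# connection from unit a to unit b, with a threshold that a alone exceeds.
W_rec = np.array([[0.0, 0.0],
                  [1.0, 0.0]])
bias = np.array([-0.5, -0.5])

def step(state: np.ndarray) -> np.ndarray:
    """One synchronous update of the recurrent network."""
    return (W_rec @ state + bias > 0).astype(float)

state = np.array([1.0, 0.0])  # a is true at t = 0
print(step(state))            # [0. 1.]: b becomes true at t = 1
```

Learning then amounts to adjusting the weights and biases from example traces, after which rules of this form can be read back out of the trained network.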
235
FPGA-based architectures for next generation communications networks. Hegarty, Declan. January 2008.
This engineering doctorate concerns the application of Field Programmable Gate Array (FPGA) technology to some of the challenges faced in the design of next generation communications networks. The growth and convergence of such networks has fuelled demand for higher bandwidth systems, and a requirement to support a diverse range of payloads across the network span. The research which follows focuses on the development of FPGA-based architectures for two important paradigms in contemporary networking - Forward Error Correction and Packet Classification. The work seeks to combine analysis of the underlying algorithms and mathematical techniques which drive these applications, with an informed approach to the design of efficient FPGA-based circuits.
236
On backoff mechanisms for wireless Mobile Ad Hoc Networks. Manaseer, Saher. January 2010.
Since their emergence within the past decade, which has seen wireless networks being adapted to enable mobility, wireless networks have become increasingly popular in the world of computer research. A Mobile Ad hoc Network (MANET) is a collection of mobile nodes dynamically forming a temporary network without the use of any existing network infrastructure. MANETs have received significant attention in recent years due to their ease of setup and their potential applications in many domains. Such networks can be useful in situations where there is not enough time or resource to configure a wired network. Ad hoc networks are also used in military operations, where the units are randomly mobile and a central unit cannot be used for synchronization. Wireless networks use a shared medium, and access to this medium is controlled by the Media Access Control (MAC) protocol. Since only one transmitting node may use the channel at any given time, the MAC protocol must suspend other nodes while the medium is busy. In order to decide the length of node suspension, a backoff mechanism is built into the MAC protocol. The choice of backoff mechanism should consider generating backoff timers which allow adequate time for current transmissions to finish and, at the same time, avoid unneeded idle time that leads to redundant delay in the network. Moreover, the backoff mechanism used should decide the suitable action to be taken in case of repeated failures of a node to acquire the medium. Further, the mechanism decides the action needed after a successful transmission, since this action affects the next time backoff is needed. Binary Exponential Backoff (BEB) is the backoff mechanism that MANETs have adopted from Ethernet. Like Ethernet, MANETs use a shared medium; therefore, the standard MAC protocol used for MANETs uses the standard BEB backoff algorithm. The first part of this work, presented as Chapter 3 of this thesis, studies the effects of changing the backoff behaviour upon a transmission failure or after a successful transmission. The investigation has revealed that using different behaviours directly affects both network throughput and average packet delay. This result indicates that BEB is not the optimal backoff mechanism for MANETs. Up until this research started, no research activity had focused on studying the major parameters of MANETs. These parameters are the speed at which nodes travel inside the network area, the number of nodes in the network, and the amount of data generated per second; these are referred to as mobility speed, network size and traffic load respectively. The investigation has shown that changes to the values of these parameters have a major effect on network performance. Existing research on backoff algorithms for MANETs mainly focuses on using external information, as opposed to information available from within the node, to decide the length of backoff timers. Such information includes network traffic load, transmission failures of other nodes and the total number of nodes in the network. In a mobile network, acquiring such information is not feasible at all times. To address this point, the second part of this thesis proposes new backoff algorithms for use with MANETs. These algorithms use internal information only to make their decisions.
This part has revealed that it is possible to achieve higher network throughput and lower average packet delay under different values of the parameters mentioned above without the use of any external information. This work proposes two new backoff algorithms: the Optimistic Linear-Exponential Backoff (OLEB) and the Pessimistic Linear-Exponential Backoff (PLEB). In OLEB, exponential backoff is combined with linear increment behaviour in order to reduce redundantly long backoff times, during which the medium is available but the node is still in backoff status, by implementing less dramatic increments in the early backoff stages. PLEB is also a combination of exponential and linear increment behaviours; however, the order in which linear and exponential behaviours are used is the reverse of that in OLEB. The two algorithms have been compared with existing work. Results of this research show that PLEB achieves higher network throughput for large numbers of nodes (e.g. 50 nodes and over). Moreover, PLEB achieves higher network throughput at low mobility speeds. As for average packet delay, PLEB significantly improves average packet delay for large network sizes, especially when combined with high traffic rates and mobility speeds. On the other hand, the measurements of network throughput have revealed that for small networks of 10 nodes, OLEB has higher throughput than existing work at high traffic rates. For a medium network size of 50 nodes, OLEB also achieves higher throughput. Finally, at a large network size of 100 nodes, OLEB reaches higher throughput at low mobility speeds. Moreover, OLEB produces lower average packet delay than the existing algorithms at low mobility speeds for a network size of 50 nodes. Finally, this work has studied the effect of choosing the point at which behaviour changes between linear and exponential increments in OLEB and PLEB. Results have shown that increasing the number of backoff stages in which the linear increment is used increases network throughput, and that using larger linear increments also increases network throughput.
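A minimal sketch of the contention-window updates being compared (the window ceiling, switch stage and linear step below are illustrative assumptions, not the tuned parameters of the thesis):

```python
import random

CW_MAX = 1024  # contention-window ceiling (illustrative value)

def beb(cw: int) -> int:
    """Standard Binary Exponential Backoff: double the window on each failure."""
    return min(cw * 2, CW_MAX)

def oleb(cw: int, stage: int, switch: int = 3, step: int = 32) -> int:
    """Optimistic Linear-Exponential Backoff: gentle linear increments in the
    early backoff stages, exponential growth afterwards."""
    return min(cw + step, CW_MAX) if stage < switch else min(cw * 2, CW_MAX)

def pleb(cw: int, stage: int, switch: int = 3, step: int = 32) -> int:
    """Pessimistic Linear-Exponential Backoff: exponential growth first,
    linear increments afterwards (the reverse order of OLEB)."""
    return min(cw * 2, CW_MAX) if stage < switch else min(cw + step, CW_MAX)

def backoff_timer(cw: int) -> int:
    """Draw a backoff timer uniformly from the current contention window."""
    return random.randint(0, cw - 1)
```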
237
Audio-visual football video analysis, from structure detection to attention analysis. Ren, Reede. January 2008.
Sports video is an important video genre. Content-based sports video analysis attracts great interest from both industry and academia. A sports video is characterised by repetitive temporal structures, relatively plain content, and strong spatio-temporal variations, such as quick camera switches and swift local motions. It is necessary to develop specific techniques for content-based sports video analysis that exploit these characteristics. For an efficient and effective sports video analysis system, there are three fundamental questions: (1) what are the key stories in sports videos; (2) what arouses viewers' interest; and (3) how can game highlights be identified. This thesis is developed around these questions. We approach these questions from two different perspectives, and in turn three research contributions are presented, namely replay detection, attack temporal structure decomposition, and attention-based highlight identification. Replay segments convey the most important content in sports videos, so detecting replay segments is an efficient approach to collecting game highlights. However, replay is an artefact of editing, which improves with advances in video editing tools. The composition of replay is complex, including logo transitions, slow motion, viewpoint switches and normal-speed video clips. Since logo transition clips are pervasive in game collections of FIFA World Cup 2002, FIFA World Cup 2006 and UEFA Championship 2006, we take logo transition detection as an effective replacement for replay detection. A two-pass system was developed, comprising a five-layer AdaBoost classifier and logo template matching over an entire video. The five-layer AdaBoost classifier utilises shot duration, average game pitch ratio, average motion, sequential colour histogram and shot frequency between two neighbouring logo transitions to filter out logo transition candidates. Subsequently, a logo template is constructed and employed to find all logo transition sequences. The precision and recall of this system in replay detection are 100% on a five-game evaluation collection. An attack structure is a team's competition for a score; hence, this structure is a conceptually fundamental unit of a football video as well as of other sports videos. We review the literature on content-based temporal structures, such as the play-break structure, and develop a three-step system for automatic attack structure decomposition. Four content-based shot classes, namely play, focus, replay and break, are identified by low-level visual features. A four-state hidden Markov model is trained to simulate transition processes among these shot classes. Since attack structures are the longest repetitive temporal units in a sports video, a suffix tree is proposed to find the longest repetitive substring in the label sequence of shot class transitions. The occurrences of this substring are regarded as the kernel of an attack hidden Markov process; therefore, the decomposition of attack structure becomes a boundary likelihood comparison between two Markov chains. Highlights are what attract notice, and attention is a psychological measurement of “notice”. A brief survey of the psychological background of attention, attention estimation from visual and auditory signals, and multi-modality attention fusion is presented. We propose two attention models for sports video analysis, namely, the role-based attention model and the multiresolution autoregressive framework.
The role-based attention model is based on the structure of perception involved in watching video. This model removes reflection bias among modality salient signals and combines these signals by reflectors. The multiresolution autoregressive framework (MAR) treats salient signals as a group of smooth random processes, which follow a similar trend but are corrupted by noise. This framework estimates a noise-free signal from these coarse, noisy observations by multiple-resolution analysis. Related algorithms are developed, such as event segmentation on a MAR tree and real-time event detection. The experiments show that these attention-based approaches can find goal events with high precision. Moreover, results of MAR-based highlight detection on the final games of FIFA 2002 and 2006 are highly similar to highlights professionally labelled by the BBC and FIFA.
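As an illustration of the repetitive-structure step described above (our sketch: the thesis uses a suffix tree, whereas this uses the equivalent but slower suffix-sorting formulation, and the shot-class labels are invented):

```python
def longest_repeated_substring(labels: str) -> str:
    """Find the longest substring occurring at least twice, via suffix sorting."""
    suffixes = sorted(labels[i:] for i in range(len(labels)))
    best = ""
    for a, b in zip(suffixes, suffixes[1:]):
        k = 0  # length of the common prefix of two adjacent sorted suffixes
        while k < min(len(a), len(b)) and a[k] == b[k]:
            k += 1
        if k > len(best):
            best = a[:k]
    return best

# Toy shot-class sequence: P = play, F = focus, R = replay, B = break
print(longest_repeated_substring("PFPRBPFPRBPPFB"))  # -> "PFPRBP"
```

The recovered pattern then serves as the kernel of the attack process, and candidate attack boundaries are scored by comparing likelihoods under the two Markov chains.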
238
A generic approach to the evolution of interaction in ubiquitous systems. McBryan, Tony. January 2011.
This dissertation addresses the challenge of the configuration of modern (ubiquitous, context-sensitive, mobile et al.) interactive systems where it is difficult or impossible to predict (i) the resources available for evolution, (ii) the criteria for judging the success of the evolution, and (iii) the degree to which human judgements must be involved in the evaluation process used to determine the configuration. In this thesis a conceptual model of interactive system configuration over time (known as interaction evolution) is presented which relies upon the following steps: (i) identification of opportunities for change in a system, (ii) reflection on the available configuration alternatives, (iii) decision-making, (iv) implementation, and finally iteration of the process. This conceptual model underpins the development of a dynamic evolution environment based on a notion of configuration evaluation functions (hereafter referred to as evaluation functions) that provides greater flexibility than current solutions and, when supported by appropriate tools, can provide a richer set of evaluation techniques and features that are difficult or impossible to implement in current systems. Specifically, this approach supports changes to the approach, style or mode of use employed for configuration; these features may result in more effective systems, less effort involved in configuring them, and a greater degree of control offered to the user. The contributions of this work include: (i) establishing the need for configuration evolution through a literature review and a motivating case study experiment, (ii) development of a conceptual process model supporting interaction evolution, (iii) development of a model based on the notion of evaluation functions which is shown to support a wide range of interaction configuration approaches, (iv) a characterisation of the configuration evaluation space, followed by (v) an implementation of these ideas used in (vi) a series of longitudinal technology probes and investigations into the approaches.
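A minimal sketch of the evaluation-function idea (the configuration fields, function names and scores are our assumptions for illustration, not the thesis's implementation):

```python
from typing import Callable, Dict, List

Config = Dict[str, str]
EvalFn = Callable[[Config], float]

def prefers_speech(cfg: Config) -> float:
    """Hypothetical user-preference criterion: reward speech input."""
    return 1.0 if cfg.get("input") == "speech" else 0.0

def screen_cost(cfg: Config) -> float:
    """Hypothetical resource criterion: penalise power-hungry screen output."""
    return -0.5 if cfg.get("output") == "screen" else 0.0

def choose(candidates: List[Config], fns: List[EvalFn]) -> Config:
    """Select the configuration maximising the combined evaluation score."""
    return max(candidates, key=lambda c: sum(f(c) for f in fns))

print(choose([{"input": "speech", "output": "audio"},
              {"input": "keyboard", "output": "screen"}],
             [prefers_speech, screen_cost]))  # -> the speech/audio configuration
```

Because evaluation functions are ordinary composable functions, they can encode automatic criteria, human judgements, or mixtures of both, which is the flexibility the thesis argues for.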
239
Technologies of indigeneity: indigenous collective identity narratives in online communities. Longboan, Liezel C. January 2013.
This thesis examines contemporary constructions of collective indigenous identity. It specifically focuses on the offline and online interactions among the members of Bibaknets, an online community for indigenous peoples from the highlands of the Cordillera Region, Philippines. The study explores the relational and positional nature of collective indigenous identity as Cordillerans attempt to resolve the tensions between their experiences of marginalisation and their goal of empowerment. Drawing upon Michel Foucault's concept of governmentality, the thesis critically analyses the processes of Cordilleran collective identity construction, which are inscribed in power relations not only between highlanders and the dominant population but also among highlanders themselves. On the one hand, members are motivated to join and participate in Bibaknets discussions as a forum for Cordillerans. On the other hand, such participation is constrained by some members who direct the discussions and consequently define the membership of the forum.
240
Analysing web-based malware behaviour through client honeypots. Alosefer, Yaser. January 2012.
With an increase in the use of the internet, there has been a rise in the number of attacks on servers. These attacks can be successfully defended against using security technologies such as firewalls, IDSs and anti-virus software, so attackers have developed new methods to spread their malicious code by using web pages, which can affect many more victims than the traditional approach. Attackers now use these websites to threaten users without their knowledge or permission. The defence against such websites is less effective than traditional security products, meaning the attackers have the advantage of being able to target a greater number of users. Malicious web pages attack users through their web browsers, and the attack can occur even if the user merely visits a web page; this type of attack is called a drive-by download attack. This dissertation explores how web-based attacks work and how users can be protected from this type of attack based on the behaviour of the remote web server. We propose a system based on client honeypot technology. The client honeypot is able to scan malicious web pages based on their behaviour and can therefore work as an anomaly detection system. The proposed system has three main models: a state machine model, a clustering model and a prediction model. All three models work together in order to protect users from known and unknown web-based attacks. This research demonstrates the challenges faced by end users and how easily an attacker can target systems using drive-by download attacks. In this dissertation we discuss how the proposed system works and the research challenges that we are trying to solve, such as how to group web-based attacks into behaviour groups, how to defeat the obfuscation attempts used by attackers, and how to predict future malicious behaviour for a given web-based attack, based on its behaviour, in real time. Finally, we demonstrate how the proposed system works by implementing a prototype application and conducting a number of experiments showing how we were able to model, cluster and predict web-based attacks based on their behaviour. The experimental data was collected randomly from online blacklist websites.
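A minimal sketch of the prediction step (the behaviour event names are invented for illustration, and the thesis's state-machine and clustering models are richer than this first-order chain):

```python
from collections import defaultdict

def train_transitions(traces):
    """Estimate first-order transition probabilities between observed events."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def predict_next(model, state):
    """Return the most likely next behaviour, or None for an unseen state."""
    nxt = model.get(state)
    return max(nxt, key=nxt.get) if nxt else None

# Hypothetical honeypot traces of observed web-page behaviour
traces = [["visit", "redirect", "download", "execute", "registry_write"],
          ["visit", "redirect", "download", "execute"]]
model = train_transitions(traces)
print(predict_next(model, "redirect"))  # -> "download"
```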