141

An efficient approach to online bot detection based on a reinforcement learning technique

Alauthman, Mohammad January 2016 (has links)
In recent years, Botnets have become a popular method for carrying and spreading malicious code on the Internet. This code paves the way for many fraudulent activities, including spam mail, distributed denial of service (DDoS) attacks and click fraud. While many Botnets are set up using a centralized communication architecture such as Internet Relay Chat (IRC) or Hypertext Transfer Protocol (HTTP), peer-to-peer (P2P) Botnets can adopt a decentralized architecture, using an overlay network for exchanging command and control (C&C) messages, which provides a more resilient and robust communication infrastructure. Without a centralized point for C&C servers, P2P Botnets are more flexible in defeating countermeasures and detection procedures than traditional centralized Botnets. Several Botnet detection techniques have been proposed, but Botnet detection remains a very challenging task for the Internet security community because Botnets execute attacks stealthily within dramatically growing volumes of network traffic. Moreover, current Botnet detection schemes face significant problems of efficiency and adaptability. The present study combined a traffic reduction approach with a reinforcement learning (RL) method in order to create an online Bot detection system. The proposed framework adopts the idea of RL to improve the system dynamically over time, while the traffic reduction method is used to build a lightweight and fast online detection method. Moreover, a host feature based on traffic at the connection level was designed, which can identify Bot host behaviour. The proposed technique can therefore potentially be applied to any encrypted network traffic, since it depends only on information obtained from packet headers; it does not require Deep Packet Inspection (DPI) and cannot be defeated by payload encryption. Although the network traffic reduction technique reduces the packets input to the detection system, the proposed solution still achieves a good detection rate of 98.3% as well as a low false positive rate (FPR) of 0.012% in the online evaluation. Comparison with other techniques on the same dataset shows that our strategy outperforms existing methods. The proposed solution was evaluated and tested using real network traffic datasets to increase the validity of the solution.
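
The abstract describes the detection approach only at a high level. As a purely illustrative sketch (not the author's algorithm), the following Python fragment shows how connection-level statistics derived from packet headers could drive a small tabular Q-learning loop that learns an online flag/pass decision; the feature names, bucketing, reward values and synthetic traffic generator are all hypothetical assumptions.

    import random
    from collections import defaultdict

    # Hypothetical connection-level features obtainable from packet headers
    # alone (no payload inspection): connection rate and failed-connection ratio.
    def discretise(conn_rate, failed_ratio):
        # Bucket the continuous header statistics into a small discrete state space.
        return (min(int(conn_rate // 10), 5), min(int(failed_ratio * 10), 9))

    ACTIONS = ("benign", "bot")
    ALPHA, EPSILON = 0.1, 0.1
    Q = defaultdict(float)          # Q[(state, action)] -> estimated value

    def choose(state):
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, reward):
        # For one-step decisions the Q-learning update reduces to a running
        # average of the observed reward for that (state, action) pair.
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])

    # Toy online loop over synthetic host observations; in practice the reward
    # would come from delayed ground truth about whether a host was infected.
    for _ in range(1000):
        is_bot = random.random() < 0.2
        conn_rate = max(random.gauss(40 if is_bot else 8, 5), 0.0)
        failed_ratio = random.random() * (0.8 if is_bot else 0.2)
        state = discretise(conn_rate, failed_ratio)
        action = choose(state)
        update(state, action, 1.0 if (action == "bot") == is_bot else -1.0)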
142

Content-aware and context-aware adaptive video streaming over HTTP

Ognenoski, Ognen January 2016 (has links)
Adaptive HTTP video streaming techniques are rapidly becoming the main method for video delivery over the Internet. From a conceptual viewpoint, adaptive HTTP video streaming systems enable adaptation of the video quality according to network conditions (link-awareness), content characteristics (content-awareness), user preferences (user-awareness) or device capabilities (device-awareness). Proprietary adaptive HTTP video streaming platforms from Apple, Adobe and Microsoft preceded the completion of a standard for adaptive HTTP video streaming, i.e., the MPEG-DASH standard. The dissertation presents modeling approaches, experiments, simulations and subjective tests closely related to adaptive HTTP video streaming, with particular emphasis on the MPEG-DASH standard. Different case studies are investigated through novel models based on analytical and simulation approaches. In particular, adaptive HTTP video streaming over Long Term Evolution (LTE) networks, over cloud infrastructure, and for the streaming of medical videos is investigated, and the relevant benefits and drawbacks of using adaptive HTTP video streaming in these cases are highlighted. Further, mathematical tools and concepts are used to acquire quantifiable knowledge about the HTTP/TCP communication protocol stack and to investigate dependencies between adaptive HTTP video streaming parameters and the underlying Quality of Service (QoS) and Quality of Experience (QoE). Additionally, a novel method and model for QoE assessment are proposed, derived in a specific experimental setup. A more general setup is then considered and a QoE metric is derived. The QoE metric expresses the user's perceived quality for adaptive HTTP video streaming by taking into consideration rebuffering, video quality and content-related parameters. Finally, a novel analytical model is derived that captures the user's perception of quality via the delay experienced during streaming navigation. The contributions in this dissertation and the relevant conclusions are obtained through simulations, experimental demo setups, subjective tests and analytical modeling.
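
The dissertation works at the level of models and standards rather than client code, but the core adaptation idea is easy to illustrate. The sketch below (not taken from the dissertation) shows a throughput- and buffer-driven representation choice of the kind an MPEG-DASH client might make; the bitrate ladder, safety margin and buffer threshold are hypothetical values.

    # Hypothetical bitrate ladder (kbit/s) for one MPEG-DASH representation set.
    LADDER = [235, 750, 1750, 3000, 5800]

    def select_representation(throughput_kbps, buffer_s, safety=0.8, min_buffer_s=10.0):
        """Pick the highest representation sustainable at a safety margin of the
        measured throughput; drop to the lowest rung when the playout buffer is
        low, since rebuffering is the dominant QoE penalty."""
        if buffer_s < min_buffer_s:
            return LADDER[0]
        budget = throughput_kbps * safety
        feasible = [rate for rate in LADDER if rate <= budget]
        return feasible[-1] if feasible else LADDER[0]

    # Example: a 4 Mbit/s link with a healthy buffer selects the 3000 kbit/s tier.
    print(select_representation(throughput_kbps=4000, buffer_s=22.0))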
143

Intelligent agents for mobile location services

McInerney, James January 2014 (has links)
Understanding human mobility patterns is a significant research endeavour that has recently received considerable attention. Developing the science to describe and predict how people move from one place to another during their daily lives promises to address a wide range of societal challenges: from predicting the spread of infectious diseases and improving urban planning to devising effective emergency response strategies. Individuals are also set to benefit from this area of research, as mobile devices will be able to analyse their mobility patterns and offer context-aware assistance and information. For example, a service could warn about travel disruptions before the user is likely to encounter them, or provide recommendations and mobile vouchers for local services that promise to be of high value to the user, based on their predicted future plans. More ambitiously, control systems for home heating and electric vehicle charging could be enhanced with knowledge of when the user will be home. In this thesis, we focus on such anticipatory computing. Some aspects of the vision of context-awareness have been pursued for many years, resulting in mature research in the area of ubiquitous systems. However, the combination of surprisingly rapid adoption of advanced mobile devices by consumers and the broad acceptance of location-based apps has surfaced not only new opportunities, but also a number of pressing challenges. In more detail, these challenges are the (i) prediction of future mobility, (ii) inference of features of human location behaviour, and (iii) use of prediction and inference to make decisions about timely information or control actions. Our research brings together, for the first time, the entire workflow that a mobile location service needs to follow in order to achieve an understanding of mobile user needs and to act on that understanding effectively. This framing of the problem highlights the shortcomings of existing approaches, which we seek to address. In the current literature, prediction is only considered for established users, which implicitly assumes that new users will continue to use an initially inaccurate prediction system long enough for it to improve and increase in accuracy over time. Additionally, inference of user behaviour is mostly concerned with interruptibility, which does not take into account the constructive role of intelligent location services that goes beyond simply avoiding interrupting the user at inopportune times (e.g., in a meeting, or while driving). Finally, no principled decision framework for intelligent location services has been provided that takes into account the results of prediction and inference. To address these shortcomings, we make three main contributions to the state of the art. Firstly, we provide a novel Bayesian model that relates the location behaviour of new and established users, allowing the reuse of structure learnt from rich mobility data. This model shows a factor of 2.4 improvement over the state-of-the-art baseline in held-out data likelihood in experiments using the Nokia Lausanne dataset. Secondly, we give new tools for the analysis and prediction of routine in mobility, a latent feature of human behaviour that informs the service about the user's availability to follow up on any information provided. Thirdly, we provide a fully worked example of an intelligent mobile location service (a crowdsourced package delivery service) that performs decision-making using predictive densities of current and future user mobility. Simulations using real mobility data from the Orange Ivory Coast dataset indicate an 81.3% improvement in service efficiency when compared with the next best (non-anticipatory) approach.
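
The thesis contributes a hierarchical Bayesian model relating new and established users; that model is not reproduced here. As a much simpler, purely illustrative stand-in, the sketch below shows a first-order mobility model with add-one (Dirichlet) smoothing, which gives even a sparsely observed new user a proper predictive distribution over next places; the class name and place labels are hypothetical.

    from collections import defaultdict, Counter

    class PlacePredictor:
        """Illustrative first-order model of P(next place | current place) with
        a symmetric Dirichlet(1) prior, so users with little history still
        receive a well-defined posterior predictive distribution."""
        def __init__(self, places):
            self.places = list(places)
            self.counts = defaultdict(Counter)

        def observe(self, current, nxt):
            self.counts[current][nxt] += 1

        def predictive(self, current):
            c = self.counts[current]
            total = sum(c.values()) + len(self.places)
            return {p: (c[p] + 1) / total for p in self.places}

    model = PlacePredictor(["home", "work", "gym", "cafe"])
    for a, b in [("home", "work"), ("work", "cafe"), ("cafe", "work"), ("work", "home")]:
        model.observe(a, b)
    print(model.predictive("work"))   # home and cafe are the most probable next places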
144

Indoor collaborative positioning based on a multi-sensor and multi-user system

Jing, Hao January 2015 (has links)
With recent developments in Global Navigation Satellite Systems (GNSS), positioning and navigation applications and services have developed rapidly worldwide. Location-based services (LBS) have become a major application, providing position-related services to the mass market. As LBS applications become more popular, positioning services and capacity are demanded to cover all types of environment with improved accuracy and reliability. While GNSS can provide promising positioning and navigation solutions in open outdoor environments, it does not work well inside buildings, in tunnels or under canopy. Positioning in such difficult environments is known as the indoor positioning problem. Although the problem has been studied for more than a decade, there is currently no solution that can compare with the performance of GNSS in outdoor environments. This thesis introduces a collaborative indoor positioning solution based on particle filtering which integrates multiple sensors (e.g. inertial sensors, Wi-Fi signals and map information) and multiple local users who provide peer-to-peer (P2P) relative ranging measurements. This solution addresses three current problems of indoor positioning. First is positioning accuracy, which is limited by the availability of sensors and the quality of their signals in the environment; the collaborative positioning solution integrates a number of sensors and users to provide better measurements and restrict measurement error from growing. Second is the reliability of the positioning solution, which is also affected by signal quality; the unpredictable behaviour of positioning signals and data can lead to many uncertainties in the final positioning result, and a successful positioning system should be able to deal with changes in the signal and provide reliable positioning results using different data processing strategies. Third is the continuity and robustness of the positioning solution; since indoor environments can differ greatly from one another, and hence the applicable signals also differ, the positioning solution should take into account the uniqueness of each situation and provide a continuous positioning result regardless of the changing data. The collaborative positioning aspect is examined from three angles: the network geometry, the network size and the P2P ranging measurement accuracy. Both theoretical and experimental results indicate that a collaborative network with a low dilution of precision (DOP) value can achieve better positioning accuracy. While adding sensors and users reduces DOP, it also increases the computational load, which is already a disadvantage of particle filters; the most effective collaborative positioning network size is therefore identified and applied. Although the positioning system's measurement error is constrained by the accuracy of the P2P ranging constraint, the work in this thesis shows that even low-accuracy measurements can provide an effective constraint as long as the system is able to identify the different qualities of the measurements. The proposed collaborative positioning algorithm constrains both inertial measurements and Wi-Fi fingerprinting to enhance the stability and accuracy of the positioning result, achieving metre-level accuracy. The application of collaborative constraints also eliminates the requirement for indoor map matching, which has been a very useful tool in particle filters for indoor positioning; the wall constraint can be replaced flexibly and easily with the relative constraint. Simulations and indoor trials are carried out to evaluate the algorithms. Results indicate that metre-level positioning accuracy can be achieved and that collaborative positioning gives the system more flexibility to adapt to different situations when Wi-Fi or collaborative ranging is unavailable.
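
The collaborative element of the filter can be illustrated with a single update step. The fragment below is an assumption-laden sketch, not the thesis algorithm: it reweights position particles with a Gaussian likelihood of a peer-to-peer range measurement to a peer whose position estimate is treated as known, then resamples; all numbers, including the ranging noise, are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def pf_update(particles, weights, peer_pos, measured_range, sigma_range=1.5):
        """Reweight position particles with a P2P ranging likelihood: particles
        whose distance to the peer's estimated position matches the measured
        range gain weight; resampling then avoids weight degeneracy."""
        dists = np.linalg.norm(particles - peer_pos, axis=1)
        likelihood = np.exp(-0.5 * ((dists - measured_range) / sigma_range) ** 2)
        weights = weights * likelihood
        weights /= weights.sum()
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))

    # Toy example: a particle cloud around (0, 0) and a peer near (10, 0); a
    # 10 m P2P range pulls the cloud onto the geometrically consistent region.
    particles = rng.normal(0.0, 3.0, size=(500, 2))
    weights = np.full(500, 1.0 / 500)
    particles, weights = pf_update(particles, weights, np.array([10.0, 0.0]), 10.0)
    print(particles.mean(axis=0))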
145

Service composition based on SIP peer-to-peer networks

Lehmann, Armin January 2014 (has links)
Today the telecommunication market is faced with the situation that customers are requesting new telecommunication services, especially value-added services. The concept of Next Generation Networks (NGN) appears to be a solution for this, and so the concept has found its way into the telecommunication area. These customer expectations have emerged in the context of NGN and the associated migration of telecommunication networks from traditional circuit-switched towards packet-switched networks. One fundamental aspect of the NGN concept is to move the intelligence of services out of the switching plane onto separate Service Delivery Platforms, using SIP (Session Initiation Protocol) to provide the required signalling functionality. Driven by this migration process towards NGN, SIP has emerged as the major signalling protocol for IP (Internet Protocol) based NGN. In contrast to ISDN (Integrated Services Digital Network) and IN (Intelligent Network), this leads to significantly lower dependencies between the network and services and enables new services to be implemented much more easily and quickly. In addition, further concepts from IT (Information Technology), namely SOA (Service-Oriented Architecture), have strongly influenced the telecommunication sector, driven by the amalgamation of IT and telecommunications. The benefit of applying SOA to telecommunication services is the acceleration of service creation and delivery. The main features of SOA are that services are reusable, discoverable, combinable and independently accessible from any location. Integration of those features offers broader flexibility and efficiency for varying demands on services. This thesis proposes a novel framework for service provisioning and composition in SIP-based peer-to-peer networks applying the principles of SOA. One key contribution of the framework is the approach of enabling the provisioning and composition of services by means of SIP. Based on this, the framework provides a flexible and fast way to request the creation of composite services. Furthermore, the framework enables multimodal value-added services to be requested and combined, which means that they are no longer limited with regard to media types such as audio, video and text. The proposed framework has been validated by a prototype implementation.
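
The framework performs composition through SIP signalling, which is not reproduced here. The short sketch below only illustrates the underlying SOA principle the abstract appeals to, namely that registered (discoverable) services are reusable and combinable into a composite service; the registry, decorator and the translation/text-to-speech example are hypothetical.

    # Illustrative SOA-style composition: services register themselves and can be
    # chained into a composite value-added service.
    REGISTRY = {}

    def service(name):
        def register(fn):
            REGISTRY[name] = fn
            return fn
        return register

    @service("translate")
    def translate(payload):
        return {**payload, "text": f"[fr] {payload['text']}"}          # stub translation

    @service("text_to_speech")
    def text_to_speech(payload):
        return {**payload, "audio": f"<audio of: {payload['text']}>"}  # stub TTS

    def compose(*names):
        """Build a composite service by chaining registered services in order."""
        def composite(payload):
            for n in names:
                payload = REGISTRY[n](payload)
            return payload
        return composite

    announce = compose("translate", "text_to_speech")
    print(announce({"text": "Your call is being connected"}))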
146

Quality of Service optimisation framework for Next Generation Networks

Weber, Frank Gerd January 2012 (has links)
Within recent years, the concept of Next Generation Networks (NGN) has become widely accepted within the telecommunication area, in parallel with the migration of telecommunication networks from traditional circuit-switched technologies such as ISDN (Integrated Services Digital Network) towards packet-switched NGN. In this context, SIP (Session Initiation Protocol), originally developed for Internet use only, has emerged as the major signalling protocol for multimedia sessions in IP (Internet Protocol) based NGN. One of the traditional limitations of IP when faced with the challenges of real-time communications is the lack of quality support at the network layer. In line with NGN specification work, international standardisation bodies have defined a sophisticated QoS (Quality of Service) architecture for NGN, controlling IP transport resources and conventional IP QoS mechanisms through centralised higher-layer network elements via cross-layer signalling. Being able to centrally control QoS conditions for any media session in NGN without the imperative of a cross-layer approach would result in a feasible and less complex NGN architecture. In particular, the demand for additional network elements would be decreased, resulting in a reduction of system and operational costs in both service and transport infrastructure. This thesis proposes a novel framework for QoS optimisation for media sessions in SIP-based NGN without the need for cross-layer signalling. One key contribution of the framework is the approach of identifying and logically grouping media sessions that encounter similar QoS conditions, which is performed by applying pattern recognition and clustering techniques. Based on this novel methodology, the framework provides functions and mechanisms for comprehensive, resource-saving QoS estimation, adaptation of QoS conditions, and support of Call Admission Control. The framework can be integrated with any arbitrary SIP-IP-based real-time communication infrastructure, since it does not require access to any particular QoS control or monitoring functionality provided within the IP transport network. The proposed framework concept has been deployed and validated in a prototypical simulation environment. Simulation results show MOS (Mean Opinion Score) improvement rates of between 53 and 66 percent without any active control of transport network resources. Overall, the proposed framework offers an effective concept for centrally controlled QoS optimisation in NGN without the need for cross-layer signalling. As such, whether run stand-alone or combined with conventional QoS control mechanisms, the framework provides a comprehensive basis both for the reduction of complexity and for the mitigation of issues associated with QoS provision in NGN.
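
The abstract's key mechanism is the grouping of media sessions that experience similar QoS conditions. The fragment below is a minimal sketch of that idea using k-means over hypothetical per-session delay/jitter/loss vectors; it is not the framework's own pattern-recognition pipeline, and the synthetic data and cluster count are assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical per-session QoS observations: [delay_ms, jitter_ms, loss_pct].
    rng = np.random.default_rng(1)
    good = rng.normal([40, 4, 0.1], [5, 1, 0.05], size=(50, 3))
    congested = rng.normal([180, 25, 2.5], [20, 5, 0.5], size=(50, 3))
    sessions = np.vstack([good, congested])

    # Group sessions that encounter similar transport conditions; each cluster
    # centroid can then act as a resource-saving QoS estimate for every session
    # assigned to it, without per-session cross-layer measurement.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(sessions)
    for k in range(2):
        print(f"cluster {k}: mean [delay, jitter, loss] = {sessions[labels == k].mean(axis=0).round(1)}")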
147

Model-based transmission reduction and virtual sensing in wireless sensor networks

Goldsmith, D. January 2013 (has links)
This thesis examines the use of modelling approaches in Wireless Sensor Networks (WSNs) at node and sink to reduce the amount of data that needs to be transmitted by each node and to estimate sensor readings for locations where no data is available. First, to contextualise the contributions in this thesis, a framework for WSN monitoring applications (FieldMAP) is proposed. FieldMAP provides a structure for developing monitoring applications that advocates the use of modelling to improve the informational output of WSNs and goes beyond the sense-and-send approach commonly found in current, fielded WSN applications. Rather than report raw sensor readings, FieldMAP advocates the use of a state vector to encapsulate the state of the phenomena sensed by the node. Second, the Spanish Inquisition Protocol (SIP) is presented. SIP reduces the amount of data that a sensor node must transmit by combining model-based filtering with dual-prediction approaches. SIP makes use of the state vector component of FieldMAP to form a simple predictive model that allows the sink to estimate sensor readings without requiring regular updates from the node. Transmissions are only made when the node detects that the predictive model no longer matches the evolving data stream. SIP is shown to produce up to a 99% reduction in the number of samples that require transmission on certain data sets using a simple linear approach, and it consistently outperforms comparable algorithms when used to compress the same data streams. Furthermore, the relationship between the user-specified error threshold and the number of transmissions required to reconstruct a data set is explored, and a method to estimate the number of transmissions required to reconstruct the data stream at a given error threshold is proposed. When multiple parameters are sensed by a node, SIP allows them to be combined into a single state vector. This is demonstrated to further reduce the number of model updates required compared to processing each sensor stream individually. Third, a sink-based, online mechanism to impute missing sensor values and predict future readings from sensor nodes is developed and evaluated in the context of an online monitoring system for a Water Distribution System (WDS). The mechanism is based on a machine learning approach called Gaussian Process Regression (GPR) and is implemented such that it can exploit correlations between nodes in the network to improve predictions. An online windowing algorithm deals with data arriving out of order and provides a feedback mechanism to predict values when data is not received in a timely manner. A novel approach to create virtual sensors, which allows a data stream to be predicted where no physical sensor is permanently deployed, is developed from the online GPR mechanism. The use of correlation in prediction is shown to improve the accuracy of predicted data from 1.55 Pounds per Square Inch (PSI) Root Mean Squared Error (RMSE) to 0.01 PSI RMSE. In-situ evaluation of the virtual sensors approach over 36 days showed that an accuracy of 0.75 PSI was maintained. The protocols developed in this thesis present an opportunity to improve the output of environmental monitoring applications. By reducing energy consumption, long-lived networks that collect detailed data are made possible. Furthermore, the utility of the data collected by these networks is increased by using it to improve coverage over areas where measurements are not taken or available.
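
The dual-prediction idea behind the Spanish Inquisition Protocol can be sketched in a few lines: node and sink share a simple linear model of the state vector, and the node transmits only when its live readings drift outside the user-specified error threshold. The code below is an illustrative reconstruction of that principle, not the published algorithm; the data, threshold and gradient heuristic are assumptions.

    def node_side(readings, threshold=0.5):
        """Emit sparse (time, value, gradient) model updates; stay silent while
        the shared linear model predicts the readings within the threshold."""
        updates = [(0, readings[0], 0.0)]
        t0, value, grad = 0, readings[0], 0.0
        for t, sample in enumerate(readings[1:], start=1):
            predicted = value + grad * (t - t0)
            if abs(sample - predicted) > threshold:
                grad = (sample - value) / (t - t0)
                t0, value = t, sample
                updates.append((t0, value, grad))
        return updates

    def sink_side(updates, length):
        """Reconstruct the full data stream from the sparse model updates."""
        series, (t0, value, grad) = [], updates[0]
        nxt = 1
        for t in range(length):
            if nxt < len(updates) and t >= updates[nxt][0]:
                t0, value, grad = updates[nxt]
                nxt += 1
            series.append(value + grad * (t - t0))
        return series

    temps = [20.0, 20.1, 20.2, 20.2, 24.0, 24.1, 24.2, 24.2, 24.3, 24.3]
    ups = node_side(temps)
    print(len(ups), "updates for", len(temps), "samples")   # 3 updates for 10 samples
    print([round(x, 1) for x in sink_side(ups, len(temps))])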
148

Behavioural monitoring via network communications

Alotibi, Gaseb January 2017 (has links)
It is commonly acknowledged that using Internet applications is an integral part of an individual's everyday life, with more than three billion users now using Internet services across the world, and this number is growing every year. Unfortunately, with this rise in Internet use comes an increasing rise in cyber-related crime. Whilst significant effort has been expended on protecting systems from outside attack, only more recently have researchers sought to develop countermeasures against insider attack. However, for an organisation, the detection of an attack is merely the start of a process that requires it to investigate and attribute the attack to an individual (or group of individuals). The investigation of an attack typically revolves around the analysis of network traffic, in order to better understand the nature of the traffic flows and, importantly, to resolve them to an IP address of the insider. However, with mobile computing and the Dynamic Host Configuration Protocol (DHCP), which result in Internet Protocol (IP) addresses changing frequently, it is particularly challenging to resolve the traffic back to a specific individual. The thesis explores the feasibility of profiling network traffic in a biometric manner in order to identify users independently of the IP address. In order to maintain privacy and cope with encryption (which exists on an increasing volume of network traffic), the proposed approach utilises data derived only from the metadata of packets, not the payload. The research proposed a novel feature extraction approach focussed upon extracting user-oriented, application-level features from the wider network traffic. An investigation across nine of the most common web applications (Facebook, Twitter, YouTube, Dropbox, Google, Outlook, Skype, BBC and Wikipedia) was undertaken to determine whether such high-level features could be derived from the low-level network signals. The results showed that whilst some user interactions were not possible to extract due to the complexities of the resulting web application, the majority were. Having developed a feature extraction process focussed more upon the user than upon machine-to-machine traffic, the research sought to use this information to determine whether a behavioural profile could be developed to enable identification of the users. Network traffic from 27 users over 2 months was collected and processed using the aforementioned feature extraction process. Over 140 million packets were collected and processed into 45 user-level interactions across the nine applications. The results from behavioural profiling showed that the system is capable of identifying users, with average True Positive Identification Rates (TPIR) in the top three applications of 87.4%, 75% and 61.9% respectively. Whilst the initial study provided some encouraging results, the research continued to develop further refinements to improve the performance. Two techniques were applied: fusion and timeline analysis. The former sought to fuse the output of the classification stage to better incorporate and manage the variability of the classification and resulting decision phases of the biometric system. The latter sought to capitalise on the fact that, whilst the IP address is not reliable over a long period of time due to reallocation, over shorter timeframes (e.g. a few minutes) it is likely to be reliable and map to the same user.
The results for fusion across the top three applications were 93.3%, 82.5% and 68.9%. The overall performance, adding in the timeline analysis (with a 240-second time window), was on average 72.1% across all applications. Whilst 72.1% is not outstanding in terms of biometric identification in the normal sense, its use within this problem of attributing misuse to an individual provides the investigator with an enormous advantage over existing approaches. At best, it will provide them with a specific user's traffic, and at worst it allows them to significantly reduce the volume of traffic to be analysed.
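
The timeline-analysis refinement is straightforward to illustrate: within a short window an IP address is assumed to map to one user, so per-interaction classifier decisions from that IP can be fused by majority vote. The sketch below is a hypothetical rendering of that idea; the event format, window handling and sample data are assumptions, not the thesis implementation.

    from collections import Counter, defaultdict

    WINDOW_S = 240   # short interval over which a DHCP-assigned IP is assumed stable

    def timeline_fuse(events):
        """events: (timestamp_s, src_ip, predicted_user) per classified interaction.
        Group events from the same IP falling in the same 240 s window and fuse
        the per-interaction decisions by majority vote."""
        buckets = defaultdict(list)
        for ts, ip, user in events:
            buckets[(ip, int(ts // WINDOW_S))].append(user)
        return {key: Counter(users).most_common(1)[0][0] for key, users in buckets.items()}

    events = [
        (10,  "10.0.0.5", "alice"), (95,  "10.0.0.5", "bob"),
        (180, "10.0.0.5", "alice"), (300, "10.0.0.9", "carol"),
    ]
    print(timeline_fuse(events))   # {('10.0.0.5', 0): 'alice', ('10.0.0.9', 1): 'carol'}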
149

A credit-based approach to scalable video transmission over a peer-to-peer social network

Asioli, Stefano January 2013 (has links)
The objective of the research work presented in this thesis is to study scalable video transmission over peer-to-peer networks. In particular, we analyse how a credit-based approach and exploitation of social networking features can play a significant role in the design of such systems. Peer-to-peer systems are nowadays a valid alternative to the traditional client-server architecture for the distribution of multimedia content, as they transfer the workload from the service provider to the final user, with a subsequent reduction of management costs for the former. On the other hand, scalable video coding helps in dealing with network heterogeneity, since the content can be tailored to the characteristics or resources of the peers. First of all, we present a study that evaluates subjective video quality perceived by the final user under different transmission scenarios. We also propose a video chunk selection algorithm that maximises received video quality under different network conditions. Furthermore, challenges in building reliable peer-to-peer systems for multimedia streaming include optimisation of resource allocation and design mechanisms based on rewards and punishments that provide incentives for users to share their own resources. Our solution relies on a credit-based architecture, where peers do not interact with users that have proven to be malicious in the past. Finally, if peers are allowed to build a social network of trusted users, they can share the local information they have about the network and have a more complete understanding of the type of users they are interacting with. Therefore, in addition to a local credit, a social credit or social reputation is introduced. This thesis concludes with an overview of future developments of this research work.
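
The credit-based mechanism can be pictured as a small ledger: peers earn credit for chunks they deliver, lose more for failures, and requests are served only when local credit plus socially shared reputation stays above a threshold. The class below is an illustrative sketch under those assumptions; the weights, threshold and peer names are hypothetical, not the thesis design.

    class CreditLedger:
        """Local credit from direct exchanges plus a social credit averaged over
        trusted friends' reported opinions of the same peer."""
        def __init__(self, min_credit=0.0):
            self.local = {}                 # peer_id -> credit from our own exchanges
            self.min_credit = min_credit

        def record(self, peer, delivered_chunks, failed_chunks):
            delta = 1.0 * delivered_chunks - 2.0 * failed_chunks   # punish failures harder
            self.local[peer] = self.local.get(peer, 0.0) + delta

        def social_credit(self, friends_reports):
            reports = list(friends_reports)
            return sum(reports) / len(reports) if reports else 0.0

        def will_serve(self, peer, friends_reports=()):
            combined = self.local.get(peer, 0.0) + self.social_credit(friends_reports)
            return combined >= self.min_credit

    ledger = CreditLedger()
    ledger.record("peer_a", delivered_chunks=10, failed_chunks=0)   # cooperative peer
    ledger.record("peer_b", delivered_chunks=1, failed_chunks=4)    # free-rider / malicious
    print(ledger.will_serve("peer_a"), ledger.will_serve("peer_b", friends_reports=[-3.0]))
    # True False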
150

Architectural evolution through softwarisation : on the advent of software-defined networks

Ocho, Reuel January 2016 (has links)
Digital infrastructures characteristically expand and evolve. Their propensity for growth can be attributed to the self-reinforcing mechanism of positive network externalities, in which the value and attractiveness of any digital infrastructure to users is generated from, and sustained as a function of, the size of its existing user community. The expansion of any digital infrastructure, though, is ultimately underpinned by an inherent architectural capacity to support unanticipated change, which may include changes to the architecture itself. However, as digital infrastructures scale, their usage grows and they encounter and become entangled with other digital infrastructures. As such, the capacity of digital infrastructure architecture to accommodate change, under conditions of positive network externalities that attract users, conversely leads to intensified social and technical dependencies that eventually resist certain kinds of change; that is, it leads to sociotechnical ossifications. Changing the underlying architecture of existing digital infrastructures thus becomes increasingly prohibitive over time. Information Systems (IS) research suggests that architectural change or evolution in digital infrastructures occurs primarily via a process of replacement through two means: an existing digital infrastructure is either completely replaced with one that has an evolved architecture, or intermediary transitory gateways are used to facilitate interoperability between digital infrastructures with incompatible architectures. Recognising the sociotechnical ossifications that resist architectural evolution, this literature has also tended to focus more on the social activities of cultivating change, of which the outcome is architectural evolution in digital infrastructures, than directly on architectural evolution itself. In doing so, it has provided only a partial account of underlying architectural evolution in digital infrastructures. The findings of this research come from an embedded case study in which changes to the underlying architecture of existing networking infrastructures were made. Networking infrastructures are a prime instance of sociotechnically ossified digital infrastructures. The case's primary data sources included interviews with 39 senior networking and infrastructure virtualisation experts from large Internet and Cloud Service Providers, Standards Development Organisations, Network Equipment Vendors, Network Systems Integrators, Virtualisation Software Technology Organisations and Research Institutes, as well as technical documents. A critical realist analysis was used to uncover generative mechanisms that promote underlying architectural evolution in sociotechnically ossified digital infrastructures. This thesis extends IS understanding of architectural evolution in digital infrastructures with the complementary finding of architectural evolution through softwarisation. In architectural evolution through softwarisation, the architecture of sociotechnically ossified digital infrastructures is evolved via the exploitation of features inherent to digital entities, which have been overlooked in extant research on architecture in digital infrastructures.
