
Making Coding Practical: From Servers to Smartphones

Shojania, Hassan, 01 September 2010
The fundamental insight behind the use of coding in computer networks is that the information to be transmitted from the source in a session can be inferred, or decoded, by the intended receivers, and does not have to be transmitted verbatim. Several coding techniques have gained popularity in recent years. Among them is random network coding with random linear codes, in which a node in a network topology transmits a linear combination of its incoming, or source, packets on its outgoing links. The high computational complexity of random linear codes (RLC) is well known in theory, and is often used to motivate more efficient codes, such as traditional Reed-Solomon (RS) codes and, more recently, fountain codes (LT codes). Factors such as computational complexity, network overhead, and deployment flexibility can make one coding scheme more attractive than the others for a given application. While there is no one-size-fits-all coding solution, random linear coding is very flexible, well known to achieve optimal flow rates in multicast sessions, and universally adopted in proposed protocols using network coding. However, its practicality has been questioned due to its high computational complexity. Unfortunately, to date, no commercial real-world system that takes advantage of the power of network coding has been reported in the literature. This research represents the first attempt towards a high-performance design and implementation of network coding. The objective of this work is to explore the computational limits of network coding on off-the-shelf modern processors, and to provide a solid reference implementation to facilitate commercial deployment of network coding. We promote the development of new coding-based systems and protocols through a comprehensive toolkit whose coding implementations are not just reference implementations: they attain the performance and flexibility needed for widespread adoption. The final work, packaged as a toolkit code-named Tenor, includes high-performance implementations of a number of coding techniques: random linear network coding (RLC), fountain codes (LT codes), and Reed-Solomon (RS) codes on CPUs (single- and multi-core, for both the Intel x86 and IBM POWER families), GPUs (single and multiple), and mobile/embedded devices based on ARMv6 and ARMv7 cores. Tenor is cross-platform, with support for Linux, Windows, Mac OS X, and iPhone OS on both 32-bit and 64-bit platforms, and comprises some 23K lines of C++ code. To validate the effectiveness of the Tenor toolkit, we build coding-based on-demand media streaming systems with GPU-based servers, thousands of clients emulated on a cluster of computers, and a small number of actual iPhone devices. To facilitate the deployment of such large experiments, we develop Blizzard, a high-performance framework with two main goals: 1) emulating hundreds of client/peer applications on each physical node, and 2) facilitating scalable servers that can efficiently communicate with thousands of clients. Our experiences offer an illustration of Tenor components in action and their benefits in rapid system development. With Tenor, it is trivial to switch from one coding technique to another, scale up to thousands of clients, and deliver actual video to be played back even on mobile devices.
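The random linear coding operation at the heart of an RLC implementation reduces to dense linear algebra over a finite field: each coded packet is a random linear combination of the source packets in a generation, with all arithmetic in GF(2^8). The Python sketch below illustrates the encoding step only; it is a minimal, unoptimized illustration (Tenor itself is C++ with SIMD and GPU acceleration), and the field polynomial 0x11D and the generation size of 4 are assumptions made here for the example, not parameters taken from the thesis.

```python
import os, random

# GF(2^8) multiplication via the carry-less "Russian peasant" method,
# reducing by the polynomial x^8+x^4+x^3+x^2+1 (0x11D, an illustrative
# choice; real implementations precompute log/exp tables instead).
def gf_mul(a: int, b: int) -> int:
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
        b >>= 1
    return p

def rlc_encode(source_packets: list[bytes]) -> tuple[list[int], bytes]:
    """Return (coefficient vector, coded payload) for one coded packet.

    Each coded byte is the GF(256) dot product of a random coefficient
    vector with the corresponding bytes of the source packets."""
    coeffs = [random.randrange(256) for _ in source_packets]
    size = len(source_packets[0])
    coded = bytearray(size)
    for c, pkt in zip(coeffs, source_packets):
        for i in range(size):
            coded[i] ^= gf_mul(c, pkt[i])   # addition in GF(2^8) is XOR
    return coeffs, bytes(coded)

# Example: a "generation" of 4 source packets of 1 KB each.
generation = [os.urandom(1024) for _ in range(4)]
coeffs, packet = rlc_encode(generation)
# A receiver that collects 4 coded packets with linearly independent
# coefficient vectors recovers the generation by Gaussian elimination
# over GF(2^8); the cost of exactly these dense field operations is the
# computational bottleneck the thesis sets out to measure and optimize.
```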

Solution biases and pheromone representation selection in ant colony optimisation

Montgomery, James, Unknown Date
Combinatorial optimisation problems (COPs) pervade human society: scheduling, design, layout, distribution, timetabling, resource allocation and project management all feature problems where the solution is some combination of elements, the overall value of which must be either maximised or minimised (i.e., optimised), typically subject to a number of constraints. Techniques to solve such problems efficiently are therefore an important area of research. A popular group of optimisation algorithms are the metaheuristics, approaches that specify how to search the space of solutions in a problem-independent way so that high-quality solutions are likely to result in a reasonable amount of computational time. Although metaheuristic algorithms are specified in a problem-independent manner, they must be tailored to suit each particular problem to which they are applied. This thesis investigates a number of aspects of the application of the relatively new Ant Colony Optimisation (ACO) metaheuristic to different COPs.

The standard ACO metaheuristic is a constructive algorithm loosely based on the foraging behaviour of ant colonies, which are able to find the shortest path to a food source through indirect communication via pheromones. ACO's artificial pheromone represents a model of the solution components that its artificial ants use to construct solutions. Developing an appropriate pheromone representation is a key aspect of applying ACO to a problem. An examination of existing ACO applications, and of the constructive approach more generally, reveals how the metaheuristic can be applied more systematically across a range of COPs. The two main issues addressed in this thesis are biases inherent in the constructive process and the systematic selection of pheromone representations.

The systematisation of ACO should lead to more consistently high performance of the algorithm across different problems. Additionally, it supports the creation of a generalised ACO system, capable of adapting itself to suit many different combinatorial problems without the need for manual intervention.
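To make the constructive process concrete: each artificial ant extends a partial solution one component at a time, choosing the next component with probability proportional to its pheromone value (learned desirability) and a heuristic value (problem-specific desirability). The Python sketch below shows this standard random-proportional rule in a TSP-style setting; the exponents alpha and beta and the TSP framing are conventional ACO illustrations, not details drawn from this thesis, whose point is precisely that the pheromone model must be chosen per problem.

```python
import random

def choose_next(current, unvisited, pheromone, heuristic, alpha=1.0, beta=2.0):
    """Pick the next solution component (e.g., next city in a TSP tour).

    Probability of component j is proportional to
    pheromone[current][j]**alpha * heuristic[current][j]**beta,
    the standard ACO random-proportional rule."""
    weights = [pheromone[current][j] ** alpha * heuristic[current][j] ** beta
               for j in unvisited]
    return random.choices(unvisited, weights=weights, k=1)[0]

def construct_tour(n, pheromone, heuristic, start=0):
    """One ant builds a complete tour; a colony repeats this many times,
    then pheromone on good tours is reinforced and all trails evaporate."""
    tour, unvisited = [start], [c for c in range(n) if c != start]
    while unvisited:
        nxt = choose_next(tour[-1], unvisited, pheromone, heuristic)
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour
```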

Design and analysis for the 3G IP multimedia subsystem

Alam, Muhammad Tanvir, Unknown Date
The IP Multimedia Subsystem (IMS) is the technology that will merge the Internet (packet switching) with the cellular world (circuit switching). It will make Internet technologies, such as the web, email, instant messaging, presence, and videoconferencing, available nearly everywhere. Presence is one of the basic services that is likely to become omnipresent in IMS: it allows a user to be informed about the reachability, availability, and willingness to communicate of another user. Push-to-Talk over Cellular (PoC) is another IMS service, intended to provide rapid communications for business and consumer customers of mobile networks. For PoC to become a truly successful mass-market service in the consumer segment, the only realistic option is a standardized solution providing full interoperability between terminals and operators. Instant Messaging (IM) is the service that allows an IMS user to send content to another user in near real time; it uses the IETF's Message Session Relay Protocol (MSRP) to overcome the congestion control problem. We believe the efficiency of these services, along with mobility management in IMS session establishment, has not been sufficiently investigated.

In this research work, we identify the key issues in improving the existing IMS protocols for better system behaviour. The work centres on three IMS services: (1) the presence service, (2) Push-to-Talk over Cellular, and (3) Instant Messaging, together with (4) the issue of IMS session setup. The existing session establishment scenario of IMS suffers from triangular routing for a certain period of time when an end IMS user or terminal is mobile. In this thesis, the performance of three possible session establishment scenarios in a mobile environment is compared using an analytical model. The model is developed from expressions for cost functions that represent the system delay and overhead involved in session establishment. The other problem areas identified are optimizing the presence service, dimensioning a PoC service, and analysing the service rates of IM relay extensions in IMS. A presence server becomes overloaded when a massive number of IMS terminals join a network to request the presence facility; performance models are developed in this research to mitigate such load on the presence service during heavy traffic. Queuing analyses are provided for different cases in which instant messaging chunks pass through two consecutive relay nodes. Specific factors of the IMS environment, such as blocking probability, stability conditions, and optimized subscription lifetime, have been investigated. We have also elaborated models to dimension a PoC service for service providers with regard to controlling PoC session access, the optimal PoC session timer, path optimization, and the number of allowable simultaneous PoC sessions for a given network grade of service.

In a nutshell, the contributions of this dissertation are: (a) a proposed robust scheduler to improve the performance of the IMS presence service, (b) several derived models to dimension the IMS Push-to-Talk over Cellular service, (c) a new mechanism to reduce the cost of IMS session setup in a mobile environment, and (d) an evaluation of message blocking and stability in the IMS Instant Messaging service using queuing theory. All of these analyses have resulted in recommendations for performance enhancements with optimal resource utilization in the IMS framework.
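Dimensioning questions of the kind the thesis answers for PoC, such as how many simultaneous sessions to allow for a given network grade of service, are classically approached with blocking-probability models. The sketch below uses the textbook Erlang B formula as a generic stand-in; it illustrates the style of calculation only and is not one of the models derived in the thesis.

```python
def erlang_b(servers: int, traffic: float) -> float:
    """Blocking probability for `traffic` Erlangs offered to `servers`
    channels, via the numerically stable recurrence
    B(0) = 1,  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = traffic * b / (n + traffic * b)
    return b

def sessions_needed(traffic: float, grade_of_service: float) -> int:
    """Smallest number of simultaneous PoC sessions such that the
    blocking probability stays below the target grade of service."""
    n = 1
    while erlang_b(n, traffic) > grade_of_service:
        n += 1
    return n

# Example: 20 Erlangs of offered PoC traffic with a 1% blocking target
# needs roughly 30 simultaneous sessions.
print(sessions_needed(20.0, 0.01))
```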

Collaborative filtering approaches for single-domain and cross-domain recommender systems

Parimi, Rohit, January 1900
Doctor of Philosophy / Computing and Information Sciences / Doina Caragea / Increasing amounts of content on the Web mean that users can select from a wide variety of items. The generation of personalized item suggestions (i.e., items that concur with a user's tastes and requirements) has become a crucial functionality for many web applications, as users benefit from being shown only items of potential interest to them. One popular solution for creating personalized item suggestions is recommender systems. Recommender systems address the item recommendation task by utilizing past user preferences for items, captured as either explicit or implicit user feedback. Numerous collaborative filtering (CF) approaches have been proposed in the literature to address the recommendation problem in the single-domain setting, where user preferences from only one domain are used to recommend items. However, increasingly large datasets often prevent experimentation with every approach in order to choose the one that best fits an application domain. The work in this dissertation on the single-domain setting studies two CF algorithms, Adsorption and Matrix Factorization (MF), considered state-of-the-art approaches for implicit feedback, and suggests that characteristics of a domain (e.g., close versus loose connections among users) or of the available data (e.g., density of the feedback matrix) can be useful in selecting the most suitable CF approach for a particular recommendation problem. Furthermore, for Adsorption, a neighborhood-based approach, this work studies several ways to construct user neighborhoods based on similarity functions and on community detection approaches, and suggests that domain and data characteristics can also be useful in selecting the neighborhood approach to use with Adsorption. Finally, motivated by the need to decrease the computational costs of recommendation algorithms, this work studies the effectiveness of using short user histories and suggests that they can successfully replace long user histories for recommendation tasks.

Although most recommender systems use user preferences from only one domain, in many applications user interests span items of various types (e.g., artists and tags). Each recommendation problem (e.g., recommending artists to users or recommending tags to users) can be considered a unique domain, and user preferences from several domains can be used to improve accuracy in one domain, an area of research known as cross-domain recommender systems. The work in this dissertation on cross-domain recommender systems investigates several limitations of existing approaches and proposes three novel approaches (two Adsorption-based and one MF-based) to improve recommendation accuracy in one domain by leveraging knowledge from multiple domains with implicit feedback. The first approach performs aggregation of neighborhoods (WAN) from the source and target domains, and the neighborhoods are used with Adsorption to recommend target items. The second approach performs aggregation of target recommendations (WAR) computed with Adsorption using neighborhoods from the source and target domains. The third approach integrates latent user factors from source domains into the target domain through a regularized latent factor model (CIMF). Experimental results on six target recommendation tasks from two real-world applications suggest that the proposed approaches effectively improve target recommendation accuracy compared to single-domain CF approaches, and successfully utilize varying amounts of user overlap between source and target domains. Furthermore, under the assumption that tuning may not be possible for large recommendation problems, this work proposes an approach to calculate knowledge aggregation weights based on network alignment for the WAN and WAR approaches, and results show the usefulness of the proposed solution. The results also suggest that the WAN and WAR approaches effectively address the cold-start user problem in the target domain.
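As a concrete baseline for the latent factor models discussed above, the sketch below trains a plain regularized matrix factorization on a tiny implicit feedback matrix and ranks unseen items for a user. It is a generic single-domain illustration under simplifying assumptions (SGD on observed entries only, toy hyperparameters); the dissertation's CIMF model additionally couples user factors across source and target domains.

```python
import numpy as np

def factorize(feedback, k=8, steps=200, lr=0.02, reg=0.05, seed=0):
    """Regularized matrix factorization trained by SGD on observed
    (user, item) entries; predicted score = dot(U[u], V[i]).

    A simple baseline: real implicit-feedback models (e.g., weighted
    ALS) also account for the unobserved zeros."""
    rng = np.random.default_rng(seed)
    users, items = feedback.shape
    U = rng.normal(scale=0.1, size=(users, k))
    V = rng.normal(scale=0.1, size=(items, k))
    observed = np.argwhere(feedback > 0)
    for _ in range(steps):
        for u, i in observed:
            err = feedback[u, i] - U[u] @ V[i]
            Uu = U[u].copy()                      # use pre-update value
            U[u] += lr * (err * V[i] - reg * Uu)
            V[i] += lr * (err * Uu - reg * V[i])
    return U, V

# Example: recommend the top unseen item for user 0.
R = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1.]])
U, V = factorize(R)
scores = U[0] @ V.T
scores[R[0] > 0] = -np.inf          # mask items user 0 already has
print(int(np.argmax(scores)))
```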

Extending the battery life of mobile device by computation offloading

Qian, Hao, January 1900
Doctor of Philosophy / Computing and Information Sciences / Daniel A. Andresen / The need for increased performance of mobile devices directly conflicts with the desire for longer battery life. Offloading computation to resourceful servers is an effective method of reducing energy consumption and enhancing performance for mobile applications. Today, most mobile devices have fast wireless links such as 4G and Wi-Fi, making computation offloading a reasonable solution for extending the battery life of mobile devices. Android provides mechanisms for creating mobile applications but lacks a native scheduling system for determining where code should be executed. We present Jade, a system that adds sophisticated energy-aware computation offloading capabilities to Android applications. Jade monitors device and application status and automatically decides where code should be executed, dynamically adjusting the offloading strategy to adapt to workload variation, communication costs, and device status. Jade minimizes the burden on developers of building applications with computation offloading ability by providing an easy-to-use API. Evaluation shows that Jade can reduce the average power consumption of a mobile device by up to 37% while improving application performance.
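The essence of an energy-aware offloading decision is a comparison of two energy estimates: computing locally versus shipping the task's input over the wireless link and idling while the server computes. The sketch below illustrates such a rule; the linear energy model and every constant in it are assumptions made here for illustration, not Jade's actual decision engine or measured values.

```python
def should_offload(cycles: float, data_bytes: float,
                   cpu_power_w: float = 2.0,     # active CPU power (assumed)
                   cpu_speed_hz: float = 1.5e9,  # local CPU speed (assumed)
                   radio_power_w: float = 1.2,   # radio transmit power (assumed)
                   bandwidth_bps: float = 2e6,   # uplink bandwidth (assumed)
                   idle_power_w: float = 0.3,    # idle power while waiting
                   server_time_s: float = 0.0) -> bool:
    """Return True if remote execution is expected to cost less energy.

    E_local  = cpu_power * (cycles / cpu_speed)
    E_remote = radio_power * (8 * data_bytes / bandwidth)
               + idle_power * server_time
    """
    e_local = cpu_power_w * cycles / cpu_speed_hz
    e_remote = (radio_power_w * 8 * data_bytes / bandwidth_bps
                + idle_power_w * server_time_s)
    return e_remote < e_local

# Compute-heavy task with a small payload: offloading pays off
# (4 J locally vs. roughly 0.4 J remotely under these assumptions).
print(should_offload(cycles=3e9, data_bytes=50_000, server_time_s=0.5))
```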

A theory for understanding and quantifying moving target defense

Zhuang, Rui, January 1900
Doctor of Philosophy / Computing and Information Sciences / Scott A. DeLoach / The static nature of cyber systems gives attackers a valuable and asymmetric advantage: time. To eliminate this asymmetric advantage, a new approach, called Moving Target Defense (MTD), has emerged as a potential solution. An MTD system seeks to proactively change system configurations to invalidate the knowledge attackers have learned and to force them to spend more effort locating and re-locating vulnerabilities. While the approach sounds promising, it is so new that there is no standard definition of what an MTD is, of what is meant by diversification and randomization, or of what metrics define the effectiveness of such systems. Moreover, the changing nature of MTD violates two basic assumptions of the conventional attack surface notion: that the attack surface remains unchanged during an attack, and that it is always reachable. Therefore, a new attack surface definition is needed. To address these issues, I propose that a theoretical framework for MTD be defined. The framework should clarify the most basic questions, such as what an MTD system is and what its properties, such as adaptation, diversification, and randomization, mean. It should reveal what is meant by gaining and losing knowledge, and what the different attack types are. To reason about the interactions between the attacker and the MTD system, the framework should define key concepts such as the attack surface, adaptation surface, and engagement surface. On this basis, the framework should allow MTD system designers to decide how to use existing configuration choices and functionality diversification to increase security, and to analyze the effectiveness of adapting various combinations of configuration aspects to thwart different types of attacks. To support analysis, the framework should include an analytical model that designers can use to determine how different parameter settings will impact system security.
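A toy simulation can make the knowledge race described above concrete: the attacker must finish probing and exploiting a configuration aspect before the next adaptation invalidates what was learned. The sketch below is purely illustrative, with assumed timing parameters; it is not an analytical model taken from the thesis.

```python
import random

def attack_success_rate(probe_time=1.0, exploit_time=5.0,
                        adapt_period=8.0, trials=100_000):
    """Fraction of attacks that finish probing plus exploiting a
    configuration aspect before the next adaptation invalidates it.

    Each attack starts at a random point in the adaptation cycle;
    success requires probe_time + exploit_time to fit in the time
    remaining before the configuration changes."""
    wins = 0
    for _ in range(trials):
        time_left = random.uniform(0, adapt_period)  # until next adaptation
        if probe_time + exploit_time <= time_left:
            wins += 1
    return wins / trials

# Faster adaptation shrinks the attacker's usable window toward zero.
for period in (24.0, 12.0, 8.0, 6.0):
    print(period, attack_success_rate(adapt_period=period))
```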

Extension of E(θ) metric for evaluation of reliability

Mondal, Subhajit, January 1900
Master of Science / Department of Computing and Information Sciences / David A. Gustafson / Reliability calculated from running test cases refers to the probability of the software not generating faulty output after the testing process. The metric used to measure this reliability is the E(Θ) value: the concept of E(Θ) gives precise formulae for calculating the probability of failure of software after testing, whether during debugging or in operation. This report aims at extending E(Θ) into the realm of multiple faults spread across multiple sub-domains. This generalization involves the introduction of a new set of formulae for calculating E(Θ) that can account for faults spread over both single and multiple sub-domains in a code. The validity of the formulae is verified by matching the theoretical results against empirical data generated by running a test case simulator. The report further examines the possibility of an upper bound on the derived formulae and its possible ramifications.
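To convey what a post-test failure metric of this kind measures, the sketch below empirically estimates, for the simplest single-fault case, the probability that a program which has passed t random tests still fails on its next input. The uniform prior on fault size and the resulting closed form 1/(t+2) are standard textbook simplifications used here for illustration; they are not the report's E(Θ) formulae, which handle faults spread across multiple sub-domains.

```python
import random

def post_test_failure_rate(t: int, trials: int = 50_000) -> float:
    """Estimate P(next input fails | program passed t random tests).

    Each trial draws a fault of size theta ~ Uniform(0, 1), the
    fraction of inputs that trigger the fault. Only programs that
    pass all t tests are kept; among those, we count how often one
    more random input fails."""
    passed, failed_next = 0, 0
    while passed < trials:
        theta = random.random()
        if all(random.random() >= theta for _ in range(t)):  # t tests pass
            passed += 1
            if random.random() < theta:                      # next input fails
                failed_next += 1
    return failed_next / passed

# Under the uniform prior the closed form is 1 / (t + 2):
# more passed tests means lower expected residual failure probability.
for t in (0, 10, 50):
    print(t, round(post_test_failure_rate(t), 4), round(1 / (t + 2), 4))
```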

Scalable and accurate approaches for program dependence analysis, slicing, and verification of concurrent object oriented programs

Ranganath, Venkatesh Prasad, January 1900
Doctor of Philosophy / Department of Computing and Information Science / John M. Hatcliff / With the advent of multi-core processors and rich language support for concurrency, the paradigm of concurrent programming has arrived; however, the cost of developing and maintaining concurrent programs is still high. Simultaneously, the increasing social ubiquity of computing is shrinking the "time-to-market" factor while demanding stronger correctness requirements. These effects are amplified by the ever-growing size of software systems. Consequently, there is (and will be) a rise in demand for scalable and accurate techniques to enable faster development and maintenance of correct large-scale concurrent software. This dissertation presents a collection of scalable and accurate approaches to tackle this situation. Primarily, the approaches focus on discovering dependences (relations) between various parts of a program and leveraging those dependences to improve maintenance and development tasks via program slicing (comprehension) and verification. Briefly, the proposed approaches are embodied in the following specific contributions: 1. A new trace-based foundation for control dependences. 2. An equivalence-class-based analysis to efficiently and accurately calculate escape information and intra- and inter-thread dependences. 3. A new parametric data-flow-style slicing algorithm with various extensions to uniformly and easily realize and reason about most existing forms of static sequential and concurrent slicing. 4. A new generic notion of property/trace sensitivity to represent and reason about richer forms of context sensitivity. 5. Program-dependence-based partial order reduction techniques to enable efficient and accurate state space exploration in both static and dynamic modes. To keep the approaches simple, they are based on the basic concepts/ideas of the affected techniques (e.g., program slicing is a rooted transitive closure of the dependence relation, as illustrated in the sketch below). As trace-based reasoning is well suited to concurrent systems, it has been explored wherever possible. While providing a rigorous theoretical presentation of these techniques, this effort also validates them by implementing them in a robust tool framework called Indus (available from http://indus.projects.cis.ksu.edu) and by providing experimental results that demonstrate the effectiveness of the techniques on various concurrent applications. Given the current trend towards concurrent programming and the social ubiquity of computing, the approaches proposed in this dissertation provide a foundation for collectively attacking scalability, accuracy, and soundness challenges in current and emerging systems.
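The parenthetical above, that program slicing is a rooted transitive closure of the dependence relation, translates directly into graph code. The sketch below computes a backward slice from precomputed dependence edges; it is a generic illustration of that idea, not code from the Indus framework.

```python
from collections import deque

def backward_slice(depends_on: dict, criterion) -> set:
    """Backward program slice: all statements the slicing criterion
    transitively depends on, i.e. the rooted transitive closure of the
    dependence relation. depends_on[s] lists the statements that s
    data- or control-depends on."""
    sliced, work = {criterion}, deque([criterion])
    while work:
        stmt = work.popleft()
        for dep in depends_on.get(stmt, ()):
            if dep not in sliced:
                sliced.add(dep)
                work.append(dep)
    return sliced

# Toy dependence graph for:  1: x=0   2: y=1   3: x=x+1   4: print(x)
deps = {4: [3], 3: [1], 2: []}
print(sorted(backward_slice(deps, 4)))   # [1, 3, 4]: statement 2 is sliced away
```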

Characterizing traffic-aware overlay topologies: a machine learning approach

McBride, Benjamin David, January 1900
Master of Science / Department of Electrical and Computer Engineering / Caterina Scoglio / Overlay networks are application-layer networks constructed on top of the existing Internet infrastructure. Nodes in an overlay network build logical links toward other nodes to form an overlay topology, and common routing algorithms, such as the link state and distance vector algorithms, are then used to determine how to route data in the overlay network. Previous work has demonstrated that overlay networks can be used to improve routing performance in the Internet, and these quality-of-service improvements make overlay networks attractive for a variety of network applications. Recently, game-theoretic approaches to constructing overlay network topologies have been proposed, in which nodes establish logical links toward other nodes in a decentralized and selfish manner. Despite the selfish behavior, it has been shown that desirable global network properties emerge. These approaches, however, neglect the traffic demand between nodes. In this thesis, a game-theoretic approach to constructing overlay network topologies that considers the traffic demand between nodes is presented. This thesis shows that traffic demand has a significant effect on the topologies formed: nodes with statistically higher traffic demand from others become members of the graph center, while nodes with statistically higher traffic demand toward others establish logical links toward members of the graph center. This thesis also shows that a traffic-demand-aware overlay topology is better suited to transport the required traffic in the overlay network. Unfortunately, the game-theoretic approach is computationally intractable, so approximate or heuristic approaches are required to construct larger overlay networks. In this thesis, a machine learning approach is proposed that characterizes the attributes of neighbor nodes during the construction of the overlay network topology and uses this knowledge and experience to learn a set of human-readable rules. This rule set is then used to decide whether to construct a logical link toward a node. This thesis shows that the machine learning approach results in overlay network topologies similar to those produced by the game-theoretic approach, and that it is tractable and scales to larger networks.
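The selfish objective in such topology games is usually written as a per-node cost: a fixed price alpha for each logical link a node maintains, plus its traffic demand toward every other node weighted by distance in the overlay. The sketch below evaluates that cost for a candidate link; the parameter alpha, the hop-count metric, and the greedy improvement loop described in the comments are standard network-creation-game conventions used here for illustration, not the exact formulation in this thesis.

```python
import networkx as nx

def node_cost(g: nx.Graph, u, demand: dict, alpha: float = 2.0) -> float:
    """Selfish cost of node u in a traffic-aware network creation game:
    alpha per link u maintains, plus demand-weighted hop distances.

    demand[(u, v)] = traffic u sends toward v (0 if absent)."""
    dist = nx.single_source_shortest_path_length(g, u)
    link_cost = alpha * g.degree(u)
    transport = sum(demand.get((u, v), 0.0) * d
                    for v, d in dist.items() if v != u)
    return link_cost + transport

# A node greedily adds the link that lowers its own cost the most;
# repeating this over all nodes until no move helps yields an equilibrium.
g = nx.path_graph(4)                       # overlay path 0-1-2-3
demand = {(0, 3): 5.0}                     # node 0 sends heavy traffic to 3
before = node_cost(g, 0, demand)
g.add_edge(0, 3)                           # candidate selfish move
print(before, node_cost(g, 0, demand))     # 17.0 vs 9.0: the link pays off
```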
