201. Corporate Wireless IP Telephony. García Hijes, Raúl, January 2005.
IP telephony is defined as the transport of telephony calls over an IP network. IP telephony exploits the integration of voice and data networks. However, enterprises are still reluctant to deploy IP telephony despite the potential increase in productivity and reduction of costs. The principal concerns are: can IP telephony provide the same level of performance in terms of security, reliability, and scalability as traditional telephony? If so, are its proclaimed benefits, such as flexibility and mobility, cost-effective? The aim of this thesis is to analyze how to deploy IP telephony in large corporations while providing the necessary security and facilitating mobility. Throughout the different parts of this thesis, we will analyze the applicable technologies, along with their integration and management. We will focus on the essential enterprise requirements of scalability, reliability, flexibility, high availability, and cost-effectiveness. The massive changes brought about by the deregulation of telecommunications in nearly all countries, the increasingly global nature of business, and the progressively more affordable and powerful technology underlying information and communication systems have led to increasing adoption of IP telephony by residential and commercial users. This thesis will examine these technologies in the context of a very large distributed corporation. / Exchange student from Centro Politecnico Superior (University of Zaragoza, Spain).
202. Context Sensitive Interaction Interoperability for Distributed Virtual Environments. Ahmed, Hussein Mohammed, 23 June 2010.
The number and types of input devices and related interaction techniques are growing rapidly. Innovative input devices such as game controllers are no longer used just for games, proprietary consoles, and specific applications; they are also used in many distributed virtual environments, especially the so-called serious virtual environments.
In this dissertation a distributed, service-based framework is presented to offer context-sensitive interaction interoperability that can support mapping between input devices and suitable application tasks, given the attributes of devices, applications, users, and interaction techniques and the current user context, without negatively impacting the performance of large-scale distributed environments.
The mapping is dynamic and context sensitive, taking into account the context dimensions of both the virtual and real planes. Which device or device component to use, and how and when to use them, depend on the application, the task performed, the user, and the overall context, including location and the presence of other users. Another use of interaction interoperability is as a testbed for input devices and interaction techniques, making it possible to test reality-based interfaces and interaction techniques with legacy applications.
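The dissertation describes this mapping in service terms; purely as an illustrative sketch (all device names, task names, and weights below are hypothetical, not taken from the dissertation), a context-sensitive mapping can be thought of as scoring each device against a task and discounting that score by the current context:

```python
# Hypothetical sketch of context-sensitive device-to-task mapping.
# All names and weights are illustrative assumptions, not the
# dissertation's actual framework.
from dataclasses import dataclass

@dataclass
class Context:
    location: str          # e.g., "lab", "field"
    colocated_users: int   # number of other users present

# Static suitability of each (device, task) pair, on [0, 1].
SUITABILITY = {
    ("gamepad", "navigate"): 0.9,
    ("gamepad", "annotate"): 0.3,
    ("tablet", "navigate"): 0.5,
    ("tablet", "annotate"): 0.9,
}

def context_weight(device: str, ctx: Context) -> float:
    """Down-weight devices that fit the current context poorly."""
    w = 1.0
    if ctx.location == "field" and device == "tablet":
        w *= 0.6          # tablets assumed awkward outdoors
    if ctx.colocated_users > 3 and device == "gamepad":
        w *= 0.8          # crowded spaces assumed to penalize controllers
    return w

def best_device(task: str, devices: list[str], ctx: Context) -> str:
    """Pick the device maximizing static suitability x context weight."""
    return max(devices,
               key=lambda d: SUITABILITY.get((d, task), 0.0) * context_weight(d, ctx))

print(best_device("annotate", ["gamepad", "tablet"], Context("lab", 1)))  # tablet
```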
The dissertation provides a description of how the framework provides these affordances and a discussion of the motivations, goals, and addressed challenges. Several proof-of-concept implementations were developed, and an evaluation of the framework's performance (in terms of system characteristics) demonstrates viability, scalability, and negligible delays. / Ph. D.
203. Analysis and Modeling of World Wide Web Traffic. Abdulla, Ghaleb, 30 April 1998.
This dissertation deals with the monitoring, collection, analysis, and modeling of World Wide Web (WWW) traffic and client interactions. The rapid growth of WWW usage has not been accompanied by an overall understanding of models of information resources and their deployment strategies. Consequently, the current Web architecture often faces performance and reliability problems. Scalability, latency, bandwidth, and disconnected operations are some of the important issues that should be considered when attempting to adjust for the growth in Web usage. The WWW Consortium launched an effort to design a new protocol that will be able to support future demands. Before doing that, however, we need to characterize current users' interactions with the WWW and understand how it is being used.
We focus on proxies since they provide a good medium for caching, filtering information, payment methods, and copyright management. We collected proxy data from our environment over a period of more than two years. We also collected data from other sources such as schools, information service providers, and commercial sites. Sampling times range from days to years. We analyzed the collected data looking for important characteristics that can help in designing a better HTTP protocol. We developed a modeling approach that considers Web traffic characteristics such as self-similarity and long-range dependency. We developed an algorithm to characterize users' sessions. Finally, we developed a high-level Web traffic model suitable for sensitivity analysis.
As a result of this work we develop statistical models of parameters such as arrival times, file sizes, file types, and locality of reference. We describe an approach to model long-range dependent Web traffic, and we characterize the activities of users accessing a digital library courseware server or Web search tools.
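As a hedged illustration of the self-similarity analysis mentioned above (the exact estimators used in the dissertation are not reproduced in this listing), the aggregated-variance method is one standard way to estimate the Hurst exponent H; H well above 0.5 indicates long-range dependence of the kind reported for Web traffic:

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(1, 2, 4, 8, 16, 32, 64)):
    """Estimate the Hurst exponent via the aggregated-variance method:
    for long-range dependent traffic, Var(X^(m)) ~ m^(2H-2)."""
    x = np.asarray(x, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            continue
        agg = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(agg.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)   # slope = 2H - 2
    return 1.0 + slope / 2.0

# Example: i.i.d. noise should give H close to 0.5 (no long-range
# dependence); measured Web arrival counts typically give H well above 0.5.
rng = np.random.default_rng(0)
print(hurst_aggvar(rng.normal(size=100_000)))   # ~0.5
```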
Temporal and spatial locality of reference within the examined user communities is high, so caching can be an effective tool to help reduce network traffic and to help solve the scalability problem. We recommend utilizing our findings to promote a smart distribution or push model to cache documents when there is a likelihood of repeat accesses. / Ph. D.
204. Exploring the Boundaries of Operating System in the Era of Ultra-fast Storage Technologies. Ramanathan, Madhava Krishnan, 24 May 2023.
Storage hardware is evolving at a rapid pace to keep up with the exponential rise of data consumption. Recently, ultra-fast storage technologies such as nanosecond-scale byte-addressable Non-Volatile Memory (NVM) and microsecond-scale SSDs are being commercialized. However, the OS storage stack has not been evolving fast enough to keep up with this new ultra-fast storage hardware. Hence, the latency due to user-kernel context switches caused by system calls and hardware interrupts is no longer negligible, as was presumed in the era of slower, high-latency hard disks. Further, the OS storage stack is not designed with multi-core scalability in mind; so with CPU core counts continuously increasing, the OS storage stack, particularly the Virtual Filesystem (VFS) and filesystem layers, is increasingly becoming a scalability bottleneck.
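A rough back-of-envelope calculation illustrates the point; the latency figures below are generic assumptions for each device class, not measurements from this thesis:

```python
# Back-of-envelope illustration (assumed latencies, not measurements from
# this thesis): with ~10 ms disks, a ~1 us user/kernel crossing was noise;
# with ~100 ns byte-addressable NVM it can dominate the access path.
syscall_overhead_ns = 1_000      # assumed user/kernel crossing cost
for device, latency_ns in [("HDD", 10_000_000), ("NVMe SSD", 10_000), ("NVM", 100)]:
    overhead = syscall_overhead_ns / (syscall_overhead_ns + latency_ns)
    print(f"{device:8s}: kernel overhead = {overhead:.1%} of total access time")
```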
Applications can bypass the kernel completely (a kernel-bypass storage stack) to prevent the storage stack from becoming a performance and scalability bottleneck. But this comes at the cost of programmability, isolation, safety, and reliability. Moreover, scalability bottlenecks in the filesystem cannot be addressed by simply moving the filesystem to userspace. Overall, while designing a kernel-bypass storage stack looks obvious and promising, there are several critical challenges in the aspects of programmability, performance, scalability, safety, and reliability that need to be addressed to bypass the traditional OS storage stack.
This thesis proposes a series of kernel-bypass storage techniques designed particularly for fast memory-centric storage. First, this thesis proposes a scalable persistent transactional memory (PTM) programming model to address the programmability and multi-core scalability challenges. Next, this thesis proposes techniques to make the PTM memory-safe and fault-tolerant. Further, this thesis also proposes a kernel-bypass programming framework to port legacy DRAM-based in-memory database applications to run on persistent memory-centric storage. Finally, this thesis explores an application-driven approach to address the CPU-side and storage-side bottlenecks in deep learning model training by proposing a kernel-bypass programming framework that moves compute closer to the storage. Overall, the techniques proposed in this thesis form a strong foundation for applications to adopt and exploit the emerging ultra-fast storage technologies without being bottlenecked by the traditional OS storage stack. / Doctor of Philosophy / Storage hardware is evolving at a rapid pace to keep up with the exponential rise of data consumption. Recently, ultra-fast storage technologies such as nanosecond-scale byte-addressable Non-Volatile Memory (NVM) and microsecond-scale SSDs are being commercialized. The Operating System (OS) has been the gateway for applications to access and manage storage hardware. Unfortunately, the OS storage stack, designed for slower storage technologies (e.g., hard disk drives), becomes a performance, scalability, and programmability bottleneck for the emerging ultra-fast storage technologies. This has created a large gap between storage hardware advancements and the system software support for such emerging storage technologies. Consequently, applications are constrained by the limitations of the OS storage stack when they intend to explore these emerging storage technologies.
In this thesis, we propose a series of novel kernel-bypass storage stack designs to address the performance, scalability, and programmability limitations of the conventional OS storage stack. The kernel-bypass storage stack proposed in this thesis is carefully designed with ultra-fast modern storage hardware in mind. Application developers can leverage the kernel-bypass techniques proposed in this thesis to develop new applications, or to port legacy applications to use the emerging ultra-fast storage technologies, without being constrained by the limitations of the conventional OS storage stack.
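To make the PTM idea from the technical abstract concrete, the toy sketch below illustrates undo-log transactions, one classic way to get crash-consistent in-place updates. It is a conceptual illustration only, not the thesis's actual PTM design; a real persistent-memory implementation would additionally need cache-line flushes and fences to order writes to NVM.

```python
# Toy undo-log transaction illustrating the crash-consistency idea behind
# a PTM programming model. Conceptual sketch only; not this thesis's PTM.
class TxStore:
    def __init__(self):
        self.data = {}       # stands in for persistent memory
        self.undo_log = {}   # old values, persisted before in-place updates

    def begin(self):
        self.undo_log = {}

    def write(self, key, value):
        if key not in self.undo_log:              # log the old value once
            self.undo_log[key] = self.data.get(key)
        self.data[key] = value                    # in-place update

    def commit(self):
        self.undo_log = {}                        # log truncation = commit point

    def abort(self):                              # also the crash-recovery path
        for key, old in self.undo_log.items():
            if old is None:
                self.data.pop(key, None)          # key did not exist before
            else:
                self.data[key] = old
        self.undo_log = {}

store = TxStore()
store.begin(); store.write("balance", 100); store.commit()
store.begin(); store.write("balance", 250); store.abort()   # simulated crash
print(store.data["balance"])   # 100: the uncommitted update was rolled back
```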
205. On the Scalability of Ad Hoc Dynamic Spectrum Access Networks. Ahsan, Umair, 10 November 2010.
Dynamic Spectrum Access (DSA) allows wireless users to access a wide range of spectrum, which increases both a node's ability to communicate with its neighbors and spectral efficiency, through opportunistic access to licensed bands. Our study focuses on the scalability of network performance, which we define in terms of network transport capacity and end-to-end throughput per node, as network density increases. We develop an analytical procedure for performance evaluation of ad hoc DSA networks using Markov models, and analyze the performance of a DSA network with one transceiver per node and a dedicated control channel. We also develop and integrate a detailed model for energy detection in Poisson networks with sensing. We observe that the network capacity scales sub-linearly with the number of DSA users and that the end-to-end throughput diminishes when the number of data channels is fixed. Nevertheless, we show that DSA can improve network performance by allowing nodes to access more spectrum bands while providing a mechanism for spectrum sharing and maintaining network-wide connectivity. We also observe that the percentage of relative overhead at the medium access layer does not scale with the number of users. Lastly, we examine the performance impact of primary user density, detection accuracy, and the number of available data channels. The results help to answer the fundamental question of the scaling behavior of network capacity, end-to-end throughput, and network overhead in ad hoc DSA networks. / Master of Science
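As a minimal illustration of the Markov-model approach (the rates and channel counts below are assumed values, not the thesis's parameters), a single licensed channel can be modeled as a two-state busy/idle chain, and fixing the number of data channels while growing the user population immediately suggests the diminishing per-node share reported above:

```python
# Illustrative two-state (busy/idle) primary-user Markov model for one
# licensed channel; all rates are assumed values, not the thesis's.
lam = 0.2   # idle -> busy transition rate (primary user arrives)
mu = 0.8    # busy -> idle transition rate (primary user departs)
p_idle = mu / (lam + mu)          # stationary probability the channel is free

channels = 10                      # fixed number of data channels
for nodes in (10, 100, 1000):
    expected_free = channels * p_idle
    share = expected_free / nodes  # naive per-node share of free channels
    print(f"{nodes:5d} nodes: ~{expected_free:.1f} free channels, "
          f"{share:.3f} channels/node")
```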
206. Representational Capabilities of Feed-forward and Sequential Neural Architectures. Sanford, Clayton Hendrick, January 2024.
Despite the widespread empirical success of deep neural networks over the past decade, a comprehensive understanding of their mathematical properties remains elusive, which limits the abilities of practitioners to train neural networks in a principled manner. This dissertation provides a representational characterization of a variety of neural network architectures, including fully-connected feed-forward networks and sequential models like transformers.
The representational capabilities of neural networks are most famously characterized by the universal approximation theorem, which states that sufficiently large neural networks can closely approximate any well-behaved target function. However, the universal approximation theorem applies exclusively to two-layer neural networks of unbounded size and fails to capture the comparative strengths and weaknesses of different architectures.
The thesis addresses these limitations by quantifying the representational consequences of random features, weight regularization, and model depth on feed-forward architectures. It further investigates and contrasts the expressive powers of transformers and other sequential neural architectures. Taken together, these results apply a wide range of theoretical tools—including approximation theory, discrete dynamical systems, and communication complexity—to prove rigorous separations between different neural architectures and scaling regimes.
207. Fundamentals of Quantum Communication Networks: Scalability, Efficiency, and Distributed Quantum Machine Learning. Chehimi, Mahdi, 09 August 2024.
The future quantum Internet (QI) will transform today's communication networks and user experiences by providing unparalleled levels of security and superior quantum computational power, along with enhanced sensing accuracy and data processing capabilities. These features will be enabled through applications like quantum key distribution (QKD) and quantum machine learning (QML). Towards enabling these applications, the QI requires the development of global quantum communication networks (QCNs) that enable the distribution of entangled resources between distant nodes. This dissertation addresses two major challenges facing QCNs: the scalability and coverage of their architectures, and the efficiency of their operations. Additionally, the dissertation studies the near-term deployment of QML applications over today's noisy quantum devices, essential for realizing the future QI. In doing so, the scalability and efficiency challenges facing the different QCN elements are explored, and practical noise-aware and physics-informed approaches are developed to optimize QCN performance given heterogeneous, quantum-application-specific quality-of-service (QoS) user requirements on entanglement rate and fidelity.
Towards achieving this goal, this dissertation makes a number of key contributions. First, the scaling limits of quantum repeaters are investigated, and a holistic optimization framework is proposed to optimize the geographical coverage of quantum repeater networks (QRNs), including the number of quantum repeaters, their placement and separating distances, quantum memory management, and the scheduling of quantum operations. Then, a novel framework is proposed to address the scalability challenge of free-space optical (FSO) quantum channels in the presence of blockages and environmental effects. In particular, the use of a reconfigurable intelligent surface (RIS) in QCNs is proposed to maintain a line-of-sight (LoS) connection between quantum nodes separated by blockages, and a novel analytical model of quantum noise and end-to-end (e2e) fidelity in such QCNs is developed. The results show enhanced entangled-state fidelity and entanglement distribution rates, improving user fairness by around 40% compared to benchmark approaches.

The dissertation then investigates the efficiency challenges in a practical use case of QCNs with a single quantum switch (QS). In particular, the average effects of quantum memory noise are analyzed analytically, and their impact on the allocation of entanglement generation sources and the minimization of entanglement distribution delay, while optimizing QS entanglement distillation operations, is investigated. The results show an enhanced e2e fidelity and a minimized e2e entanglement distribution delay compared to existing approaches, and a unique capability of satisfying all users' QoS requirements. This QCN architecture is then scaled up with multiple QSs serving heterogeneous user requests, as needed for scalable quantum applications over the QI. Here, a novel, efficient matching-theory-based framework is proposed for optimizing the request-QS association in such QCNs while managing quantum memories and optimizing QS operations.

Finally, after scaling QCNs and ensuring their efficient operation, the dissertation proposes novel distributed QML frameworks that can leverage both classical networks and QCNs to enable collaborative learning between today's noisy quantum devices. In particular, the first quantum federated learning (QFL) frameworks incorporating different quantum neural networks and leveraging quantum and classical data are developed, and the first publicly available federated quantum dataset is introduced. The results show enhanced performance and reductions in the communication overhead and in the number of training epochs needed until convergence, compared to classical counterpart frameworks. Overall, this dissertation develops robust frameworks and algorithms that advance the theoretical understanding of QCNs and offers practical insights for the future development of the QI and its applications. The dissertation concludes by analyzing some open challenges facing QCNs and proposing a vision for physics-informed QCNs, along with important future directions. / Doctor of Philosophy / In today's digital age, we are generating vast amounts of data through videos, live streams, and various online activities. This explosion of data brings not only incredible opportunities for innovation but also heightened security concerns. The current Internet infrastructure struggles to keep up with the demand for speed and security.
In this regard, the quantum Internet (QI) emerges as a revolutionary technology poised to make communication and data sharing faster and more secure than ever before. The QI requires the development of quantum communication networks (QCNs) that will be seamlessly integrated with the existing communication systems that form today's Internet. This way, the QI enables ultra-secure communication and advanced computing applications that can transform various sectors, from finance to healthcare. However, building such global QCNs requires overcoming significant challenges, including the sensitive nature and limitations of quantum devices. Accordingly, the goal of this dissertation is to develop scalable and efficient QCNs that overcome the challenges facing the different QCN elements and enable wide coverage and robust performance towards realizing the QI at a global scale.
Meanwhile, machine learning (ML) is driving significant advancements and transforming industries in today's world. Quantum technologies are anticipated to make a breakthrough in ML through quantum machine learning (QML) models that can handle today's large and complex data. However, quantum computers are still limited in scale and efficiency, often being noisy and unreliable. Throughout this dissertation, these limitations of QML are addressed by developing frameworks that allow multiple quantum computers to work together collaboratively in a distributed manner over classical networks and QCNs. By leveraging distributed QML, it is possible to achieve remarkable advancements in privacy and data utilization. For instance, distributed QML can enhance navigation systems by providing more accurate and secure route planning, or revolutionize healthcare by enabling secure and efficient analysis of medical data. In summary, this dissertation addresses the critical challenges of building scalable and efficient QCNs to support the QI and develops distributed QML frameworks to enable near-term utilization of QML in transformative applications. By doing so, it paves the way for a future where quantum technology is integral to our daily lives, enhancing security, efficiency, and innovation across various domains.
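As a rough illustration of why repeater placement matters for coverage, the sketch below uses the standard exponential photon-loss model for fiber with an assumed attenuation length of about 22 km; the dissertation's own models also cover FSO/RIS links and memory noise, which this sketch ignores:

```python
import math

# Illustrative photon-survival arithmetic for entanglement distribution
# over fiber (assumed attenuation length ~22 km for telecom fiber); not
# the dissertation's model.
L_att = 22.0   # km, assumed fiber attenuation length

def direct_success(distance_km: float) -> float:
    """Probability a photon survives a direct fiber link."""
    return math.exp(-distance_km / L_att)

for d in (50, 100, 200):
    p_direct = direct_success(d)
    p_half = direct_success(d / 2)   # one repeater at the midpoint:
    # each half-link can be retried independently, so the expected number
    # of attempts per half-link is 1/p_half vs 1/p_direct for the full span.
    print(f"{d:3d} km: direct p={p_direct:.2e}, half-link p={p_half:.2e} "
          f"(expected attempts {1/p_half:.0f} vs {1/p_direct:.0f})")
```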
208. Scalable Estimation on Linear and Nonlinear Regression Models via Decentralized Processing: Adaptive LMS Filter and Gaussian Process Regression. Nakai, Ayano, 24 November 2021.
Kyoto University / Doctorate by coursework (new system) / Doctor of Informatics / Degree No. Kou 23588 / Informatics Doctorate No. 782 / 新制||情||133 (University Library) / Department of Systems Science, Graduate School of Informatics, Kyoto University / (Examination committee) Professor Toshiyuki Tanaka, Professor Hidetoshi Shimodaira, Associate Professor Kazunori Sakurama / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
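Since the abstract itself is not included in this listing, the following is only a generic textbook sketch in the spirit of the title: combine-then-adapt diffusion LMS, in which each node averages its neighbors' estimates and then takes a local LMS step. None of the parameter choices below are from the thesis:

```python
import numpy as np

# Generic combine-then-adapt diffusion-LMS sketch for decentralized linear
# regression; illustrative only, not necessarily the thesis's algorithm.
rng = np.random.default_rng(1)
d, n_nodes, mu = 4, 5, 0.05
w_true = rng.normal(size=d)
# Doubly stochastic combination weights: 0.6 self + 0.1 to each neighbor
# (rows sum to 0.6 + 4 * 0.1 = 1.0).
A = np.eye(n_nodes) * 0.6 + (np.ones((n_nodes, n_nodes)) - np.eye(n_nodes)) * 0.1

W = np.zeros((n_nodes, d))                  # each node's local estimate
for _ in range(2000):
    W = A @ W                                # combine: average neighbors
    for k in range(n_nodes):                 # adapt: local LMS step
        x = rng.normal(size=d)
        y = x @ w_true + 0.1 * rng.normal()  # noisy local measurement
        err = y - x @ W[k]
        W[k] += mu * err * x
print(np.linalg.norm(W.mean(axis=0) - w_true))   # small after convergence
```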
209. Trust-based Service Management of Internet of Things Systems and Its Applications. Guo, Jia, 18 April 2018.
A future Internet of Things (IoT) system will consist of a huge quantity of heterogeneous IoT devices, each capable of providing services upon request. It is of utmost importance for an IoT device to know whether another IoT device is trustworthy when requesting it to provide a service. In this dissertation research, we develop trust-based service management techniques applicable to distributed, centralized, and hybrid IoT environments.
For distributed IoT systems, we develop a trust protocol called Adaptive IoT Trust. The novelty lies in the use of distributed collaborative filtering to select trust feedback from owners of IoT nodes sharing similar social interests. We develop a novel adaptive filtering technique to adjust trust protocol parameters dynamically to minimize trust estimation bias and maximize application performance. Our Adaptive IoT Trust protocol is scalable to large IoT systems in terms of storage and computational costs. We perform a comparative analysis of our Adaptive IoT Trust protocol against contemporary IoT trust protocols to demonstrate its effectiveness.

For centralized or hybrid cloud-based IoT systems, we propose the notion of Trust as a Service (TaaS), allowing an IoT device to query the service trustworthiness of another IoT device and also report its service experiences to the cloud. TaaS preserves the notion that trust is subjective despite the fact that trust computation is performed by the cloud. We use social similarity for filtering recommendations and a dynamic weighted sum to combine self-observations and recommendations, minimizing trust bias and convergence time against opportunistic service and false recommendation attacks. For large-scale IoT cloud systems, we develop a scalable trust management protocol called IoT-TaaS to realize TaaS.

For hybrid IoT systems, we develop a new 3-layer hierarchical cloud structure for integrated mobility, service, and trust management. This architecture supports scalability, reconfigurability, fault tolerance, and resiliency against cloud node failure and network disconnection. We develop a trust protocol called IoT-HiTrust leveraging this 3-layer hierarchical structure to realize TaaS.
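As an illustrative sketch of the weighted-sum combination described above (the weight alpha, the similarity threshold, and the exact filtering rule are assumptions for illustration, not the dissertation's calibrated values):

```python
# Sketch of a weighted-sum trust update: combine a node's own service
# observations with similarity-filtered recommendations. alpha and the
# similarity threshold are illustrative assumptions.
def update_trust(self_obs: float,
                 recommendations: list[tuple[float, float]],
                 alpha: float = 0.7,
                 sim_threshold: float = 0.5) -> float:
    """self_obs: trust from direct observations, in [0, 1].
    recommendations: (recommended_trust, social_similarity) pairs.
    Similarity acts both as a filter and as the recommendation weight,
    which dampens false recommendations from dissimilar nodes."""
    trusted = [(t, s) for t, s in recommendations if s >= sim_threshold]
    if not trusted:
        return self_obs
    rec = sum(t * s for t, s in trusted) / sum(s for _, s in trusted)
    return alpha * self_obs + (1 - alpha) * rec

# A bad-mouthing recommendation from a dissimilar node (similarity 0.2)
# is filtered out entirely:
print(update_trust(0.9, [(0.85, 0.8), (0.1, 0.2)]))   # 0.885
```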
We validate the trust-based IoT service management techniques developed in this dissertation with real-world IoT applications, including smart city air pollution detection, augmented map travel assistance, and travel planning, and demonstrate that they outperform contemporary non-trusted and trust-based IoT service management solutions. / Ph. D.
We have developed a distributed trust protocol called Adaptive IoT Trust for distributed IoT applications, a centralized trust protocol called IoT-TaaS for centralized IoT applications with cloud access, and a hierarchical trust management protocol called IoT-HiTrust for hybrid IoT applications. We have verified that the desirable properties, including solution quality, accuracy, convergence, resiliency, and scalability, have been achieved.
210. Towards a Polyalgorithm for Land Use and Land Cover Change Detection. Saxena, Rishu, 23 February 2018.
Earth observation satellites (EOS) such as Landsat provide image datasets that can be immensely useful in numerous application domains. One way of analyzing satellite images for land use and land cover change (LULCC) is time series analysis (TSA). Several algorithms for time series analysis have been proposed by various groups in remote sensing; more algorithms (that can be adapted) are available in the general time series literature. However, in spite of an abundance of algorithms, the choice of algorithm to be used for analyzing an image stack is presently an open question. A concurrent issue is the prohibitive size of Landsat datasets, currently of the order of petabytes and growing. This makes them computationally unwieldy, both in storage and processing. An EOS image stack typically consists of multiple images of a fixed area on the Earth's surface (same latitudes and longitudes) taken at different time points. Experiments on multicore servers indicate that carrying out meaningful time series analysis on one such interannual, multitemporal stack with existing state-of-the-art codes can take several days.
This work proposes using multiple algorithms to analyze a given image stack in a polyalgorithmic framework. A polyalgorithm combines several basic algorithms, each meant to solve the same problem, producing a strategy that unites the strengths and circumvents the weaknesses of the constituent algorithms. The foundation of the proposed TSA-based polyalgorithm is laid using three algorithms (LandTrendR, EWMACD, and BFAST). These algorithms are described precisely in mathematical terms, and are chosen to be fundamentally distinct from each other in design and in the phenomena they capture. An analysis of results representing success, failure, and parameter sensitivity for each algorithm is presented. Scalability issues, important for real simulations, are also discussed, along with scalable implementations and speedup results. For a given pixel, the Hausdorff distance is used to measure the distance between the change times (breakpoints) obtained from two different algorithms. TimeSync validation data, a dataset based on human interpretation of Landsat time series in concert with historical aerial photography, is used for validation. The polyalgorithm yields more accurate results than EWMACD and LandTrendR alone, but counterintuitively not better than BFAST alone. This nascent work will be directly useful in land use and land cover change studies, of interest to terrestrial science research, especially regarding anthropogenic impacts on the environment, and in much broader applications such as health monitoring and urban transportation. / M. S. / Numerous manmade satellites circling the Earth regularly take pictures (images) of the Earth's surface from above. These images naturally provide information regarding the land cover of any given piece of land at the moment of capture (e.g., whether the land area in the picture is covered with forests or with agriculture or housing). Therefore, for a fixed land area, if a person looks at a chronologically arranged series of images, any significant changes in land use can be identified. Identifying such changes is of critical importance, especially in this era where deforestation, urbanization, and global warming are major concerns.
The goal of this thesis is to investigate the design of methodologies (algorithms) that can efficiently and accurately use satellite images to answer questions regarding land cover trends and change. Experience shows that state-of-the-art methodologies produce great results for the region they were originally designed on, but their performance on other regions is unpredictable. In this work, therefore, a 'polyalgorithm' is proposed. A 'polyalgorithm' utilizes multiple simple methodologies and strategically combines them so that the outcome is better than the individual components. In this introductory work, three component methodologies are utilized; each is capable of capturing phenomena different from the other two. A mathematical formulation of each component methodology is presented. An initial strategy for combining the three component algorithms is proposed. The outcomes of each component methodology, as well as the polyalgorithm, are tested on human-interpreted data. The strengths and limitations of each methodology are also discussed. The efficiency of the codes used for implementing the polyalgorithm is also discussed; this is important because the satellite data that needs to be processed is known to be huge (already petabyte-sized and growing). This nascent work will be directly useful especially in understanding the impact of human activities on the environment. It will also be useful in other applications such as health monitoring and urban transportation. / M. S.
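As a concrete illustration of the breakpoint comparison described in the technical abstract above, the following minimal sketch computes the Hausdorff distance between the change times reported by two detectors for one pixel; the example breakpoint values are made up for illustration:

```python
# Hausdorff distance between breakpoint sets from two change detectors,
# as used above to compare change times per pixel. The example values
# below are illustrative, not from the thesis's experiments.
def hausdorff(a: list[float], b: list[float]) -> float:
    """Symmetric Hausdorff distance between two non-empty sets of
    breakpoint times (in the same units, e.g., years)."""
    d_ab = max(min(abs(x - y) for y in b) for x in a)
    d_ba = max(min(abs(x - y) for y in a) for x in b)
    return max(d_ab, d_ba)

landtrendr = [1995.0, 2004.0]
ewmacd = [1995.5, 2004.5, 2010.0]
print(hausdorff(landtrendr, ewmacd))   # 6.0: driven by the unmatched 2010 break
```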