1

IMPROVING MICROSERVICES OBSERVABILITY IN CLOUD-NATIVE INFRASTRUCTURE USING EBPF

Bhavye Sharma (15345346) 26 April 2023
<p>Microservices have emerged as a popular pattern for developing large-scale applications in cloud environments for their flexibility, scalability, and agility benefits. However, microservices make management more complex due to their scale, multiple languages, and distributed nature. Orchestration and automation tools like Kubernetes help deploy microservices running simultaneously, but it can be difficult for an operator to understand their behaviors, interdependencies, and interactions. In such a complex and dynamic environment, performance problems (e.g., slow application responses and high resource usage) require significant human effort spent on diagnosis and recovery. Moreover, manual diagnosis of cloud microservices tends to be tedious, time-consuming, and impractical. Effective and automated performance analysis and anomaly detection require an observable system, which means an application's internal state can be inferred by observing and tracking metrics, traces, and logs. Traditional application performance monitoring (APM) uses libraries and SDKs to improve application monitoring and tracing but incurs the additional overhead of rewriting, recompiling, and redeploying the applications' code base. Therefore, there is a critical need for a standardized, automated microservices observability solution that does not require rewriting or redeploying the application to keep up with the agility of microservices.</p> <p><br></p> <p>This thesis studies observability for microservices and implements an automated Extended Berkeley Packet Filter (eBPF) based observability solution. eBPF is a Linux feature that allows us to write extensions to the Linux kernel for security and observability use cases. eBPF does not require modifying the application layer or instrumenting the individual microservices. Instead, it instruments the kernel-level API calls, which are common across all hosts in the cluster. eBPF programs provide observability information from the lowest-level system calls and can export data without additional performance overhead. The Prometheus time-series database is leveraged to store all the captured metrics and traces for analysis. With the help of our tool, a DevOps engineer can easily identify abnormal behavior of microservices and enforce appropriate countermeasures. Using Chaos Mesh, we inject anomalies at the network and host layers, which the proposed solution can detect and trace to their root causes. The Chameleon cloud testbed is used to deploy our solution and test its capabilities and limitations.</p>
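To illustrate the kind of analysis such exported metrics enable, here is a minimal Python sketch that flags a service whose recent latency drifts from its own baseline. The function name and threshold are illustrative assumptions, not the thesis's actual implementation.

```python
import statistics

def is_anomalous(baseline, recent, threshold=3.0):
    """Flag a service whose recent mean latency drifts more than
    `threshold` standard deviations from its baseline window -- a toy
    stand-in for analyzing the per-service latency metrics an eBPF
    probe could export to a Prometheus time-series database."""
    mu = statistics.fmean(baseline)
    sigma = statistics.pstdev(baseline) or 1e-9  # avoid division by zero
    return abs(statistics.fmean(recent) - mu) / sigma > threshold
```

For example, a service with a steady ~10 ms baseline whose recent requests jump to ~31 ms would be flagged, while ordinary jitter would not.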
2

Smart Security System Based on Edge Computing and Face Recognition

Heejae Han (9226565) 27 April 2023
<p>Physical security is one of the most basic human needs. People care about it for various reasons: for the safety and security of personnel, to protect private assets, to prevent crime, and so forth. With the recent proliferation of AI, various smart physical security systems are being introduced to the world. Many researchers and engineers are working on developing AI-driven physical security systems that can identify potential security threats by monitoring and analyzing data collected from various sensors. One of the most popular ways to detect unauthorized entry to a restricted space is face recognition. With a collected stream of images and a proper algorithm, security systems can recognize faces detected in the images and send an alert when unauthorized faces are recognized. In recent years, there has been active research and development on neural networks for face recognition; FaceNet, for example, is one of the more advanced algorithms. However, not much work has been done to showcase what kind of end-to-end system architecture is effective for running heavyweight computational loads such as neural network inference. Thus, this study explores different hardware options that can be used in security systems powered by a state-of-the-art face recognition algorithm and proposes that an edge-computing-based approach can significantly reduce overall system latency and enhance system responsiveness. To analyze the pros and cons of the proposed system, this study presents two different end-to-end system architectures. The first is an edge-computing-based system that performs most computational tasks at the edge node, and the other is a traditional application-server-based system that performs core computational tasks at the application server. Both systems adopt domain-specific hardware, Tensor Processing Units, to accelerate neural network inference. This paper walks through the implementation details of each system and explores its effectiveness, providing a performance analysis of each system with regard to accuracy and latency and outlining the pros and cons of each.</p>
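As a brief sketch of the verification step such a system performs, FaceNet-style recognition typically compares embedding vectors by their L2 distance against a tuned threshold. The embeddings and the 1.1 cutoff below are illustrative assumptions, not values from the thesis.

```python
import math

def euclidean_distance(emb_a, emb_b):
    """L2 distance between two face-embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(emb_a, emb_b)))

def is_same_person(emb_a, emb_b, threshold=1.1):
    """FaceNet-style verification: embeddings of the same identity
    cluster together, so a pair is accepted when their distance falls
    below a threshold tuned on a validation set (1.1 is illustrative)."""
    return euclidean_distance(emb_a, emb_b) < threshold
```

In a deployment, the embeddings would come from a neural network inference (accelerated here by a Tensor Processing Unit), and an alert fires when no enrolled identity matches.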
3

ENABLING REAL TIME INSTRUMENTATION USING RESERVOIR SAMPLING AND BIN PACKING

Sai Pavan Kumar Meruga (16496823) 30 August 2023
<p><em>Software instrumentation is the process of collecting data during an application’s runtime to help debug, detect errors, and optimize the performance of the binary. The recent increase in demand for low-latency, high-throughput systems has introduced new challenges to software instrumentation. Instrumentation, especially dynamic instrumentation, has a huge impact on system performance in scenarios where there is no early knowledge of the data to be collected. Naive approaches collect too much or too little data, negatively impacting the system’s performance.</em></p> <p><em>This thesis investigates the overhead added by reservoir sampling algorithms at different levels of granularity in real-time instrumentation of distributed software systems, and describes the implementation of sampling techniques and algorithms to reduce the overhead caused by instrumentation.</em></p>
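The reservoir sampling the thesis builds on can be sketched as Vitter's Algorithm R, which keeps a uniform random sample of k items from a stream of unknown length in O(k) memory — exactly the bounded-overhead property that makes it attractive for instrumentation events. The code below is a generic illustration, not the thesis's implementation.

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Algorithm R: return a uniform random sample of up to k items
    from an iterable of unknown length, using O(k) memory."""
    rng = rng or random.Random()
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Replace a reservoir slot with probability k / (i + 1).
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir
```

Because each event is kept with probability k/n regardless of stream length, an instrumentation agent can cap its memory and export cost no matter how many events the application emits.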
4

CyberWater: An open framework for data and model integration

Ranran Chen (18423792) 03 June 2024
<p dir="ltr">Workflow management systems (WMSs) are commonly used to organize and automate sequences of tasks as workflows to accelerate scientific discoveries. During complex workflow modeling, a local interactive workflow environment is desirable, as users usually rely on their rich local environments for fast prototyping and refinement before they consider using more powerful computing resources.</p><p dir="ltr">This dissertation delves into the development of the CyberWater framework based on workflow management systems. Against the backdrop of data-intensive and complex models, CyberWater exemplifies the transformation of intricate data into insightful, actionable knowledge. It introduces the architecture of CyberWater, focusing on its adaptation and enhancement of the VisTrails system, and highlights the significance of control and data flow mechanisms and the introduction of new data formats for effective data processing within the CyberWater framework.</p><p dir="ltr">This study presents an in-depth analysis of the design and implementation of the Generic Model Agent Toolkits. The discussion centers on template-based component mechanisms and integration with popular platforms, while emphasizing the toolkit’s ability to facilitate on-demand access to High-Performance Computing (HPC) resources for large-scale data handling. In addition, the development of an asynchronously controlled workflow within CyberWater is explored. This approach enhances computational performance by optimizing pipeline-level parallelism and allows on-demand submission of HPC jobs, significantly improving the efficiency of data processing.</p><p dir="ltr">A comprehensive methodology for model-driven development and Python code integration within the CyberWater framework, along with applications of GPT models for automated data retrieval, is also introduced in this research. It examines the use of GitHub Actions to automate the data retrieval process and discusses the transformation of raw data into a compatible format, enhancing the adaptability and reliability of the data retrieval component in the adaptive generic model agent toolkit.</p><p dir="ltr">For the development and maintenance of software within the CyberWater framework, tools such as GitHub are used for version control, and automated processes are outlined for software updates and error reporting; the role of the CyberWater Server in these processes and in user data collection is also emphasized.</p><p dir="ltr">In conclusion, this dissertation presents our comprehensive work on the CyberWater framework's advancements, setting new standards in scientific workflow management and demonstrating how technological innovation can significantly elevate the process of scientific discovery.</p>
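Pipeline-level parallelism of the kind described above can be sketched with Python's `concurrent.futures`: by chaining each stage onto the previous stage's future, stage k of one item overlaps with stage k-1 of the next. The function names and structure are illustrative assumptions, not CyberWater's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(items, stages, max_workers=4):
    """Run each item through every stage in order, letting stages of
    different items execute concurrently (pipeline-level parallelism).
    Illustrative sketch, not CyberWater's implementation."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Stage 1 of every item is submitted immediately.
        futures = [pool.submit(stages[0], item) for item in items]
        for stage in stages[1:]:
            # Each later stage waits only on its own item's previous
            # stage, so independent items proceed in parallel.
            futures = [pool.submit(lambda f=f, s=stage: s(f.result()))
                       for f in futures]
        return [f.result() for f in futures]
```

In a workflow system, the "stages" would be model runs or data transformations, and long-running ones could instead be submitted on demand as HPC jobs.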
5

<b>Systems Integration Model for AI Solutions with Explainability Requirements</b>

Sandra Smyth (20978657) 02 April 2025
<p dir="ltr">Automated processes are helping companies become more efficient at an accelerated pace. According to Burke et al. (2017), automation processes are dynamic, and such dynamism is data-driven. The resulting insights of data processing allow the implementation of real-time decision-making tasks that companies use to adjust their business strategies, keep their market share, and compete for more. However, those data-driven environments do not work in isolation; they are digital, data-driven setups where connectivity and integration are key elements of such collaboration.</p><p dir="ltr">As Daniel (2023) explained, a fundamental requirement for integrating systems is understanding how each connected system works. This understanding includes a comprehensive picture of each system's functionality, architecture, protocols, data structures, and other components to help design the integration of such systems. However, automated decision-making models based on artificial intelligence (AI) algorithms are considered a “black box” that lacks transparency and interpretability.</p><p dir="ltr">Explainability is a concept derived from the EU General Data Protection Regulation, implemented in May 2018 by the European Commission. This law requires describing the logic behind automated decision-making processes that can affect data subjects’ interests. As Selbst and Powles (2017) suggested, AI solutions can defy human understanding due to the complexity of their models. Thus, knowing how a system works is difficult to accomplish when AI solutions are involved.</p><p dir="ltr">With the approval of the EU Artificial Intelligence Act by the EU Commission in March 2024, the explainability requirements initially applicable only to decision-making systems that process personal data have been extended to all AI algorithms that process data in systems categorized as high-risk (R. Jain, 2024). 
New legal obligations for high-risk systems are listed in the EU AI Act; some of these consist of adapting or producing AI systems to be transparent, explainable, and designed to allow human oversight.</p><p dir="ltr">Under the EU AI Act, high-risk systems are those that could negatively affect the safety or fundamental rights of an individual, a group of individuals, society, or the environment in general. Kempf and Rauer (2024) explained that malfunctioning essential systems could risk people’s lives and health or disrupt social and economic activities. Thus, according to the EU AI Act (2024), critical infrastructure systems such as water supply, gas, and electricity fall into this high-risk category.</p><p dir="ltr">Within the energy sector, as Niet et al. (2021) defined, the power grid’s ‘System Operators’ are the ones who plan, build, and maintain the electricity distribution or transmission network and provide a fair electricity market and network connections. As per Articles 13 and 14 of the EU AI Act, the system operator, or any deployer in charge of overseeing an AI system, should be trained and equipped with transparency and explainability elements to understand the capabilities and limitations of AI solutions so they can stop, confirm, or override the recommendations made by such a model.</p><p dir="ltr">The present dissertation completed a qualitative study, starting with exploratory research to explain the different concepts involved in the study and their relationships. Document analysis, Grounded Theory (GT), and triangulation were used as the primary qualitative research methods to comprehensively explain the challenges regarding the Systems Integration (SI) of AI solutions that require Explainability (XAI) modules. 
As part of the data triangulation methods, informal conversations with subject matter experts were conducted to share the findings of this research and gather insights related to the current state of XAI's applicability in the energy sector.</p><p dir="ltr">The population and sample for this qualitative research comprised various types of data sources, including regulations, guidelines, standards, frameworks, newspaper reports, dissertations, journals, business journals, and government publications. A total of 902 bibliographic references were collected in Zotero and then transferred to NVivo for data analysis. The study addressed the following research questions:</p><p dir="ltr">1. What XAI requirements need to be incorporated as part of enterprise and systems integration frameworks for high-risk AI implementations?</p><p dir="ltr">2. What are the critical operational challenges in integrating Explainability modules with AI systems and business processes?</p><p dir="ltr">The purpose of this study was to identify missing elements in the Enterprise Integration framework proposed by Lam and Shankararaman (2007) that are necessary to comply with the XAI legal and ethical requirements delineated by the EU GDPR and the EU AI Act. The findings included elements such as (a) monitoring AI industry regulation and establishing an AI policy, (b) executing fundamental rights impact assessments, (c) defining clauses to share accountability with third-party contributors to the solution, (d) changing the project management approach to be data-centric, and (e) defining post-deployment processes to monitor and improve the performance of AI models. 
This contribution aspires to reduce implementation costs by offering a standard set of steps to follow in AI integration projects, facilitating communication between the project team and AI subject matter experts.</p><p dir="ltr">During this study, Lam and Shankararaman’s framework was reviewed against the new legal explainability and transparency obligations imposed by the EU GDPR and the EU AI Act. The missing elements needed to operationalize those legal obligations were then extracted from AI frameworks such as (a) the NIST AI Risk Management Framework (AI RMF 1.0), (b) the standard ISO/IEC 42001:2023 Artificial Intelligence Management Systems, (c) the Singapore Model AI Governance Framework, and (d) the Japanese Machine Learning Quality Management Guideline. Finally, a new version of the Enterprise Integration framework incorporating those missing elements is offered, which could guide the gathering of explainability and transparency requirements for the enterprise and system integration of high-risk AI solutions, specifically in the electricity sector.</p><p dir="ltr">This paper explores the concept of AI explainability from technical and regulatory perspectives. The researcher expects that the presented findings will contribute to and trigger industry and academic discussions related to understanding this emerging topic's challenges and will guide those in charge of implementing and operationalizing XAI solutions.</p>
