531

Enhancing safety in IoT systems: A model-based assessment of a smart irrigation system using fault tree analysis

Abdulhamid, Alhassan, Rahman, M.M., Kabir, Sohag, Ghafir, Ibrahim 20 August 2024 (has links)
Yes / The agricultural industry has the potential to undergo a revolutionary transformation through Internet of Things (IoT) technology: crop monitoring can be improved, waste reduced, and efficiency increased. However, system failures carry risks that can lead to significant losses and food insecurity, so a proactive approach is necessary to ensure effective safety assessment of new IoT systems before deployment. Identifying potential causes of failure and their severity from the conceptual design phase of an IoT system within smart agricultural ecosystems is crucial to mitigating such risks. This study examines the failure behaviour of IoT-based Smart Irrigation Systems (SIS) to identify potential causes of failure. It proposes a comprehensive Model-Based Safety Analysis (MBSA) framework to model the failure behaviour of SIS and generate analysable safety artefacts of the system using the Systems Modeling Language (SysML). The MBSA approach brings rigour to the analysis, supports model reuse, and eases the development of a Fault Tree Analysis (FTA) model, thereby reducing the inherent limitations of informal system analysis. The FTA model identifies component failures and their propagation, providing a detailed understanding of how individual component failures can lead to overall failure of the SIS. By evaluating SIS failure behaviour through the FTA model, the study offers insight into the interconnectedness of component failures and generates multiple minimal cut sets, which provide actionable guidance for designing dependable IoT-based SIS. The analysis identifies potential weak points in the design and provides a foundation for safety risk mitigation strategies. The study emphasises the significance of a systematic, model-driven approach to improving the dependability of IoT systems in agriculture, ensuring sustainable and safe implementation.
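To make the fault-tree machinery concrete, here is a minimal sketch of how minimal cut sets can be computed from an AND/OR fault tree. The gate and event names below are invented for illustration and are not the paper's actual SIS model.

```python
from itertools import product

# Hypothetical fault tree for a smart irrigation system (names are
# assumptions for illustration, not taken from the paper). Each gate maps
# to ("AND"/"OR", [children]); names absent from GATES are basic events.
GATES = {
    "SIS_failure":       ("OR",  ["no_water_delivery", "wrong_schedule"]),
    "no_water_delivery": ("OR",  ["pump_failure", "valve_stuck"]),
    "wrong_schedule":    ("AND", ["sensor_fault", "controller_fault"]),
}

def cut_sets(event):
    """Expand an event into its cut sets (sets of basic events)."""
    if event not in GATES:
        return [frozenset([event])]              # basic event
    gate, children = GATES[event]
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":                             # any child's cut set suffices
        return [cs for sets in child_sets for cs in sets]
    # AND: merge one cut set per child, over all combinations
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal_cut_sets(event):
    """Discard any cut set that strictly contains another."""
    sets_ = cut_sets(event)
    return [s for s in sets_ if not any(t < s for t in sets_)]

mcs = minimal_cut_sets("SIS_failure")
# Either single component failure (pump, valve) or the joint
# sensor-and-controller failure brings the system down.
```

Each minimal cut set is a smallest combination of basic events sufficient to cause the top event, which is what makes them actionable for prioritising mitigations.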
532

Perspectives on the future of manufacturing within the Industry 4.0 era

Hughes, L., Dwivedi, Y.K., Rana, Nripendra P., Williams, M.D., Raghaven, V. 06 December 2019 (has links)
Yes / The technological choices facing the manufacturing industry are vast and complex as the industry contemplates increasing levels of digitization and automation in readiness for the modern competitive age. These changes, broadly categorized as Industry 4.0, offer significant transformation challenges and opportunities, impacting a multitude of operational aspects of manufacturing organizations. As manufacturers seek to deliver increased levels of productivity and adaptation by innovating many aspects of their business and operational processes, significant challenges and barriers remain. The roadmap toward Industry 4.0 is complex and multifaceted, as manufacturers seek to transition toward new and emerging technologies whilst retaining operational effectiveness and a sustainability focus. This study approaches these themes by presenting a critical evaluation of the core topics impacting the next generation of manufacturers, together with the challenges and key barriers to implementation. These factors are further evaluated via a new Industry 4.0 framework and the alignment of I4.0 themes with the UN Sustainable Development Goals.
533

Impact of internet of things (IoT) in disaster management: a task-technology fit perspective

Sinha, A., Kumar, P., Rana, Nripendra P., Dwivedi, Y.K. 25 September 2020 (has links)
Yes / Disaster management aims to mitigate the potential damage from disasters, ensure immediate and suitable assistance to victims, and attain effective and rapid recovery. These objectives require a planned and effective rescue operation after such disasters. Different types of information about the impact of the disaster are therefore required for planning an effective and immediate relief operation. The IoT technology available today is quite mature and has the potential to be very useful in disaster situations. This paper analyzes the requirements for planning rescue operations for such natural disasters and proposes an IoT-based solution to cater to the identified requirements. The proposed solution is further validated using the task-technology fit (TTF) approach for analyzing the significance of the adoption of IoT technology for disaster management. Results from the exploratory study established the core dimensions of the task requirements and the TTF constructs. Results from the confirmatory factor analysis using PLS path modelling further suggest that both task requirements and IoT technology have a significant impact on IoT TTF in the disaster management scenario. This paper makes significant contributions to the development of appropriate constructs for modeling TTF for IoT technology in the context of disaster management.
534

Challenges for adopting and implementing IoT in smart cities: An integrated MICMAC-ISM approach

Janssen, M., Luthra, S., Mangla, S., Rana, Nripendra P., Dwivedi, Y.K. 25 September 2020 (has links)
Yes / The wider use of Internet of Things (IoT) makes it possible to create smart cities. The purpose of this paper is to identify key IoT challenges and understand the relationship between these challenges to support the development of smart cities. Design/methodology/approach: Challenges were identified using a literature review, and prioritised and elaborated by experts. The contextual interactions between the identified challenges and their importance were determined using Interpretive Structural Modelling (ISM). To interrelate the identified challenges and promote IoT in the context of smart cities, the dynamics of interactions of these challenges were analysed using an integrated Matrice d’Impacts Croisés Multiplication Appliqués à un Classement (MICMAC)-ISM approach. MICMAC is a structured approach to categorise variables according to their driving power and dependence. Findings: Security and privacy, business models, data quality, scalability, complexity and governance were found to have strong driving power and so are key challenges to be addressed in sustainable cities projects. The main driving challenges are complexity and lack of IoT governance. IoT adoption and implementation should therefore focus on breaking down complexity into manageable parts, supported by a governance structure. Practical implications: This research can help smart city developers address challenges in a phase-wise approach by first ensuring solid foundations and thereafter developing other aspects. Originality/value: One contribution originates from the integrated MICMAC-ISM approach. ISM is a technique used to identify contextual relationships among definite elements, whereas MICMAC facilitates the classification of challenges based on their driving and dependence power. The other contribution originates from creating an overview of challenges and theorising the contextual relationships and dependencies among the challenges.
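The MICMAC step of the approach above can be sketched in a few lines: driving power is a challenge's row sum in the reachability matrix, dependence is its column sum, and the two together place it in one of four quadrants. The challenge list and matrix below are invented for illustration, not the paper's data.

```python
# MICMAC classification sketch. The challenges and the reachability
# matrix are assumptions for illustration only.
challenges = ["security", "governance", "complexity", "data_quality", "scalability"]

# reach[i][j] = 1 means challenge i influences (reaches) challenge j.
reach = [
    [1, 0, 0, 1, 1],   # security
    [1, 1, 1, 1, 1],   # governance
    [1, 0, 1, 1, 1],   # complexity
    [0, 0, 0, 1, 0],   # data_quality
    [0, 0, 0, 1, 1],   # scalability
]

def micmac(names, m):
    n = len(names)
    driving = [sum(m[i]) for i in range(n)]                           # row sums
    dependence = [sum(m[i][j] for i in range(n)) for j in range(n)]   # column sums
    mid = n / 2
    result = {}
    for i, name in enumerate(names):
        if driving[i] > mid and dependence[i] > mid:
            cls = "linkage"        # high driving AND high dependence
        elif driving[i] > mid:
            cls = "driver"         # independent: shapes other challenges
        elif dependence[i] > mid:
            cls = "dependent"      # largely a consequence of others
        else:
            cls = "autonomous"     # weakly connected
        result[name] = (driving[i], dependence[i], cls)
    return result

classes = micmac(challenges, reach)
```

With this toy matrix, governance and complexity come out as drivers, mirroring the paper's finding that they should be tackled first.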
535

Information Freshness Optimization in Real-time Network Applications

Liu, Zhongdong 12 June 2024 (has links)
In recent years, the remarkable development in ubiquitous communication networks and smart portable devices has spawned a wide variety of real-time applications that require timely information updates (e.g., autonomous vehicular systems, industrial automation systems, and live streaming services). These real-time applications all have one thing in common: they desire their knowledge of the information source to be as fresh as possible. To measure the freshness of information, a new metric called the Age-of-Information (AoI) has been proposed. AoI is defined as the time elapsed since the generation time of the freshest delivered update. This metric is influenced by both the inter-arrival time and the delay of the updates. As a result of these dependencies, the AoI metric exhibits distinct characteristics compared to traditional delay and throughput metrics. In this dissertation, our goal is to optimize AoI under various real-time network applications. Firstly, we investigate a fundamental problem of how exactly various scheduling policies impact AoI performance. Though there is a large body of work studying AoI performance under different scheduling policies, the use of update-size information, and its combination with other information (such as arrival-time information and service preemption) to reduce AoI, has not yet been explored. Secondly, as a recently introduced measure of freshness, the relationship between AoI and other performance metrics remains largely ambiguous. We analyze the tradeoffs between AoI and additional performance metrics, including service performance and update cost, within real-world applications. This dissertation is organized into three parts. In the first part, we observe that scheduling policies leveraging update-size information can substantially reduce the delay, one of the key components of AoI.
However, it remains largely unknown how exactly scheduling policies (especially those making use of update-size information) impact AoI performance. To this end, we conduct a systematic and comparative study to investigate the impact of scheduling policies on AoI performance in single-server queues and provide useful guidelines for the design of AoI-efficient scheduling policies. In the second part, we analyze the tradeoffs between AoI and other performance metrics in real-world systems. Specifically, we focus on the following two important tradeoffs. (i) The tradeoff between service performance and AoI that arises in data-driven real-time applications (e.g., Google Maps and stock trading applications). In these applications, the computing resource is often shared for processing both updates from information sources and queries from end users. Hence, there is a natural tradeoff between service performance (e.g., response time to queries) and AoI (i.e., the freshness of data in response to user queries). To address this tradeoff, we begin by introducing a simple single-server two-queue model that captures the coupled scheduling between updates and queries. Subsequently, we design threshold-based scheduling policies to prioritize either updates or queries. Finally, we conduct a rigorous analysis of the performance of these threshold-based scheduling policies. (ii) The tradeoff between update cost and AoI that appears in crowdsensing-based applications (e.g., Google Waze and GasBuddy). On the one hand, users are not satisfied if the responses to their requests are stale; on the other hand, there is a cost for the applications to update their information regarding certain points of interest, since they typically need to make monetary payments to incentivize users.
To capture this tradeoff, we first formulate an optimization problem with the objective of minimizing the sum of the staleness cost (which is a function of the AoI) and the update cost, then we obtain a closed-form optimal threshold-based policy by reformulating the problem as a Markov decision process (MDP). In the third part, we study the minimization of data freshness and transmission costs (e.g., energy cost) under an (arbitrary) time-varying wireless channel without and with machine learning (ML) advice. We consider a discrete-time system where a resource-constrained source transmits time-sensitive data to a destination over a time-varying wireless channel. Each transmission incurs a fixed cost, while not transmitting results in a staleness cost measured by the AoI. The source needs to balance the tradeoff between these transmission and staleness costs. To tackle this challenge, we develop a robust online algorithm aimed at minimizing the sum of transmission and staleness costs, ensuring a worst-case performance guarantee. While online algorithms are robust, they tend to be overly conservative and may perform poorly on average in typical scenarios. In contrast, ML algorithms, which leverage historical data and prediction models, generally perform well on average but lack worst-case performance guarantees. To harness the advantages of both approaches, we design a learning-augmented online algorithm that achieves two key properties: (i) consistency: closely approximating the optimal offline algorithm when the ML prediction is accurate and trusted; (ii) robustness: providing a worst-case performance guarantee even when ML predictions are inaccurate. / Doctor of Philosophy / In recent years, the rapid growth of communication networks and smart devices has spurred the emergence of real-time applications like autonomous vehicles and industrial automation systems. These applications share a common need for timely information. 
The freshness of information can be measured using a new metric called Age-of-Information (AoI). This dissertation aims to optimize AoI across various real-time network applications, organized into three parts. In the first part, we explore how scheduling policies (particularly those considering update size) impact the AoI performance. Through a systematic and comparative study in single-server queues, we provide useful guidelines for the design of AoI-efficient scheduling policies. The second part explores the tradeoff between update cost and AoI in crowdsensing applications like Google Waze and GasBuddy, where users demand fresh responses to their requests; however, updating information incurs update costs for applications. We aim to minimize the sum of staleness cost (a function of AoI) and update cost. By reformulating the problem as a Markov decision process (MDP), we design a simple threshold-based policy and prove its optimality. In the third part, we study the minimization of data freshness and transmission costs (e.g., energy cost) under a time-varying wireless channel. We first develop a robust online algorithm that achieves a competitive ratio of 3, ensuring a worst-case performance guarantee. Furthermore, when advice is available, e.g., predictions from machine learning (ML) models, we design a learning-augmented online algorithm that exhibits two desired properties: (i) consistency: closely approximating the optimal offline algorithm when the ML prediction is accurate and trusted; (ii) robustness: guaranteeing worst-case performance even with inaccurate ML prediction. While this dissertation marks a significant advancement in AoI research, numerous open problems remain. For instance, our learning-augmented online algorithm treats ML predictions as external inputs. Exploring the co-design and training of ML and online algorithms to improve performance could yield interesting insights. 
Additionally, while AoI typically assesses update importance based solely on timestamps, the content of updates also holds significance. Incorporating considerations of both age and semantics of information is imperative in future research.
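The AoI definition used throughout the dissertation (time elapsed since the generation time of the freshest delivered update) can be made concrete with a few lines of code. The update timings below are made up for illustration.

```python
# Age-of-Information sketch: AoI at time t is t minus the generation time
# of the freshest update delivered by time t. Timings are illustrative.
updates = [  # (generation_time, delivery_time)
    (0.0, 1.0),
    (2.0, 2.5),
    (4.0, 6.0),
]

def aoi_at(t, updates):
    """AoI at time t given (generation, delivery) pairs."""
    delivered = [g for g, d in updates if d <= t]
    if not delivered:
        return float("inf")       # nothing has been delivered yet
    return t - max(delivered)     # age of the freshest delivered update

# At t = 3.0 the freshest delivered update was generated at t = 2.0,
# so the AoI is 1.0; the update generated at 4.0 is still in flight.
```

Between deliveries the AoI grows linearly with t and drops on each delivery to that update's delay, which is why both inter-arrival times and delays shape the metric, unlike plain delay or throughput.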
536

An intelligent edge computing based semantic gateway for healthcare systems interoperability and collaboration

Sigwele, Tshiamo, Hu, Yim Fun, Ali, M., Hou, Jiachen, Susanto, Misfa, Fitriawan, H. 20 December 2019 (has links)
Yes / The use of Information and Communications Technology (ICT) in healthcare has the potential to minimize medical errors, reduce healthcare costs and improve collaboration between healthcare systems, which can dramatically improve healthcare service quality. However, interoperability between different healthcare systems (clinics/hospitals/pharmacies) remains an open research issue due to a lack of collaboration and exchange of healthcare information. To solve this problem, cross-system collaboration is required. This paper proposes a conceptual semantics-based healthcare collaboration framework, built on Internet of Things (IoT) infrastructure, that can offer secure cross-system information and knowledge exchange between different healthcare systems seamlessly, in a form readable by both machines and humans. In the proposed framework, an intelligent semantic gateway is introduced, in which a web application with a RESTful Application Programming Interface (API) exposes the healthcare information of each system for collaboration. A case study exposing patient data between two different healthcare systems was practically demonstrated, in which a pharmacist can access a patient's electronic prescription from the clinic. / British Council Institutional Links grant under the BEIS-managed Newton Fund.
537

Internet of Things and Safety Assurance of Cooperative Cyber-Physical Systems: Opportunities and Challenges

Kabir, Sohag 06 April 2022 (has links)
Yes / The rise of artificial intelligence, in parallel with the fusion of the physical and digital worlds, is sustained by the development and progressive adoption of cyber-physical systems (CPSs) and the Internet of Things (IoT). Cooperative and autonomous CPSs have been shown to have significant economic and societal potential in numerous domains where human lives and the environment are at stake. To unlock the full potential of such systems, it is necessary to improve stakeholders' confidence in them by providing safety assurances. Due to the open and adaptive nature of such systems, special attention has been invested in runtime assurance based on real-time monitoring of system behaviour. IoT-enabled multi-agent systems have been widely used for different types of monitoring applications. In this paper, we discuss the opportunities for applying IoT-based solutions to the safety assurance of cooperative CPSs through an illustrative example. Future research directions are drawn from the identification of current challenges.
538

Unsupervised Learning for Feature Selection: A Proposed Solution for Botnet Detection in 5G Networks

Lefoane, Moemedi, Ghafir, Ibrahim, Kabir, Sohag, Awan, Irfan U. 01 August 2022 (has links)
Yes / The world has seen exponential growth in the deployment of Internet of Things (IoT) devices. In recent years, connected IoT devices have surpassed the number of connected non-IoT devices. The number of IoT devices continues to grow, and they are becoming a critical component of national infrastructure. The characteristics and inherent limitations of IoT devices make them attractive targets for hackers and cyber criminals. Botnet attacks are among the most serious threats on the Internet today. This article proposes pattern-based feature selection methods as part of a machine learning (ML) based botnet detection system. Specifically, two methods are proposed: the first is based on the most dominant pattern feature values and the second is based on Maximal Frequent Itemset (MFI) mining. The proposed feature selection method uses Gini Impurity (GI) and an unsupervised clustering method to select the most influential features automatically. The evaluation results show that the proposed methods improve the performance of the detection system. The developed system achieves a True Positive Rate (TPR) of 100% and a False Positive Rate (FPR) of 0% for the best performing models. In addition, the proposed methods reduce the computational cost of the system, as evidenced by its detection speed.
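The Gini impurity score at the heart of the selection step is straightforward to compute. The sketch below shows the measure itself on made-up feature values; how the paper combines it with clustering is not reproduced here.

```python
from collections import Counter

def gini_impurity(values):
    """Gini impurity of a feature's value distribution: 1 - sum(p_i^2).
    Low impurity means one value dominates -- a candidate dominant pattern."""
    counts = Counter(values)
    total = len(values)
    return 1.0 - sum((c / total) ** 2 for c in counts.values())

# A feature dominated by one value (e.g., a fixed destination port used by
# bot command-and-control traffic -- a hypothetical example) scores low;
# a near-uniform feature scores high.
uniform = gini_impurity(["a", "b", "c", "d"])      # 1 - 4*(1/4)^2 = 0.75
dominant = gini_impurity(["p80"] * 9 + ["p443"])   # 1 - (0.81 + 0.01) = 0.18
```

Ranking features by impurity (low to high) gives an automatic, label-free ordering of how strongly patterned each feature is, which is the intuition behind using GI for unsupervised feature selection.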
539

Latent Semantic Analysis and Graph Theory for Alert Correlation: A Proposed Approach for IoT Botnet Detection

Lefoane, Moemedi, Ghafir, Ibrahim, Kabir, Sohag, Awan, Irfan, El Hindi, K., Mahendran, A. 16 July 2024 (has links)
Yes / In recent times, the proliferation of Internet of Things (IoT) technology has brought a significant shift in the digital transformation of various industries, and its enabling technologies have accelerated this adoption. The possibilities unlocked by IoT have been unprecedented, leading to the emergence of smart applications that have been integrated into national infrastructure. However, the popularity of IoT technology has also attracted the attention of adversaries, who have leveraged the inherent limitations of IoT devices to launch sophisticated attacks, including Multi-Stage Attacks (MSAs) such as IoT botnet attacks. These attacks have caused significant losses in revenue across industries, amounting to billions of dollars. To address this challenge, this paper proposes a system for IoT botnet detection comprising two phases. The first phase identifies IoT botnet traffic: the incoming IoT traffic is subjected to feature selection and classification model training to distinguish malicious traffic from normal traffic. The second phase analyses the malicious traffic from the first phase to identify distinct botnet attack campaigns, employing an alert correlation approach that combines Latent Semantic Analysis (LSA), an unsupervised learning technique, with graph-theory-based techniques. The proposed system was evaluated using a publicly available real IoT traffic dataset and yielded promising results, with a True Positive Rate (TPR) of over 99% and a False Positive Rate (FPR) of 0%. / Researchers Supporting Project, King Saud University, Riyadh, Saudi Arabia, under Grant RSPD2024R953
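The graph-theoretic half of such an alert-correlation pipeline can be sketched as follows: alerts whose pairwise similarity (e.g., cosine similarity of their LSA vectors) exceeds a threshold are linked, and connected components of the resulting graph become candidate campaigns. The alert IDs, similarity scores, and threshold below are invented for illustration and are not the paper's method in detail.

```python
# Illustrative alert-correlation sketch: cluster alerts into campaigns by
# linking high-similarity pairs and taking connected components.
edges = {  # (alert_u, alert_v): similarity score, all values made up
    ("a1", "a2"): 0.92,
    ("a2", "a3"): 0.88,
    ("a4", "a5"): 0.95,
    ("a1", "a4"): 0.10,   # too dissimilar to link
}
alerts = ["a1", "a2", "a3", "a4", "a5"]

def campaigns(alerts, edges, threshold=0.8):
    """Union-find over alerts linked by similarity >= threshold."""
    parent = {a: a for a in alerts}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for (u, v), sim in edges.items():
        if sim >= threshold:
            parent[find(u)] = find(v)       # union the two components
    groups = {}
    for a in alerts:
        groups.setdefault(find(a), set()).add(a)
    return sorted(groups.values(), key=min)

clusters = campaigns(alerts, edges)
# Two candidate campaigns emerge: {a1, a2, a3} and {a4, a5}.
```

Note that correlation is transitive here: a1 and a3 land in the same campaign despite having no direct edge, because both link to a2.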
540

Size-Adaptive Convolutional Neural Network with Parameterized-Swish Activation for Enhanced Object Detection

Yashwanth Raj Venkata Krishnan (18322572) 03 June 2024 (has links)
In computer vision, accurately detecting objects of varying sizes is essential for applications such as autonomous vehicle navigation and medical imaging diagnostics. Addressing the variance in object sizes presents a significant challenge, requiring advanced computational solutions for reliable object recognition and processing. This research introduces a size-adaptive Convolutional Neural Network (CNN) framework to enhance detection performance across different object sizes. By dynamically adjusting the CNN's configuration based on the observed distribution of object sizes, the framework employs statistical analysis and algorithmic decision-making to improve detection capabilities. Further innovation is presented through the Parameterized-Swish activation function. Distinguished by its dynamic parameters, this function is designed to better adapt to varying input patterns. It exceeds the performance of traditional activation functions by enabling faster model convergence and increasing detection accuracy, showcasing the effectiveness of adaptive activation functions in object detection systems. The implementation of this model has led to notable performance improvements: an 11.4% increase in mean Average Precision (mAP) and a 40.63% increase in frames per second (FPS) for small objects, demonstrating enhanced detection speed and accuracy. The model has achieved a 48.42% reduction in training time for medium-sized objects while still improving mAP, indicating significant efficiency gains without compromising precision. Large objects have seen a 16.9% reduction in training time and a 76.04% increase in inference speed, showcasing the model's ability to expedite processing times substantially. Collectively, these advancements contribute to a more than 12% increase in detection efficiency and accuracy across various scenarios, highlighting the model's robustness and adaptability in addressing the critical challenge of size variance in object detection.
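A parameterized Swish is commonly written as f(x) = x · sigmoid(βx) with β learned during training; whether the thesis uses exactly this form is an assumption, so the sketch below is illustrative only, with β as a plain float rather than a trained parameter.

```python
import math

def param_swish(x, beta=1.0):
    """Parameterized Swish sketch: f(x) = x * sigmoid(beta * x).
    'beta' stands in for the learnable parameter; in a real network it
    would be trained alongside the weights (this form is an assumption,
    not necessarily the thesis's exact definition)."""
    return x / (1.0 + math.exp(-beta * x))

# As beta -> 0 the function tends to x/2 (near-linear); as beta grows it
# approaches ReLU, so learning beta lets each layer interpolate between
# the two regimes.
```

This interpolation property is what makes the activation "adaptive": the network can tune how sharply each layer gates negative inputs instead of committing to a fixed nonlinearity.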
