  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Security Enhancements in Voice over IP Networks

Iranmanesh, Seyed Amir 13 November 2017 (has links)
Voice delivery over IP networks, including VoIP (Voice over IP) and VoLTE (Voice over LTE), is emerging as an alternative to conventional public telephony networks. With the growing number of subscribers and the global adoption of 4G/5G by operators, VoIP/VoLTE as the only option for voice delivery becomes an attractive target for abuse and exploitation by malicious attackers. This dissertation aims to address some of the security challenges in VoIP/VoLTE. When we examine past events to identify trends and changes in attack strategies, we find that spam calls, caller-ID spoofing, and DoS attacks are the most imminent threats to VoIP deployments. Compared to email spam, voice spam is a much more obnoxious and time-consuming nuisance for human subscribers to filter out. Since the threat of voice spam could become as serious as that of email spam, we first focus on spam detection and propose a content-based approach to protect telephone subscribers' voice mailboxes from voice spam. Caller-ID has long been used to let the called party know who is calling, verify the caller's identity for authentication, and determine the caller's physical location for emergency services. VoIP and other packet-switched networks, such as the all-IP Long Term Evolution (LTE) network, provide flexibility that allows subscribers to use an arbitrary caller-ID. Moreover, interconnection between IP telephony and legacy circuit-switched (CS) telephone networks has further weakened the security of caller-ID systems. We observe that determining the true identity of a calling device helps prevent many VoIP attacks, such as caller-ID spoofing, spamming, and call-flooding attacks. This motivates us to take a very different approach to these VoIP problems and attempt to answer a fundamental question: is it possible to know the type of device a subscriber uses to originate a call?
By exploiting the impreciseness of the codec sampling rate in the caller's RTP streams, we propose a fuzzy rule-based system to remotely identify calling devices. Finally, we propose a caller-ID-based public key infrastructure for VoIP and VoLTE that provides signature generation on the calling-party side and signature verification on the called-party side. The proposed signature can serve as a caller-ID trust anchor to prevent caller-ID spoofing and unsolicited calls. Our approach is based on identity-based cryptography, and it leverages the Domain Name System (DNS) and proxy servers in the VoIP architecture, as well as the Home Subscriber Server (HSS) and Call Session Control Function (CSCF) in the IP Multimedia Subsystem (IMS) architecture. Using OPNET, we then develop a comprehensive simulation testbed to evaluate the proposed infrastructure. Our simulation results show that the average call-setup delays induced by our infrastructure are hardly noticeable to telephony subscribers and that the extra signaling overhead is negligible. Therefore, the proposed infrastructure can be widely adopted to verify caller-ID in telephony networks.
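The sign-then-verify flow of such a caller-ID scheme can be sketched in miniature. This is an illustrative toy, not the dissertation's scheme: a keyed HMAC with an identity-derived key stands in for identity-based signatures, and all identifiers and secrets are hypothetical.

```python
import hashlib
import hmac

# Held only by the domain's trusted key server (hypothetical value).
MASTER_SECRET = b"domain-master-secret"

def derive_identity_key(caller_id: str) -> bytes:
    # In identity-based schemes the key is derived from the identity itself,
    # so verifiers need no per-subscriber certificates.
    return hmac.new(MASTER_SECRET, caller_id.encode(), hashlib.sha256).digest()

def sign_call(caller_id: str, callee_id: str, timestamp: int) -> str:
    key = derive_identity_key(caller_id)
    msg = f"{caller_id}|{callee_id}|{timestamp}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_call(caller_id: str, callee_id: str, timestamp: int, sig: str) -> bool:
    return hmac.compare_digest(sign_call(caller_id, callee_id, timestamp), sig)

sig = sign_call("+15551230001", "+15551230002", 1700000000)
print(verify_call("+15551230001", "+15551230002", 1700000000, sig))  # -> True
# A spoofed caller-ID cannot produce a valid signature for the same call:
print(verify_call("+15551239999", "+15551230002", 1700000000, sig))  # -> False
```

The timestamp bound to the signature also limits replay of captured call signatures.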
172

Simulators in Vehicular Networks Research and Performance Evaluation of 802.11p in Vehicular Networks

Puttagunta, Harsha 29 September 2021 (has links)
No description available.
173

Selective Subtraction: An Extension of Background Subtraction

Bhutta, Adeel 01 January 2020 (has links) (PDF)
Background subtraction, or scene modeling, techniques model the background of a scene using its stationarity property and classify the scene into two classes, foreground and background. In doing so, most moving objects become foreground indiscriminately, except perhaps for waving tree leaves, water ripples, or a water fountain, which are typically "learned" as part of the background from a large training set of video data. Traditional techniques exhibit a number of limitations, including the inability to model a partial background or subtract a partial foreground, inflexibility of the model being used, the need for large training datasets, and computational inefficiency. In this thesis, we present our work to address each of these limitations and propose algorithms in the two major areas of research within background subtraction, namely single-view and multi-view techniques. We first propose the use of both spatial and temporal properties to model a dynamic scene and show how the Mapping Convergence framework within Support Vector Mapping Convergence (SVMC) can be used to minimize training data. We also introduce a novel concept of background as the objects other than the foreground, which may include moving objects in the scene that cannot be learned from a training set because they occur only irregularly and sporadically, e.g. a walking person. We propose a "selective subtraction" method as an alternative to standard background subtraction and show that a reference plane in a scene viewed by two cameras can be used as the decision boundary between foreground and background. In our definition, the foreground may actually occur behind a moving object. Our novel use of projective depth as a decision boundary allows us to extend the traditional definition of background subtraction and propose a much more powerful framework. Furthermore, we show that the reference plane can be selected very flexibly, using, for example, the actual moving objects in the scene if needed.
We present a diverse set of examples to show that: (i) the technique performs better than standard background subtraction techniques without the need for training, camera calibration, disparity-map estimation, or special camera configurations; (ii) it is potentially more powerful than standard methods because of the flexibility it offers to select, in real time, what to filter out as background, regardless of whether the object is moving or not, or whether it is a rare event or a frequent one; (iii) the technique can be used in a variety of situations, including images captured by stationary or hand-held cameras, for both indoor and outdoor scenes. We provide extensive results to show the effectiveness of the proposed framework in a variety of very challenging environments.
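For contrast, the per-pixel running-average model underlying standard background subtraction, which selective subtraction generalizes, can be sketched as follows. The learning rate, threshold, and pixel values are hypothetical.

```python
ALPHA = 0.1   # background learning rate (hypothetical)
THRESH = 30   # |pixel - model| above this counts as foreground (hypothetical)

def update_and_classify(model, frame):
    """Classify each pixel of a 1-D gray 'scanline' against a running-average
    background model, absorbing only background pixels back into the model."""
    mask = []
    for i, px in enumerate(frame):
        fg = abs(px - model[i]) > THRESH
        mask.append(1 if fg else 0)
        if not fg:
            model[i] = (1 - ALPHA) * model[i] + ALPHA * px
    return mask

model = [100.0] * 8                              # learned static background
frame = [100, 101, 99, 180, 185, 100, 100, 98]   # bright object over pixels 3-4
mask = update_and_classify(model, frame)
print(mask)  # -> [0, 0, 0, 1, 1, 0, 0, 0]
```

Everything that deviates from the learned model becomes foreground here; selective subtraction instead draws the boundary with a reference plane, so "background" can include moving objects.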
174

Energy-Aware Real-Time Scheduling on Heterogeneous and Homogeneous Platforms in the Era of Parallel Computing

Bhuiyan, Ashik Ahmed 01 January 2021 (has links) (PDF)
Multi-core processors increasingly appear as an enabling platform for embedded systems, e.g., mobile phones, tablets, and computerized numerical controls. The parallel task model, in which a task can execute on multiple cores simultaneously, can efficiently exploit a multi-core platform's computational capacity. Many computation-intensive systems (e.g., self-driving cars) that demand stringent timing requirements are often structured as parallel tasks. Many real-time embedded applications must exhibit predictable timing behavior while satisfying other system constraints, such as energy consumption. Motivated by these facts, this thesis studies how to integrate dynamic voltage and frequency scaling (DVFS) policies with a real-time application's internal parallelism to reduce the worst-case energy consumption (WCEC), an essential requirement for energy-constrained systems. First, we propose an energy-sub-optimal scheduler that assumes a per-core speed-tuning feature for each processor. We then extend our solution to clustered multi-core platforms, where at any given time all processors in the same cluster run at the same speed. We also present an analysis that exploits a task's probabilistic information to improve the average-case energy consumption (ACEC), a common non-functional requirement of embedded systems. Due to the strict requirement of temporal correctness, most real-time system analyses consider the worst-case scenario, leading to resource over-provisioning and increased cost. The mixed-criticality (MC) framework was proposed to minimize energy consumption and resource over-provisioning. MC scheduling has received considerable attention from the real-time systems research community, as it is crucial to designing safety-critical real-time systems.
This thesis further addresses energy-aware scheduling of real-time tasks on an MC platform, where tasks with varying criticality levels (i.e., importance) are integrated into a common platform. We propose an algorithm, GEDF-VD, for scheduling MC tasks with internal parallelism on a multiprocessor platform. We also prove the correctness of GEDF-VD, provide a detailed quantitative evaluation, and report extensive experimental results. Finally, we present an analysis that exploits a task's probabilistic information at its respective criticality level. Our proposed approach reduces average-case energy consumption while satisfying worst-case timing requirements.
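The core DVFS trade-off can be illustrated with a toy frequency-selection routine. This is a simplified sketch, not the GEDF-VD algorithm: it assumes execution time scales as 1/f and dynamic energy grows roughly with f² per unit of work, so the lowest deadline-feasible frequency minimizes dynamic energy.

```python
def pick_frequency(wcet_at_fmax: float, deadline: float, freqs):
    """Choose the lowest normalized frequency f in (0, 1] that still meets
    the deadline, assuming execution time scales as wcet/f."""
    for f in sorted(freqs):
        if wcet_at_fmax / f <= deadline:
            return f  # lowest feasible frequency minimizes dynamic energy
    return None       # no frequency meets the deadline

# Hypothetical task: needs 4 ms at full speed; deadline is 10 ms.
freqs = [0.25, 0.5, 0.75, 1.0]
f = pick_frequency(4.0, 10.0, freqs)
print(f)  # -> 0.5  (4/0.5 = 8 ms meets the deadline; 4/0.25 = 16 ms misses it)
```

Real schedulers must additionally account for parallel task structure, shared cluster speeds, and static power, which is what makes the problem studied here hard.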
175

Big Data Processing Attribute Based Access Control Security

Tall, Anne 01 January 2022 (has links) (PDF)
The purpose of this research is to analyze the security of next-generation big data processing (BDP) systems and examine the feasibility of applying advanced security features to meet the needs of modern multi-tenant, multi-level data analysis. The research methodology was to survey the status of security mechanisms in BDP systems and identify areas that require further improvement. Access control (AC) security services were identified as a priority area, specifically Attribute-Based Access Control (ABAC). The exemplar BDP system analyzed is the Apache Hadoop ecosystem. We created data-generation software and analysis programs, and posted the detailed experiment configuration on GitHub. Overall, our research indicates that before a BDP system such as Hadoop can be used in an operational environment, significant security configuration is required. We believe the tools are available to achieve a secure system with ABAC, using Apache Ranger and Apache Atlas. However, these systems are immature and require verification by an independent third party. We identified the following specific actions for overall improvement: consistent provisioning of security services through a data-analyst workstation, a common backplane of security services, and a management console. These areas are only partially addressed in the current Hadoop ecosystem; continued AC improvements through the open-source community and rigorous independent testing should further address the remaining security challenges. Robust security will enable wider use of distributed, clustered BDP systems, such as Apache Hadoop and Hadoop-like systems, to meet future government and business requirements.
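The ABAC idea, access decided by predicates over subject and resource attributes rather than by identity alone, can be sketched in a few lines. This is a generic illustration, not Apache Ranger's policy model; all attribute names and values are hypothetical.

```python
# Each policy grants one action when every attribute predicate holds.
POLICIES = [
    {"action": "read",
     "require": {"subject.clearance": lambda v: v is not None and v >= 2,
                 "resource.tag": lambda v: v in {"public", "internal"}}},
]

def is_allowed(action, subject, resource):
    """Flatten subject/resource attributes and test them against each policy."""
    attrs = {f"subject.{k}": v for k, v in subject.items()}
    attrs.update({f"resource.{k}": v for k, v in resource.items()})
    return any(
        pol["action"] == action
        and all(pred(attrs.get(key)) for key, pred in pol["require"].items())
        for pol in POLICIES
    )

print(is_allowed("read", {"clearance": 3}, {"tag": "internal"}))  # -> True
print(is_allowed("read", {"clearance": 1}, {"tag": "internal"}))  # -> False
```

Because decisions depend only on attributes, new users and datasets are covered by existing policies without per-identity rules, which is what makes ABAC attractive for multi-tenant BDP clusters.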
176

Long Short-Term Memory with Spin-Based Binary and Non-Binary Neurons

Vangala, Meghana Reddy 01 January 2021 (has links) (PDF)
Research in the field of neural networks has shown advances in both device technology and machine learning application platforms. Prominent recent applications of neural networks include image recognition, machine translation, text classification, and object categorization. With these advances comes a need for more energy-efficient, low-area-overhead circuits in hardware implementations. Previous work has concentrated primarily on CMOS technology-based implementations, which can face challenges of high energy consumption, the memory wall, and volatility complications in standby modes. We herein develop a low-power and area-efficient hardware implementation of Long Short-Term Memory (LSTM) networks, a type of Recurrent Neural Network (RNN). To achieve energy efficiency while maintaining accuracy commensurate with the ideal case, the LSTM network herein uses Resistive Random-Access Memory (ReRAM)-based synapses along with spin-based non-binary neurons. The proposed neuron has a novel activation mechanism that mimics the ideal hyperbolic tangent (tanh) and sigmoid activation functions with five levels of output accuracy. Using ideal, binary, and the proposed non-binary neurons, we investigate the performance of an LSTM network on a name-prediction dataset. The comparison of results shows that our proposed neuron achieves up to 85% accuracy and a perplexity of 1.56, similar to algorithmic expectations for near-ideal neurons. Simulations show that our proposed neuron achieves up to a 34-fold improvement in energy efficiency and a 2-fold area reduction compared to CMOS-based non-binary designs.
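The five-level activation can be illustrated by snapping the ideal tanh output to the nearest of five discrete states. This is a behavioral sketch only; the level values are hypothetical, and the actual neuron realizes this quantization in spin-based hardware.

```python
import math

LEVELS = [-1.0, -0.5, 0.0, 0.5, 1.0]  # five output states (hypothetical values)

def nonbinary_tanh(x: float) -> float:
    """Snap the ideal tanh output to the nearest of five levels,
    mimicking a multi-state spin-based neuron."""
    ideal = math.tanh(x)
    return min(LEVELS, key=lambda lv: abs(lv - ideal))

for x in (-3.0, -0.4, 0.0, 0.6, 3.0):
    print(x, nonbinary_tanh(x))  # saturates to ±1.0, mid inputs hit ±0.5 or 0.0
```

A binary neuron would collapse the same inputs to just two states; the extra levels are what let the network track the ideal activation closely enough to preserve accuracy.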
177

Performance Enhancement of Time Delay and Convolutional Neural Networks Employing Sparse Representation in the Transform Domains

Kalantari Khandani, Masoumeh 01 May 2021 (has links) (PDF)
Deep neural networks are quickly advancing and increasingly used in many applications; however, these networks are often extremely large and require computing and storage power beyond what is available in most embedded and sensor devices. For example, IoT (Internet of Things) devices lack the powerful processors or graphics processing units (GPUs) commonly used in deep networks. Given the very large-scale deployment of such low-power devices, it is desirable to design methods that efficiently reduce the computational needs of neural networks. This can be done by reducing input data size or network size. As expected, such reduction comes at the cost of degraded performance. In this work, we examine how sparsifying the input to a neural network can significantly improve the performance of artificial neural networks (ANNs) such as time delay neural networks (TDNNs) and convolutional neural networks (CNNs). We show how TDNNs can be enhanced using a sparsifying transform layer that significantly improves learning time and forecasting performance for time series. We mathematically prove that the improvement results from sparsification of the input of a fully connected layer of a TDNN. Experiments with several datasets and transforms, such as the discrete cosine transform (DCT), the discrete wavelet transform (DWT), and Principal Component Analysis (PCA), are used to show the improvement and the reason behind it. We also show that this improved performance can be traded off for network-size reduction of a TDNN. Similarly, we show that the performance of reduced-size CNNs can be improved for image classification when domain transforms are employed on the input. The improvement in CNN performance is found to be related to the better preservation of information when sparsifying transforms are used. We evaluate the proposed concept with low-complexity CNNs and the common Fashion-MNIST and CIFAR datasets.
We constrain the size of the CNNs in our tests to under 200K learnable parameters, as opposed to the millions in deeper networks. We emphasize that finding optimal hyperparameters or network configurations is not the objective of this study; rather, we focus on the impact of projecting data to new domains on the performance of reduced-size inputs and networks. We show that input-size reductions of up to 75% are possible without loss of classification accuracy in some cases.
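The sparsification step can be illustrated with a naive DCT: a smooth input concentrates its energy in a few coefficients, so most can be zeroed with little information loss. A toy sketch, not the thesis implementation:

```python
import math

def dct2(x):
    """Naive O(n^2) DCT-II; a smooth signal's energy concentrates
    in a few low-frequency coefficients."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
            for k in range(n)]

def sparsify(coeffs, keep):
    """Zero all but the `keep` largest-magnitude coefficients."""
    top = set(sorted(range(len(coeffs)),
                     key=lambda k: abs(coeffs[k]), reverse=True)[:keep])
    return [c if k in top else 0.0 for k, c in enumerate(coeffs)]

signal = [float(i) for i in range(16)]   # a smooth ramp
coeffs = dct2(signal)
sparse = sparsify(coeffs, keep=4)
energy = sum(c * c for c in coeffs)
kept_energy = sum(c * c for c in sparse)
print(sum(1 for c in sparse if c != 0.0))  # -> 4
print(kept_energy / energy > 0.99)         # -> True
```

Feeding such a sparse vector to a fully connected layer means most products in the first matrix multiply involve (near-)zero inputs, which is the mechanism behind the reported learning-time and accuracy gains.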
178

Optimal Feature Learning for Facial Expression Recognition

Ali, Kamran 01 December 2021 (has links) (PDF)
A great deal of research has been done to improve the performance of Facial Expression Recognition (FER) algorithms, but extracting optimal features to represent expressions remains a challenging task. The biggest drawback is that most work on FER ignores the inter-subject variations in the facial attributes of the individuals present in the data. Hence, the representation extracted for expression recognition is polluted by identity-related features that negatively affect the generalization capability of a FER technique on unseen identities. To overcome the effect of subject-identity bias, previous research has shown the effectiveness of extracting identity-invariant expression features for FER. However, most of these identity-invariant expression representation learning methods rely on hand-engineered feature extraction techniques. Apart from inter-subject variations, other challenges in learning an optimal FER representation are the illumination and head-pose variations present in the data. We believe the key to dealing with these problems lies in FER techniques that disentangle the expression representation from the identity features. Therefore, in this dissertation, we first discuss our Reenactment-based Expression-Representation Learning Generative Adversarial Network (REL-GAN), which disentangles expression features from identity information by transferring the expression of one image to the identity of another image (known as face reenactment). Second, we present our Human-to-Animation conditional Generative Adversarial Network (HA-GAN), which overcomes the challenges posed by the illumination and identity variations present in these datasets by estimating a many-to-one identity mapping function through adversarial learning.
Third, we present a Transfer-based Expression Recognition Generative Adversarial Network (TER-GAN) that learns an identity-invariant expression representation without requiring any hand-engineered identity-invariant feature extraction technique. Fourth, we discuss the effectiveness of using 3D expression parameters in optimal expression-feature learning algorithms. We then present our Action Unit-based Attention Net (AUA-Net), which is trained in a weakly supervised manner to generate expression attention maps for FER.
179

Energy and Area Efficient Machine Learning Architectures using Spin-Based Neurons

Pourmeidani, Hossein 01 January 2022 (has links) (PDF)
Recently, spintronic devices with low-energy-barrier nanomagnets, such as spin-orbit-torque magnetic tunnel junctions (SOT-MTJs) and embedded magnetoresistive random-access memory (MRAM) devices, have been leveraged as natural building blocks that provide probabilistic sigmoidal activation functions for restricted Boltzmann machines (RBMs). In this dissertation research, we use the Probabilistic Inference Network Simulator (PIN-Sim) to realize a circuit-level implementation of deep belief networks (DBNs) using memristive crossbars as weighted connections and embedded MRAM-based neurons as activation functions. Herein, a probabilistic interpolation recoder (PIR) circuit is developed for DBNs with probabilistic spin logic (p-bit)-based neurons to interpolate the probabilistic outputs of the neurons in the last hidden layer, which represent the different output classes. Moreover, the impact of reducing the magnetic tunnel junction's (MTJ's) energy barrier is assessed and optimized for the resulting stochasticity of the learning system. In p-bit-based DBNs, defects such as variation of the nanomagnet thickness can undermine functionality by decreasing the fluctuation speed of the p-bit realized with a nanomagnet. A method is developed and refined to control the fluctuation frequency of a p-bit device's output by employing a feedback mechanism, which can alleviate this process-variation sensitivity of p-bit-based DBNs. This compact, low-complexity method, realized as a self-compensating circuit, can mitigate the influence of process variation in fabrication and practical implementation. Furthermore, this research presents an image recognition technique for the MNIST dataset based on p-bit-based DBNs and TSK rule-based fuzzy systems. The proposed DBN-fuzzy system benefits from the low energy and area consumption of p-bit-based DBNs and the high accuracy of TSK rule-based fuzzy systems.
This system first recognizes the top candidate classes through the p-bit-based DBN, and the fuzzy system is then employed to obtain the top-1 recognition result from those candidates. Simulation results show that the DBN-fuzzy neural network not only consumes less energy and area than larger DBN topologies but also achieves higher accuracy.
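The p-bit neuron's behavior can be sketched in software: it emits 1 with probability given by the sigmoid of its input, so its time-averaged output approximates the activation value. A behavioral toy, not a device model; the input value and sample count are arbitrary.

```python
import math
import random

def p_bit(input_current: float, rng: random.Random) -> int:
    """Probabilistic spin-logic neuron: emits 1 with probability
    sigmoid(input), mimicking a low-barrier nanomagnet's fluctuations."""
    p = 1.0 / (1.0 + math.exp(-input_current))
    return 1 if rng.random() < p else 0

rng = random.Random(42)  # seeded for reproducibility
rate = sum(p_bit(1.0, rng) for _ in range(10000)) / 10000
print(round(rate, 2))  # close to sigmoid(1.0) ≈ 0.73
```

Reading a noisy telegraphic output like this is exactly why the last layer needs interpolation circuitry such as the PIR: averaging many fluctuations recovers the underlying class probabilities.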
180

Security issues in networked embedded devices

Chasaki, Danai 01 January 2012 (has links)
Embedded devices are ubiquitous; they are present in various sectors of everyday life: smart homes, automobiles, health care, telephony, industrial automation, networking, etc. Embedded systems are well known for their dependability, which is one of the reasons they are preferred over general-purpose machines in many applications. Traditional embedded computing is changing nowadays, mainly due to the increasing number of heterogeneous embedded devices that are, more often than not, interconnected. Security in the field of networked embedded systems is becoming particularly important for two reasons: 1) connected embedded devices can be attacked remotely, and 2) they are resource constrained, meaning that their limited computational capabilities cannot support a full-blown operating system running virus scanners and advanced intrusion detection techniques. These two facts lead us to conclude that a new set of vulnerabilities is emerging in the networked embedded system area that cannot be tackled with traditional security solutions. This work focuses on embedded systems used in the network domain. A prime example of an embedded system that requires high performance, has limited processing resources, and communicates with other embedded devices is a network processor (NP). Powerful network processors are central components of modern routers, helping them achieve flexibility and perform tasks with advanced processing requirements. In my work, I identified a new class of vulnerabilities specific to routers. The same set of vulnerabilities can apply to any other type of networked embedded device that is not traditionally programmable but is gradually shifting towards programmability. Security in the networking field is a crucial concern. Many attacks in existing networks are based on security vulnerabilities in end-systems or in the end-to-end protocols that they use.
Inside the network, most practical attacks have focused on the control plane, where routing information and other control data are exchanged. With the emergence of router systems that use programmable embedded processors, the data plane of the network also becomes a potential target for attacks. This trend towards attacks on the forwarding component of router systems is likely to accelerate in next-generation networks, where virtualization requires even higher levels of programmability in the data path. This dissertation demonstrates a real attack scenario on a programmable router and discusses how similar attacks can be realized. Specifically, we present an attack example that can launch a devastating denial-of-service attack by sending just a single packet. We show that vulnerable packet-processing code can be exploited on a Click modular router as well as on a custom packet processor on the NetFPGA platform. Several defenses against this specific type of attack are presented, which are broadly applicable to a large class of embedded devices. Security vulnerabilities can be addressed efficiently using hardware-based extensions; for example, defense techniques based on processor monitoring can help detect and avoid such attacks. We believe this work is an important step toward comprehensive security solutions that can protect the data path of current and future networks.
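The attack-and-defense pattern can be caricatured in a few lines: a malformed header field drives a buggy processing loop that never terminates, and a cycle-budget monitor (standing in for the hardware-based processor monitoring discussed above) drops the packet instead of stalling the pipeline. All values and field names are hypothetical.

```python
CYCLE_BUDGET = 1000  # per-packet processing budget (hypothetical)

def process_packet(ttl_like_field: int) -> str:
    """Toy packet processor with a termination bug: the decrement skips odd
    values, so an attacker-chosen odd field loops forever without a monitor."""
    cycles = 0
    value = ttl_like_field
    while value != 0:
        value = (value - 2) % 256   # buggy decrement: odd inputs never reach 0
        cycles += 1
        if cycles > CYCLE_BUDGET:   # monitor fires: drop packet, pipeline survives
            return "dropped"
    return "forwarded"

print(process_packet(64))  # even field terminates normally -> "forwarded"
print(process_packet(63))  # odd field would loop forever   -> "dropped"
```

A single such packet suffices for denial of service on an unmonitored processor, since the forwarding engine never becomes free to handle subsequent traffic.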
