  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Nonlinear model predictive control using automatic differentiation

Al Seyab, Rihab Khalid Shakir January 2006 (has links)
Although nonlinear model predictive control (NMPC) might be the best choice for a nonlinear plant, it is still not widely used. This is mainly due to the computational burden of solving a set of nonlinear differential equations and a nonlinear dynamic optimization problem online, in real time. This thesis is concerned with strategies for reducing the computational burden in different stages of NMPC, such as the optimization problem, state estimation, and nonlinear model identification. A major part of the computational burden comes from the function and derivative evaluations required in different parts of the NMPC algorithm. In this work, the problem is tackled using a recently introduced and efficient tool, automatic differentiation (AD). Using the AD tool, a function is evaluated together with all its partial derivatives, to machine accuracy, directly from the code defining the function. A new NMPC algorithm based on nonlinear least-squares optimization is proposed. In a first-order method, the sensitivity equations are integrated using a linear formula, with the AD tool applied to evaluate them accurately. For higher-order approximations, more terms of the Taylor expansion are used in the integration, for which AD is used effectively. As a result, the gradient of the cost function with respect to the control moves is obtained accurately, so that the online nonlinear optimization can be solved efficiently. In many real control cases, the states are not measured and have to be estimated whenever a solution of the model equations is needed. A nonlinear extended Kalman filter (EKF) is added to the NMPC algorithm for this purpose; the AD tool calculates the derivatives required in the filter's local linearization step automatically and accurately. Offset is another problem faced in NMPC. A new nonlinear integration scheme is devised for this case to eliminate the offset from the output response.
In this method, an integrated disturbance model is added to the process model input or output to correct the plant/model mismatch. The time response of the controller is also improved as a by-product. The proposed NMPC algorithm has been applied to an evaporation process and a two continuous stirred tank reactor (two-CSTR) process, with satisfactory results in coping with large setpoint changes, severe unmeasured disturbances, and process/model mismatch. When the process equations are not known (black box), or when they are too complicated to be used in the controller, modelling is needed to create an internal model for the controller. In this thesis, a continuous-time recurrent neural network (CTRNN) in state-space form is developed for use in the NMPC context. An efficient training algorithm for the proposed network is developed using the AD tool. By automatically generating Taylor coefficients, the algorithm not only solves the differential equations of the network but also produces the sensitivities for the training problem. The same approach is also used to solve the NMPC optimization problem online. The proposed CTRNN and predictive controller were tested on the evaporator and two-CSTR case studies. A comparison with other approaches shows that the new algorithm can considerably reduce network training time and improve solution accuracy. For a third case study, the ALSTOM gasifier, an NMPC-via-linearization algorithm is implemented to control the system. In this work a nonlinear state-space Wiener-class model is used to identify a black-box model of the gasifier. A linear model of the plant at zero load is adopted as the base model for prediction. Then, a feedforward neural network is created as the static gain for a particular output channel, the fuel gas pressure, to compensate for the strong nonlinear behavior observed in open-loop simulations.
By linearizing the neural network at each sampling time, the static nonlinear gain provides a degree of adaptation to the linear base model. The AD tool is used here to linearize the neural network efficiently. A noticeable performance improvement is observed compared with pure linear MPC. The controller was able to pass all tests specified in the benchmark problem at all load conditions.
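The core mechanism described above, evaluating a function together with its derivatives directly from the code defining it, can be illustrated with forward-mode AD via dual numbers. This is a minimal, hypothetical Python sketch (the `Dual` class and the example function `f` are illustrative, not the AD tool used in the thesis):

```python
class Dual:
    """Dual number: value + eps * derivative, with eps^2 = 0.
    Arithmetic on Duals propagates derivatives by the chain rule."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def f(x):
    # example nonlinearity: f(x) = x^2 + 3x, so f'(x) = 2x + 3
    return x * x + 3 * x

x = Dual(2.0, 1.0)        # seed dx/dx = 1
y = f(x)
print(y.val, y.der)       # prints: 10.0 7.0
```

Evaluating `f` once yields both f(2) = 10 and f'(2) = 7 to machine accuracy, with no symbolic manipulation or finite differencing.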
12

Robust fault analysis for permanent magnet DC motor in safety critical applications

Abed, Wathiq January 2015 (has links)
Robust fault analysis (FA), including the diagnosis of faults and the prediction of their severity, is necessary to optimise maintenance and improve the reliability of aircraft. Early diagnosis of faults that might occur in the supervised process makes it possible to take important preventative actions. The proposed diagnostic models were validated in two experimental tests. The first test concerned single localised and generalised rolling-element bearing faults in a permanent magnet brushless DC (PMBLDC) motor. Rolling-element bearing defects are one of the main causes of breakdown in electrical machines. Vibration and current were analysed under stationary and non-stationary load and speed conditions, for a variety of bearing fault severities, and for both local and global bearing faults. The second test examined the case of an unbalanced rotor due to blade faults in a thruster motor based on a permanent magnet brushed DC (PMBDC) motor. A variety of blade fault conditions were investigated over a wide range of rotation speeds. The discrete wavelet transform (DWT) was used to extract useful features, followed by feature reduction to discard redundant ones; this reduces the computation and the time taken for classification, achieved here with an orthogonal fuzzy neighbourhood discriminant analysis (OFNDA) approach. Real-time monitoring of motor operating conditions is an advanced technique that reflects the real performance of the motor; the proposed dynamic recurrent neural network (DRNN) predicts the condition of components and classifies the different faults under different operating conditions. The results obtained from real-time simulation demonstrate the effectiveness and reliability of the proposed methodology in accurately classifying faults and predicting levels of fault severity.
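The DWT feature-extraction step can be illustrated with a single level of the simplest wavelet, the Haar transform; the vibration samples below are made up for illustration, and the wavelet family and decomposition depth actually used in the thesis are not reproduced here:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail) coefficient lists;
    signal length is assumed even."""
    s = 1 / math.sqrt(2)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

# Energy of the detail (high-frequency) band is a common fault-indicator feature.
vib = [0.0, 1.0, 0.0, -1.0, 0.5, 0.5, 2.0, 2.0]   # toy vibration samples
a, d = haar_dwt(vib)
detail_energy = sum(x * x for x in d)
```

In practice the decomposition is applied recursively to the approximation band, and band energies or statistics form the feature vector passed to the reduction stage.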
13

Sequencing Behavior in an Intelligent Pro-active Co-Driver System

January 2020 (has links)
abstract: Driving is the coordinated operation of mind and body for the movement of a vehicle such as a car or a bus. Although driving is an everyday activity for many people, safety remains an issue, and driver distraction is becoming a critical safety problem. Speeding, drunk driving, and distracted driving are the three leading factors in fatal car crashes. Distraction, defined here as excessive workload and limited attention, is the main paradigm that guides this research area. Driver behavior analysis can be used to address the distraction problem and provide an intelligent adaptive agent that works closely with the driver, far beyond traditional algorithmic computational models. A variety of machine learning approaches have been proposed to estimate or predict a driver's fatigue level using car data, driver status, or a combination of the two. Three important features of intelligence and cognition are perception, attention, and sensory memory. In this thesis, I focused on memory and attention as essential parts of highly intelligent systems. Without memory, systems show only limited intelligence, since their responses are based exclusively on spontaneous decisions without considering the effect of previous events. I propose a memory-based sequence model to predict driver behavior and distraction level using a neural network. The work started with a large-scale experiment to collect data and build an artificial-intelligence-friendly dataset. The data were then used to train a deep neural network to estimate driver behavior, with a focus on memory: a Long Short-Term Memory (LSTM) network increases the level of intelligence in two dimensions, forgiveness of minor glitches and accumulation of anomalous behavior. I reduced the model error and computational expense by adding an attention mechanism on top of the LSTM models. This system can be generalized to build and train highly intelligent agents in other domains.
/ Dissertation/Thesis / Doctoral Dissertation Computer Engineering 2020
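The attention mechanism added on top of the LSTM above can be sketched as a soft weighting of the hidden states across time; in this numpy sketch the scoring vector `w` and the random stand-in hidden states are hypothetical, not the thesis's trained model:

```python
import numpy as np

def attention_pool(H, w):
    """Attention pooling over a sequence of hidden states.
    H: (T, d) hidden states from a recurrent encoder,
    w: (d,) learned scoring vector.
    Returns the attention-weighted context vector of shape (d,)."""
    scores = H @ w                         # one scalar score per time step
    scores = scores - scores.max()         # subtract max for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # softmax weights, sum to 1
    return alpha @ H                       # weighted sum of hidden states

rng = np.random.default_rng(0)
H = rng.standard_normal((10, 8))           # e.g. 10 time steps of LSTM output
w = rng.standard_normal(8)
context = attention_pool(H, w)             # fed to the downstream classifier
```

The pooled context vector lets the classifier focus on the most informative time steps instead of relying only on the final hidden state, which is one way attention can reduce model error.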
14

Detekce ohně a kouře z obrazového signálu / Image based smoke and fire detection

Ďuriš, Denis January 2020 (has links)
This diploma thesis deals with the detection of fire and smoke from an image signal. The approach combines convolutional and recurrent neural networks; the machine learning models created in this work contain inception modules and long short-term memory blocks. The research part describes selected machine learning models used to address the problem of fire detection in static and dynamic image data. As part of the solution, a dataset containing videos and still images was created and used to train the designed neural networks. The results of this approach are evaluated in the conclusion.
15

Forecasting Atmospheric Turbulence Conditions From Prior Environmental Parameters Using Artificial Neural Networks: An Ensemble Study

Grose, Mitchell 18 May 2021 (has links)
No description available.
16

Omni SCADA intrusion detection

Gao, Jun 11 May 2020 (has links)
We investigate a deep-learning-based omni intrusion detection system (IDS) for supervisory control and data acquisition (SCADA) networks, capable of detecting both temporally uncorrelated and correlated attacks. Among the IDSs developed in this work, a feedforward neural network (FNN) detects temporally uncorrelated attacks at an F1 of 99.967±0.005% but correlated attacks at an F1 as low as 58±2%. In contrast, long short-term memory (LSTM) detects correlated attacks at 99.56±0.01% and uncorrelated attacks at 99.3±0.1%. Combining LSTM and FNN through an ensemble approach further improves the IDS performance, with an F1 of 99.68±0.04% regardless of the temporal correlations among the data packets. / Graduate
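The F1 figures quoted above combine detection precision and recall into a single score; how an F1 value is derived from confusion-matrix counts can be sketched as follows (the counts are illustrative, not taken from the paper):

```python
def f1_score(tp, fp, fn):
    """F1 is the harmonic mean of precision and recall,
    computed from true positives, false positives, false negatives."""
    precision = tp / (tp + fp)   # fraction of alarms that are real attacks
    recall = tp / (tp + fn)      # fraction of real attacks that are caught
    return 2 * precision * recall / (precision + recall)

# illustrative counts: 9 attacks caught, 1 false alarm, 1 missed attack
balanced = f1_score(9, 1, 1)     # precision = recall = 0.9, so F1 = 0.9
```

Because the harmonic mean is dominated by the smaller of the two terms, a detector that misses many correlated attacks (low recall) scores poorly on F1 even if its precision is high, which is what the FNN's 58% figure reflects.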
17

Machine Learning, Game Theory Algorithms, and Medium Access Protocols for 5G and Internet-of-Thing (IoT) Networks

Elkourdi, Mohamed 25 March 2019 (has links)
In the first part of this dissertation, a novel medium access protocol for Internet of Things (IoT) networks is introduced. The IoT, the network of physical devices embedded with sensors, actuators, and connectivity, is being accelerated into the mainstream by the emergence of 5G wireless networking. This work presents an uncoordinated non-orthogonal random-access protocol, an enhancement of the recently introduced slotted ALOHA-NOMA (SAN) protocol, that provides high throughput while being matched to the low-complexity requirements and sporadic traffic pattern of IoT devices. Under ideal conditions it has been shown that SAN, using power-domain orthogonality, can significantly increase throughput by using successive interference cancellation (SIC) to enable correct reception of multiple simultaneously transmitted signals. For this ideal performance, the enhanced SAN receiver adaptively learns the number of active devices (which is not known a priori) using a form of multi-hypothesis testing. For small numbers of simultaneous transmissions, it is shown that there can be a substantial throughput gain of 5.5 dB relative to slotted ALOHA (SA) for a transmission probability of 0.07 and up to 3 active transmitters. As a further enhancement of SAN, the SAN with beamforming (BF-SAN) protocol is proposed. BF-SAN uses beamforming to significantly improve the throughput, to 1.31 compared with 0.36 in conventional slotted ALOHA, when 6 active IoT devices can be successfully separated using 2×2 MIMO and a SIC receiver with 3 optimum power levels. The simulation results further show that the proposed protocol achieves higher throughput than SAN with a lower average channel access delay. In the second part of this dissertation a novel machine learning (ML) approach is applied to proactive mobility management in 5G Virtual Cell (VC) wireless networks.
Providing seamless mobility and a uniform user experience, independent of location, is an important challenge for 5G wireless networks. The combination of Coordinated Multipoint (CoMP) networks and Virtual Cells (VCs) is expected to play an important role in achieving high throughput independent of the mobile's location, by mitigating inter-cell interference and enhancing cell-edge user throughput. User-specific VCs distinguish the physical cell from a broader area where the user can roam without the need for handoff and may communicate with any Base Station (BS) in the VC area. However, this requires rapid decision-making for the formation of VCs. In this work, a novel algorithm based on a form of Recurrent Neural Network (RNN) called the Gated Recurrent Unit (GRU) is used to predict the triggering condition for forming VCs by enabling CoMP transmission. Simulation results show that, based on the sequences of Received Signal Strength (RSS) values of different mobile nodes used for training the RNN, the future RSS values from the closest three BSs can be accurately predicted using the GRU, which is then used for making proactive decisions on enabling CoMP transmission and forming VCs. Finally, the work in the last part of this dissertation applies Bayesian games to cell selection / user association in 5G heterogeneous networks to achieve the 5G goal of low-latency communication. Expanding the cellular ecosystem to support an immense number of connected devices and creating a platform that accommodates a wide range of emerging services of different traffic types and Quality of Service (QoS) metrics are among 5G's headline features. One of the key 5G performance metrics is ultra-low latency, to enable new delay-sensitive use cases. Some network architectural amendments are proposed to achieve the 5G ultra-low-latency objective.
With these paradigm shifts in system architecture, it is of cardinal importance to rethink the cell selection / user association process to achieve substantial improvement in system performance over the conventional maximum signal-to-interference-plus-noise ratio (Max-SINR) and Cell Range Expansion (CRE) algorithms employed in Long Term Evolution-Advanced (LTE-Advanced). In this work, a novel Bayesian cell selection / user association algorithm, incorporating the access nodes' capabilities and the user equipment (UE) traffic type, is proposed in order to maximize the probability of proper association and consequently enhance the system performance in terms of achieved latency. Simulation results show that the Bayesian game approach attains the 5G low end-to-end latency target with a probability exceeding 80%.
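The throughput advantage of power-domain NOMA with SIC over plain slotted ALOHA can be illustrated with an idealized Monte-Carlo sketch. Here SIC is assumed to succeed exactly when all active devices happen to pick distinct power levels; the device count, transmission probability, and level count are illustrative, not the dissertation's settings:

```python
import random

def throughput(n_dev, p, k_levels, slots=20000, seed=1):
    """Monte-Carlo throughput (packets/slot) of an idealized slotted
    ALOHA-NOMA system: up to k_levels simultaneous packets are
    recovered by SIC when every active device independently picks a
    distinct power level; any other collision loses all packets."""
    rng = random.Random(seed)
    delivered = 0
    for _ in range(slots):
        active = sum(rng.random() < p for _ in range(n_dev))
        if active == 0 or active > k_levels:
            continue                         # idle slot or unresolvable collision
        levels = [rng.randrange(k_levels) for _ in range(active)]
        if len(set(levels)) == active:       # all levels distinct -> SIC decodes all
            delivered += active
    return delivered / slots

san = throughput(30, 0.05, 3)   # 3 power levels, SIC receiver
sa = throughput(30, 0.05, 1)    # k=1 reduces to plain slotted ALOHA
```

With these toy parameters the NOMA variant delivers noticeably more packets per slot than plain slotted ALOHA, since slots with two or three simultaneous transmissions are often recoverable instead of always lost.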
18

Methods and Algorithms to Enhance the Security, Increase the Throughput, and Decrease the Synchronization Delay in 5G Networks

Mazin, Asim 11 March 2019 (has links)
This dissertation presents several novel approaches to enhance the security, increase the throughput, and decrease the synchronization delay in 5G networks. First, a new physical layer paradigm for secure key exchange between legitimate communication parties in the presence of a passive eavesdropper was proposed. The proposed method ensures secrecy via pre-equalization and guarantees reliable communication using Low-Density Parity Check (LDPC) codes. One of the main findings of this research is to demonstrate through simulations that the diversity order of the eavesdropper is zero unless the main and eavesdropping channels are highly correlated, while the probability of a key mismatch between the legitimate transmitter and receiver remains low. Simulation results demonstrate that the proposed approach achieves a very low secret-key mismatch rate between the legitimate users while ensuring a very high error probability at the eavesdropper. Next, a novel medium access control (MAC) protocol, Slotted Aloha-NOMA (SAN), directed at Machine-to-Machine (M2M) communication applications in 5G Internet of Things (IoT) networks, was proposed. SAN is matched to the low-complexity implementation and sporadic traffic requirements of M2M applications. Substantial throughput gains are achieved by enhancing Slotted Aloha with non-orthogonal multiple access (NOMA) and a Successive Interference Cancellation (SIC) receiver that can simultaneously detect multiple transmitted signals using power-domain multiplexing. The gateway SAN receiver adaptively learns the number of active devices using a form of multi-hypothesis testing, and a novel procedure enables the transmitters to independently select distinct power levels.
Simulation results show that the throughput of SAN exceeds that of conventional Slotted Aloha by 80% and that of CSMA/CA by 20% at a transmission probability of 0.03, with a slightly increased average delay owing to the novel power level selection mechanism. Finally, beam sweeping pattern prediction based on the dynamic distribution of user traffic, using a form of recurrent neural network (RNN) called the Gated Recurrent Unit (GRU), is proposed. The spatial distribution of users is inferred from data in call detail records (CDRs) of the cellular network. Results show that the users' spatial distribution and their approximate location (direction) can be accurately predicted from CDR data using the GRU, which is then used to calculate the sweeping pattern in the angular domain during cell search. Furthermore, the proposed data-driven beam sweeping pattern prediction was compared with random starting point sweeping (RSP) to measure the synchronization delay distribution. Results demonstrate that data-driven beam sweeping pattern prediction enables the UE to initially access the gNB in approximately 0.41 of the complete scanning cycle required by the RSP scheme, with probability 0.9, in a sparsely distributed UE scenario.
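The GRU named above (and used for VC-formation prediction in the previous dissertation) is a gated recurrent cell; a single step can be sketched in numpy as below, with small random weights standing in for trained parameters:

```python
import numpy as np

def gru_cell(x, h, p):
    """One step of a Gated Recurrent Unit.
    x: input (n,), h: previous hidden state (d,),
    p: dict of weights Wz/Wr/Wh (d,n), Uz/Ur/Uh (d,d), biases bz/br/bh (d,)."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(p["Wz"] @ x + p["Uz"] @ h + p["bz"])          # update gate
    r = sig(p["Wr"] @ x + p["Ur"] @ h + p["br"])          # reset gate
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h) + p["bh"])  # candidate
    return (1 - z) * h + z * h_tilde                      # gated interpolation

rng = np.random.default_rng(0)
n, d = 4, 3                                 # toy input and hidden sizes
p = {k: rng.standard_normal((d, n)) * 0.1 for k in ("Wz", "Wr", "Wh")}
p.update({k: rng.standard_normal((d, d)) * 0.1 for k in ("Uz", "Ur", "Uh")})
p.update({k: np.zeros(d) for k in ("bz", "br", "bh")})

h = np.zeros(d)
for x in rng.standard_normal((5, n)):       # run over a 5-step input sequence
    h = gru_cell(x, h, p)                   # final h summarizes the sequence
```

The update gate `z` lets the cell carry information across many time steps, which is what makes GRUs suitable for sequence prediction tasks such as RSS or CDR time series.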
19

Comparing LSTM and GRU for Multiclass Sentiment Analysis of Movie Reviews.

Sarika, Pawan Kumar January 2020 (has links)
Today, we live in a data-driven world. Due to the surge in data generation, there is a need for efficient and accurate techniques for analyzing data. One such kind of data is the text reviews given for movies. Rather than classifying the reviews as positive or negative, we classify the sentiment of the reviews on a scale of one to ten. In doing so, we compare two recurrent neural network algorithms: Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU). The main objective of this study is to compare the accuracies of LSTM and GRU models. To train the models, we collected data from two different sources. To filter the data, we used Porter stemming and stop-word removal. We coupled the LSTM and GRU with convolutional neural networks to increase performance. After conducting experiments, we observed that LSTM performed better at predicting the boundary values, whereas GRU predicted every class equally well. Overall, GRU was able to predict the multiclass text data of movie reviews slightly better than LSTM, though GRU was computationally expensive compared to LSTM.
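The filtering step described above can be sketched with a simple stop-word pass; the tiny stop-word list and regex tokenizer here are stand-ins, and the Porter-stemming stage is omitted for brevity:

```python
import re

# A tiny illustrative stop-word list; real pipelines use a much larger one.
STOP_WORDS = {"the", "a", "an", "is", "was", "it", "of", "and", "to", "in"}

def preprocess(review):
    """Lowercase, tokenize, and drop stop words: a crude stand-in for
    the stemming + stop-word pipeline described in the abstract."""
    tokens = re.findall(r"[a-z']+", review.lower())
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("The plot was thin, but the acting is a delight to watch."))
# -> ['plot', 'thin', 'but', 'acting', 'delight', 'watch']
```

Removing high-frequency function words shrinks the vocabulary the embedding layer must cover, which reduces both training time and noise in the sentiment signal.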
20

TRAJECTORY PATTERN IDENTIFICATION AND CLASSIFICATION FOR ARRIVALS IN VECTORED AIRSPACE

Chuhao Deng (11184909) 26 July 2021 (has links)
As the demand for and complexity of air traffic increase, it becomes crucial to maintain the safety and efficiency of operations in airspaces, which, however, can increase the workload of Air Traffic Controllers (ATCs) and delay their decision-making. Although terminal airspaces are highly structured with flight procedures such as standard terminal arrival routes and standard instrument departures, aircraft are frequently instructed by ATCs to deviate from these procedures to accommodate given traffic situations, e.g., maintaining separation from neighboring aircraft or taking shortcuts to meet scheduling requirements. Such deviation, called vectoring, can further increase delays and ATC workload. This thesis focuses on developing a framework for trajectory pattern identification and classification that can provide ATCs in vectored airspace with real-time information on which vectoring pattern a new incoming aircraft is likely to take, so that such delays and workload can be reduced. This thesis consists of two parts: trajectory pattern identification and trajectory pattern classification.

In the first part, a framework for trajectory pattern identification is proposed based on agglomerative hierarchical clustering, with dynamic time warping and the squared Euclidean distance as the dissimilarity measure between trajectories. Binary trees built from the fixes provided in aeronautical information publication data are proposed in order to categorize the trajectory patterns. In the second part, multiple recurrent-neural-network-based binary classification models are trained and used at the nodes of the binary trees to compute the possible fixes an incoming aircraft could take. The trajectory pattern identification framework and the classification models are illustrated with automatic dependent surveillance-broadcast data recorded between January and December 2019 at Incheon International Airport, South Korea.
