31 |
Task Offloading and Resource Allocation Using Deep Reinforcement Learning
Zhang, Kaiyi, 01 December 2020 (has links)
Rapid urbanization poses huge challenges to people's daily lives, such as traffic congestion, environmental pollution, and public safety. Mobile Internet of Things (MIoT) applications serving smart cities promise innovative and enhanced public services such as air pollution monitoring, enhanced road safety, and city resource metering and management. These applications rely on a number of energy-constrained MIoT units (MUs) (e.g., robots and drones) to continuously sense, capture, and process data and images from their environments to produce immediate adaptive actions (e.g., triggering alarms, controlling machinery, and communicating with citizens). In this thesis, we consider a scenario where a battery-constrained MU executes a number of time-sensitive data processing tasks whose arrival times and sizes are stochastic in nature. These tasks can be executed locally on the device, or offloaded to one of the nearby edge servers or to a cloud data center within a mobile edge computing (MEC) infrastructure. We first formulate the problem of making optimal offloading decisions that minimize the cost of current and future tasks as a constrained Markov decision process (CMDP) that accounts for the MU battery constraint and the limited resources reserved on the MEC infrastructure by the application providers. We then relax the CMDP into a regular Markov decision process (MDP) using Lagrangian primal-dual optimization, and develop an advantage actor-critic (A2C) algorithm, a model-free deep reinforcement learning (DRL) method, to train the MU to solve the relaxed problem. The training of the MU can be carried out once to learn optimal offloading policies that are repeatedly employed as long as there are no large changes in the MU environment. Simulation results show that the proposed algorithm achieves a performance improvement over offloading decision schemes that optimize only instantaneous costs.
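The Lagrangian relaxation described in the abstract can be sketched in a few lines. This is a toy illustration, not the thesis's actual algorithm: the action set, the cost/drain numbers, the battery budget, and the greedy policy standing in for the learned A2C actor are all hypothetical. The idea shown is only that the battery constraint is folded into the cost via a multiplier, which is then adjusted by dual ascent.

```python
def lagrangian_reward(task_cost, battery_drain, lam):
    """Relaxed per-task cost: execution cost plus penalized battery use."""
    return task_cost + lam * battery_drain

def dual_ascent_step(lam, avg_drain, budget, lr=0.01):
    """Dual update: raise the multiplier while battery drain exceeds the budget."""
    return max(0.0, lam + lr * (avg_drain - budget))

# Hypothetical action set: (execution cost, battery drain) per offloading choice.
ACTIONS = {"local": (1.0, 0.8), "edge": (0.6, 0.3), "cloud": (0.9, 0.1)}

lam, budget = 0.0, 0.25
for _ in range(200):
    # Greedy policy w.r.t. the relaxed cost (stands in for the A2C actor).
    action = min(ACTIONS, key=lambda a: lagrangian_reward(*ACTIONS[a], lam))
    drain = ACTIONS[action][1]
    lam = dual_ascent_step(lam, drain, budget)

print(action, round(lam, 3))
```

With these toy numbers the relaxed cost favors edge offloading, and since its drain (0.3) exceeds the budget (0.25), the multiplier keeps rising, increasingly penalizing battery-hungry choices over time.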
|
32 |
Novel computational methods for promoter identification and analysis
Umarov, Ramzan, 02 March 2020 (has links)
Promoters are key regions involved in the differential transcription regulation of protein-coding and RNA genes. The gene-specific architecture of promoter sequences makes it extremely difficult to devise a general strategy for their computational identification. Accurate prediction of promoters is fundamental for interpreting gene expression patterns, and for constructing and understanding genetic regulatory networks. In the last decade, the genomes of many organisms have been sequenced and their gene content mostly identified. Promoters and transcription start sites (TSS), however, are still left largely undetermined, and efficient software able to accurately predict promoters in newly sequenced genomes is not yet available in the public domain. While there have been many attempts to develop computational promoter identification methods, reliable tools for analyzing long genomic sequences are still lacking.
In this dissertation, I present the methods I have developed for the prediction of promoters in different organisms. The first two methods, TSSPlant and PromCNN, achieved state-of-the-art performance in discriminating promoter from non-promoter sequences for plant and eukaryotic promoters, respectively. For TSSPlant, a large number of features were crafted and evaluated to train an optimal classifier. PromCNN was built using a deep learning approach that extracts features from the data automatically. The trained model demonstrated the ability of a deep learning approach to grasp complex promoter sequence characteristics.
For the latest method, DeeReCT-PromID, I focus on predicting the exact positions of TSSs inside eukaryotic genomic sequences, testing every possible location. This is a more difficult task, requiring not only an accurate classifier but also an appropriate selection of unique predictions among multiple overlapping high-scoring genomic segments. The new method significantly outperforms previous promoter prediction programs by considerably reducing the number of false positive predictions. Specifically, to reduce the false positive rate, the models are adaptively and iteratively trained by changing the distribution of samples in the training set based on the false positive errors made in the previous iteration.
The new methods are used to gain insights into the design principles of core promoters. Using model analysis, I have identified the most important core promoter elements and their effect on promoter activity. Furthermore, the importance of each position inside the core promoter was analyzed and validated using a large single-nucleotide polymorphism data set. I have developed a novel general approach to detect long-range interactions in the input of a deep learning model, which was used to find related positions inside the promoter region. The final model was applied to the genomes of different species without a significant drop in performance, demonstrating the high generality of the developed method.
|
33 |
Deriving semantic objects from the structured web
Oita, Marilena, 29 October 2012 (has links)
This thesis focuses on the extraction and analysis of Web data objects, investigated from different points of view: temporal, structural, and semantic. We first survey strategies and best practices for deriving the temporal aspects of Web pages, and present in more depth an approach that uses statistics over Web feeds for this purpose.
Next, in the context of Web pages dynamically generated by content management systems, we present two keyword-based techniques that extract articles from such pages. Keywords, automatically acquired, guide the process of object identification, either at the level of a single Web page (SIGFEED) or across different pages sharing the same template (FOREST). Finally, in the context of the deep Web, we present a generic framework that aims at discovering the semantic model of a Web object (here, a data record). It first uses FOREST to extract the objects, then represents the implicit rdf:type similarities between the object attributes and the entity of the form as relationships that, together with the instances extracted from the objects, form a labeled graph. This graph is further aligned to an ontology such as YAGO to discover the unknown types and relations.
|
34 |
Privacy-Preserving Facial Recognition Using Biometric-Capsules
Phillips, Tyler S., 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In recent years, developers have used the proliferation of biometric sensors in smart devices, along with advances in deep learning, to implement an array of biometrics-based recognition systems. Though these systems demonstrate remarkable performance and have seen wide acceptance, they present unique and pressing security and privacy concerns. One proposed method that addresses these concerns is the elegant, fusion-based Biometric-Capsule (BC) scheme. The BC scheme is provably secure, privacy-preserving, cancellable, and interoperable in its secure feature fusion design.
In this work, we demonstrate that the BC scheme is uniquely fit to secure state-of-the-art facial verification, authentication, and identification systems. We compare the performance of unsecured, underlying biometric systems to that of the BC-embedded systems in order to directly demonstrate the minimal effect of the privacy-preserving BC scheme on underlying system performance. Notably, we demonstrate that, when seamlessly embedded into state-of-the-art FaceNet and ArcFace verification systems, which achieve accuracies of 97.18% and 99.75% on the benchmark LFW dataset, the BC-embedded systems achieve accuracies of 95.13% and 99.13%, respectively. Furthermore, we demonstrate that the BC scheme outperforms, or performs as well as, several other proposed secure biometric methods.
|
35 |
A 7-year retrospective review of the microbiology of deep neck infections in adults at Chris Hani Baragwanath Academic Hospital
Ahmed, Sumaya, January 2018 (has links)
A Dissertation submitted to the Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Medicine in Otorhinolaryngology, Johannesburg, 2018 / This study is a seven-year (01/07/08 to 30/06/15) retrospective review of the microbiology of deep neck infections in 52 adult patients at Chris Hani Baragwanath Academic Hospital. Micro-organisms isolated from patients with deep neck infections were analysed, including their antibiotic susceptibility patterns. The effectiveness of empiric use of amoxicillin-clavulanic acid against commonly identified microbes, and recommended alternative antibiotic use, were reviewed. The register records of 70 microscopy, culture, and antibiotic sensitivity results of specimens taken intraoperatively from patients with deep neck infections who underwent surgical intervention were analysed. Aerobic Gram-negative bacilli and Streptococcus species, and anaerobic Prevotella, were the most frequently isolated micro-organisms. Resistance to amoxicillin-clavulanic acid was reported in 15% (n = 8) of patients with deep neck infections. Hence, the effectiveness of empiric amoxicillin-clavulanic acid against the microbes commonly involved in deep neck infections in adults at Chris Hani Baragwanath Academic Hospital can be neither proved nor disproved, and it thus remains a recommended option; alternative empiric antibiotics likewise cannot be recommended. Further periodic surveillance of microbial profiles and associated antimicrobial susceptibility results, in larger samples of patients with deep neck infections and using standardized protocols, is suggested. / XL2018
|
36 |
Applications of Deep Learning to Video Enhancement
Shi, Zhihao, January 2022 (has links)
Deep learning is usually built upon artificial neural networks, whose earliest mathematical model dates back to 1943, but poor computational capability restricted their development at that time. With advances in computer architecture and chip design, deep learning gained sufficient computational power and has revolutionized many areas of computer vision. As a fundamental research area of computer vision, video enhancement often serves as the first step of many modern vision systems and facilitates numerous downstream vision tasks. This thesis provides a comprehensive study of video enhancement, especially video frame interpolation and space-time video super-resolution.
For video frame interpolation, two novel methods, named GDConvNet and VFIT, are proposed. In GDConvNet, a novel mechanism named generalized deformable convolution is introduced to overcome the inaccurate flow estimation of flow-based methods and the rigid kernel shapes of kernel-based methods. This mechanism can effectively learn motion information in a data-driven manner and freely select sampling points in space-time. GDConvNet, built upon this mechanism, is shown to achieve state-of-the-art performance. As for VFIT, the concept of local attention is first introduced to video interpolation, and a novel space-time separated, window-based self-attention scheme is devised, which not only saves computational cost but also acts as a regularization term to improve performance.
Based on the new scheme, VFIT is presented as the first Transformer-based video frame interpolation framework. In addition, a multi-scale frame synthesis scheme is developed to fully realize the potential of Transformers. Extensive experiments on a variety of benchmark datasets demonstrate the superiority and reliability of VFIT.
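The space-time separation behind such a window-attention scheme can be sketched as follows. This is a minimal NumPy illustration, not VFIT itself: it uses Q = K = V with no learned projections, positional bias, multi-head structure, or shifted windows, and all shapes are hypothetical. It shows only the factorization: attention is first computed among the pixels of each small spatial window within a frame, then along the time axis at each spatial location, which is far cheaper than joint attention over all space-time tokens.

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(x):
    """Scaled dot-product self-attention with Q = K = V = x (no projections)."""
    w = softmax(x @ x.swapaxes(-1, -2) / np.sqrt(x.shape[-1]))
    return w @ x

def spacetime_separated_attention(x, win=2):
    """x: (T, H, W, C) video features. Stage 1 attends within win x win
    spatial windows of each frame; stage 2 attends along time per pixel."""
    t, h, w, c = x.shape
    # spatial stage: tokens are the win*win pixels of each window
    xs = x.reshape(t, h // win, win, w // win, win, c)
    xs = xs.transpose(0, 1, 3, 2, 4, 5).reshape(t, h // win, w // win, win * win, c)
    xs = attend(xs)
    xs = xs.reshape(t, h // win, w // win, win, win, c)
    xs = xs.transpose(0, 1, 3, 2, 4, 5).reshape(t, h, w, c)
    # temporal stage: tokens are the T frames at each spatial location
    xt = attend(xs.transpose(1, 2, 0, 3))       # (H, W, T, C)
    return xt.transpose(2, 0, 1, 3)             # back to (T, H, W, C)

x = np.random.randn(3, 4, 4, 8)
y = spacetime_separated_attention(x)
print(y.shape)
```

Because each stage attends over only win*win or T tokens instead of T*H*W, the cost grows linearly in the number of windows, which is the saving the scheme exploits.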
For space-time video super-resolution, a novel unconstrained space-time video super-resolution network is proposed to address the common shortcomings of existing methods, which either fail to explore the intrinsic relationship between temporal and spatial information or lack flexibility in the choice of final temporal/spatial resolution. To this end, several new ideas are introduced, such as the integration of multi-level representations and generalized pixshuffle. Various experiments validate the proposed method in terms of its complete freedom in choosing the output resolution, as well as its superior performance over state-of-the-art methods. / Thesis / Doctor of Philosophy (PhD)
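The "generalized pixshuffle" mentioned above presumably extends the standard pixel-shuffle upsampling operation; a minimal sketch of that standard operation (on which such a generalization would build) follows. The shapes and values are illustrative only.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r^2, H, W) -> (C, H*r, W*r): each group of r^2 channels
    is scattered into an r x r spatial block, trading depth for resolution."""
    c2, h, w = x.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)        # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)      # interleave: (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16).reshape(4, 2, 2)      # 4 channels, 2x2 spatial
y = pixel_shuffle(x, 2)
print(y.shape)                          # (1, 4, 4)
```

Each 2x2 output block is assembled from the four input channels at the corresponding low-resolution pixel, so a network can produce a high-resolution frame simply by predicting extra channels.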
|
37 |
INTELLIGENT RESOURCE PROVISIONING FOR NEXT-GENERATION CELLULAR NETWORKS
Yu, Lixing, 07 September 2020 (has links)
No description available.
|
38 |
Non-competitive and competitive deep learning for imaging applications
Zhou, Xiao, 05 July 2022 (has links)
While generative adversarial networks (GANs) have been widely applied in various settings, such competitive deep learning frameworks have been less popular in medical image processing, and even less widely applied to high-resolution data, due to issues related to their stability. In this dissertation, we examine optimal ways of modeling a generalizable competitive framework that alleviates these inherent stability issues while still meeting additional objectives, such as achieving high prediction accuracy on a classification task or satisfying other performance metrics on high-dimensional data sets.
The first part of the thesis focuses on exploring better network performance in a non-competitive setting with a closed-form solution. (1) We introduced the Pyramid Encoder in seq2seq models and observed a significant increase in computational and memory efficiency while achieving a repair rate similar to that of non-pyramid counterparts. (2) We proposed a mixed spatio-temporal neural network for real-time prediction of crimes, establishing the feasibility of a convolutional neural network (CNN) in the spatio-temporal domain. (3) We developed and validated an interpretable deep learning framework for Alzheimer's disease (AD) classification as a clinically adaptable strategy to generate neuroimaging signatures for AD diagnosis and as a generalizable approach for linking deep learning to pathophysiological processes in human disease. (4) We designed and validated an end-to-end survival model for predicting progression from mild cognitive impairment (MCI) to AD, and identified regions salient to predicting that progression. (5) Additionally, we applied a supervised learning framework to Parrondo's paradox that maps playing history directly to the decision space, and learned to combine two individually losing games into one with a positive expectation.
The second part focuses on the design and analysis of neural models in a competitive setting without a closed-form solution. We extended the models from tackling a single objective to multiple tasks, while also moving from two-dimensional images to three-dimensional magnetic resonance imaging (MRI) scans of the human brain. (1) We experimented with domain-specific inpainting using a concurrently pre-trained GAN to recover noisy or cropped images. (2) We developed a GAN model to enhance MRI-driven AD classification performance using generative adversarial learning. (3) Finally, we proposed a competitive framework that recovers 3D medical data from 2D slices while retaining disease-related information. / 2023-07-04T00:00:00Z
|
39 |
Proposing a Three-Stage Model to Quantify Bradykinesia on a Symptom Severity Level Using Deep Learning
Jaber, R., Qahwaji, Rami S.R., Buckley, John, Abd-Alhameed, Raed, 23 March 2022 (has links)
No / Typically characterised as a movement disorder, bradykinesia can be represented according to the degree of motor impairment. The assessment criteria for Parkinson's disease (PD) are therefore well defined due to its symptomatic nature. Diagnosing and monitoring the progression of bradykinesia is currently heavily reliant on the clinician's visual judgment. One of the most common forms of examining bradykinesia involves rapid finger tapping, which aims to determine the patient's ability to initiate and sustain movement effectively. This consists of the patient repeatedly tapping their index finger and thumb together. An object detection algorithm, YOLO, was trained to track the separation between the index finger and thumb. Bounding boxes (BBs) were used to determine their relative positions on a frame-to-frame basis to produce a time-series signal. Key movement characteristics were extracted to determine the regularity of finger-tapping movement in Parkinson's patients and controls.
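The bounding-box-to-signal step can be sketched as follows. This is a hedged illustration, not the paper's pipeline: the detections are simulated (YOLO itself is not reproduced), and the box coordinates, contact threshold, and tap-counting rule are all hypothetical. It shows only how per-frame thumb and index-finger boxes become a separation signal from which simple tapping features can be read off.

```python
import math

def centre(box):
    """Box as (x1, y1, x2, y2) in pixels."""
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def separation_signal(thumb_boxes, finger_boxes):
    """Per-frame Euclidean distance between thumb and index-finger centres."""
    return [math.dist(centre(t), centre(f))
            for t, f in zip(thumb_boxes, finger_boxes)]

def tap_count(signal, low):
    """Count closures: downward crossings of a near-contact threshold."""
    return sum(1 for a, b in zip(signal, signal[1:]) if a > low >= b)

# Simulated detections: the finger oscillates toward and away from a fixed thumb.
thumb = [(0, 0, 10, 10)] * 8
finger = [(d, 0, d + 10, 10) for d in (40, 20, 2, 25, 45, 18, 3, 30)]

sig = separation_signal(thumb, finger)
print([round(s) for s in sig], tap_count(sig, low=10.0))
```

From the same signal one could also extract amplitude decrement or inter-tap intervals, which are the kinds of regularity features relevant to grading bradykinesia severity.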
|
40 |
On Mixup Training of Neural Networks
Liu, Zixuan, 14 December 2022 (has links)
Deep neural networks are powerful machine learning tools. Despite their capability to fit the training data, they tend to perform undesirably on unseen data. To improve the generalization of deep neural networks, a variety of regularization techniques have been proposed. This thesis studies a simple yet effective regularization scheme, Mixup, which was proposed recently. Briefly, Mixup creates synthetic examples by linearly interpolating random pairs of real examples and uses the synthetic examples for training. Although Mixup has been empirically shown to be effective on various classification tasks for neural network models, its working mechanism and possible limitations are not yet well understood.
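The interpolation at the heart of Mixup can be sketched in a few lines. This is a minimal illustration with plain Python lists; real implementations operate on tensor batches, and the alpha value and toy examples here are hypothetical. The mixing weight is drawn from a Beta(alpha, alpha) distribution, and the same weight blends both the inputs and the one-hot labels.

```python
import random

def mixup_pair(x1, y1, x2, y2, alpha=0.2):
    """Blend two (input, one-hot label) pairs with lam ~ Beta(alpha, alpha)."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

# Two toy examples from opposite classes.
x, y, lam = mixup_pair([1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0])
print(lam, x, y)   # input and label are blended with the same weight
```

With a small alpha the Beta distribution concentrates near 0 and 1, so most synthetic examples stay close to a real example, which is one design choice the "cautious mixing" strategy discussed below makes explicit control over.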
One potential problem with Mixup is known as manifold intrusion, in which the synthetic examples "intrude" on the data manifolds of the real data, resulting in conflicts between the synthetic labels and the ground-truth labels of the synthetic examples. The first part of this thesis investigates strategies for resolving the manifold intrusion problem. We focus on two strategies. The first, which we call "relabelling", attempts to find better labels for the synthetic data; the second, which we call "cautious mixing", carefully selects the interpolating parameters used to generate the synthetic examples. Through extensive experiments over several design choices, we observe that the "cautious mixing" strategy appears to perform better.
The second part of this thesis reports a previously unobserved phenomenon in Mixup training: on a number of standard datasets, the performance of Mixup-trained models starts to decay after training for a large number of epochs, giving rise to a U-shaped generalization curve. This behavior is further aggravated when the size of the original dataset is reduced. To help explain this behavior of Mixup, we show theoretically that Mixup training may introduce undesired data-dependent label noise into the synthetic data. By analyzing a least-squares regression problem with a random feature model, we explain why noisy labels cause the U-shaped curve: Mixup improves generalization by fitting the clean patterns in the early training stage, but as training progresses, the model overfits the noise in the synthetic data. Extensive experiments on a variety of benchmark datasets validate this explanation.
|