71

Forensic computing : a deterministic model for validation and verification through an ontological examination of forensic functions and processes

Beckett, Jason January 2010 (has links)
This dissertation contextualises the forensic computing domain in terms of the validation of tools and processes. It explores the current state of forensic computing, comparing it to the traditional forensic sciences. The research then develops a classification system for the discipline's functions to establish the extensible base on which a validation system is developed. / Thesis (PhD)--University of South Australia, 2010
72

The Design of an Oncology Knowledge Base from an Online Health Forum

Omar Ramadan (12446526) 22 April 2022 (has links)
Knowledge base completion is an important task that allows scientists to reason over knowledge bases and discover new facts. In this thesis, a patient-centric knowledge base is designed and constructed using medical entities and relations extracted from the health forum r/cancer. The knowledge base stores information in binary relation triplets. It is enhanced with an is-a relation that is able to represent the hierarchical relationship between different medical entities. An enhanced Neural Tensor Network that utilizes the frequency of occurrence of relation triplets in the dataset is then developed to infer new facts from the enhanced knowledge base. The results show that when the enhanced inference model uses the enhanced knowledge base, a higher accuracy (73.2%) and recall@10 (35.4%) are obtained. In addition, this thesis describes a methodology for knowledge base and associated inference model design that can be applied to other chronic diseases.
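The abstract does not spell out the scoring function, but a standard Neural Tensor Network (Socher et al., 2013) scores a candidate triplet with a bilinear tensor term plus a linear term over the two entity embeddings. Below is a minimal PyTorch sketch of such a scorer, with a hypothetical log-frequency bonus standing in for the thesis's frequency-of-occurrence enhancement; the exact enhancement and all hyperparameters here are assumptions, not the thesis's implementation.

```python
import torch
import torch.nn as nn

class NTNScorer(nn.Module):
    """Minimal per-relation Neural Tensor Network scorer:
    score(e1, e2) = u^T tanh(e1^T W[1:k] e2 + V [e1; e2] + b).
    The optional frequency term is a stand-in for the thesis's enhancement."""
    def __init__(self, dim: int, k: int = 4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # bilinear slices
        self.V = nn.Linear(2 * dim, k)                          # linear term + bias
        self.u = nn.Linear(k, 1, bias=False)                    # output weights

    def forward(self, e1, e2, freq=None):
        # bilinear term: one scalar per tensor slice, e1^T W[i] e2
        bilinear = torch.einsum('bd,kde,be->bk', e1, self.W, e2)
        score = self.u(torch.tanh(bilinear + self.V(torch.cat([e1, e2], dim=-1))))
        if freq is not None:
            # hypothetical: boost triplets that occur frequently in the forum data
            score = score + torch.log1p(freq).unsqueeze(-1)
        return score.squeeze(-1)

# usage: one scorer per relation type (e.g., an "is-a" relation)
scorer = NTNScorer(dim=64)
e1, e2 = torch.randn(8, 64), torch.randn(8, 64)
print(scorer(e1, e2, freq=torch.randint(0, 20, (8,)).float()).shape)  # torch.Size([8])
```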
73

SENSOR FUSION IN NEURAL NETWORKS FOR OBJECT DETECTION

Sheetal Prasanna (12447189) 12 July 2022 (has links)
Object detection is an increasingly popular tool used in many fields, especially in the development of autonomous vehicles. The task of object detection involves localizing objects in an image, constructing a bounding box to determine the presence and location of each object, and classifying each object into its appropriate class. Object detection applications are commonly implemented using convolutional neural networks along with feature pyramid networks to extract data.

Another commonly used technique in the automotive industry is sensor fusion. Each automotive sensor (camera, radar, and lidar) has its own advantages and disadvantages. Fusing two or more sensors together and using the combined information is a popular method of balancing the strengths and weaknesses of each independent sensor. Sensor fusion within an object detection network has been found to be an effective method of obtaining accurate models, and accurate detection and classification of images is a vital step in the development of autonomous vehicles, or self-driving cars.

Many studies have proposed methods to improve neural networks or object detection networks, including data augmentation and hyperparameter optimization. This thesis improves a camera-radar fusion network by applying techniques from these areas. Additionally, the novel idea of integrating a third sensor, the lidar, into an existing camera-radar fusion network is explored in this research work.

The models were trained on the nuScenes dataset, one of the largest automotive datasets available today. Using augmentation, hyperparameter optimization, sensor fusion, and annotation filters, the CRF-Net was trained to achieve an accuracy score 69.13% higher than the baseline.
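For readers unfamiliar with camera-radar fusion, one common early-fusion scheme (used by CRF-Net-style networks) renders radar returns into the image plane and stacks them as extra input channels next to RGB. The sketch below illustrates only that fusion step with a toy backbone; the channel layout and network are illustrative assumptions, not the thesis's architecture.

```python
import torch
import torch.nn as nn

class CameraRadarEarlyFusion(nn.Module):
    """Toy early-fusion classifier: radar detections projected into the image
    plane (e.g., distance and RCS channels) are concatenated with RGB before
    a small convolutional backbone."""
    def __init__(self, radar_channels: int = 2, n_classes: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3 + radar_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, rgb, radar_map):
        x = torch.cat([rgb, radar_map], dim=1)   # fuse at the input level
        return self.head(self.backbone(x).flatten(1))

model = CameraRadarEarlyFusion()
rgb = torch.randn(4, 3, 128, 128)        # camera images
radar = torch.randn(4, 2, 128, 128)      # projected radar channels
print(model(rgb, radar).shape)           # torch.Size([4, 10])
```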
74

MULTI-SOURCE AND SOURCE-PRIVATE CROSS-DOMAIN LEARNING FOR VISUAL RECOGNITION

Qucheng Peng (12426570) 12 July 2022 (has links)
Domain adaptation is one of the most active directions for addressing the annotation-insufficiency problem in deep learning. General domain adaptation, however, is not consistent with practical scenarios in industry. In this thesis, we focus on the two concerns below.

First, labeled data are generally collected from multiple domains; in other words, multi-source adaptation is the more common situation. Simply extending single-source approaches to multi-source cases can cause sub-optimal inference, so specialized multi-source adaptation methods are essential. The main challenge in the multi-source scenario is a more complex divergence situation: not only does the divergence between the target and each source play a role, but the divergences among distinct sources matter as well. However, the significance of maintaining consistency among multiple sources did not receive enough attention in previous work. In this thesis, we propose an Enhanced Consistency Multi-Source Adaptation (EC-MSA) framework to address it from three perspectives. First, we mitigate feature-level discrepancy by cross-domain conditional alignment, narrowing the divergence between each source and the target domain class-wise. Second, we enhance multi-source consistency via dual mix-up, diminishing the disagreements among different sources. Third, we deploy a target-distilling mechanism to handle the uncertainty of target prediction, aiming to provide high-quality pseudo-labeled target samples to benefit the previous two aspects. Extensive experiments conducted on several common benchmark datasets demonstrate that our model outperforms state-of-the-art methods.

Second, data privacy and security are necessary in practice; that is, we hope to keep the raw data stored locally while still obtaining a satisfactory model, so that the risk of data leakage greatly decreases. It is therefore natural to combine the federated learning paradigm with domain adaptation. Under the source-private setting, the main challenge is to expose information from the source domain to the target domain while ensuring that the communication process is safe. In this thesis, we propose a method named Fourier Transform-Assisted Federated Domain Adaptation (FTA-FDA) to alleviate these difficulties in two ways. We apply the Fast Fourier Transform to the raw data and transfer only the amplitude spectra during communication. Frequency-space interpolations between the two domains are then conducted, minimizing the discrepancies between them while keeping the raw data safe. Moreover, we perform prototype alignment using the model weights together with target features, aiming to reduce the discrepancy at the class level. Experiments on Office-31 demonstrate the effectiveness and competitiveness of our approach, and further analyses show that our algorithm helps protect privacy and security.
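The amplitude-only exchange the abstract describes can be illustrated with a short sketch: take the 2-D FFT of a source image, interpolate its low-frequency amplitude toward a target-domain amplitude spectrum, and invert with the original phase, so only amplitude spectra ever leave a site. The window size `beta` and mixing weight `lam` below are assumptions; the thesis's exact interpolation schedule is not given in the abstract.

```python
import numpy as np

def amplitude_mix(src_img, trg_amp, lam=0.5, beta=0.1):
    """Blend the low-frequency amplitude of a source image toward a
    target-domain amplitude spectrum, keeping the source phase.
    src_img: (H, W) array; trg_amp: (H, W) target amplitude spectrum."""
    fft = np.fft.fft2(src_img)
    amp, phase = np.abs(fft), np.angle(fft)
    amp_shift = np.fft.fftshift(amp)
    trg_shift = np.fft.fftshift(trg_amp)
    h, w = src_img.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    # interpolate only a low-frequency window around the spectrum center
    amp_shift[ch-bh:ch+bh, cw-bw:cw+bw] = (
        (1 - lam) * amp_shift[ch-bh:ch+bh, cw-bw:cw+bw]
        + lam * trg_shift[ch-bh:ch+bh, cw-bw:cw+bw]
    )
    amp = np.fft.ifftshift(amp_shift)
    return np.real(np.fft.ifft2(amp * np.exp(1j * phase)))

# usage with toy data: mix a source image toward a target amplitude spectrum
src = np.random.rand(64, 64)
trg = np.abs(np.fft.fft2(np.random.rand(64, 64)))
print(amplitude_mix(src, trg).shape)  # (64, 64)
```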
75

COMPARING PSO-BASED CLUSTERING OVER CONTEXTUAL VECTOR EMBEDDINGS TO MODERN TOPIC MODELING

Samuel Jacob Miles (12462660) 26 April 2022 (has links)
Efficient topic modeling is needed to support applications that aim at identifying main themes from a collection of documents. In this thesis, a reduced vector-embedding representation and particle swarm optimization (PSO) are combined to develop a topic modeling strategy that is able to identify representative themes from a large collection of documents. Documents are encoded using a reduced, contextual vector embedding from a general-purpose pre-trained language model (sBERT). A modified PSO algorithm (pPSO) that tracks particle fitness on a dimension-by-dimension basis is then applied to these embeddings to create clusters of related documents. The proposed methodology is demonstrated on three datasets across different domains. The first dataset consists of posts from the online health forum r/Cancer. The second dataset is a collection of NY Times abstracts and is used to compare the proposed model to LDA. The third is a standard benchmark dataset for topic modeling, consisting of messages posted to 20 different news groups; it is used to compare state-of-the-art generative document models (i.e., ETM and NVDM) to pPSO. The results show that pPSO is able to produce interpretable clusters and to capture both common topics and emergent topics. The topic coherence of pPSO is comparable to that of ETM, and its topic diversity is comparable to that of NVDM. The assignment parity of pPSO on a document completion task exceeded 90% for the 20 Newsgroups dataset. This rate drops to approximately 30% when pPSO is applied to the same Skip-Gram embedding, derived from a limited, corpus-specific vocabulary, that is used by ETM and NVDM.
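As a rough illustration of the clustering side, the sketch below runs a plain global-best PSO in which each particle encodes k centroids over the document embeddings, and fitness is the total distance of documents to their nearest centroid. Note this is vanilla PSO: the thesis's pPSO additionally tracks fitness dimension-by-dimension, which is not reproduced here, and all parameter values are assumptions.

```python
import numpy as np

def pso_cluster(X, k, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Cluster sBERT-style embeddings X of shape (n, d) with basic PSO;
    each particle is a flattened set of k centroids."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    # initialize particles at randomly chosen documents
    pos = X[rng.integers(0, n, (n_particles, k))].reshape(n_particles, k * d)
    vel = np.zeros_like(pos)

    def fitness(p):
        cents = p.reshape(k, d)
        dists = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=-1)
        return dists.min(axis=1).sum()   # total distance to nearest centroid

    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        f = np.array([fitness(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    cents = gbest.reshape(k, d)
    return np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=-1).argmin(axis=1)

labels = pso_cluster(np.random.rand(200, 16), k=5)  # toy embeddings
print(np.bincount(labels))
```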
76

EXPLORING SUCCESS FACTORS FOR ICT SUPPORT TO REMOTE LEARNING IN HEIS

Craig William Keith (14375424) 25 July 2023 (has links)
COVID-19 forced mass transitions to remote working across industries, and significantly so in Higher Education Institutes (HEIs). ICT divisions were severely tested as they provided service and support for remote work and learning. The purpose of this research is to characterize successful ICT practices in support of remote work/learning within HEIs.

This study investigates the current literature on HEIs, remote-work ICT support, and Critical Success Factors (CSFs). Gaps in the current knowledge inform an investigation into the factors of successful support as identified by HEI ICT professionals. A narrative literature review is conducted to explore the research on HEIs, remote-work ICT support, and CSFs. Thereafter, subject-matter experts are interviewed through a semi-structured interview approach. Content analysis is employed to characterize successful ICT support for remote work within HEIs.

While ICT support took many different forms in HEIs across North America, several themes emerged as consistent in providing successful ICT support for remote learning. The characteristics of successful support for remote work/learning are organized under the following themes: leadership qualities, customer emphasis, remote-work (RW) ICT tools, organizational factors, and combating digital inequity. This study offers practitioners areas of consideration against which to examine their plans and policies.

Future research is proposed to include studies on other emergency events, the impact of COVID lockdowns on future policies, military education, and demographic-specific research. Remote-work practices and strategies vary greatly by industry and organizational structure; because this research focuses on HEIs, generalizability may be limited.
77

A Parallel Computing Approach for Identifying Retinitis Pigmentosa Modifiers in Drosophila Using Eye Size and Gene Expression Data

Chawin Metah (15361576) 29 April 2023 (has links)
For many years, researchers have developed ways to diagnose degenerative disease of the retina using multiple gene-analysis techniques. Retinitis pigmentosa (RP) can cause either partial or total blindness in adults. For that reason, it is crucial to pinpoint its causes in order to develop proper medication or treatment. One common method is genome-wide analysis (GWA); however, it cannot fully identify the genes that are indirectly related to changes in eye size. In this research, RNA sequencing (RNA-seq) analysis is used to link phenotype to genotype, creating a pool of candidate genes that might be associated with RP. This will support future research toward a therapy or treatment for this disease in human adults.

Using the Drosophila Genetic Reference Panel (DGRP), a gene reference panel for the fruit fly, two types of datasets are involved in this analysis: eye-size data and gene expression data with two replicates for each strain. This allows us to create a phenotype-genotype map; in other words, we trace the genes (genotype) that exhibit the RP disease by comparing eye sizes (phenotype). The basic idea of the algorithm is to discover the best replicate combination, the one that maximizes the correlation between gene expression and eye size. Since there are 2^N possible replicate combinations, where N is the number of selected strains, the original sequential implementation was computationally intensive.

The idea of finding the best replicate combination was originally proposed by Nguyen et al. (2022). In this research, however, we restructured the algorithm to distribute the tasks of finding the best replicate combination and run them in parallel. The implementation was done in the R programming language, utilizing the doParallel and foreach packages, and is able to execute on a multicore machine. The program was tested on both a laptop and a server, and the experimental results showed an outstanding improvement in execution time. For instance, with 32 processes, the results showed up to a 95% reduction in execution time compared with the sequential version of the code. Furthermore, with the increased computational capability, we were able to explore and analyze more extreme eye-size lines using three eye-size datasets representing different phenotype models. This further improved the accuracy of the results, with the top candidate genes in all cases showing connections to RP.
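To make the brute-force idea concrete: with two replicates per strain, each of the 2^N combinations picks one replicate per strain, and the combination with the highest correlation against eye size is kept. The thesis implements this in R with doParallel/foreach; the sketch below is a Python rendering of the same embarrassingly parallel split using multiprocessing, with toy data and names that are illustrative only.

```python
import numpy as np
from itertools import product
from multiprocessing import Pool

def best_combo_in_chunk(args):
    """Scan one chunk of replicate combinations; return (correlation, combo)."""
    expr, eye, combos = args
    best = (-2.0, None)
    strains = np.arange(len(eye))
    for combo in combos:
        vals = expr[strains, np.array(combo)]     # one replicate per strain
        r = np.corrcoef(vals, eye)[0, 1]
        if r > best[0]:
            best = (r, combo)
    return best

def parallel_best_combo(expr, eye, n_workers=8):
    """expr: (N, 2) expression of one gene; eye: (N,) eye sizes per strain."""
    n = len(eye)
    combos = list(product((0, 1), repeat=n))      # all 2**n combinations
    chunks = [combos[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        results = pool.map(best_combo_in_chunk,
                           [(expr, eye, c) for c in chunks])
    return max(results, key=lambda t: t[0])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    expr = rng.normal(size=(12, 2))               # 12 strains, 2 replicates
    eye = rng.normal(size=12)
    r, combo = parallel_best_combo(expr, eye, n_workers=4)
    print(f"best correlation {r:.3f} with replicate choice {combo}")
```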
78

EXPLORATION OF NOVEL EDUCATIONAL TOOLS BASED ON VISUALIZATION

Abel Andres Reyes Angulo (11237160) 06 August 2021 (has links)
The dynamics of how teaching is performed have changed abruptly in the past few years. Even before the COVID-19 pandemic, class modalities were changing: instructors were adopting new modalities for lectures, such as online and hybrid classes, and the use of collaborative resources was becoming more popular over time. The current situation was just a catalyst for a shift that had already started, the beginning of a new era for education.

This new era of education implies new areas of study and the implementation of tools that promote an efficient learning process by adapting to everything involved in this change. Science, technology, engineering, and mathematics (STEM) education and healthcare are fields with noticeable demand for professionals in industry around the world. Therefore, the need for more people academically prepared in these areas is a high priority. New learning tools complementing these fields must reflect the adoption of new technologies and the fact that this is a digital era. Emergent specialities like artificial intelligence and data science are traditionally taught at the university level, due to the complexity of some concepts and the background needed to develop skills in these areas. However, with the technology currently available, tools can serve as complementary learning resources for complex subjects. Visualization helps users learn by sharpening the sense of sight and making evident things that are hard to illustrate with words or numbers. Therefore, educational software based on visualization could provide the new tools needed for these emergent specialities in this new era of education. Features like interactivity, gaming, and multimedia resources can help make these tools more robust and complete.

In this work, implementations of novel educational tools based on visualization were explored for emergent specialization areas such as machine learning in STEM and pathophysiology in healthcare. This work summarizes the implementation of three projects that illustrate its general purpose, showing the relevance of the mentioned areas and proposing educational tools based on visualization, adapting the proposal to each speciality with different target populations in mind. The projects related to each of the proposed tools include the analysis used to elaborate the content within the tool, a review of the software development, and testing sessions to identify strengths and weaknesses of the tools. The tools are designed as frameworks so that the deliverable content can be customized over time to cover different educational needs.
79

The Impact of Quantum Information Science and Technology on National Security

Eliot Jung (18424185) 23 April 2024 (has links)
Quantum information science and technology has been at the forefront of science and technology since MIT mathematician Peter Shor discovered a quantum algorithm for factoring large numbers in 1994. Advancement in quantum theory also advances practical technological applications. Quantum technology can be applied in both civilian society and the military field, from encryption and artificial intelligence to sensing and communications. This multi-purpose applicability therefore has the potential to alter international security as scientifically advanced nation-states vie for quantum supremacy. This research examines the applications of quantum science and how they can potentially impact international security. Because nation-states fund and support quantum science research, the sources consulted include academic journals and online resources as well as government reports. Practical applications of quantum technology, including quantum computing, quantum sensing, and quantum communication, constitute the primary scope of this research.
80

Visualizing the Ethiopian Commodity Market

Rogstadius, Jakob January 2009 (has links)
The Ethiopia Commodity Exchange (ECX), like many other data-intensive organizations, is having difficulty making full use of the vast amounts of data that it collects. This MSc thesis identifies areas within the organization where concepts from the academic fields of information visualization and visual analytics can be applied to address this issue. Software solutions are designed and implemented in two areas, with the purpose of evaluating the approach and demonstrating to potential users, developers, and managers what can be achieved with this method. A number of presentation methods are proposed for the ECX website, which previously contained no graphing functionality for market data, to make it easier for users to find trends, patterns, and outliers in the prices and trade volumes of commodities traded at the exchange. A software application is also developed to support the ECX market surveillance team by drastically improving its capability to investigate complex trader relationships. Finally, as ECX lacked previous experience with visualization, one software developer was trained in computer graphics and involved in the work, to enable continued maintenance and future development of new visualization solutions within the organization.
