21

A Tiny Diagnostic Dataset and Diverse Modules for Learning-Based Optical Flow Estimation

Xie, Shuang 18 September 2019 (has links)
Recent work has shown that flow estimation from a pair of images can be formulated as a supervised learning task and solved with convolutional neural networks (CNNs). However, straightforward CNN methods estimate optical flow with blurred motion and occlusion boundaries. To tackle this problem, we propose a tiny diagnostic dataset called FlowClevr to quickly evaluate various modules that can be used to enhance standard CNN architectures. In experiments on the FlowClevr dataset, we find that a deformable module can improve model prediction accuracy by around 30% to 100% on most tasks and, more significantly, reduce boundary blur. Based on these results, we are able to design modifications to various existing network architectures that improve their performance. Compared with the original model, the model with the deformable module clearly reduces boundary blur and achieves a large improvement on the MPI Sintel dataset, an omni-directional stereo (ODS) dataset, and a novel omni-directional optical flow dataset.
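As an illustration of the deformable-module idea this abstract refers to, the sketch below wires a deformable convolution into a standard CNN block using PyTorch's torchvision.ops.DeformConv2d. This is a minimal sketch of the general technique, not the thesis's architecture; the layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """Illustrative deformable-convolution block (not the thesis architecture).

    A small conv predicts per-location sampling offsets, and DeformConv2d
    then samples the input at those offset positions, letting the receptive
    field bend around motion and occlusion boundaries.
    """
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # 2 offsets (dy, dx) per kernel tap -> 2 * k * k offset channels.
        self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_conv(x)           # (N, 2*k*k, H, W)
        return self.deform_conv(x, offsets)     # (N, out_ch, H, W)

if __name__ == "__main__":
    block = DeformableBlock(in_ch=64, out_ch=64)
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```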
23

Neuro-fuzzy architectures based on complex fuzzy logic

Sara, Aghakhani 06 1900 (has links)
Complex fuzzy logic is a new type of multi-valued logic in which truth values are drawn from the unit disc of the complex plane; it is thus a generalization of the familiar infinite-valued fuzzy logic. At present, all published research on complex fuzzy logic is theoretical in nature, with no practical applications demonstrated. The utility of complex fuzzy logic is thus still very debatable. In this thesis, the performance of ANCFIS is evaluated. ANCFIS is the first machine learning architecture to fully implement the ideas of complex fuzzy logic, and was designed to solve the important machine-learning problem of time-series forecasting. We then explore extensions to the ANCFIS architecture. The basic ANCFIS system uses batch (offline) learning and is restricted to univariate time-series prediction. We have developed both an online version of the univariate ANCFIS system and a multivariate extension to the batch ANCFIS system. / Software Engineering and Intelligent Systems
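To make the "unit disc" idea concrete, the toy sketch below defines a complex fuzzy membership function of the form mu(x) = r(x) * e^(i*phi(x)) with r(x) in [0, 1]. Both functional forms here are invented for illustration and are not the membership functions used in ANCFIS.

```python
import numpy as np

def complex_membership(x: np.ndarray) -> np.ndarray:
    """Toy complex fuzzy membership: every value lies on the unit disc.

    r(x) in [0, 1] plays the role of the familiar fuzzy membership degree
    (the magnitude), while phi(x) adds a phase term. Both forms are
    invented for illustration only.
    """
    r = np.exp(-x**2)          # magnitude: a Gaussian membership in [0, 1]
    phi = np.pi * np.tanh(x)   # phase: bounded, so values stay on the disc
    return r * np.exp(1j * phi)

x = np.linspace(-2, 2, 5)
mu = complex_membership(x)
print(np.abs(mu) <= 1.0)  # all True: every truth value is inside the unit disc
```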
24

Large-scale analysis of microarray data to identify molecular signatures of mouse pluripotent stem cells

McGlinchey, Aidan James January 2018 (has links)
Publicly-available microarray data constitutes a huge resource for researchers in biological science. A wealth of microarray data is available for the model organism, the mouse. Pluripotent embryonic stem (ES) cells are able to give rise to all of the adult tissues of the organism and, as such, are much-studied for their myriad applications in regenerative medicine. Fully differentiated, somatic cells can also be reprogrammed to pluripotency to give induced pluripotent stem cells (iPSCs). ES cells progress through a range of cellular states, from ground-state pluripotency, through the primed state ready for differentiation, to actual differentiation. Microarray data available in public, online repositories is annotated with several important fields, although this accompanying annotation often contains issues that limit its usefulness for human and/or programmatic interpretation in downstream analysis. This thesis assembles and makes available to the research community the largest-to-date pluripotent mouse ES cell (mESC) microarray dataset and details the manual annotation of those samples for several key fields to allow further investigation of the pluripotent state in mESCs. Microarray samples from a given laboratory or experiment are known to be similar to each other due to batch effects; the same has been postulated about samples which use the same cell line. This work therefore precedes the investigation of transcriptional events in mESCs with an investigation into whether a sample's cell line or its source laboratory contributes more to the similarity between samples in the collected pluripotent mESC dataset, using a method based on Random Submatrix Total Variability and hence named RaSToVa. Further, an extension of the same permutation and analysis method is developed to enable Discovery of Annotation-Linked Gene Expression Signatures (DALGES). This is applied to the gathered data to provide the first large-scale analysis of the transcriptional profiles and biological pathway activity of three commonly-used mESC cell lines and a selection of iPSC samples, seeking insight into potential biological differences that may result from these choices. This work then re-orders the pluripotent mESC data by markers of known pluripotency states, from ground-state pluripotency through primed pluripotency to earliest differentiation, and analyses changes in gene expression and biological pathway activity across this spectrum using differential expression and a window-scanning approach. Seeking to recapitulate transcriptional patterns known to occur in mESCs, this analysis reveals the existence of putative "early" and "late" naïve pluripotent states, thereby identifying several lines of enquiry for in-laboratory investigation.
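The abstract only names the RaSToVa method; as a generic illustration of the underlying permutation idea (comparing observed within-group similarity against a shuffled-label null), and emphatically not the thesis's actual method, a minimal sketch might look like the following. All array shapes and group labels are invented.

```python
import numpy as np

def within_group_similarity(expr: np.ndarray, labels: np.ndarray) -> float:
    """Mean pairwise Pearson correlation between samples sharing a label."""
    corr = np.corrcoef(expr)  # samples x samples correlation matrix
    same = labels[:, None] == labels[None, :]
    mask = same & ~np.eye(len(labels), dtype=bool)
    return float(corr[mask].mean())

def permutation_p_value(expr, labels, n_perm=1000, seed=0):
    """How surprising is the observed within-group similarity under shuffled labels?"""
    rng = np.random.default_rng(seed)
    observed = within_group_similarity(expr, labels)
    null = [within_group_similarity(expr, rng.permutation(labels))
            for _ in range(n_perm)]
    p = (np.sum(np.array(null) >= observed) + 1) / (n_perm + 1)
    return observed, p

# Hypothetical usage: expr is samples x genes; labels could be lab IDs or
# cell-line IDs, and the two p-values could then be compared.
expr = np.random.default_rng(1).normal(size=(30, 200))
labs = np.repeat([0, 1, 2], 10)
print(permutation_p_value(expr, labs))
```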
25

Efficient techniques for streaming cross document coreference resolution

Shrimpton, Luke William January 2017 (has links)
Large text streams are commonplace; news organisations are constantly producing stories and people are constantly writing social media posts. These streams should be analysed in real-time so useful information can be extracted and acted upon instantly. When natural disasters occur people want to be informed, when companies announce new products financial institutions want to know, and when celebrities do things their legions of fans want to feel involved. In all these examples people care about getting information in real-time (low latency). These streams are massively varied, and people's interests are typically characterized by the entities they are interested in. Organising a stream by the entity being referred to would help people extract the information useful to them. This is a difficult task: fans of 'Captain America' films will not want to be incorrectly told that 'Chris Evans' (the main actor) was appointed to host 'Top Gear' when it was a different 'Chris Evans'. People who use local idiosyncrasies, such as referring to their home county ('Cornwall') as 'Kernow' (the Cornish for 'Cornwall' that has entered the local lexicon), should not be forced to change their language when finding out information about their home. This thesis addresses a core problem for real-time entity-specific NLP: streaming cross document coreference resolution (CDC), i.e. how to automatically identify all the entities mentioned in a stream in real-time. It addresses two significant problems for streaming CDC: there is no representative dataset, and existing systems consume more resources over time. A new technique to create datasets is introduced and applied to social media (Twitter) to create a large (6M mentions) and challenging new CDC dataset that contains a much more varied range of entities than typical newswire streams. Existing systems are not able to keep up with large data streams. This problem is addressed with a streaming CDC system that stores a constant-sized set of mentions. New techniques to maintain the sample are introduced that significantly out-perform existing ones, maintaining 95% of the performance of a non-streaming system while using only 20% of the memory.
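To illustrate the bounded-memory idea behind such a system, here is a minimal sketch that keeps at most a fixed number of mentions, evicting uniformly at random when full. The thesis develops smarter sample-maintenance techniques; random eviction and token-overlap matching are shown here only as the simplest baseline, and all names and data are invented.

```python
import random

class BoundedMentionStore:
    """Keep at most `capacity` mentions; evict uniformly at random when full.

    Uniform random eviction is the simplest bounded-memory baseline, not
    the sample-maintenance techniques developed in the thesis.
    """
    def __init__(self, capacity: int, seed: int = 0):
        self.capacity = capacity
        self.mentions = []  # list of (mention_text, entity_id) pairs
        self.rng = random.Random(seed)

    def add(self, mention: str, entity_id: int) -> None:
        if len(self.mentions) < self.capacity:
            self.mentions.append((mention, entity_id))
        else:
            # Overwrite a random slot so memory use stays constant.
            self.mentions[self.rng.randrange(self.capacity)] = (mention, entity_id)

    def resolve(self, mention: str):
        """Link a new mention to the stored mention with highest token overlap."""
        def overlap(a: str, b: str) -> int:
            return len(set(a.lower().split()) & set(b.lower().split()))
        best = max(self.mentions, key=lambda m: overlap(mention, m[0]), default=None)
        return best[1] if best and overlap(mention, best[0]) > 0 else None

store = BoundedMentionStore(capacity=1000)
store.add("Chris Evans actor", 1)
store.add("Chris Evans radio host", 2)
print(store.resolve("Chris Evans Captain America actor"))  # 1
```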
26

Let there be light... Characterizing the Effects of Adverse Lighting on Semantic Segmentation of Wound Images and Mitigation using a Deep Retinex Model

Iyer, Akshay B. 14 May 2020 (has links)
Wound assessment using a smartphone image has recently emerged as a novel way to provide actionable feedback to patients and caregivers. Wound segmentation is an important step in image-based wound assessment, after which the wound area can be analyzed. Semantic segmentation algorithms for wounds assume favorable lighting conditions. However, smartphone wound imaging in natural environments can encounter adverse lighting, which causes several errors during semantic segmentation of wound images and in turn affects the wound analysis. In this work, we study and characterize the effects of adverse lighting on the accuracy of semantic segmentation of wound images. Our findings inform a deep learning-based approach to mitigate the adverse effects. We make three main contributions. First, we create the first large-scale Illumination Varying Dataset (IVDS) of 55,440 images of a wound moulage captured under systematically varying illumination conditions and with different camera types and settings. Second, we characterize the effects of changing light intensity on U-Net's wound semantic segmentation accuracy and show that image luminance is highly correlated with wound segmentation performance; in particular, low-light conditions severely degrade segmentation performance. Third, we improve the wound Dice scores of U-Net for low-light images to up to four times the baseline values using a deep learning mitigation method based on Retinex theory. Our method works well at typical illumination levels observed in homes and clinics, as well as across a wide gamut of lighting: very dark conditions (20 Lux), medium-intensity lighting (750-1500 Lux), and even very bright lighting (6000 Lux).
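The thesis uses a deep Retinex-based model; for intuition, the sketch below shows the classical single-scale Retinex decomposition that such models build on, plus a standard luminance measure. This is the textbook formulation under assumed BT.709 luminance weights, not the thesis's network.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def luminance(rgb: np.ndarray) -> float:
    """Mean relative luminance of an RGB image in [0, 1] (ITU-R BT.709 weights)."""
    return float(np.mean(0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1]
                         + 0.0722 * rgb[..., 2]))

def single_scale_retinex(rgb: np.ndarray, sigma: float = 30.0) -> np.ndarray:
    """Classical single-scale Retinex: reflectance = log(I) - log(blurred I).

    The Gaussian-blurred image approximates the illumination component, so
    subtracting it in log space suppresses lighting and keeps reflectance.
    This is the classical formulation, not the thesis's deep model.
    """
    eps = 1e-6
    log_i = np.log(rgb + eps)
    log_l = np.log(gaussian_filter(rgb, sigma=(sigma, sigma, 0)) + eps)
    r = log_i - log_l
    return (r - r.min()) / (r.max() - r.min() + eps)  # rescale to [0, 1]

img = np.random.default_rng(0).uniform(0.0, 0.2, size=(64, 64, 3))  # a "dark" image
print(luminance(img), luminance(single_scale_retinex(img)))
```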
27

Homology-based in silico identification of putative protein-ligand interactions in the malaria parasite

Szolkiewicz, Michal Jerzy January 2014 (has links)
Malaria is still one of the most prolific communicable diseases in the world, with more than 200 million infections annually; its greatest effect is felt in the poor nations of sub-Saharan Africa and South-East Asia. It is especially fatal for women and children: of the 660,000 fatalities in 2010, 86% were below the age of 5. In the past decade the global fatality rate due to malaria has been significantly reduced, primarily due to the proliferation of vector control using treated nets and indoor residual spraying of DDT. There have, however, been few innovations in anti-malarial therapeutics, and with the threat of the spread of drug-resistant strains a need still exists to develop novel drugs to combat malaria infections. One of the major hindrances to drug development is the huge cost of the drug development process, where candidate failures late in development are extremely costly. This is where post-genomic information has the potential to add great value. By using all available data pertaining to a disease, one gains higher discerning power to select good drug candidates and to identify risks early in development, before serious investments are made. This need provided the motivation for the development of Discovery, a tool to aid in the identification of protein targets and viable lead compounds for the treatment of malaria. Discovery was developed at the University of Pretoria to be a platform for a large spectrum of biological data focused on the malaria-causing Plasmodium parasite. It conglomerates various data types into a web-based interface that allows searching using logical filters or using protein or chemical start points. In 2010 it was decided to rebuild Discovery to improve its functionality and optimize query times. Since its inception, various new data sources have also become available, specifically related to bio-active molecules; these include the ChEMBL database and the TCAMS dataset of bio-active molecules, and the focus of this project was the integration of these datasets into Discovery. Such large quantities of high-quality bioactivity data have never before been available in the public domain, and this has opened up the opportunity to gain even greater insight into the activity of chemical compounds in malaria. Due to conserved structural/functional similarities of proteins between different species, it is possible to derive predictions about a malaria protein or a chemical's activity in malaria from experiments carried out on other organisms. These comparisons can be leveraged to highlight potential new compounds that were previously not considered, or to avoid wasting resources pursuing compounds that pose threats of toxicity to humans. This project has resulted in a web-based system that allows one to search through the chemical space of the malaria parasite, viewing sets of predicted protein-ligand interactions for a given protein based on that protein's similarity to those existing in the bio-active molecule databases. / Dissertation (MSc)--University of Pretoria, 2014. / gm2014 / Biochemistry / unrestricted
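As a toy illustration of the homology-based transfer idea described in this abstract, and not Discovery's actual schema or pipeline, the sketch below transfers known ligands from sufficiently similar proteins. All identifiers, ligand names, and the 40% identity cut-off are illustrative assumptions.

```python
# Toy homology-based annotation transfer: if a Plasmodium protein is similar
# enough to a protein with known ligands (e.g. from a ChEMBL-like source),
# those ligands become candidate interactions. All values below are invented
# for illustration; this is not Discovery's schema.

similarity_hits = [
    # (plasmodium_protein, homolog, percent_identity)
    ("PF3D7_0417200", "Human_DHFR", 42.0),
    ("PF3D7_1343700", "Yeast_K13", 28.0),
]

known_ligands = {
    "Human_DHFR": ["methotrexate", "pyrimethamine"],
}

def predict_interactions(hits, ligands, min_identity=40.0):
    """Transfer ligands from homologs above the identity threshold."""
    predictions = []
    for protein, homolog, identity in hits:
        if identity >= min_identity:
            for ligand in ligands.get(homolog, []):
                predictions.append((protein, ligand, homolog, identity))
    return predictions

for p in predict_interactions(similarity_hits, known_ligands):
    print(p)  # ('PF3D7_0417200', 'methotrexate', 'Human_DHFR', 42.0) ...
```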
28

Object Detection for Aerial View Images: Dataset and Learning Rate

Qi, Yunlong 05 1900 (has links)
In recent years, deep learning-based computer vision technology has developed rapidly. This is due not only to the improvement of computing power but also to the emergence of high-quality datasets. The combination of object detectors and drones has great potential in the field of rescue and disaster relief. We created an image dataset specifically for vision applications on drone platforms. The dataset contains 5000 images, and each image is carefully labeled according to the PASCAL VOC standard. This dataset will be very important for developing deep learning algorithms for drone applications. In object detection models, the loss function plays a vital role. Considering the uneven distribution of large and small objects in the dataset, we propose adjustment coefficients, based on the frequencies of objects of different sizes, that reweight the loss function and ultimately improve the accuracy of the model.
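One plausible reading of the frequency-based adjustment coefficients is an inverse-frequency reweighting of the per-object loss by size bucket; the sketch below shows that scheme in PyTorch. The bucketing cut-offs and weighting formula are assumptions for illustration, not the thesis's exact design.

```python
import torch

def size_weighted_l1_loss(pred: torch.Tensor,
                          target: torch.Tensor,
                          box_areas: torch.Tensor,
                          freq_by_bucket: dict) -> torch.Tensor:
    """L1 box-regression loss reweighted by inverse size-bucket frequency.

    Objects from under-represented size buckets get larger coefficients so
    the model does not ignore them. The bucketing and inverse-frequency
    weighting here are an illustrative scheme, not the thesis's exact one.
    """
    # Bucket 0: small (<32^2), 1: medium (<96^2), 2: large -- COCO-style cuts.
    buckets = torch.bucketize(box_areas, torch.tensor([32.0**2, 96.0**2]))
    weights = torch.tensor([1.0 / freq_by_bucket[int(b)] for b in buckets])
    weights = weights / weights.mean()  # keep the overall loss scale unchanged
    per_box = (pred - target).abs().sum(dim=1)  # L1 over (x, y, w, h)
    return (weights * per_box).mean()

# Hypothetical usage: frequencies measured once over the training set.
freq = {0: 0.6, 1: 0.3, 2: 0.1}
pred = torch.randn(4, 4)
target = torch.randn(4, 4)
areas = torch.tensor([20.0**2, 50.0**2, 150.0**2, 25.0**2])
print(size_weighted_l1_loss(pred, target, areas, freq))
```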
29

Testing Fuzzy Extractors for Face Biometrics: Generating Deep Datasets

Tambay, Alain Alimou 11 November 2020 (has links)
Biometrics can provide alternative methods for security beyond conventional authentication. Much research has been done in the field of biometrics, and efforts have been made to make biometric systems more easily usable in practice. The initial application for our work is a proof of concept for a system that would expedite some low-risk travellers' arrival into the country while preserving user privacy. This thesis focuses on the subset of problems related to the generation of cryptographic keys from noisy data, biometrics in our case. The thesis was built in two parts. In the first, we implemented a key-generating quantization-based fuzzy extractor scheme for facial feature biometrics based on the work by Dodis et al. and Sutcu, Li, and Memon. This scheme was modified to increase user privacy, address some implementation-based issues, and incorporate testing-driven changes that tailor it towards its expected real-world usage. We show that our implementation does not significantly affect the scheme's performance, while providing additional protection against malicious actors that may gain access to the information stored on a server where biometric data is kept. The second part consists of the creation of a process to automate the generation of deep datasets suitable for testing similar schemes. The process led to the creation, with minimal work, of a larger dataset than those freely available online, and showed that such datasets can be further expanded with little additional effort. This larger dataset allowed for the creation of more representative recognition challenges. We were able to show that our implementation performed similarly to other non-commercial schemes. Further refinement will be necessary before it can be compared to commercial applications.
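To show the core quantization idea in miniature: two noisy readings of the same biometric that fall in the same bins yield the same key. This toy sketch omits the helper data and error correction that real fuzzy extractors (Dodis et al.; Sutcu, Li, and Memon) use to handle features near bin boundaries, and the bin width is an arbitrary illustrative choice.

```python
import hashlib
import numpy as np

def quantize(features: np.ndarray, bin_width: float) -> np.ndarray:
    """Map each real-valued feature to a bin index, absorbing small noise."""
    return np.floor(features / bin_width).astype(np.int64)

def generate_key(features: np.ndarray, bin_width: float = 0.5) -> bytes:
    """Toy quantization-based key derivation from a biometric feature vector.

    Two readings that differ by less than the bin width per feature quantize
    to the same indices and therefore the same key. Real schemes add helper
    data and error correction for features that straddle bin boundaries;
    that is omitted here.
    """
    bins = quantize(features, bin_width)
    return hashlib.sha256(bins.tobytes()).digest()

enrolled = np.array([1.20, -0.70, 3.10, 0.40])
noisy = enrolled + np.array([0.04, -0.03, 0.02, 0.01])  # small sensor noise
print(generate_key(enrolled) == generate_key(noisy))  # True if no bin is crossed
```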
30

Visualization of spatio-temporal data in two dimensional space

Baskaran, Savitha 15 November 2016 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Spatio-temporal data has become very popular in recent times, as a large number of datasets collect both location and time information in real time. The main challenge is that extracting useful insights from such large datasets is extremely complex and laborious. In this thesis, we propose a novel 2D technique to visualize spatio-temporal big data. Visualizing the combined interaction between spatial and temporal data is of high importance for uncovering insights and identifying trends within the data. Maps have been a successful way to represent spatial information. Additionally, in this work, colors are used to represent temporal data: every data point carries time information, which is converted into a corresponding color based on the HSV color model. Variation in time is represented by a transition from one color to another, providing smooth interpolation. The proposed solution will help users quickly understand the data and gain insights.
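The time-to-color mapping described here can be sketched in a few lines: normalize a timestamp into [0, 1] and sweep the HSV hue accordingly. The 0-300 degree hue span below is an illustrative choice (it avoids wrapping red back onto red), not necessarily the thesis's exact mapping.

```python
import colorsys
from datetime import datetime

def time_to_rgb(t: datetime, t_start: datetime, t_end: datetime) -> tuple:
    """Map a timestamp to an RGB color by sweeping hue through the HSV wheel.

    Earlier times get hues near 0 (red) and later times hues near the end of
    the chosen span, so nearby times get nearby colors and transitions
    interpolate smoothly.
    """
    frac = (t - t_start).total_seconds() / (t_end - t_start).total_seconds()
    hue = max(0.0, min(1.0, frac)) * (300.0 / 360.0)  # clamp, then scale to 0-300 deg
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)      # full saturation and value
    return (round(r * 255), round(g * 255), round(b * 255))

start = datetime(2016, 1, 1)
end = datetime(2016, 12, 31)
print(time_to_rgb(datetime(2016, 7, 1), start, end))  # a mid-year hue (green-cyan)
```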
