About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Probably Approximately Correct (PAC) exploration in reinforcement learning

Strehl, Alexander L. January 2007 (has links)
Thesis (Ph. D.)--Rutgers University, 2007. / "Graduate Program in Computer Science." Includes bibliographical references (p. 133-136).
32

Revisiting output coding for sequential supervised learning

Hao, Guohua. January 1900 (has links)
Thesis (M.S.)--Oregon State University, 2009. / Printout. Includes bibliographical references (leaves 38-40). Also available on the World Wide Web.
33

Support vector classification analysis of resting state functional connectivity fMRI

Craddock, Richard Cameron. January 2009 (has links)
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010. / Committee Chair: Hu, Xiaoping; Committee Co-Chair: Vachtsevanos, George; Committee Member: Butera, Robert; Committee Member: Gurbaxani, Brian; Committee Member: Mayberg, Helen; Committee Member: Yezzi, Anthony. Part of the SMARTech Electronic Thesis and Dissertation Collection.
34

Parameter incremental learning algorithm for neural networks

Wan, Sheng, January 1900 (has links)
Thesis (Ph. D.)--West Virginia University, 2005. / Title from document title page. Document formatted into pages; contains x, 97 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 81-83).
35

Revealing the Determinants of Acoustic Aesthetic Judgment Through Algorithmic Dreaming

Jenkins, Spencer Daniel 03 July 2019 (has links)
This project represents an important first step in determining the fundamental aesthetically relevant features of sound. Though much effort has gone into revealing the features learned by a deep neural network (DNN) trained on visual data, little of this work has been applied to networks trained on audio data. Importantly, these efforts in the audio domain often impose strong biases about relevant features (e.g., musical structure). In this project, a DNN is trained to mimic the acoustic aesthetic judgment of a professional composer. A unique corpus of sounds and corresponding professional aesthetic judgments is leveraged for this purpose. By applying a variation of Google's "DeepDream" algorithm to this trained DNN, and limiting the assumptions introduced, we can begin to listen to and examine the features of sound fundamental to aesthetic judgment. / Master of Science / The question of what makes a sound aesthetically “interesting” is of great importance to many, including biologists, philosophers of aesthetics, and musicians. This project serves as an important first step in determining the fundamental aesthetically relevant features of sound. First, a computer is trained to mimic the aesthetic judgments of a professional composer; if the composer would deem a sound “interesting,” then so would the computer. During this training, the computer learns for itself what features of sound are important for this classification. Then, a variation of Google’s “DeepDream” algorithm is applied to allow these learned features to be heard. By carefully considering the manner in which the computer is trained, this algorithmic “dreaming” allows us to begin to hear aesthetically salient features of sound.
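To make the "dreaming" step above concrete, here is a minimal sketch of DeepDream-style activation maximization applied to an audio waveform: the input, not the network, is optimized by gradient ascent so the trained model's aesthetic score rises. The model, its scalar score output, and the optimization settings are illustrative assumptions, not the thesis's actual setup.

```python
# Minimal sketch of DeepDream-style activation maximization on audio.
# `model` stands in for the thesis's trained aesthetic-judgment DNN and
# is assumed to map a (1, n_samples) batch to a scalar "interestingness"
# score; the loop below is the generic technique, not the author's code.
import torch

def dream_audio(model: torch.nn.Module, waveform: torch.Tensor,
                steps: int = 200, lr: float = 1e-3) -> torch.Tensor:
    """Gradient-ascend the input waveform to maximize the model's score."""
    x = waveform.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        score = model(x.unsqueeze(0)).squeeze()  # assumed scalar output
        (-score).backward()                      # negate: Adam minimizes
        opt.step()
        with torch.no_grad():
            x.clamp_(-1.0, 1.0)                  # keep a valid sample range
    return x.detach()
```

Listening to the returned waveform, rather than inspecting filter weights, is what lets the learned aesthetic features be examined directly.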
36

Segmenting Skin Lesion Attributes in Dermoscopic Images Using Deep Learning Algorithm for Melanoma Detection

Dong, Xu 09 1900 (has links)
Melanoma is the most deadly form of skin cancer worldwide, causing 75% of skin-cancer-related deaths. The National Cancer Institute estimated that 91,270 new cases and 9,320 deaths from melanoma were expected in 2018. Early detection of melanoma is key to treatment. The imaging technique used to diagnose skin cancer is dermoscopy, which leads to improved diagnostic accuracy compared to the traditional ABCD criteria. But reading and examining dermoscopic images is a time-consuming and complex process. Therefore, computerized analysis methods for dermoscopic images have been developed to assist the visual interpretation of dermoscopic images. The automatic segmentation of skin lesion attributes is a key step in the computerized analysis of dermoscopic images. The International Skin Imaging Collaboration (ISIC) hosted the 2018 Challenges to help the diagnosis of melanoma based on dermoscopic images. In this thesis, I develop a deep learning based approach to automatically segment the attributes from dermoscopic skin lesion images. The approach described in the thesis achieved a Jaccard index of 0.477 on the official test dataset, which ranked 5th place in the challenge. / Master of Science / Melanoma is the most deadly form of skin cancer worldwide, causing 75% of skin-cancer-related deaths. Early detection of melanoma is key to treatment. The imaging technique used to diagnose skin cancer is called dermoscopy. It has become increasingly convenient to image the skin with a dermoscopic device in recent years. Dermoscopic lenses are available on the market for individual customers. By coupling a dermoscopic lens with a smartphone, people are able to take dermoscopic images of their skin even at home. However, reading and examining dermoscopic images is a time-consuming and complex process. It requires specialists to examine the image, extract the features, and compare them with criteria to make a clinical diagnosis. This time-consuming image examination process becomes the bottleneck for fast diagnosis of melanoma. Therefore, computerized analysis methods for dermoscopic images have been developed to promote melanoma diagnosis and, ultimately, to increase the survival rate and save lives. The automatic segmentation of skin lesion attributes is a key step in the computerized analysis of dermoscopic images. In this thesis, I developed a deep learning based approach to automatically segment the attributes from dermoscopic skin lesion images. The segmentation result from this approach won 5th place in a public competition. It has the potential to be used in clinical applications in the future.
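Since the challenge ranking above hinges on the Jaccard index, a short sketch of that metric may help; the mask arrays here are made-up stand-ins for a predicted and an annotated attribute mask.

```python
# Sketch of the Jaccard index (intersection over union) used to score
# segmentation masks; the example masks are illustrative, not ISIC data.
import numpy as np

def jaccard_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """IoU of two binary masks; defined as 1.0 when both masks are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0
    return float(np.logical_and(pred, truth).sum() / union)

# A toy predicted mask vs. a toy annotation:
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(jaccard_index(pred, truth))  # 2 overlapping px / 4 px in union = 0.5
```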
37

End-To-End Text Detection Using Deep Learning

Ibrahim, Ahmed Sobhy Elnady 19 December 2017 (has links)
Text detection in the wild is the problem of locating text in images of everyday scenes. It is a challenging problem due to the complexity of everyday scenes. This problem is of great importance for many trending applications, such as self-driving cars. Previous research in text detection has been dominated by multi-stage sequential approaches which suffer from many limitations, including error propagation from one stage to the next. Another line of work is the use of deep learning techniques. Some of the deep methods used for text detection are box detection models and fully convolutional models. Box detection models suffer from the nature of the annotations, which may be too coarse to provide detailed supervision. Fully convolutional models learn to generate pixel-wise maps that represent the location of text instances in the input image. These models suffer from the inability to create accurate word-level annotations without heavy post-processing. To overcome the aforementioned problems, we propose a novel end-to-end system based on a mix of novel deep learning techniques. The proposed system consists of an attention model, based on a new deep architecture proposed in this dissertation, followed by a deep network based on Faster-RCNN. The attention model produces a high-resolution map that indicates likely locations of text instances. A novel aspect of the system is an early fusion step that merges the attention map directly with the input image prior to word-box prediction. This approach suppresses but does not eliminate contextual information from consideration. Progressively larger models were trained in three separate phases. The resulting system has demonstrated an ability to detect text under difficult conditions related to illumination, resolution, and legibility. The system has exceeded the state of the art on the ICDAR 2013 and COCO-Text benchmarks with F-measure values of 0.875 and 0.533, respectively. / Ph. D.
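As a rough illustration of the early-fusion step described above, the sketch below merges a hypothetical attention map with the RGB input before the word-box detector runs. The abstract does not spell out the exact merge operation; this version soft-multiplies and concatenates, chosen to match the stated goal of suppressing but not eliminating context.

```python
# Hedged sketch of early fusion: an attention map in [0, 1] is merged
# with the input image before detection. The soft floor keeps non-text
# regions visible (suppressed, not eliminated); the fourth channel hands
# the raw map to the downstream detector. All details are assumptions.
import torch

def early_fuse(image: torch.Tensor, attn: torch.Tensor,
               floor: float = 0.3) -> torch.Tensor:
    """image: (B, 3, H, W) in [0, 1]; attn: (B, 1, H, W) in [0, 1]."""
    soft_attn = floor + (1.0 - floor) * attn     # never fully zero out context
    fused_rgb = image * soft_attn                # down-weight unlikely regions
    return torch.cat([fused_rgb, attn], dim=1)   # (B, 4, H, W) detector input
```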
38

CloudCV: Deep Learning and Computer Vision on the Cloud

Agrawal, Harsh 20 June 2016 (has links)
We are witnessing a proliferation of massive visual data. Visual content is arguably the fastest growing data on the web. Photo-sharing websites like Flickr and Facebook now host more than 6 and 90 billion photos, respectively. Unfortunately, scaling existing computer vision algorithms to large datasets leaves researchers repeatedly solving the same algorithmic and infrastructural problems. Designing and implementing efficient and provably correct computer vision algorithms is extremely challenging. Researchers must repeatedly solve the same low-level problems: building and maintaining a cluster of machines, formulating each component of the computer vision pipeline, designing new deep learning layers, writing custom hardware wrappers, etc. This thesis introduces CloudCV, an ambitious system that contains algorithms for end-to-end processing of visual content. The goal of the project is to democratize computer vision; one should not have to be a computer vision, big data, and deep learning expert to have access to state-of-the-art distributed computer vision algorithms. We provide researchers, students, and developers access to state-of-the-art distributed computer vision and deep learning algorithms as a cloud service through a web interface and APIs. / Master of Science
39

Learning Schemes for Adaptive Spectrum Sharing Radar

Thornton, Charles E. III 08 June 2020 (has links)
Society's newfound dependence on wireless transmission systems has driven demand for access to the electromagnetic (EM) spectrum to an all-time high. In particular, wireless applications related to the fifth generation (5G) of cellular technology along with statically allocated radar systems have contributed to the increasing scarcity of the sub 6 GHz frequency bands. As a result, development of Dynamic Spectrum Access (DSA) techniques for sharing these frequencies has become a critical research area for the greater wireless community. Since, among incumbent systems, radars are the largest consumers of spectrum in the sub 6 GHz regime, and are being used increasingly for civilian applications such as traffic control, adaptive cruise control, and collision avoidance, the need has arisen for radars that can adaptively tune specific transmission parameters in an intelligent manner to promote coexistence with other systems. Thus, fully aware, dynamic, cognitive radar has been proposed as a target for radars to evolve towards. In this thesis, we extend current research thrusts toward cognitive radar to utilize Reinforcement Learning (RL) techniques, which allow a radar system to learn desired behavior using information obtained from past transmissions. Since radar systems inherently interact with their electromagnetic environment, it is natural to view the use of reinforcement learning techniques as a straightforward extension of previous adaptive techniques. However, in designing learning algorithms for radar systems, we must carefully define goal-driven rewards, formalize the learning process, and consider an appropriate amount of environmental information. In this thesis, we apply well-established and emerging reinforcement learning approaches to meet the demands of modern radar coexistence problems. In particular, function estimation using deep neural networks is examined, as Deep RL presents a scalable learning framework which allows many environmental states to be considered in the decision-making process. We then show how these techniques can be used to improve traditional radar performance metrics, such as interference avoidance, spectral efficiency, and target detectability, with simulated and experimental results. We also compare the learning techniques to each other and to naive approaches, such as fixed-bandwidth radar and reactive interference avoidance. Finally, online learning strategies are considered which aim to balance the fundamental learning trade-off between exploration and exploitation. We show that online learning techniques can be used to select individual waveforms or applied as a high-level controller in a hierarchical learning scheme based on the biologically inspired concept of metacognition. The general use of RL techniques provides a robust framework for decision making under uncertainty that is more flexible than previously proposed cognitive radar strategies. Further, the wide array of RL models and algorithms allows the underlying structure to be applied to both small- and large-scale radar scenarios. / Master of Science / Society's newfound dependence on wireless transmission systems has driven demand for control of the electromagnetic (EM) spectrum to an all-time high. In particular, federal spectrum auctions and the fifth generation of wireless technologies have contributed to the scarcity of frequency bands below 6 GHz. These frequencies are widely used by both radar and communications systems due to favorable propagation characteristics.
However, current radar systems typically occupy a fixed bandwidth and tend to be poorly equipped to share their allocated spectrum with other users, which has become a necessity given the growth of wireless traffic. In this thesis, we study learning algorithms which enable a radar to optimize its electromagnetic pulses based on feedback from received signals. In particular, we are interested in reinforcement learning algorithms which allow a radar to learn optimal behavior based on rewards defined by a human. Using these algorithms, radar system designers can choose which metrics are most important for a given radar application, which can then be optimized for the given setting. However, scaling reinforcement learning to real-world problems such as radar optimization is often difficult due to the massive scope of the problem. Here we attempt to identify potential issues with the implementation of each algorithm and narrow in on algorithms that are well-suited for real-time radar operation.
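The waveform-selection idea in this abstract can be illustrated with a minimal epsilon-greedy bandit, one of the simplest ways to balance the exploration/exploitation trade-off mentioned above. The waveform catalog, reward signal, and parameters below are placeholders, not the thesis's radar model or metrics.

```python
# Minimal epsilon-greedy bandit over a discrete waveform catalog.
# Rewards here are synthetic stand-ins for a radar performance metric
# (e.g., interference avoided per pulse); nothing below is thesis code.
import random

class WaveformBandit:
    def __init__(self, n_waveforms: int, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_waveforms
        self.values = [0.0] * n_waveforms   # running mean reward per waveform

    def select(self) -> int:
        if random.random() < self.epsilon:                  # explore
            return random.randrange(len(self.values))
        return max(range(len(self.values)),                 # exploit
                   key=lambda a: self.values[a])

    def update(self, arm: int, reward: float) -> None:
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Each pulse: pick a waveform, transmit, score the outcome, update.
bandit = WaveformBandit(n_waveforms=8)
for _ in range(1000):
    arm = bandit.select()
    reward = random.gauss(0.5 + 0.05 * arm, 0.1)  # placeholder feedback
    bandit.update(arm, reward)
print(max(range(8), key=lambda a: bandit.values[a]))  # best waveform index
```

The hierarchical, metacognitive controller the abstract mentions would sit a level above a selector like this, choosing among learning strategies rather than among waveforms.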
40

Deep Learning Neural Network-based Sinogram Interpolation for Sparse-View CT Reconstruction

Vekhande, Swapnil Sudhir 14 June 2019 (has links)
Computed Tomography (CT) finds applications across domains like medical diagnosis, security screening, and scientific research. In medical imaging, CT allows physicians to diagnose injuries and disease more quickly and accurately than other imaging techniques. However, CT is one of the most significant contributors of radiation dose to the general population, and the radiation dose required for scanning could lead to cancer. On the other hand, too low a radiation dose could sacrifice image quality, causing misdiagnosis. To reduce the radiation dose, sparse-view CT, which involves capturing a smaller number of projections, is a promising alternative. However, the image reconstructed from linearly interpolated views possesses severe artifacts. Recently, Deep Learning-based methods are increasingly being used to interpolate the missing data by learning the nature of the image formation process. The current methods are promising but operate mostly in the image domain, presumably due to the lack of projection data. Another limitation is the use of simulated data with relatively low sparsity (up to 75%). This research aims to interpolate the missing sparse-view CT data in the sinogram domain using deep learning. To this end, a residual U-Net architecture has been trained with patch-wise projection data to minimize the Euclidean distance between the ground truth and the interpolated sinogram. The model can generate the missing projection data even at high sparsity. The results show improvements in SSIM and RMSE of 14% and 52%, respectively, over linear interpolation-based methods. Thus, experimental sparse-view CT data with 90% sparsity has been successfully interpolated while improving CT image quality. / Master of Science / Computed Tomography is a commonly used imaging technique due to its remarkable ability to visualize internal organs, bones, soft tissues, and blood vessels. It involves exposing the subject to X-ray radiation, which could lead to cancer. On the other hand, the radiation dose is critical for the image quality and subsequent diagnosis. Thus, image reconstruction using only a small number of projection data is an open research problem. Deep learning techniques have already revolutionized various Computer Vision applications. Here, we have used a method which fills in highly sparse missing CT data. The results show that the deep learning-based method outperforms standard linear interpolation-based methods while improving the image quality.
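For context on the baseline this abstract compares against, here is a minimal sketch of linear sinogram interpolation: missing projection angles are filled by interpolating each detector column between the measured views. Array shapes and the sparsity setting are illustrative, not the thesis's experimental configuration.

```python
# Sketch of the linear-interpolation baseline for sparse-view sinograms:
# each detector column is interpolated across the angle axis from the
# measured views only. The random "sinogram" is a stand-in for real data.
import numpy as np

def linear_fill(sinogram: np.ndarray, measured: np.ndarray) -> np.ndarray:
    """sinogram: (n_angles, n_detectors); measured: sorted indices of the
    angles that were actually acquired."""
    n_angles, n_det = sinogram.shape
    filled = np.empty_like(sinogram, dtype=float)
    for d in range(n_det):
        filled[:, d] = np.interp(np.arange(n_angles), measured,
                                 sinogram[measured, d])
    return filled

# ~90% sparsity: keep every 10th of 180 views, interpolate the rest.
full = np.random.rand(180, 256)          # placeholder for a real sinogram
kept = np.arange(0, 180, 10)
estimate = linear_fill(full, kept)
```

The thesis's residual U-Net replaces this per-column interpolation with a learned mapping, trained to minimize the Euclidean distance to the ground-truth sinogram.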
