561

Classification of glomerular pathological findings using deep learning and nephrologist-AI collective intelligence approach / 深層学習および腎臓内科医と人工知能との集合知アプローチを用いた糸球体病理所見の分類

Uchino, Eiichiro 24 September 2021 (has links)
Kyoto University / New-system thesis doctorate / Doctor of Medical Science / Otsu No. 13440 / Thesis MD No. 2239 / 新制||医||1054 (University Library) / Department of Medicine, Graduate School of Medicine, Kyoto University / (Examiners) Prof. Tomohiro Kuroda, Prof. Michiyuki Matsuda, Prof. Kenji Osafune / Qualified under Article 4, Paragraph 2 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
562

TRACE: A Differentiable Approach to Line-Level Stroke Recovery for Offline Handwritten Text

Archibald, Taylor Neil 01 December 2020 (has links)
Stroke order and velocity are helpful features in the fields of signature verification, handwriting recognition, and handwriting synthesis. Recovering these features from offline handwritten text is a challenging and well-studied problem. We propose a new model called TRACE (Trajectory Recovery by an Adaptively-trained Convolutional Encoder). TRACE is a differentiable approach using a convolutional recurrent neural network (CRNN) to infer temporal stroke information from long lines of offline handwritten text with many characters. TRACE is perhaps the first system to be trained end-to-end on entire lines of text of arbitrary width and does not require the use of dynamic exemplars. Moreover, the system does not require images to undergo any pre-processing, nor do the predictions require any post-processing. Consequently, the recovered trajectory is differentiable and can be used as a loss function for other tasks, including synthesizing offline handwritten text. We demonstrate that temporal stroke information recovered by TRACE from offline data can be used for handwriting synthesis and establish the first benchmarks for a stroke trajectory recovery system trained on the IAM online database.
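The abstract's key claim is that the recovered trajectory is differentiable and can serve as a loss function for other tasks. A minimal sketch of that idea (illustrative only, not the authors' code): compare a predicted stroke trajectory against a reference point by point, producing a scalar loss that could back-propagate into an upstream generator.

```python
import numpy as np

def trajectory_loss(pred, ref):
    """Mean squared error between two (N, 2) stroke trajectories."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return float(np.mean((pred - ref) ** 2))

ref_traj  = [(0, 0), (1, 0), (2, 1)]   # reference pen positions
pred_traj = [(0, 0), (1, 1), (2, 1)]   # one point off by one unit in y
print(trajectory_loss(pred_traj, ref_traj))  # 1/6 ~ 0.1667
```

In a full system the predicted trajectory would come from the CRNN and the gradient of this loss would flow through it; here both trajectories are plain arrays for illustration.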
563

DEEP LEARNING FOR STATISTICAL DATA ANALYSIS: DIMENSION REDUCTION AND CAUSAL STRUCTURE INFERENCE

Siqi Liang (11799653) 19 December 2021 (has links)
<div>During the past decades, deep learning has proven to be an important tool for statistical data analysis. Motivated by its promise in tackling the curse of dimensionality, we propose in this dissertation three innovative methods that apply deep learning techniques to high-dimensional data analysis.</div><div><br></div><div>First, we propose a nonlinear sufficient dimension reduction (SDR) method, the split-and-merge deep neural network (SM-DNN), which employs the split-and-merge technique on deep neural networks to obtain a nonlinear sufficient dimension reduction of the input data and then learns a deep neural network on the dimension-reduced data. We show that the DNN-based dimension reduction is sufficient for data drawn from the exponential family, retaining all information about the response contained in the explanatory data. Our numerical experiments indicate that the SM-DNN method can lead to significant improvements in phenotype prediction for a variety of real data examples. In particular, using only rare variants, we achieved a remarkable prediction accuracy of over 74% on the Early-Onset Myocardial Infarction (EOMI) exome sequence data.</div><div><br></div><div>Second, we propose another nonlinear SDR method based on a new type of stochastic neural network, developed under a rigorous probabilistic framework, and show that it can be used for sufficient dimension reduction of high-dimensional data. The proposed stochastic neural network can be trained using an adaptive stochastic gradient Markov chain Monte Carlo algorithm. Through extensive experiments on real-world classification and regression problems, we show that the proposed method compares favorably with existing state-of-the-art sufficient dimension reduction methods and is computationally more efficient for large-scale data.</div><div><br></div><div>Finally, we propose a method for learning the causal structure hidden in high-dimensional data, which consists of two stages: we first conduct Bayesian sparse learning for variable screening to build a primary graph, and then we perform conditional independence tests to refine the primary graph. Extensive numerical experiments and quantitative tests confirm the generality, effectiveness, and power of the proposed methods.</div>
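The two-stage structure-learning idea (sparse screening to build a primary graph, then conditional-independence tests to prune it) can be sketched with simple stand-ins: marginal correlation for the screening stage and partial correlation for the conditional test. This is a toy illustration under those assumptions, not the dissertation's Bayesian method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
y = x + 0.1 * rng.normal(size=n)   # y depends on x
z = y + 0.1 * rng.normal(size=n)   # z depends only on y (chain x -> y -> z)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def partial_corr(a, b, c):
    # Correlation of a and b after regressing out c from each.
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return corr(ra, rb)

# Stage 1: marginal screening keeps every strongly associated pair,
# including the spurious x-z edge induced by the chain.
screened = abs(corr(x, z)) > 0.5
# Stage 2: conditioning on y reveals x-z as spurious and prunes it.
refined = abs(partial_corr(x, z, y)) > 0.5
print(screened, refined)  # True False
```

The point of the two stages survives even in this toy form: screening is cheap but over-connects; the conditional test removes edges explained away by a third variable.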
564

Accelerating Emerging Neural Workloads

Jacob R Stevens (11805797) 20 December 2021 (has links)
<div>Due to a combination of algorithmic advances, the widespread availability of rich data sets, and tremendous growth in compute availability, Deep Neural Networks (DNNs) have seen considerable success in a wide variety of fields, achieving state-of-the-art accuracy in a number of perceptual domains, such as text, video and audio processing. Recently, there have been many efforts to extend this success in the perceptual, Euclidean-based domain to non-perceptual tasks, such as task planning or reasoning, as well as to non-Euclidean domains, such as graphs. While several DNN accelerators have been proposed in the past decade, they largely focus on traditional DNN workloads, such as Multi-Layer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs). These accelerators are ill-suited to the unique computational needs of emerging neural networks. In this dissertation, we aim to close this gap by proposing novel hardware architectures specifically tailored to emerging neural workloads.</div><div><br></div><div>First, we consider memory-augmented neural networks (MANNs), a new class of neural networks exhibiting capabilities, such as one-shot learning and task planning, that are well beyond those of traditional DNNs. MANNs augment a traditional DNN with an external differentiable memory that is used to store dynamic state. This dissertation proposes a novel accelerator that targets the main bottleneck of MANNs: the soft reads and writes to this external memory, each of which requires access to all memory locations.</div><div><br></div><div>We then focus on Transformer networks, which have become very popular for Natural Language Processing (NLP). A key to the success of these networks is a technique called self-attention, which employs a softmax operation. Softmax is poorly supported in modern, matrix-multiply-focused accelerators, since it accounts for a very small fraction of traditional DNN workloads.
We propose a hardware/software co-design approach to realize softmax efficiently by utilizing a suite of approximate computing techniques.</div><div><br></div><div>Next, we address graph neural networks (GNNs). GNNs are achieving state-of-the-art results in a variety of fields, such as physics modeling, chemical synthesis, and electronic design automation. GNNs are a hybrid between graph processing workloads and DNN workloads: they use DNN-based feature extractors to form hidden representations for each node in a graph and then combine these representations through some form of graph traversal. As a result, existing hardware specialized for either graph processing workloads or DNN workloads is insufficient. Instead, we design a novel architecture that balances the needs of these two heterogeneous compute patterns. We also propose a novel feature-dimension-blocking dataflow that further increases performance by mitigating the memory bottleneck.</div><div><br></div><div>Finally, we address the growing difficulty of tightly coupling new DNNs and a hardware platform. Given the extremely large DNN-HW design space, consisting of DNN selection, hardware operating condition, and DNN-to-HW mapping, it is infeasible to exhaustively search this space by running each sample on a physical hardware device. This has led to the need for highly accurate, machine learning-based performance models that can <i>predict</i> latency, power, and energy even faster than direct execution. We first present a taxonomy characterizing the possible approaches to these performance estimators. Based on the insights from this taxonomy, we present a new performance estimator that combines coarse-grained and fine-grained modeling to achieve superior accuracy with a limited number of training samples.
Finally, we propose a flexible framework for creating these DNN-HW performance estimators.</div><div><br></div><div>In summary, this dissertation identifies the growing gap between current hardware and new emerging neural networks. We first propose three novel hardware architectures that address this gap for MANNs, Transformers, and GNNs. We then propose a novel hardware-aware DNN estimator and framework to ease addressing this gap for new networks in the future.</div>
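The softmax bottleneck described above can be illustrated with one approximation commonly used in hardware designs (an assumption here, not necessarily the dissertation's technique): replacing e^x with 2^x, which is cheap in fixed-point logic. Scaling the input by log2(e) makes the substitution exact in real arithmetic, so the only error in hardware comes from quantization.

```python
import numpy as np

LOG2E = np.log2(np.e)

def softmax_base2(x):
    # Hardware-friendly variant: shift by the max for numerical stability,
    # rescale by log2(e), then use a power of two instead of exp().
    x = np.asarray(x, float)
    t = (x - x.max()) * LOG2E
    p = np.exp2(t)
    return p / p.sum()

def softmax_ref(x):
    # Standard max-subtracted softmax for comparison.
    x = np.asarray(x, float)
    e = np.exp(x - x.max())
    return e / e.sum()

x = np.array([1.0, 2.0, 3.0])
print(np.max(np.abs(softmax_base2(x) - softmax_ref(x))))  # ~0 in float64
```

In a real accelerator the 2^t unit would operate on fixed-point values and the log2(e) scale would be folded into earlier arithmetic; this float sketch only shows why the substitution is sound.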
565

Deep Parameter Selection For Classic Computer Vision Applications

Whitney, Michael 13 December 2021 (has links)
A trend in computer vision today is to retire older, so-called "classic" methods in favor of ones based on deep neural networks. This has led to tremendous improvements in many areas, but for some problems deep neural solutions may not yet exist or be of practical application. For this and other reasons, classic methods are still widely used in a variety of applications. This paper explores the possibility of using deep neural networks to improve these older methods instead of replacing them. In particular, it addresses the issue of parameter selection in these algorithms by using a neural network to predict effective settings on a per-input basis. Specifically, we look at a straightforward and well-understood algorithm with one primary parameter: interactive graph-cut segmentation. This parameter balances region/boundary influences and heavily affects the resulting segmentation. Many practitioners tune this parameter with an ad hoc or empirically selected static setting, while others pre-analyze images to determine effective settings on a per-image basis. Tuning this parameter for each image, or even for each target selection within an image, is highly sensitive to properties of the image and object, suggesting that a network might be able to recognize these properties and predict settings that would improve performance. We employ a lightweight network with minimal layers to avoid adding significant computational overhead in this pre-analysis step. The network predicts the segmentation performance for each of a set of discretely sampled values of this parameter and selects the one with the highest predicted performance. Results demonstrate that this per-image prediction and tuning performs better than a single empirically selected setting.
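The selection step described above reduces to an argmax over a discrete candidate set. A minimal sketch, with a toy scoring function standing in for the paper's lightweight prediction network (the candidate values and the scorer are assumptions for illustration):

```python
# Discretely sampled values of the graph-cut balance parameter (assumed grid).
CANDIDATES = [0.1, 0.5, 1.0, 5.0, 10.0, 50.0]

def predicted_score(image_features, lam):
    # Stand-in for the network's per-parameter performance prediction:
    # a toy function peaking near a hypothetical per-image "ideal" lambda.
    ideal = image_features["ideal_lambda"]
    return 1.0 / (1.0 + abs(lam - ideal))

def select_lambda(image_features):
    # Keep the candidate with the highest predicted segmentation performance.
    return max(CANDIDATES, key=lambda lam: predicted_score(image_features, lam))

print(select_lambda({"ideal_lambda": 4.2}))  # 5.0 -- the closest candidate wins
```

In the actual system the scorer would be a small CNN taking the image (and selection) as input; the argmax-over-candidates structure is the part this sketch shows.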
566

Using LSTM Neural Networks To Predict Daily Stock Returns

Cavallie Mester, Jon William January 2021 (has links)
Long short-term memory (LSTM) neural networks have been proven effective for time series prediction, even in some instances where the data is non-stationary. This led us to examine their ability to predict stock market returns, as the development of stock prices and returns tends to be a non-stationary time series. We used daily stock trading data to train LSTM models to predict daily returns for 60 stocks from the OMX30 and Nasdaq-100 indices. Subsequently, we measured their accuracy, precision, and recall. The mean accuracy was 49.75 percent, meaning that the observed accuracy was close to what one would achieve by randomly selecting a prediction for each day, and lower than the accuracy achieved by blindly predicting all days to be positive. Finally, we concluded that further improvements are needed before LSTM-trained models have any notable predictive ability for stock returns.
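A sketch of the evaluation described above, assuming the standard definitions of accuracy, precision, and recall with an "up" day as the positive class (the sample labels below are invented for illustration):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, and recall for binary up/down predictions."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    tn = sum(not t and not p for t, p in zip(y_true, y_pred))
    fp = sum(not t and p for t, p in zip(y_true, y_pred))
    fn = sum(t and not p for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return acc, prec, rec

# 1 = positive (up) day, 0 = negative day
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
print(binary_metrics(y_true, y_pred))  # (0.625, 0.6, 0.75)
```

The "blindly predict all days positive" baseline mentioned above corresponds to `y_pred = [1] * len(y_true)`, whose accuracy is simply the fraction of up days.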
567

Výroba víčka / Production of a Cap

Braško, Zdenko January 2018 (has links)
The aim of this thesis is to propose and design the production of a cap. The cap is welded to the tube of a roller rack and is used to close the tube; together they form a fixed assembly. A bearing is molded inside the cap, and a shaft passes through its center. The main function of the whole assembly is to rotate the tube around its own axis. The cap will be made from deep-drawing steel DC04 with a thickness of 2 mm. After considering various possible variants for the cap's production, deep drawing without thickness reduction was chosen as the most profitable technology. The shape of the component was modified following the technology review. A tool for producing the caps is proposed. The caps will be produced in three operations: the first two form a central cup, and the third gives the cap its final shape. Drawings for the tool are created and delivered as part of this thesis. Based on the calculation of the necessary force and work, the eccentric press S 160 E from the ŠMERAL company was chosen. With a series of 25,000 pieces, the production cost per piece is 51.65 CZK.
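As a rough illustration of the force calculation mentioned above, a common textbook upper bound for deep-drawing force is F = π · d · t · Rm (punch diameter, sheet thickness, material tensile strength). The diameter and strength values below are assumptions for illustration, not figures from the thesis; only the 2 mm sheet thickness is stated above.

```python
import math

d = 0.050    # punch diameter [m] (assumed value)
t = 0.002    # sheet thickness [m] (2 mm, as stated in the abstract)
Rm = 350e6   # DC04 tensile strength [Pa] (assumed; typical range ~270-370 MPa)

# Upper-bound drawing force: the force needed to tear the cup wall.
F = math.pi * d * t * Rm
print(f"{F / 1000:.0f} kN")  # ~110 kN
```

An estimate like this is how one checks that a chosen press (here, an eccentric press rated well above the computed force) has adequate capacity; the thesis' actual sizing calculation may differ.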
568

Automatická detekce událostí ve fotbalových zápasech / An automatic football match event detection

Dvonč, Tomáš January 2020 (has links)
This diploma thesis describes methods suitable for the automatic detection of events in video sequences of football matches. The first part of the work focuses on the analysis and creation of procedures for extracting information from the available data. The second part deals with the implementation of selected methods and a neural network algorithm for corner kick detection. Two experiments were performed in this work: the first captures static information from a single image, and the second focuses on detection from spatio-temporal data. The output of this work is a program for automatic event detection, which can be used to interpret the results of the experiments. This work may serve as a basis for gaining new knowledge about the issue and for the further development of event detection in football.
569

Embracing Visual Experience and Data Knowledge: Efficient Embedded Memory Design for Big Videos and Deep Learning

Edstrom, Jonathon January 2019 (has links)
Energy efficient memory designs are becoming increasingly important, especially for applications related to mobile video technology and machine learning. The growing popularity of smart phones, tablets and other mobile devices has created an exponential demand for video applications in today’s society. When mobile devices display video, the embedded video memory within the device consumes a large amount of the total system power. This issue has created the need to introduce power-quality tradeoff techniques for enabling good quality video output, while simultaneously enabling power consumption reduction. Similarly, power efficiency issues have arisen within the area of machine learning, especially with applications requiring large and fast computation, such as neural networks. Using the accumulated data knowledge from various machine learning applications, there is now the potential to create more intelligent memory with the capability for optimized trade-off between energy efficiency, area overhead, and classification accuracy on the learning systems. In this dissertation, a review of recently completed works involving video and machine learning memories will be covered. Based on the collected results from a variety of different methods, including: subjective trials, discovered data-mining patterns, software simulations, and hardware power and performance tests, the presented memories provide novel ways to significantly enhance power efficiency for future memory devices. An overview of related works, especially the relevant state-of-the-art research, will be referenced for comparison in order to produce memory design methodologies that exhibit optimal quality, low implementation overhead, and maximum power efficiency. / National Science Foundation / ND EPSCoR / Center for Computationally Assisted Science and Technology (CCAST)
570

A Closer Look at Neighborhoods in Graph Based Point Cloud Scene Semantic Segmentation Networks

Itani, Hani 11 1900 (has links)
Large-scale semantic segmentation is considered one of the fundamental tasks in 3D scene understanding. Point clouds provide a basic and rich geometric representation of scenes and tangible objects. Convolutional Neural Networks (CNNs) have demonstrated impressive success in processing regular discrete data such as 2D images and 1D audio. However, CNNs do not directly generalize to point cloud processing due to its irregular and unordered nature. One way to extend CNNs to point cloud understanding is to derive an intermediate Euclidean representation of a point cloud by projecting it onto the image domain, voxelizing it, or treating its points as vertices of an undirected graph. Graph CNNs (GCNs) have proven to be a very promising solution for deep learning on irregular data such as social networks, biological systems, and, recently, point clouds. Early works in the literature on graph-based point networks relied on constructing dynamic graphs in the node feature space to define a convolution kernel. Later works constructed hierarchical static graphs in 3D space for an encoder-decoder framework inspired by image segmentation. This thesis takes a closer look at both dynamic and static graph neighborhoods of graph-based point networks for the task of semantic segmentation in order to: 1) discuss a potential cause of why going deep in dynamic GCNs does not necessarily lead to improved performance, and 2) propose a new approach to treating points in a static graph neighborhood for improved information aggregation. The proposed method leads to an efficient graph-based 3D semantic segmentation network that is on par with current state-of-the-art methods on both indoor and outdoor scene semantic segmentation benchmarks such as S3DIS and Semantic3D.
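The static-neighborhood aggregation discussed above can be sketched in a few lines: build k-nearest-neighbor indices in 3D space and pool each point's features over its neighborhood. Max pooling is one common choice; this is a generic illustration, not the thesis' proposed aggregation scheme.

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbors of each point (self excluded)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]   # column 0 is the point itself

def aggregate_max(features, neighbors):
    """Max-pool per-point features over each neighborhood: (N, k, C) -> (N, C)."""
    return features[neighbors].max(axis=1)

pts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [5, 5, 5]])
feats = np.array([[1.], [2.], [3.], [9.]])
nbrs = knn_indices(pts, k=2)
print(aggregate_max(feats, nbrs).ravel())  # [3. 3. 2. 3.]
```

The O(N^2) distance matrix is fine for a toy example; real point networks use spatial data structures or sampling to build these static graphs at scale.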
