About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

High performance computing and algorithm development: application of dataset development to algorithm parameterization

Jonas, Mario Ricardo Edward January 2006 (has links)
Magister Scientiae - MSc / A number of technologies exist that capture data from biological systems. In addition, several computational tools have been created that aim to organize the data resulting from these technologies. The ability of these tools to organize the information into biologically meaningful results, however, needs to be stringently tested. The research contained herein focuses on data produced by technology that records short Expressed Sequence Tags (ESTs). / South Africa
2

A Human Kinetic Dataset and a Hybrid Model for 3D Human Pose Estimation

Wang, Jianquan 12 November 2020 (has links)
Human pose estimation represents the skeleton of a person in color or depth images to improve a machine's understanding of human movement. 3D human pose estimation uses a three-dimensional skeleton to represent the human body posture, which is more stereoscopic than a two-dimensional skeleton. 3D human pose estimation can therefore enable machines to play a role in physical education and health recovery, reducing labor costs and the risk of disease transmission. However, the existing datasets for 3D pose estimation do not involve fast motions, which cause motion blur for a monocular camera but allow the subjects' limbs to move through a wider range of angles. The existing models cannot guarantee both real-time performance and high accuracy, which are essential in physical education and health recovery applications. To improve real-time performance, researchers have tried to minimize the size of the model and have studied more efficient deployment methods. To improve accuracy, researchers have tried to use heat maps or point clouds to represent features, but this increases the difficulty of model deployment. To address the lack of datasets that include fast movements and the lack of easy-to-deploy models, we present a human kinetic dataset called the Kivi dataset and a hybrid model that combines the benefits of a heat map-based model and an end-to-end model for 3D human pose estimation. We describe the process of data collection and cleaning in this thesis. Our proposed Kivi dataset contains large-scale movements of humans. In the dataset, 18 joint points represent the human skeleton. We collected data from 12 people, and each person performed 38 sets of actions; each frame of data therefore has a corresponding person and action label. We design a preliminary model and propose an improved model to infer 3D human poses in real time. When validating our method on the Invariant Top-View (ITOP) dataset, we found that compared with the initial model, our improved model improves mAP@10cm by 29%. When testing on the Kivi dataset, our improved model improves mAP@10cm by 15.74% compared to the preliminary model. Our improved model can reach 65.89 frames per second (FPS) on the TensorRT platform.
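The mAP@10cm metric cited above is conventionally the fraction of predicted joints falling within 10 cm of ground truth, averaged per joint; a minimal sketch under that reading (array shapes follow the 18-joint skeleton described in the abstract; the data below is random and for shape only, not from the thesis):

```python
import numpy as np

def map_at_threshold(pred, gt, threshold_m=0.10):
    """Mean average precision at a distance threshold for 3D pose estimation.

    pred, gt: arrays of shape (n_frames, n_joints, 3), coordinates in meters.
    A predicted joint counts as correct if it lies within `threshold_m`
    of the ground-truth joint; scores are averaged per joint, then overall.
    """
    # Euclidean distance between each predicted and ground-truth joint
    dists = np.linalg.norm(pred - gt, axis=-1)   # (n_frames, n_joints)
    correct = dists <= threshold_m               # boolean hits per joint
    per_joint_ap = correct.mean(axis=0)          # detection rate per joint
    return per_joint_ap.mean()                   # mAP over all joints

# Illustrative only: 100 frames of an 18-joint skeleton with ~5 cm noise
rng = np.random.default_rng(0)
gt = rng.normal(size=(100, 18, 3))
pred = gt + rng.normal(scale=0.05, size=gt.shape)
print(f"mAP@10cm: {map_at_threshold(pred, gt):.2%}")
```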
3

Savings, risk coping, and poverty dynamics of rural households in developing countries

Imai, Katsushi January 2000 (has links)
No description available.
4

Selected results from clustering and analyzing stock market trade data

Zhang, Zhihan January 1900 (has links)
Master of Science / Department of Statistics / Michael Higgins / The amount of data generated from stock market trading is massive. For example, roughly 10 million trades are performed each day on the NASDAQ stock exchange. A significant proportion of these trades are made by high-frequency traders, entities that make on the order of thousands or more trades a day. However, the stock-market factors that drive the decisions of high-frequency traders are poorly understood. Recently, hybridized threshold clustering (HTC) has been proposed as a way of clustering large-to-massive datasets. In this report, we use three months of NASDAQ HFT data (a dataset containing information on all trades of 120 different stocks, including identifiers on whether the buyer and/or seller were high-frequency traders) to investigate the trading patterns of high-frequency traders, and we explore the use of HTC to identify these patterns. We find that, while HTC can be successfully performed on the NASDAQ HFT dataset, the amount of information gleaned from this clustering is limited. Instead, we show that an understanding of the habits of high-frequency traders may be gained by looking at "janky" trades, those in which the number of shares traded is not a multiple of 10. We demonstrate evidence that janky trades are more common among high-frequency traders. Additionally, we suggest that a large number of small, janky trades may help signal that a large trade will happen shortly afterward.
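The notion of a janky trade above is simple enough to check mechanically; a minimal pandas sketch (the column names are illustrative, not the actual NASDAQ HFT schema):

```python
import pandas as pd

# Hypothetical trade records; the real NASDAQ HFT dataset flags whether the
# buyer and/or seller were high-frequency traders with similar indicators.
trades = pd.DataFrame({
    "shares": [100, 37, 500, 113, 20, 9],
    "buyer_is_hft": [True, True, False, True, False, True],
})

# A "janky" trade: the share count is not a multiple of 10
trades["janky"] = trades["shares"] % 10 != 0

# Compare janky-trade rates for HFT vs. non-HFT buyers
print(trades.groupby("buyer_is_hft")["janky"].mean())
```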
5

Efektyvaus manipuliavimo duomenimis informacinėse medicinos sistemose tyrimas / The research of effective data manipulation in medicine systems

Kučinskas, Mindaugas 25 May 2006 (has links)
This work aims to research the most effective way of using ADO.NET data access components in the medical system development process. We start by evaluating the current situation of e-medicine in Lithuania and worldwide. We consider the problems that originate from doing a great deal of "paper work", introduce EMR (Electronic Medical Record) systems, and list the advantages of using EMR in medical organizations. We then give a brief description of EMR functionality and of the standards and technologies used in the EMR creation process. Finally, we concentrate on the data manipulation problem, trying to discover the best solution using ADO.NET data access components. In addition, we analyze all the components of the ADO.NET architecture, describe the situations in which each component is most suitable, and suggest how to avoid performance degradation in data manipulation processes. Data is the most valuable asset in medicine, so we must ensure that the data manipulation process is effective and does not cause any additional problems for medical personnel.
6

Accessible Retail Shopping For The Visually Impaired Using Deep Learning

January 2020 (has links)
abstract: Over the past decade, advancements in neural networks have been instrumental in achieving remarkable breakthroughs in the field of computer vision. One of the applications is in creating assistive technology to improve the lives of visually impaired people by making the world around them more accessible. A great deal of research in convolutional neural networks has led to human-level performance in different vision tasks, including image classification, object detection, instance segmentation, semantic segmentation, panoptic segmentation, and scene text recognition. All the aforementioned tasks, individually or in combination, have been used to create assistive technologies to improve accessibility for the blind. This dissertation outlines various applications to improve accessibility and independence for visually impaired people during shopping by helping them identify products in retail stores. The dissertation includes the following contributions: (i) a dataset containing images of breakfast-cereal products and a classifier using a deep neural network (ResNet); (ii) a dataset for training a text detection and scene-text recognition model; (iii) a model for text detection and scene-text recognition to identify product images using a user-controlled camera; (iv) a dataset of twenty thousand products with product information and related images that can be used to train and test a system designed to identify products. / Dissertation/Thesis / Masters Thesis Computer Science 2020
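Contribution (i) pairs a cereal-image dataset with a ResNet classifier; a minimal fine-tuning sketch using torchvision (the class count and backbone depth are assumptions, not stated in the abstract):

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CEREAL_CLASSES = 50  # assumption; the abstract does not give the count

# Load an ImageNet-pretrained ResNet and replace the final classification layer
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CEREAL_CLASSES)

# Forward pass on a dummy batch of four 224x224 RGB images
x = torch.randn(4, 3, 224, 224)
logits = model(x)
print(logits.shape)  # torch.Size([4, 50])
```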
7

Optimalizace tvorby trénovacího a validačního datasetu pro zvýšení přesnosti klasifikace v dálkovém průzkumu Země / Training and validation dataset optimization for Earth observation classification accuracy improvement

Potočná, Barbora January 2019 (has links)
This thesis deals with training and validation datasets for Earth observation classification accuracy improvement. Experiments with training data and validation data for two classification algorithms (Maximum Likelihood - MLC and Support Vector Machine - SVM) are carried out on the forest-meadow landscape located in the foothills of the Giant Mountains (Podkrkonoší). The thesis is based on the assumption that 1/3 training data and 2/3 validation data is the ideal ratio to achieve maximal classification accuracy (Foody, 2009). Another hypothesis was that in the case of SVM classification, a lower number of training points is required to achieve the same or similar classification accuracy as with the MLC algorithm (Foody, 2004). The main goal of the thesis was to test the influence of the proportion/amount of training and validation data on the classification accuracy of Sentinel-2A multispectral data using the MLC algorithm. The highest overall accuracy using the MLC classification algorithm was achieved for 375 training and 625 validation points. The overall accuracy for this ratio was 72.88%. The theory of Foody (2009) that 1/3 training data and 2/3 validation data is the ideal ratio to achieve the highest classification accuracy was confirmed by the overall accuracy and...
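The 375/625 split tested above can be reproduced with a standard utility; a minimal sketch on synthetic stand-in data (the feature count, class count, and stratification are illustrative assumptions, not from the thesis):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 1000 labeled sample points with 10 spectral features
# each; real data would come from the Sentinel-2A imagery and reference classes.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 10))
y = rng.integers(0, 5, size=1000)   # 5 land-cover classes, illustrative

# Foody's (2009) ratio: 1/3 training, 2/3 validation, here 375 / 625 points
X_train, X_val, y_train, y_val = train_test_split(
    X, y, train_size=375, test_size=625, stratify=y, random_state=42
)
print(len(X_train), len(X_val))  # 375 625
```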
8

High-Resolution Imaging of Earth's Lowermost Mantle

January 2019 (has links)
abstract: This research investigates the fine-scale structure in Earth's mantle, especially the lowermost mantle, where strong heterogeneity exists. Recent seismic tomography models have resolved large-scale features in the lower mantle, such as the large low shear velocity provinces (LLSVPs). However, differences are present between models, especially at shorter length scales. Fine-scale structures both within and outside LLSVPs are still poorly constrained. The drastic growth of global seismic networks presents densely sampled seismic data in unprecedented quality and quantity. In this work, the Empirical Wavelet construction method has been developed to document seismic travel time and waveform information for a global shear wave seismic dataset. A dataset of 250K high-quality seismic records with comprehensive measurements is documented and made publicly available. To more accurately classify high-quality seismic signals from noise, 1.4 million manually labeled seismic records have been used to train a supervised classification model. The constructed model performed better than the empirical model deployed in the Empirical Wavelet method, with 87% precision and 83% recall. To utilize lower-amplitude phases such as higher multiples of S and ScS waves, we have developed a geographic bin stacking method to improve the signal-to-noise ratio. It is then applied to Sn waves up to n=6 and ScSn waves up to n=5 for both minor- and major-arc phases. The virtual stations constructed provide unique path sampling and coverage, vastly improving sampling in the Southern Hemisphere. With the high-quality dataset we have gathered, ray-based layer-stripping iterative forward tomography is implemented to update a starting tomography model by mapping the travel time residuals along the ray from the surface down to the core-mantle boundary. Final updated models with different starting tomography models show consistent updates, suggesting a convergent solution. The final updated models show higher resolution than the starting tomography models, especially for intermediate-scale structures. The combined analyses and results in this work provide new tools and new datasets to image the fine-scale heterogeneous structures in the lower mantle, advancing our understanding of the dynamics and evolution of Earth's mantle. / Dissertation/Thesis / Doctoral Dissertation Geological Sciences 2019
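Geographic bin stacking, as referenced in the abstract, averages traces whose coordinates fall in the same spatial bin so that incoherent noise cancels (averaging N such traces improves SNR by roughly the square root of N); a minimal sketch under that reading (function and parameter names are illustrative, not from the dissertation):

```python
import numpy as np

def bin_stack(traces, lats, lons, bin_deg=2.0):
    """Stack aligned seismic traces by geographic bin ("virtual stations").

    traces: (n_traces, n_samples) time-aligned waveforms
    lats, lons: per-trace coordinates in degrees
    """
    # Assign each trace to a (lat, lon) bin of width bin_deg degrees
    keys = list(zip(np.floor(lats / bin_deg).astype(int),
                    np.floor(lons / bin_deg).astype(int)))
    stacks = {}
    for key, trace in zip(keys, traces):
        stacks.setdefault(key, []).append(trace)
    # Average all traces in each bin to suppress incoherent noise
    return {key: np.mean(group, axis=0) for key, group in stacks.items()}

# Illustrative: 200 noisy copies of one pulse at random locations
rng = np.random.default_rng(1)
signal = np.exp(-np.linspace(-3, 3, 128) ** 2)
traces = signal + rng.normal(scale=1.0, size=(200, 128))
stacked = bin_stack(traces, rng.uniform(-10, 10, 200), rng.uniform(-10, 10, 200))
print(len(stacked), "virtual stations")
```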
