1

Semantic Segmentation of Iron Ore Pellets in the Cloud

Lindberg, Hampus January 2021 (has links)
This master's thesis evaluates data annotation, semantic segmentation and Docker for use in AWS. The provided data had to be annotated so it could serve as a dataset for training a neural network, and different neural network models were then to be compared based on performance. Since AWS offers the option to use Docker containers, that option was examined, and lastly the different tools available in AWS SageMaker were analyzed for bringing a neural network to the cloud. Images were annotated in Ilastik, resulting in a dataset of 276 images, and a neural network was then created in PyTorch using the library Segmentation Models PyTorch, which gave the option of trying different models. This neural network was created in a notebook in Google Colab for a quick setup and easy testing. The dataset was then uploaded to AWS S3 and the notebook was moved from Colab to an AWS instance, where the dataset could be loaded from S3. A Docker container was created and packaged with the necessary packages and libraries as well as the training and inference code, and then pushed to the ECR (Elastic Container Registry). This container could then be used to run training jobs in SageMaker, resulting in a trained model stored in S3; the hyperparameter tuning tool was also examined to obtain a better-performing model. The two deployment methods in SageMaker were then investigated to understand the entire machine learning solution. The images annotated in Ilastik were deemed sufficient, as the neural network results were satisfactory. The neural network created was able to use all of the models accessible from Segmentation Models PyTorch, which enabled many options. By using a Docker container, all of the tools available in SageMaker could be used with the created neural network packaged in the container and pushed to the ECR. Training jobs were run in SageMaker using the container to obtain a trained model, which could be saved to AWS S3. Hyperparameter tuning achieved better results than the manually tested parameters and yielded the best neural network produced. The model deemed best was Unet++ in combination with the DPN-98 encoder. The two deployment methods in SageMaker were explored; each is believed to be beneficial in different ways, so the choice has to be reconsidered for each project. Based on the analysis, the cloud solution was deemed the better alternative compared to an in-house solution in all three aspects measured: price, performance and scalability.
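As a concrete illustration of the library named above, the following is a minimal sketch (not the author's code) of instantiating the best-performing combination reported here, Unet++ with a DPN-98 encoder, via Segmentation Models PyTorch; the two-class (pellet/background) setup and the input size are assumptions for illustration.

```python
# Minimal sketch: Unet++ with a DPN-98 encoder via Segmentation Models
# PyTorch, the combination the thesis reports as best. The class count
# and input resolution are illustrative assumptions.
import torch
import segmentation_models_pytorch as smp

model = smp.UnetPlusPlus(
    encoder_name="dpn98",        # encoder reported as best in the thesis
    encoder_weights="imagenet",  # start from pretrained weights
    in_channels=3,               # RGB input images
    classes=2,                   # assumed: pellet vs. background
)

# Dummy forward pass to confirm the per-class output shape.
model.eval()
x = torch.randn(1, 3, 256, 256)  # side length must be divisible by 32
with torch.no_grad():
    mask_logits = model(x)       # shape: (1, 2, 256, 256)
```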
2

Semantic Segmentation of Iron Pellets as a Cloud Service

Christopher, Rosenvall January 2020 (has links)
This master’s thesis evaluates automatic data annotation and machine learning predictions on iron ore pellets using tools provided by Amazon Web Services (AWS) in the cloud. The main tool in focus is Amazon SageMaker, which is capable of automatic data annotation as well as building, training and deploying machine learning models quickly. Three different models were trained using SageMaker's built-in semantic segmentation algorithm: PSP, FCN and DeepLabV3. The dataset used for training and evaluation contains 180 images of iron ore pellets collected from LKAB's experimental blast furnace in Luleå, Sweden. The Amazon Web Services solution for automatic annotation was shown to be of no use when annotating microscopic images of iron ore pellets; Ilastik, an interactive learning and segmentation toolkit, proved far superior for the task at hand. Of the three trained networks, the Fully-Convolutional Network (FCN) performed best with respect to inference and training times: it was the quickest network to train, and its inference time was within 1% of the fastest. The Fully-Convolutional Network had an average accuracy of 85.8% on the dataset, with both PSP and DeepLabV3 showing similar performance. From the results in this thesis it was concluded that there are benefits to running deep neural networks as a cloud service for the analysis and management of iron ore pellets.
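For readers unfamiliar with the workflow this abstract describes, here is a hedged sketch of launching a training job with SageMaker's built-in semantic segmentation algorithm, which exposes FCN, PSP and DeepLabV3 through a single hyperparameter; the bucket names, role ARN, instance type and hyperparameter values are illustrative placeholders, not the thesis's actual configuration.

```python
# Hedged sketch of a SageMaker training job with the built-in semantic
# segmentation algorithm. All names, paths and values are placeholders.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder role

# Resolve the container image for the built-in algorithm in this region.
image = image_uris.retrieve("semantic-segmentation", session.boto_region_name)

estimator = Estimator(
    image_uri=image,
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",          # GPU instance, illustrative
    output_path="s3://my-bucket/output",    # placeholder bucket
    sagemaker_session=session,
)

# 'algorithm' selects fcn, psp or deeplab, matching the comparison above.
estimator.set_hyperparameters(
    algorithm="fcn",
    backbone="resnet-50",
    num_classes=2,    # assumed binary pellet segmentation
    epochs=30,        # illustrative value
)

estimator.fit({
    "train": "s3://my-bucket/train",
    "validation": "s3://my-bucket/validation",
    "train_annotation": "s3://my-bucket/train_annotation",
    "validation_annotation": "s3://my-bucket/validation_annotation",
})
```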
3

Using supervised learning methods to predict the stop duration of heavy vehicles

Oldenkamp, Emiel January 2020 (has links)
In this thesis project, we attempt to predict the stop duration of heavy vehicles using data based on GPS positions collected in a previous project. All of the training and prediction is done in AWS SageMaker, and we explore the possibilities of Linear Learner, K-Nearest Neighbors and XGBoost, all of which are explained in this paper. Although we were not able to construct a production-grade model within the time frame of the thesis, we were able to show that the potential for such a model exists given more time, and we propose several paths one can take to improve on the end point of this project.
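To make the modelling task concrete, here is an illustrative sketch of a plain XGBoost regressor for stop-duration prediction; the synthetic features stand in for GPS-derived ones, since the thesis's actual feature set is not given in this abstract and the thesis itself used the SageMaker implementations of Linear Learner, K-Nearest Neighbors and XGBoost.

```python
# Illustrative sketch only: XGBoost regression on synthetic stand-ins
# for GPS-derived features (e.g. hour of day, location cluster). Not
# the thesis's data or feature set.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 4))      # stand-in for engineered GPS features
y = rng.random(1000) * 120.0   # synthetic stop durations in minutes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = xgb.XGBRegressor(n_estimators=200, max_depth=5, learning_rate=0.1)
model.fit(X_train, y_train)

# Score the held-out split (R^2 under the scikit-learn estimator API).
print("R^2 on held-out data:", model.score(X_test, y_test))
```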
4

Myaamia Translator: Using Neural Machine Translation With Attention to Translate a Low-resource Language

Baaniya, Bishal 06 April 2023 (has links)
No description available.
5

Analysis Of Fastlane For Digitalization Through Low-Code ML Platforms

Raghavendran, Krishnaraj January 2022 (has links)
Even a professional photographer sometimes uses the automatic default settings that come with a camera to take a photo. One can debate the quality of the outcome from manual versus automatic mode, but unless we have a professional level of competence in taking photos, keep our skills and knowledge up to date with the latest market trends, and have enough time to try out different settings manually, it is worthwhile to use auto-mode: camera manufacturers, after several iterations of testing, arrive at a list of ideal parameter values that is embedded as the factory default setting when we choose auto-mode, and non-professional photographers and amateurs are recommended to use it so as not to miss the moment. Similarly, in the context of developing machine learning models, unless we have the required data-engineering and ML development competence, and the time to train and test different ML models and tune different hyperparameter settings, it is worth trying the automatic machine learning features provided off-the-shelf by all the cloud-based and cloud-agnostic ML platforms. This thesis deep-dives into evaluating the possibility of generating automatic machine learning models with the no-code/low-code experience provided by GCP, AWS, Azure and Databricks. We compare the different ML platforms on generating automatic ML models and present the results. The thesis also covers the lessons learnt from developing automatic ML models on a sample dataset across all four ML platforms. Later, we outline machine learning subject matter experts' viewpoints on using automatic machine learning models. From this research, we found that automatic machine learning can come in handy for many off-the-shelf analytical use-cases; it can be highly beneficial especially for time-constrained projects, when resource competence or staffing is a bottleneck, and even when competent data scientists want a second opinion or want to compare AutoML results with a custom-built ML model.
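As a sketch of what the no-code/low-code AutoML experience amounts to programmatically on one of the four platforms compared, the snippet below starts an AWS SageMaker Autopilot job via boto3; the job name, S3 paths, target column and role ARN are placeholders, and the other three platforms (GCP, Azure, Databricks) expose analogous APIs.

```python
# Hedged sketch: launching an AutoML run with SageMaker Autopilot via
# boto3. All names, paths and the target column are placeholders.
import boto3

sm = boto3.client("sagemaker")

sm.create_auto_ml_job(
    AutoMLJobName="automl-demo-job",                       # placeholder
    InputDataConfig=[{
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-bucket/train.csv",       # placeholder
            }
        },
        "TargetAttributeName": "label",                    # placeholder
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/automl-output"},
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
)

# Poll the job; on completion Autopilot exposes its best candidate model.
status = sm.describe_auto_ml_job(AutoMLJobName="automl-demo-job")
print(status["AutoMLJobStatus"])
```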
