1

A qualificação para o trabalho no ensino estadual paulista de 1. grau

Kawashita, Nobuko 30 September 1987 (has links)
Advisor: Maria Laura Franco / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Educação / Abstract: The analysis of qualification for work in primary education (ensino de 1º grau), carried out by confronting theory with practice - legislation, the relationship between education and work, work in Brazilian society, the measures taken for its implementation, and a case study - allowed me to reach several conclusions: the proclaimed intention was made unfeasible by the broader school and social reality, laying bare the contradiction between the foundation of the legal proposal (Human Capital Theory) and the advance of monopoly capitalism in Brazil; the school today has a role to play in raising the social and cultural conditions and the qualification for work of the great majority of the Brazilian population. To do so, it must undergo a profound transformation. For discussion, I propose what I understand by a school oriented to the interests of the majority of the Brazilian population, that is, at the service of their qualification as citizen-workers. / Master's degree / Teaching Methodology / Master of Education
2

Databearbetning på Ringhals

Lindskog, Jakob, Gunnarsson, Robin January 2019 (has links)
The new generation of digitalization has become ingrained in society. Algorithms and data models control the news feeds of social media, steer phones by interpreting voice commands, and drive cars in fully autonomous vehicles. In industry there is likewise an ongoing shift in which machine learning can be applied to increase operational availability and reduce costs. The current paradigm for maintaining non-safety-classified machines in the nuclear power industry is a combination of corrective maintenance and preventive maintenance: corrective maintenance means repairing a machine when a fault occurs, while preventive maintenance means servicing it at periodic intervals. Both approaches are costly because they risk under- and over-maintaining the machine, respectively, and therefore become resource-intensive. A paradigm shift is on its way, spelled Predictive Maintenance - predicting faults before they happen and planning maintenance accordingly. This report explores the possibility of using the LSTM and GRU neural networks to forecast potential damage to machines, based on measurement data and the machine's fault history.
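The abstract stops short of implementation detail, but a minimal sketch of the kind of recurrent classifier it describes could look as follows. Everything here is an assumption for illustration: the window length, the number of sensor channels, the random placeholder data, and the binary "fault within horizon" label; swapping layers.LSTM for layers.GRU gives the other architecture the report compares.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

window, n_sensors = 48, 8  # assumed: 48 time steps of 8 measurement channels per sample
x = np.random.rand(256, window, n_sensors).astype("float32")  # placeholder sensor windows
y = np.random.randint(0, 2, size=(256, 1)).astype("float32")  # placeholder label: fault within horizon

model = keras.Sequential([
    layers.Input(shape=(window, n_sensors)),
    layers.LSTM(32),                        # or layers.GRU(32) for the GRU variant
    layers.Dense(1, activation="sigmoid"),  # probability of upcoming damage
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, validation_split=0.2, verbose=0)
```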
3

Modelling CLV in the Insurance Industry Using Deep Learning Methods / Modellering av CLV inom försäkringsbranschen med användande av metoder inom djupinlärning

Jablecka, Marta January 2020 (has links)
This paper presents a master's thesis project in which deep learning methods are used both to calculate and subsequently to attempt to maximize Customer Lifetime Value (CLV) for an insurance provider's customers. Specifically, the report investigates whether panel data comprising customers' monthly insurance policy subscription history can be used with Recurrent Neural Networks (RNN) to achieve better predictive performance than a naïve forecasting model. To this end, Long Short-Term Memory (LSTM) networks are used for anomaly detection in a supervised manner to determine which customers are more likely to change their subscription policies, since the majority of customers keep the same policies from month to month. Whether Deep Reinforcement Learning (DRL) can be used in this setting to maximize CLV is also investigated. The study found that the best RNN models outperformed the naïve model in terms of precision on the subset of customers who are more likely to change their subscription policies. The models suffer, however, from several notable limitations, so further research is advised. Selecting those customers was shown to be successful in terms of precision but not sensitivity, which suggests that there is room for improvement. The DRL models did not show a substantial improvement in terms of CLV maximization.
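The precision-versus-sensitivity trade-off the abstract reports can be made concrete with a short scikit-learn snippet; the labels and predictions below are made up for illustration and are not the thesis data.

```python
from sklearn.metrics import precision_score, recall_score

# Made-up outcomes: 1 = customer changed their subscription policies, 0 = unchanged.
y_true = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 0, 1, 0, 0, 0]  # which customers the model flagged

precision = precision_score(y_true, y_pred)   # share of flagged customers that truly changed
sensitivity = recall_score(y_true, y_pred)    # share of changing customers that were caught
print(f"precision={precision:.2f}, sensitivity={sensitivity:.2f}")
# High precision with low sensitivity means the flagged customers are usually right,
# but many customers who will change are missed - the pattern the thesis reports.
```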
4

Sentiment Analysis of Nordic Languages

Mårtensson, Fredrik, Holmblad, Jesper January 2019 (has links)
This thesis explores the possibility of applying sentiment analysis to extract the tonality of user reviews in the Nordic languages. Data preprocessing is performed through tokenization and padding, and a model is built in the Keras framework. Models for classification and regression were built using LSTM and GRU architectures. The results showed how the dataset influences the end result, and the correlation between observed and predicted values for classification and regression. The project shows that it is possible to implement NLP for the Nordic languages, and how limitations in the input data and in hardware performance affected the results. Some questions that arose during the project concern methods for improving the dataset and alternative solutions for managing information related to big data and the GDPR.
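As a rough illustration of the pipeline the abstract outlines - tokenization, padding, and an LSTM or GRU model built in Keras - the sketch below uses Keras's TextVectorization layer, which handles both the tokenization and the padding to a fixed length. The example reviews, labels, vocabulary size, and sequence length are all placeholder assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder reviews and sentiment labels (1 = positive), not the thesis data.
train_texts = ["riktigt bra produkt", "dålig kvalitet", "helt fantastisk", "fungerar inte alls"]
train_labels = np.array([1, 0, 1, 0], dtype="float32")

vectorize = layers.TextVectorization(max_tokens=10_000, output_sequence_length=20)
vectorize.adapt(train_texts)  # builds the vocabulary; output_sequence_length pads/truncates

model = keras.Sequential([
    keras.Input(shape=(1,), dtype="string"),
    vectorize,
    layers.Embedding(input_dim=10_000, output_dim=32),
    layers.GRU(32),                          # or layers.LSTM(32)
    layers.Dense(1, activation="sigmoid"),   # classification head; a linear unit would give regression
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(np.array(train_texts).reshape(-1, 1), train_labels, epochs=3, verbose=0)
```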
5

Land Cover Classification on Satellite Image Time Series Using Deep Learning Models

Wang, Zhihao January 2020 (has links)
No description available.
6

Taskfinder : Comparison of NLP techniques for textclassification within FMCG stores

Jensen, Julius January 2022 (has links)
Natural language processing has many important applications today, such as translation, spam filters, and other useful products. Supervised and unsupervised machine learning models have proven successful for these applications, and the most important aspect of such models is what they can achieve with different datasets. This article examines how RNN models compare with Naive Bayes in text classification. The chosen RNN models are long short-term memory (LSTM) and gated recurrent unit (GRU), both trained using the flair framework. The models are trained on three separate datasets with different compositions, and the trend within each model is examined and compared with the other models. The results showed that Naive Bayes performed better than the RNN models at classifying short sentences, but worse on longer sentences. When trained on a small dataset, LSTM and GRU had better results than Naive Bayes. The best performing model overall was Naive Bayes, which had the highest accuracy score on two out of the three datasets.
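The flair-based recurrent models are not reproduced here, but the Naive Bayes side of the comparison can be sketched with scikit-learn. The store tasks, categories, and pipeline choices below are illustrative assumptions, not the thesis datasets.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Made-up in-store tasks and categories standing in for the FMCG datasets.
train_texts = ["restock the dairy shelf", "clean spill in aisle 4",
               "face the cereal boxes", "mop the entrance floor"]
train_labels = ["replenishment", "cleaning", "replenishment", "cleaning"]

nb = make_pipeline(CountVectorizer(), MultinomialNB())  # bag-of-words + Naive Bayes
nb.fit(train_texts, train_labels)
print(nb.predict(["restock shelf with milk"]))  # expected: ['replenishment']
```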
7

Predicting Bipolar Mood Disorder using Long Short-Term Memory Neural Networks

Hafiz, Saeed Mubasher January 2022 (has links)
Bipolar mood disorder is a severe mental condition that has multiple episodes of either of two types: manic or depressive. These phases can lead patients to become hyperactive, hyper-sexual, lethargic, or even commit suicide - all of which seriously impair patients' quality of life. Predicting these phases would help patients manage their lives better and improve our ability to apply medical interventions. Traditionally, interviews are conducted in the evening to predict potential episodes in the following days. While machine learning approaches have been used successfully before, the data was limited to a few self-reported parameters measured each day. Using biometrics recorded at short intervals over many months presents a new opportunity for machine learning approaches. However, phases of unrest and hyperactivity, which might be predictive signals, are not only often experienced long before the onset of manic or depressive phases but are also separated from them by several uneventful days. This delay and its aperiodic occurrence are a challenge for deep learning. In this thesis, a fictional dataset that mimics long and irregular delays is created and used to test the effects of such long delays and rare events. LSTMs, RNNs, and GRUs are the go-to models for deep learning in this situation, but they differ in how easily they can be trained over long time spans. As their name suggests, LSTMs are commonly believed to be easier to train and better at remembering than their simpler RNN counterparts, while GRUs represent a compromise in complexity between RNNs and LSTMs. Here, I show that, contrary to the common assumption, LSTMs are surprisingly forgetful and that RNNs have a much better ability to generalize over longer delays with shorter sequences. At the same time, I could confirm that LSTMs are easily trained on tasks that have more prolonged delays.
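The fictional dataset itself is not reproduced in the abstract; a minimal numpy sketch of the general idea - a short predictive "unrest" spike followed, after a long and irregular delay, by the rare labeled event - might look like this. Sequence length, delay range, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sequence(length=200, min_delay=20, max_delay=80):
    """One synthetic series: baseline noise, a short 'unrest' spike,
    and an event label placed a long, irregular number of steps later."""
    x = rng.normal(0.0, 0.1, size=length)             # uneventful baseline signal
    y = np.zeros(length)
    spike = rng.integers(10, length - max_delay - 1)  # where the early predictive signal occurs
    x[spike] += 1.0
    event = spike + rng.integers(min_delay, max_delay)
    y[event] = 1.0                                    # the episode the model must learn to predict
    return x, y

x, y = make_sequence()
print(int(np.argmax(x)), int(np.argmax(y)))  # spike index, then the much later event index
```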
8

Forecasting the Nasdaq-100 index using GRU and ARIMA

Cederberg, David, Tanta, Daniel January 2022 (has links)
Today, an overwhelming amount of data is collected on financial markets. For forecasting stock indexes, many models rely only on the historical values of the index itself. One such model is the ARIMA model. Over the last decades, machine learning models have challenged classical time series models such as ARIMA. The purpose of this thesis is to study the ability to make predictions based solely on the historical values of an index, using a particular class of machine learning model: a neural network in the form of a Gated Recurrent Unit (GRU). The GRU model's ability to predict a financial market is compared to that of a simple ARIMA model. The financial market chosen for the comparison is the American stock index Nasdaq-100, i.e., an index of the 100 largest non-financial companies on NASDAQ. Our results indicate that the GRU is unable to outperform ARIMA in predicting the Nasdaq-100 index. For the evaluation, multiple GRU models with various combinations of hyperparameters were created. The accuracies of these models were then compared to the accuracy of an ARIMA model by applying a conventional forecast accuracy test, which showed significant differences in accuracy between the models, in favor of ARIMA.
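The ARIMA half of the comparison can be sketched with statsmodels; the series below is a placeholder random walk, and the (1, 1, 1) order, the forecast horizon, and the error measure are assumptions rather than the thesis's actual choices.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
# Placeholder random-walk series standing in for daily Nasdaq-100 closing values.
index = 10_000 + np.cumsum(rng.normal(0, 50, size=500))

train, test = index[:-20], index[-20:]
fit = ARIMA(train, order=(1, 1, 1)).fit()   # assumed order
forecast = fit.forecast(steps=len(test))

mae = np.mean(np.abs(forecast - test))
print(f"ARIMA(1,1,1) 20-step MAE: {mae:.1f}")
# GRU forecasts over the same horizon would then be compared against this,
# for example with a Diebold-Mariano-style forecast accuracy test.
```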
9

Churn prediction using time series data / Prediktion av kunduppsägelser med hjälp av tidsseriedata

Granberg, Patrick January 2020 (has links)
Customer churn is problematic for any business trying to expand its customer base. The acquisition of new customers to replace churned ones is associated with additional costs, whereas taking measures to retain existing customers may prove more cost-efficient. As such, it is of interest to estimate, for every customer, the time until a potential churn occurs, so that preventive measures can be taken. The application of deep learning and machine learning to this type of problem using time series data is relatively new, and there is a lot of recent research on the topic. This thesis is based on the assumption that early signs of churn can be detected in the temporal changes in customer behavior. Recurrent neural networks, and more specifically long short-term memory (LSTM) and gated recurrent unit (GRU) networks, are suitable contenders since they are designed to take the sequential time aspect of the data into account. Random forest (RF) and support vector machine (SVM) are machine learning models that are frequently used in related research. The problem is approached as a classification task, and a comparison is made between implementations using LSTM, GRU, RF, and SVM. According to the results, LSTM and GRU perform similarly while being slightly better than RF and SVM at predicting which customers will churn in the coming six months, and all models could potentially lead to cost savings according to simulations (using non-official but reasonable costs assigned to each prediction outcome). Predicting the time until churn is a more difficult problem and none of the models give reliable estimates, but all models are significantly better than random predictions.
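The RF and SVM baselines mentioned in the abstract can be sketched with scikit-learn; the customer features, labels, and model settings below are placeholder assumptions, with twelve months of activity flattened into a single feature row per customer.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Placeholder data: 12 monthly activity counts per customer; label 1 = churned within six months.
X = rng.poisson(5, size=(400, 12)).astype(float)
y = rng.integers(0, 2, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for name, clf in [("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 2))
# The LSTM/GRU models would instead consume the same months as a sequence of shape
# (customers, 12, features) so that the temporal order is preserved.
```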
10

Predict Next Location of Users using Deep Learning

Guan, Xing January 2019 (has links)
Predicting the next location of a user has been of interest to both academia and industry. Applications like location-based advertising, traffic planning, intelligent resource allocation, and recommendation services are some of the problems that many are interested in solving. Along with technological advancement and the widespread usage of electronic devices, many location-based records are created. Today, deep learning frameworks have successfully surpassed many conventional methods in many learning tasks, most notably in the areas of image and voice recognition. One neural network architecture that has shown promising results on sequential data is the Recurrent Neural Network (RNN). Since the creation of the RNN, many alternative architectures have been proposed, of which Long Short Term Memory (LSTM) and Gated Recurrent Units (GRU) are among the most popular[5]. This thesis uses a GRU architecture and features that incorporate time and location into the network to forecast people's next location. A spatial-temporal neural network (ST-GRU) is proposed. It can be seen as two parts: ST and GRU. The first part is a feature extraction algorithm that pulls the information out of a trajectory into location sequences, transforming the trajectory into a sequence format suitable for feeding into the model. The second part, the GRU, predicts the next location given a user's trajectory. The study shows that the proposed ST-GRU model achieves the best results compared with the baseline models.
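The exact ST-GRU architecture is not specified in the abstract; the sketch below is only one plausible reading of its two parts, with location sequences fed through an embedding and two assumed per-step features (for example time of day and distance travelled) concatenated before a GRU that predicts the next location. All sizes and the placeholder data are assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_locations, seq_len = 500, 10   # assumed: 500 candidate places, sequences of 10 visits
n_samples = 1024                 # placeholder number of training sequences

loc_seq = np.random.randint(0, n_locations, size=(n_samples, seq_len))  # visited place IDs
extra = np.random.rand(n_samples, seq_len, 2).astype("float32")         # e.g. time, distance
next_loc = np.random.randint(0, n_locations, size=(n_samples,))         # target place ID

loc_in = keras.Input(shape=(seq_len,), dtype="int32")
extra_in = keras.Input(shape=(seq_len, 2))
h = layers.Concatenate()([layers.Embedding(n_locations, 32)(loc_in), extra_in])
h = layers.GRU(64)(h)
out = layers.Dense(n_locations, activation="softmax")(h)  # distribution over the next location

model = keras.Model([loc_in, extra_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit([loc_seq, extra], next_loc, epochs=2, batch_size=64, verbose=0)
```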
