1 |
A qualificação para o trabalho no ensino estadual paulista de 1. grau / Qualification for work in São Paulo state primary (1º grau) education. Kawashita, Nobuko. 30 September 1987
Advisor (Orientador): Maria Laura Franco / Master's dissertation (Dissertação de mestrado) - Universidade Estadual de Campinas, Faculdade de Educação
Previous issue date: 1987 / Resumo (abstract): The analysis of qualification for work in primary (1º grau) education, carried out by confronting theory and practice - legislation, the relationship between education and work, work in Brazilian society, the measures taken for its implementation, and a case study - led me to several findings: the proclaimed intention was rendered unviable by the broader school and social reality, laying bare the contradiction between the foundation of the legal proposal (Human Capital Theory) and the advance of monopoly capitalism in Brazil. The school today has a role to play in raising the social and cultural conditions of the great majority of the Brazilian population and in qualifying them for work; to do so, it must undergo a profound transformation. For discussion, I propose what I understand as a school oriented toward the interests of the majority of the Brazilian population, that is, one in the service of their qualification as citizen-workers. / Mestrado (Master's) / Metodologia de Ensino (Teaching Methodology) / Mestre em Educação (Master in Education)
|
2 |
Databearbetning på Ringhals / Data Processing at Ringhals. Lindskog, Jakob; Gunnarsson, Robin. January 2019
Den nya generationens digitalisering har slagit rot i samhället. Algoritmer och datamodeller styr nyhetsflödet i sociala medier, röststyr mobilen genom att tolka rösten och självstyr bilen, helt och hållet i autonoma fordon. Inom industrierna finns det också en pågående process där machine learning kan appliceras för att öka drifttillgänglighet och minska kostnader. Det nuvarande paradigmet för att underhålla icke-säkerhetsklassade maskiner i kärnkraftindustrin är en kombination av avhjälpande underhåll och förebyggande underhåll. Avhjälpande underhåll innebär att underhålla maskinen när fel inträffar, förebyggande underhåll innebär att underhålla med periodiska intervall. Båda sätten är kostsamma för att de riskerar att under- respektive över-underhålla maskinen och blir därmed resurskrävande. Ett paradigmskifte är på väg, det stavas Prediktivt Underhåll - att kunna förutspå fel innan de inträffar och planera underhåll därefter. Den här rapporten utforskar möjligheten att använda sig av de neurala nätverken LSTM och GRU för att kunna prognostisera eventuella skador på maskiner. Det här baseras på mätdata och historiska fel på maskinen. / The new generation of digitalization has become ingrained in society. Algorithms and data models control the news feeds of social media, steer the phone by interpreting voice, and steer the car entirely in autonomous vehicles. In industry there is also an ongoing process in which machine learning can be applied to increase operational availability and reduce costs. The current paradigm for maintaining non-critical machines in the nuclear power industry is a combination of corrective maintenance and preventive maintenance. Corrective maintenance means repairing the machine when faults occur; preventive maintenance means performing maintenance at periodic intervals. Both approaches are costly because they risk under- and over-maintaining the machine, respectively, and therefore become resource-intensive. A paradigm shift is on its way, and it is called Predictive Maintenance: being able to predict faults before they happen and plan maintenance accordingly. This report explores the possibility of using the LSTM and GRU neural networks to forecast potential damage to machines, based on measurement data and the machine's historical faults.
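To make the approach concrete, a minimal sketch of the kind of recurrent model the report describes is shown below; it is not drawn from the thesis itself, and the window length, number of sensors, layer sizes and synthetic data are all illustrative assumptions.

```python
# Sketch only: a recurrent classifier over windows of sensor measurements that
# outputs the probability of an upcoming fault. Shapes and data are invented.
import numpy as np
import tensorflow as tf

WINDOW, N_FEATURES = 48, 6          # hypothetical: 48 time steps, 6 sensors

def build_model(cell="GRU"):
    rnn = {"GRU": tf.keras.layers.GRU, "LSTM": tf.keras.layers.LSTM}[cell]
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
        rnn(64),                                         # recurrent encoder
        tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of fault
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Synthetic stand-in data: real inputs would be measurement windows labelled
# with whether a fault occurred within some horizon after the window.
X = np.random.rand(1000, WINDOW, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(1000, 1))

model = build_model("GRU")
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2)
```

Swapping "GRU" for "LSTM" in build_model switches the recurrent cell, which is the comparison the report is concerned with.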
|
3 |
Modelling CLV in the Insurance Industry Using Deep Learning Methods / Modellering av CLV inom försäkringsbranschen med användande av metoder inom djupinlärning. Jablecka, Marta. January 2020
This paper presents a master's thesis project in which deep learning methods are used to both calculate and subsequently attempt to maximize Customer Lifetime Value (CLV) for an insurance provider's customers. Specifically, the report investigates whether panel data comprising customers' monthly insurance policy subscription history can be used with Recurrent Neural Networks (RNN) to achieve better predictive performance than the naïve forecasting model. To do this, the use of Long Short Term Memory (LSTM) for anomaly detection in a supervised manner is explored to determine which customers are more likely to change their subscription policies. Whether Deep Reinforcement Learning (DRL) can be used in this setting to maximize CLV is also investigated. The study found that the best RNN models outperformed the naïve model in terms of precision on the data set containing customers who are more likely to change their subscription policies. The models suffer, however, from several notable limitations, so further research is advised. Selecting those customers was shown to be successful in terms of precision but not sensitivity, which suggests that there is room for improvement. The DRL models did not show a substantial improvement in terms of CLV maximization. / I detta examensarbete presenteras metoder där djupinlärning används för att både beräkna och maximera kundens lönsamhet över tid, Customer Lifetime Value (CLV), för en försäkringsleverantörs kunder. Specifikt undersöker rapporten historisk paneldata som består av kunders månatliga försäkringsinnehav, där Recurrent Neural Networks (RNN) används för att uppnå bättre prediktiv prestanda än en naiv prognosmodell. Detta undersöks tillsammans med det neurala nätverket Long Short Term Memory (LSTM), där vi försöker finna anomalier på ett övervakat sätt; anomalier syftar här på kunder som är mer benägna att ändra sin försäkringspolicy, då den största delen av populationen har samma innehav på månadsbasis. Även en gren av djupinlärning, Deep Reinforcement Learning (DRL), används för att undersöka möjligheten att maximera CLV för denna typ av data. Studien fann att de bästa RNN-modellerna överträffade den naiva modellen i termer av precision i data där kunder är mer benägna att ändra sin försäkringspolicy. Modellerna lider dock av flera anmärkningsvärda begränsningar, så ytterligare forskning rekommenderas. Att välja kunder med hjälp av LSTM visade sig vara framgångsrikt när det gäller precision men inte känslighet, vilket tyder på att det finns utrymme för förbättring. DRL-modellerna visade inte någon väsentlig förbättring vad gäller CLV-maximering.
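As an illustration of the precision comparison against a naïve forecast, a toy sketch follows; the panel dimensions, the synthetic data and the choice of a "no change" baseline are assumptions for illustration, not the thesis setup.

```python
# Illustrative sketch: an LSTM classifier over monthly policy-holding sequences,
# compared on precision with a naive baseline that predicts "no change".
import numpy as np
import tensorflow as tf
from sklearn.metrics import precision_score

MONTHS, N_PRODUCTS = 24, 5                      # hypothetical panel dimensions
X = np.random.randint(0, 2, (2000, MONTHS, N_PRODUCTS)).astype("float32")
y = (np.random.rand(2000) < 0.1).astype(int)    # 1 = customer changes subscriptions

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MONTHS, N_PRODUCTS)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

pred = (model.predict(X) > 0.5).astype(int).ravel()
naive = np.zeros_like(y)                        # naive baseline: nobody changes
print("LSTM precision :", precision_score(y, pred, zero_division=0))
print("Naive precision:", precision_score(y, naive, zero_division=0))
```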
|
4 |
Integrating Customer Behavior Analysis for Cost Prediction and Resource Utilization in Mobile Networks : A Machine Learning Approach to Azure Server Analysis / Integrering av kundbeteendeanalys för kostnadsprediktion och resursutnyttjande i mobila nätverk : En maskininlärningsmetod till Azure-serveranalys. Lind Amigo, Patrik; Hedblom, Vincent. January 2024
With the rapid evolution of mobile telecommunications, there is a significant need for more accurate and efficient management of resources such as CPU, RAM, and bandwidth. This thesis uses customer usage data together with machine learning algorithms to predict resource demands, enabling telecommunications service providers to optimize service quality and reduce unnecessary costs. It investigates how mobile network cost prediction and resource utilization can be improved by integrating customer behavior analysis into machine learning models. As predictive models we employed various machine learning techniques, including a Random Forest Regressor and Recurrent Neural Networks (LSTM and GRU), which can effectively predict resource needs based on user events. Among these models, the Random Forest Regressor performed best. It enhances operational efficiency by providing precise resource predictions within the ranges covered by the dataset. / Med den snabba utvecklingen inom mobiltelekommunikation finns det ett betydande behov av mer exakt och effektiv hantering av resurser som CPU, RAM och bandbredd. Rapporten använder data om kundanvändning tillsammans med maskininlärningsalgoritmer för att förutsäga resursbehov, vilket gör att telekommunikationsleverantörer kan optimera tjänstekvalitet och minska onödiga kostnader. Detta examensarbete undersöker hur förutsägelser av kostnader och resursanvändning i mobila nätverk kan förbättras genom att integrera analys av kundbeteende med maskininlärningsmodeller. Som prediktiva modeller använde vi olika maskininlärningstekniker, inklusive Random Forest Regressor och Recurrent Neural Networks (LSTM och GRU), som effektivt kan förutsäga resursbehov baserat på användarhändelser. Bland dessa modeller presterade Random Forest Regressor bäst. Denna modell förbättrar den operativa effektiviteten genom att ge mer precisa resursprediktioner inom datamängdens intervall.
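A minimal sketch of the best-performing approach described above, a Random Forest regressor mapping aggregated usage features to a resource metric, might look as follows; the feature names, synthetic data and target definition are invented for illustration, not taken from the thesis.

```python
# Sketch only: Random Forest regression from hypothetical user-event features
# to a CPU-utilisation target, evaluated with mean absolute error.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "active_users":      rng.integers(100, 10_000, 5000),
    "data_sessions":     rng.integers(1_000, 100_000, 5000),
    "signalling_events": rng.integers(500, 50_000, 5000),
})
# hypothetical target: CPU utilisation in percent
df["cpu_util"] = (0.004 * df["active_users"] + 0.0002 * df["data_sessions"]
                  + rng.normal(0, 2, 5000)).clip(0, 100)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="cpu_util"), df["cpu_util"], test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```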
|
5 |
Finfördelad Sentimentanalys : Utvärdering av neurala nätverksmodeller och förbehandlingsmetoder med Word2Vec / Fine-grained Sentiment Analysis : Evaluation of Neural Network Models and Preprocessing Methods with Word2Vec. Phanuwat, Phutiwat. January 2024
Sentimentanalys är en teknik som syftar till att automatiskt identifiera den känslomässiga tonen i text. Vanligtvis klassificeras texten som positiv, neutral eller negativ. Nackdelen med denna indelning är att nyanser går förlorade när texten endast klassificeras i tre kategorier. En vidareutveckling av denna klassificering är att inkludera ytterligare två kategorier: mycket positiv och mycket negativ. Utmaningen med denna femklassificering är att det blir svårare att uppnå hög träffsäkerhet på grund av det ökade antalet kategorier. Detta har lett till behovet av att utforska olika metoder för att lösa problemet. Syftet med studien är därför att utvärdera olika klassificerare, såsom MLP, CNN och Bi-GRU i kombination med word2vec, för att klassificera sentiment i text i fem kategorier. Studien syftar också till att utforska vilken förbehandling som ger högre träffsäkerhet för word2vec. Utvecklingen av modellerna gjordes med hjälp av SST-datasetet, som är ett känt dataset inom finfördelad sentimentanalys. För att avgöra vilken förbehandling som ger högre träffsäkerhet för word2vec, förbehandlades datasetet på fyra olika sätt. Dessa innefattar enkel förbehandling (EF), samt kombinationer av vanliga förbehandlingar som att ta bort stoppord (EF+Utan Stoppord) och lemmatisering (EF+Lemmatisering), samt en kombination av båda (EF+Utan Stoppord/Lemmatisering). Dropout användes för att hjälpa modellerna att generalisera bättre, och träningen reglerades med early stopping-teknik. För att utvärdera vilken klassificerare som ger högre träffsäkerhet, användes den förbehandlingsmetod som identifierades ha högst träffsäkerhet, och de optimala hyperparametrarna utforskades. Måtten som användes i studien för att utvärdera träffsäkerheten är noggrannhet och F1-score. Resultaten från studien visade att EF-metoden presterade bäst i jämförelse med de andra förbehandlingsmetoderna som utforskades. Den modell som hade högst noggrannhet och F1-score i studien var Bi-GRU. / Sentiment analysis is a technique aimed at automatically identifying the emotional tone in text. Typically, text is classified as positive, neutral, or negative. The downside of this classification is that nuances are lost when text is categorized into only three categories. An advancement of this classification is to include two additional categories: very positive and very negative. The challenge with this five-class classification is that achieving high performance becomes more difficult due to the increased number of categories. This has led to the need to explore different methods to solve the problem. Therefore, the purpose of the study is to evaluate various classifiers, such as MLP, CNN, and Bi-GRU in combination with word2vec, to classify sentiment in text into five categories. The study also aims to explore which preprocessing method yields higher performance for word2vec. The development of the models was done using the SST dataset, which is a well-known dataset in fine-grained sentiment analysis. To determine which preprocessing method yields higher performance for word2vec, the dataset was preprocessed in four different ways. These include simple preprocessing (EF), as well as combinations of common preprocessing techniques such as removing stop words (EF+Without Stopwords) and lemmatization (EF+Lemmatization), as well as a combination of both (EF+Without Stopwords/Lemmatization). Dropout was used to help the models generalize better, and training was regulated with an early stopping technique.
To evaluate which classifier yields higher performance, the preprocessing method with the highest performance was used, and the optimal hyperparameters were explored. The metrics used in the study to evaluate performance are accuracy and F1-score. The results of the study showed that the EF method performed best compared to the other preprocessing methods explored. The model with the highest accuracy and F1-score in the study was Bi-GRU.
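To illustrate the kind of pipeline evaluated above, the sketch below trains word2vec on already-preprocessed tokens and feeds the resulting embeddings to a bidirectional GRU with five output classes; the toy sentences, hyperparameters and vocabulary handling are assumptions, not the study's configuration.

```python
# Sketch only: word2vec embeddings feeding a five-class Bi-GRU classifier.
import numpy as np
import tensorflow as tf
from gensim.models import Word2Vec

# Toy stand-in for SST-style sentences with labels 0 (very negative) .. 4 (very positive).
sentences = [["the", "movie", "was", "truly", "wonderful"],
             ["a", "dull", "and", "lifeless", "film"]] * 50
labels = np.array([4, 0] * 50)

w2v = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=20)
vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}   # index 0 reserved for padding

MAX_LEN = 10
def encode(tokens):
    ids = [vocab[t] for t in tokens][:MAX_LEN]
    return ids + [0] * (MAX_LEN - len(ids))                     # pad to fixed length

X = np.array([encode(s) for s in sentences])

emb = np.zeros((len(vocab) + 1, 50))                            # embedding matrix from word2vec
for w, i in vocab.items():
    emb[i] = w2v.wv[w]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(vocab) + 1, 50,
                              embeddings_initializer=tf.keras.initializers.Constant(emb),
                              mask_zero=True),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32)),
    tf.keras.layers.Dropout(0.5),                               # dropout, as in the study
    tf.keras.layers.Dense(5, activation="softmax"),             # five sentiment classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=3, batch_size=16, verbose=0)
```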
|
6 |
Sentiment Analysis of Nordic Languages. Mårtensson, Fredrik; Holmblad, Jesper. January 2019
This thesis explores the possibility of applying sentiment analysis to extract the tonality of user reviews in the Nordic languages. Data processing is performed in the form of preprocessing through tokenization and padding. The models are built in the Keras framework; models for classification and regression were built using LSTM and GRU architectures. The results showed how the dataset influences the end result and the correlation between observed and predicted values for classification and regression. The project shows that it is possible to implement NLP for the Nordic languages and how limitations in the input data and in hardware performance affected the results. Some questions that arose during the project concern methods for improving the dataset and alternative solutions for managing information related to big data and GDPR. / Denna avhandling undersöker möjligheten att tillämpa sentimentanalys för att extrahera tonaliteten i användarrecensioner på nordiska språk. Databehandling utförs i form av förprocessering genom tokenisering och padding. Modellerna är uppbyggda i ramverket Keras; modeller för klassificering och regression byggdes med LSTM- och GRU-arkitekturer. Resultaten visade hur datasetet påverkar slutresultatet och korrelationen mellan observerade och förutspådda värden för klassificering och regression. Projektet visar att det är möjligt att implementera NLP på de nordiska språken och hur begränsningar i indata och hårdvaruprestanda påverkat resultatet. Några frågor som uppstod under projektet rör metoder för att förbättra datasetet och alternativa lösningar för hantering av information relaterad till stordata och GDPR.
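A small sketch of a tokenization-plus-padding pipeline feeding a GRU regressor in Keras is given below; the example reviews, vocabulary size, sequence length and architecture are illustrative choices rather than the thesis setup.

```python
# Sketch only: tokenization and padding via TextVectorization, then a GRU
# that regresses a numeric rating from the review text.
import numpy as np
import tensorflow as tf

texts = np.array(["riktigt bra produkt",
                  "dålig kvalitet, rekommenderas inte"] * 100).reshape(-1, 1)
ratings = np.array([5.0, 1.0] * 100)           # regression target, e.g. 1-5 stars

vectorize = tf.keras.layers.TextVectorization(max_tokens=5000, output_sequence_length=20)
vectorize.adapt(texts)                         # build the vocabulary from the corpus

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,), dtype=tf.string),
    vectorize,                                 # tokenization + padding to length 20
    tf.keras.layers.Embedding(5000, 32, mask_zero=True),
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(1),                  # linear output for regression
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(texts, ratings, epochs=2, batch_size=16, verbose=0)
```

For the classification variant mentioned in the abstract, the final layer would instead be a softmax over the rating classes with a categorical loss.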
|
7 |
Land Cover Classification on Satellite Image Time Series Using Deep Learning Models. Wang, Zhihao. January 2020
No description available.
|
8 |
Taskfinder : Comparison of NLP techniques for text classification within FMCG stores. Jensen, Julius. January 2022
Natural language processing has many important applications today, such as translation, spam filtering, and other useful products. To achieve these applications, supervised and unsupervised machine learning models have proven successful. The most important aspect of these models is what they can achieve with different datasets. This article examines how RNN models compare with Naive Bayes in text classification. The chosen RNN models are long short-term memory (LSTM) and gated recurrent unit (GRU); both are trained using the flair framework. The models are trained on three separate datasets with different compositions, and the trend within each model is examined and compared with the other models. The results showed that Naive Bayes performed better than the RNN models at classifying short sentences, but worse on longer sentences. When trained on a small dataset, LSTM and GRU had better results than Naive Bayes. The best-performing model overall was Naive Bayes, which had the highest accuracy score on two of the three datasets.
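For reference, a minimal Naive Bayes text-classification baseline of the kind compared above can be built with scikit-learn as sketched below; the example task texts and labels are placeholders, not the FMCG data used in the thesis.

```python
# Sketch only: bag-of-words Naive Bayes baseline for short task descriptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder task texts and categories (hypothetical, for illustration only).
texts = ["restock the dairy shelf", "clean spill in aisle three",
         "update price labels", "count inventory in stockroom"] * 25
labels = ["replenishment", "cleaning", "pricing", "inventory"] * 25

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0)

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())   # vectorize, then classify
clf.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```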
|
9 |
Predicting Bipolar Mood Disorder using Long Short-Term Memory Neural Networks. Hafiz, Saeed Mubasher. January 2022
Bipolar mood disorder is a severe mental condition that has multiple episodes of either of two types: manic or depressive. These phases can lead patients to become hyperactive, hyper-sexual, lethargic, or even commit suicide, all of which seriously impair the quality of life for patients. Predicting these phases would help patients manage their lives better and improve our ability to apply medical interventions. Traditionally, interviews are conducted in the evening to predict potential episodes in the following days. While machine learning approaches have been used successfully before, the data was limited to measuring a few self-reported parameters each day. Using biometrics recorded at short intervals over many months presents a new opportunity for machine learning approaches. However, phases of unrest and hyperactivity, which might be predictive signals, are not only often experienced long before the onset of manic or depressive phases but are also separated by several uneventful days. This delay and its aperiodic occurrence are a challenge for deep learning. In this thesis, a fictional dataset that mimics long and irregular delays is created and used to test the effects of such long delays and rare events. LSTMs, RNNs, and GRUs are the go-to models for deep learning in this situation. However, they differ in their ability to be trained over a long time. As their name (Long Short-Term Memory) suggests, LSTMs are believed to be easier to train and better at remembering than their simpler RNN counterparts. GRUs represent a compromise in complexity between RNNs and LSTMs. Here, I will show that, contrary to the common assumption, LSTMs are surprisingly forgetful and that RNNs have a much better ability to generalize over longer delays with shorter sequences. At the same time, I could confirm that LSTMs are easily trained on tasks that have more prolonged delays.
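A toy version of the synthetic-delay experiment described above is sketched below: a single early "warning" spike must be recalled many uneventful steps later, and SimpleRNN, GRU and LSTM cells are trained on the same task. The sequence length, delay and architectures are arbitrary choices, not those of the thesis.

```python
# Sketch only: compare recurrent cells on a long-delay recall task.
import numpy as np
import tensorflow as tf

STEPS = 60

def make_data(n=2000):
    X = np.random.normal(0, 0.1, (n, STEPS, 1)).astype("float32")
    y = np.random.randint(0, 2, n)
    X[y == 1, 2, 0] += 2.0     # early signal at step 2, then many uneventful steps
    return X, y

X, y = make_data()
for name, cell in [("SimpleRNN", tf.keras.layers.SimpleRNN),
                   ("GRU", tf.keras.layers.GRU),
                   ("LSTM", tf.keras.layers.LSTM)]:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(STEPS, 1)),
        cell(16),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    hist = model.fit(X, y, epochs=5, batch_size=64, verbose=0, validation_split=0.2)
    print(name, "val accuracy:", round(hist.history["val_accuracy"][-1], 3))
```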
|
10 |
Forecasting the Nasdaq-100 index using GRU and ARIMA. Cederberg, David; Tanta, Daniel. January 2022
Today, an overwhelming amount of data is collected on financial markets. For forecasting stock indexes, many models rely only on historical values of the index itself. One such model is the ARIMA model. Over the last decades, machine learning models have challenged classical time series models such as ARIMA. The purpose of this thesis is to study the ability to make predictions based solely on the historical values of an index, using one particular class of machine learning models: a neural network in the form of a Gated Recurrent Unit (GRU). The GRU model's ability to predict a financial market is compared with that of a simple ARIMA model. The market chosen for the comparison is the American stock index Nasdaq-100, i.e., an index of the 100 largest non-financial companies listed on NASDAQ. Our results indicate that GRU is unable to outperform ARIMA in predicting the Nasdaq-100 index. For the evaluation, multiple GRU models with various combinations of hyperparameters were created. The accuracies of these models were then compared with the accuracy of an ARIMA model by applying a conventional forecast accuracy test, which showed significant differences in accuracy between the models, in favor of ARIMA.
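A minimal sketch of the ARIMA side of such a comparison is shown below, fitted on a simulated random-walk series; the (1, 1, 1) order, the hold-out length and the data are assumptions, not the thesis configuration or the actual Nasdaq-100 series.

```python
# Sketch only: fit an ARIMA model on a simulated index-level series and
# evaluate a short out-of-sample forecast with mean absolute error.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
prices = pd.Series(10_000 + np.cumsum(rng.normal(5, 100, 500)))   # simulated index level

train, test = prices.iloc[:-20], prices.iloc[-20:]                # 20-step hold-out
result = ARIMA(train, order=(1, 1, 1)).fit()
forecast = result.forecast(steps=len(test))

mae = np.mean(np.abs(forecast.to_numpy() - test.to_numpy()))
print("AIC:", round(result.aic, 1))
print("MAE over the 20-step hold-out:", round(mae, 2))
```

A GRU forecaster for the same series could then be evaluated on the identical hold-out, so that both models' forecast errors feed into the accuracy test.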
|