41

O valor futuro de cada cliente : estimação do Customer Lifetime Value / The future value of each customer : estimating Customer Lifetime Value

Silveira, Rodrigo Heldt January 2014 (has links)
Marketing's ability to measure and communicate the value of its activities and investments has been one of the field's top research priorities in recent years. Achieving this requires the capacity to properly measure marketing assets such as Customer Lifetime Value and, in aggregate form, Customer Equity, because these assets are considered the elements capable of translating the results of marketing investments into monetary values. Once these values are measured, marketers can plan and carry out more precise actions. The present study therefore builds and applies a bottom-up (customer-level) Customer Lifetime Value estimation model to a sample of customers of a financial services company. The hierarchical Bayesian model, composed of three regressions structured as Seemingly Unrelated Regressions (SUR) (Zellner, 1971), was built on the work of Kumar et al. (2008), Kumar and Shah (2009), and Cowles, Carlin and Connett (1996). The results show that (1) the model consistently estimated the future value of 84% of the analyzed customers; (2) the estimated values reflect the profitability that can be expected from each customer in the future; and (3) the customer base can be segmented by Customer Lifetime Value. Knowing each customer's future value and the resulting segments suggests several actions that could improve on traditional customer management, especially regarding the allocation of marketing resources.
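For readers unfamiliar with the metric, the bottom-up idea can be illustrated with a minimal sketch: a customer's lifetime value is the discounted sum of that customer's predicted future margins. The Python sketch below uses made-up margin forecasts and an assumed discount rate; it illustrates the metric only and is not the thesis's hierarchical Bayesian SUR model.

```python
import numpy as np

def customer_lifetime_value(predicted_margins, discount_rate=0.12):
    """Discounted sum of a single customer's predicted future margins."""
    t = np.arange(1, len(predicted_margins) + 1)
    return float(np.sum(np.asarray(predicted_margins) / (1 + discount_rate) ** t))

# Hypothetical three-year margin forecasts (monetary units) for two customers.
print(customer_lifetime_value([120.0, 150.0, 180.0]))  # ~354.8
print(customer_lifetime_value([40.0, 35.0, 30.0]))     # ~85.0
```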
42

Bayesian exploratory factor analysis

Conti, Gabriella, Frühwirth-Schnatter, Sylvia, Heckman, James J., Piatek, Rémi 27 June 2014 (has links) (PDF)
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements. (authors' abstract)
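For contrast, the classical procedure that such a Bayesian approach improves on can be run in a few lines. The sketch below is a maximum-likelihood factor analysis with a varimax rotation in scikit-learn on synthetic data; the fixed number of factors and the largest-absolute-loading allocation rule are exactly the kind of ad hoc choices the paper's method determines jointly, and the data-generating step is an assumption for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic "psychological measurements" with a sparse three-factor structure.
rng = np.random.default_rng(0)
F = rng.normal(size=(500, 3))                                 # latent factors
W = rng.normal(size=(3, 12)) * (rng.random((3, 12)) < 0.4)    # sparse loadings
X = F @ W + 0.5 * rng.normal(size=(500, 12))                  # observed measurements

fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(X)
loadings = fa.components_.T   # measurements x factors

# Ad hoc allocation rule: assign each measurement to its largest-loading factor.
allocation = np.argmax(np.abs(loadings), axis=1)
print(loadings.round(2))
print(allocation)
```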
43

Advanced UNet for 3D Lung Segmentation and Applications

Kadia, Dhaval Dilip 18 May 2021 (has links)
No description available.
44

Detecting gastrointestinal abnormalities with binary classification of the Kvasir-Capsule dataset : A TensorFlow deep learning study / Detektering av gastrointestinala abnormaliteter med binär klassificering av datasetet Kvasir-Capsule : En TensorFlow-djupinlärningsstudie

Hollstensson, Mathias January 2022 (has links)
The early discovery of gastrointestinal (GI) disorders can significantly decrease the fatality rate of severe afflictions. Video capsule endoscopy (VCE) produces an eight-hour recording of the GI tract that must be reviewed manually. This has created demand for AI-based solutions, but the lack of labeled data has been a major obstacle. The Kvasir-Capsule dataset, released in 2020, is the largest labeled dataset of GI abnormalities to date, but challenges remain: the data are unbalanced, and frames extracted from the same labeled videos are very similar. To discourage specialization to these specific data, the creators of the dataset provide an official split that users are encouraged to adopt for testing. This study evaluates the use of transfer learning, data augmentation and binary classification to detect GI abnormalities. The performance of machine learning (ML) classification is explored with and without official split-based testing, with particular focus on achieving a low rate of false negatives, on the premise that the most important property of an automated detection system for GI abnormalities is a low miss rate for potentially lethal findings. The results of the controlled experiments clearly show the importance of official split-based testing: the difference in performance between a model trained and tested on the same set and a model evaluated on the official split is significant, which indicates that, without official split-based testing, the model does not produce reliable and generalizable results. When official split-based testing is used, performance improves over the initial baseline presented with the Kvasir-Capsule dataset. Some experiments achieved a false-negative rate as low as 1.56%, at the cost of lower performance on the normal class.
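A transfer-learning binary classifier of the kind described above might look like the following TensorFlow/Keras sketch. The MobileNetV2 backbone, the 224x224 input size and the class weights are assumptions made for illustration, not the thesis's exact configuration.

```python
import tensorflow as tf

# Frozen ImageNet backbone plus a small binary head (normal vs. abnormal frame).
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # transfer learning: reuse ImageNet features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.Recall(name="recall")],  # recall tracks false negatives
)

# train_ds / val_ds would be tf.data pipelines built from the official split;
# class_weight (values assumed) counteracts the imbalance toward normal frames.
# model.fit(train_ds, validation_data=val_ds, epochs=10,
#           class_weight={0: 1.0, 1: 10.0})
```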
45

Joint Models for the Association of Longitudinal Binary and Continuous Processes With Application to a Smoking Cessation Trial

Liu, Xuefeng, Daniels, Michael J., Marcus, Bess 01 June 2009 (has links)
Joint models for the association of a longitudinal binary and a longitudinal continuous process are proposed for situations in which their association is of direct interest. The models are parameterized such that the dependence between the two processes is characterized by unconstrained regression coefficients. Bayesian variable selection techniques are used to parsimoniously model these coefficients. A Markov chain Monte Carlo (MCMC) sampling algorithm is developed for sampling from the posterior distribution, using data augmentation steps to handle missing data. Several technical issues are addressed to implement the MCMC algorithm efficiently. The models are motivated by, and are used for, the analysis of a smoking cessation clinical trial in which an important question of interest was the effect of the (exercise) treatment on the relationship between smoking cessation and weight gain.
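The data-augmentation idea for a binary response can be sketched with an Albert-Chib-style Gibbs step: a latent Gaussian variable is drawn for each binary observation, after which the regression coefficients have a conjugate Gaussian update. The Python sketch below is a standard textbook device shown under a flat prior on simulated data; it is only a simplified stand-in for the paper's full sampler, which also models the continuous process, the dependence between the two, and the missing data.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_augmentation_step(y, X, beta, rng):
    """One Gibbs step: draw latent z | y, beta, then beta | z under a flat prior."""
    mu = X @ beta
    # z_i ~ N(mu_i, 1) truncated to (0, inf) if y_i = 1, and to (-inf, 0] if y_i = 0.
    lower = np.where(y == 1, -mu, -np.inf)
    upper = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lower, upper, size=len(y), random_state=rng)
    # Conjugate Gaussian update of the regression coefficients given z.
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ z
    return z, rng.multivariate_normal(beta_hat, XtX_inv)

# Toy usage on simulated probit data (dimensions and seed are arbitrary).
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
true_beta = np.array([-0.5, 1.0, 0.8])
y = (X @ true_beta + rng.normal(size=200) > 0).astype(int)

beta = np.zeros(3)
for _ in range(500):
    z, beta = probit_augmentation_step(y, X, beta, rng)
print(beta.round(2))  # a single draw, roughly in the neighbourhood of true_beta
```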
46

Entwicklung eines Monte-Carlo-Verfahrens zum selbständigen Lernen von Gauß-Mischverteilungen / Development of a Monte Carlo method for the autonomous learning of Gaussian mixture distributions

Lauer, Martin 03 March 2005 (has links)
This thesis develops a novel learning method for Gaussian mixture distributions. It is based on Markov chain Monte Carlo techniques and is able to determine the size of the mixture and its parameters in a single pass. The method is characterized both by a good fit to the training data and by good generalization performance. Starting from a description of the stochastic foundations and an analysis of the problems that arise when learning Gaussian mixture distributions, the thesis develops the new learning method step by step and examines its properties. An experimental comparison with well-known learning methods for Gaussian mixture distributions also demonstrates the suitability of the new method empirically.
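As a point of reference for the experimental comparison mentioned above, a common non-MCMC baseline is to fit mixtures of increasing size with EM and select the number of components by BIC. The scikit-learn sketch below illustrates that baseline on toy data; it is not the thesis's MCMC method.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy data: two well-separated 2-D Gaussian clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3.0, 1.0, (200, 2)), rng.normal(3.0, 1.0, (300, 2))])

# Fit candidate mixture sizes with EM and keep the model with the lowest BIC.
fits = [GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
        for k in range(1, 7)]
best = min(fits, key=lambda m: m.bic(X))
print(best.n_components)     # expected: 2
print(best.means_.round(2))  # cluster centres near (-3, -3) and (3, 3)
```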
47

Statistical Inference for Multivariate Stochastic Differential Equations

Liu, Ge 15 November 2019 (has links)
No description available.
48

Bayesian Regression Trees for Count Data: Models and Methods

Geels, Vincent M. 27 September 2022 (has links)
No description available.
49

Investigation of Green Strawberry Detection Using R-CNN with Various Architectures

Rivers, Daniel W 01 March 2022 (has links) (PDF)
Traditional image processing solutions have been applied in the past to detect and count strawberries. These methods typically involve feature extraction followed by object detection using one or more features. Some object detection problems are ambiguous as to which features are relevant, and the solutions to many problems are only fully realized once a modern approach such as deep learning has been applied and tested. In this work, we investigate the use of R-CNN for green strawberry detection. The object detection involves finding regions of interest (ROIs) in field images using the selective segmentation algorithm and feeding these regions into a pre-trained deep neural network (DNN) model. The convolutional neural networks VGG, MobileNet and ResNet were implemented to detect subtle differences between green strawberries and various background elements. Downscaling factors, intersection over union (IoU) thresholds and non-maximum suppression (NMS) values can be tuned to increase recall and reduce false positives, while data augmentation and negative hard mining can be used to increase the amount of input data. The state-of-the-art model is sufficient for locating the green strawberries, with an overall model accuracy of 74%. The R-CNN model can then be used for crop yield prediction, forecasting the actual red strawberry count one week in advance with 90% accuracy.
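The IoU thresholding and non-maximum suppression steps mentioned above can be sketched in a few lines of NumPy. The (x1, y1, x2, y2) box format and the 0.3 threshold are illustrative assumptions, not the exact values used in the thesis.

```python
import numpy as np

def iou(box, boxes):
    """Intersection over union between one box and an array of boxes (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def nms(boxes, scores, iou_threshold=0.3):
    """Keep the highest-scoring detections and drop heavily overlapping ones."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_threshold]
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [80, 80, 120, 120]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]: the two overlapping detections collapse to one
```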
50

Club Head Tracking : Visualizing the Golf Swing with Machine Learning

Herbai, Fredrik January 2023 (has links)
During the broadcast of a golf tournament, a way to show the audience what a player's swing looks like would be to draw a trace following the movement of the club head. A computer vision model can be trained to identify the position of the club head in an image, but due to the high speed at which professional players swing their clubs coupled with the low frame rate of a typical broadcast camera, the club head is not discernible whatsoever in most frames. This means that the computer vision model is only able to deliver a few sparse detections of the club head. This thesis project aims to develop a machine learning model that can predict the complete motion of the club head, in the form of a swing trace, based on the sparse club head detections. Slow motion videos of golf swings are collected, and the club head's position is annotated manually in each frame. From these annotations, relevant data to describe the club head's motion, such as position and time parameters, is extracted and used to train the machine learning models. The dataset contains 256 annotated swings of professional and competent amateur golfers. The two models that are implemented in this project are XGBoost and a feed forward neural network. The input given to the models only contains information in specific parts of the swing to mimic the pattern of the sparse detections. Both models learned the underlying physics of the golf swing, and the quality of the predicted traces depends heavily on the amount of information provided in the input. In order to produce good predictions with only the amount of input information that can be expected from the computer vision model, a lot more training data is required. The traces predicted by the neural network are significantly smoother and thus look more realistic than the predictions made by the XGBoost model.
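A hypothetical sketch of the XGBoost variant is shown below: each swing is encoded as a fixed-length feature vector built from the sparse detections plus a query time, and the model regresses the club head's (x, y) position at that time. The feature layout, dimensions and hyperparameters are assumptions made for illustration; the thesis's actual input representation may differ.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n_swings, n_detections = 256, 4

# Features: (x, y, t) for each sparse detection, plus the query time t*.
X = rng.normal(size=(n_swings, n_detections * 3 + 1))
# Targets: annotated club head position (x, y) at the query time.
y = rng.normal(size=(n_swings, 2))

model = MultiOutputRegressor(
    XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05))
model.fit(X, y)
print(model.predict(X[:3]).shape)  # (3, 2): predicted (x, y) positions
```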
