41 |
Design of a Predictive Model to Increase Core Policies at an Insurance Company. García Bacchiega, Felipe Ignacio. January 2017 (has links)
Ingeniero Civil Industrial / Seguros Falabella is an insurance brokerage for large retail stores belonging to the Falabella group. It operates in several Latin American countries, in particular Chile, where this thesis work was carried out.
The general objective of this thesis is to design a predictive model, based on individual customer characteristics, that yields the purchase propensity for the company's six core products, with the aim of increasing the stock of these policies.
Propensity models are built for all of the company's main insurance lines: automotive, life, life with bonus, health, home, and transactional. Two binary classification methods are used: decision trees and binary logit.
Given the nature of the data and the difference in available information between customers with and without the holding's credit card, separate models must be fitted for each customer type. This yields 12 distinct models for the decision tree and 12 for the binary logit.
The company holds some 3 million customer records covering 2015 and 2016. Given the chosen methodology, only 40% of the resulting base is used, so as to obtain consistent results within a reasonable run time.
Based on the results, binary logit is selected, as it performed better on the metrics most significant to the business: on average it recovered 60% of generated sales using only 20% of the database, versus 53.3% for the decision tree.
To choose the most suitable product for each customer, a Next Best Offer model is proposed. A simulation of this methodology against the current one yields an estimated 11% higher profit and a 22% increase in policies sold.
To validate the method, two equality-of-proportions experiments are proposed. The first tests whether selecting the highest propensity decile generates more sales when a single product holds the maximum decile. The second tests whether weighting by company profit generates more sales when more than one product shares the maximum decile.
Finally, for future work, a model is proposed that determines, for each customer, the channel through which they are most likely to contract a product. An improvement to the Next Best Offer method is also suggested: optimizing over a weighting of both the propensity decile and the expected company profit. / 13/11/2022
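The lift metric the thesis reports (share of sales recovered in the top 20% of customers ranked by modeled propensity) can be sketched in a few lines. The scores, labels, and function name below are illustrative, not the thesis's actual model or data:

```python
# Hypothetical sketch: rank customers by propensity score and measure what
# share of all actual sales falls in the top `fraction` of the ranked base.

def top_fraction_capture(scores, bought, fraction=0.2):
    """Share of all sales captured in the top `fraction` of customers
    ranked by descending propensity score."""
    ranked = sorted(zip(scores, bought), key=lambda p: p[0], reverse=True)
    cutoff = int(len(ranked) * fraction)
    captured = sum(b for _, b in ranked[:cutoff])
    total = sum(bought)
    return captured / total if total else 0.0

# Toy example: 10 customers, 4 buyers concentrated among high scores.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
bought = [1,   1,   0,   1,   0,   0,   1,   0,   0,   0]
print(top_fraction_capture(scores, bought, 0.2))  # 2 of 4 sales -> 0.5
```

A well-calibrated model concentrates buyers near the top of the ranking, pushing this value well above the `fraction` baseline of a random ordering.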
|
42 |
Game Character: A Creative Design Process. Östholm, John. January 2009 (has links)
The design process for a "next-gen" game character is today relatively involved and comprises many stages before reaching its final state: a finished character. This article documents the design process for a game character from start to finish, attending closely to the creative but also the technical aspects of the work. The work results in a character that is modeled, sculpted, animated, and finally imported into Unreal Editor 3.0.
|
43 |
The Caregiver Group's Influence on Relatives' Sense of Coherence. Axlund, Anna, Wennberg, Marie. January 2008 (has links)
Abstract: When a family member falls ill, the closest relatives usually provide the caring support, which can strain their own health. Over the past decade, public support for family caregivers has received attention in Sweden, including a 300 million SEK initiative, Anhörig 300, intended to support and ease the life situation of relatives. A supportive activity such as a caregiver group can then be an important complement, offering reflection and development together with others. Life is full of strains; what enables some people to cope with them while others cannot? Antonovsky's (1991) answer is the sense of coherence (SOC, Swedish: KASAM). The aim of this study was to examine whether, and if so how, the intervention in a caregiver group can affect relatives' sense of coherence. The work began with a review of both literature and research to deepen knowledge of the problem area. Data were collected with a "life questionnaire" (Livsfrågeformulär). The sample consisted of relatives of persons over 20 years of age affected by illness and/or disability. The study was conducted as a pre- and post-measurement of the caregiver-group intervention. The results showed that the sense of coherence changed, but not only in the positive direction the authors had hypothesized. According to Antonovsky (1991), this need not be negative: development is very often preceded by a state of imbalance, which can affect the sense of coherence temporarily. What caused the change is difficult to establish. The conclusion was that despite the advantages of group support, it is hard to show that this particular support was what affected the sense of coherence; according to research, it can nonetheless be seen as a resource in care work for relatives.
|
45 |
Methods to Prepare DNA for Efficient Massive Sequencing. Lundin, Sverker. January 2012 (has links)
Massive sequencing has transformed the field of genome biology through the continuous introduction and evolution of new methods. In recent years, the technologies available to read through genomes have undergone an unprecedented rate of cost reduction. Generating sequence data has essentially ceased to be the bottleneck for analyzing genomes; the limitations now lie in sample preparation and data analysis. In this work, new strategies are presented to increase both the throughput of library generation prior to sequencing and the informational content of libraries, to aid post-sequencing data processing. The protocols developed aim to open new possibilities for genome research in terms of project scale and sequence complexity.
The first two papers that underpin this thesis deal with scaling library production by means of automation. Automated library preparation is first described for the 454 sequencing system, based on a generic solid-phase polyethylene-glycol precipitation protocol for automated DNA handling. This was one of the first descriptions of automated sample handling for producing next-generation sequencing libraries, and it substantially improved sample throughput. Building on these results, a double precipitation strategy is presented that replaces the manual agarose gel excision step for Illumina sequencing, considerably improving the scalability of library construction.
The third and fourth papers present advanced strategies for library tagging in order to multiplex the information available in each library. First, a dual tagging strategy for massive sequencing is described in which two sets of tags are added to a library to trace back the origins of up to 4992 amplicons using 122 tags. The tagging strategy takes advantage of the previously automated pipeline and was used for the simultaneous sequencing of 3700 amplicons. Following that, an enzymatic protocol was developed to degrade long-range PCR amplicons and form triple-tagged libraries containing information on sample origin, clonal origin, and local positioning for the short-read sequences. Through tagging, this protocol makes it possible to analyze a longer continuous sequence region than the read length of the sequencing system alone would allow.
The fifth study investigates enzymes commonly used for constructing libraries for massive sequencing: restriction enzymes capable of digesting unknown sequences located some distance from their recognition sequence, several of which have been used extensively for massive nucleic acid analysis. In this first high-throughput study of such enzymes, their restriction specificity was investigated in terms of distance from the recognition site and sequence dependence. The phenomenon of slippage is characterized and shown to vary significantly between enzymes. The results should favor future protocol development and enzymatic understanding.
Through these papers, this work aspires to aid the development of methods for massive sequencing in terms of scale, quality, and knowledge, thereby contributing to the general applicability of the new paradigm of sequencing instruments. / QC 20121126
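The combinatorial idea behind dual tagging is that two small tag sets jointly address the product of their sizes. The tag sequences and counts below are invented for illustration and do not reflect the thesis's actual 122-tag design:

```python
# Toy illustration of combinatorial (dual) tagging: every (forward, reverse)
# tag pair maps to one amplicon/sample identity, so two small sets address
# far more identities than either set alone.

from itertools import product

forward_tags = ["ACGT", "TGCA", "GATC"]   # set 1 (illustrative sequences)
reverse_tags = ["AATT", "CCGG"]           # set 2 (illustrative sequences)

pair_to_sample = {pair: i for i, pair in enumerate(product(forward_tags, reverse_tags))}

def decode(read_prefix, read_suffix):
    """Recover sample identity from the two tags flanking a read."""
    return pair_to_sample.get((read_prefix, read_suffix))

print(len(pair_to_sample))     # 3 * 2 = 6 addressable identities
print(decode("TGCA", "CCGG"))  # 3
```

With tag sets of sizes m and n, m + n synthesized tags address m x n identities, which is why a modest tag inventory can trace thousands of amplicons.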
|
46 |
Comparison of DNA sequence assembly algorithms using mixed data sources. Bamidele-Abegunde, Tejumoluwa. 15 April 2010
DNA sequence assembly is one of the fundamental areas of bioinformatics. It involves the correct formation of a genome sequence from its DNA fragments ("reads") by aligning and merging the fragments. Different sequencing technologies exist, some producing long DNA reads and others shorter ones, and there are sequence assembly programs designed specifically for each type of raw sequencing data.
This work explores and experiments with these different types of assembly software in order to compare their performance on the type of data for which they were designed, on data for which they were not designed, and on mixed data. Such results are useful for establishing good procedures and tools for sequence assembly in the current genomic environment, where read data of different lengths are available. This work also investigates the effect of the presence or absence of quality information on the results produced by sequence assemblers.
Five strategies were used in this research for assembling mixed data sets, and testing was done using a collection of real and artificial data sets for six bacterial organisms. The results show a broad range in the ability of DNA sequence assemblers to handle data from various sequencing technologies, especially data other than the kind they were designed for. For example, the long-read assemblers PHRAP and MIRA produced good results when assembling 454 data. The results also show the importance of an effective methodology for assembling mixed data sets. Combining contiguous sequences obtained from short-read assemblers with long DNA reads, and then assembling this combination using long-read assemblers, proved the most appropriate approach for assembling mixed short and long reads; the results were better than those obtained by separately assembling the data from each sequencing technology. Assemblers that do not depend on the availability of quality information were used to test the effect of the presence of quality values; regardless of the availability of quality information, good results were produced in most of the assemblies.
In more general terms, this work shows that the approach or methodology used to assemble DNA sequences from mixed data sources makes a lot of difference in the type of results obtained, and that a good choice of methodology can help reduce the amount of effort spent on a DNA sequence assembly project.
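The merge operation at the heart of assembly ("aligning and merging the fragments") can be caricatured as finding the longest exact suffix/prefix overlap between two reads. Real assemblers such as PHRAP and MIRA use error-tolerant alignment and overlap graphs, so this is only a conceptual sketch:

```python
# Toy version of the assembly merge step: join read b onto read a at their
# longest exact suffix-of-a == prefix-of-b overlap, or report no overlap.

def merge_by_overlap(a, b, min_overlap=3):
    """Merge b onto a using the longest suffix/prefix overlap of at least
    min_overlap bases; return None if no sufficient overlap exists."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    return None

print(merge_by_overlap("ACGTACGT", "ACGTTTGA"))  # overlap "ACGT" -> ACGTACGTTTGA
```

The `min_overlap` threshold mirrors a real assembler's minimum-overlap parameter: too low and spurious joins appear, too high and the assembly fragments into many contigs.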
|
47 |
Development of a Virtual Applications Networking Infrastructure Node. Redmond, Keith. 15 February 2010 (has links)
This thesis describes the design of a Virtual Application Networking Infrastructure (VANI) node that can be used to facilitate network architecture experimentation. Currently the VANI nodes provide four classes of physical resources (processing, reconfigurable hardware, storage, and interconnection fabric), but the set of sharable resources can be expanded. Virtualization software allows slices of these resources to be apportioned to VANI nodes that can in turn be interconnected to form virtual networks, which can operate according to experimental network and application protocols. This thesis discusses the design decisions made in the development of this system and provides a detailed description of the prototype, including how users interact with the resources and the interfaces provided by the virtualization layers.
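The resource-slicing idea can be sketched as fixed pools of the four resource classes from which slices are granted while the remainder is tracked. All class and field names below are invented for illustration; the actual VANI interfaces differ:

```python
# Hypothetical sketch of per-node resource slicing: a node holds fixed pools
# and grants a slice only when every requested resource is still available.

class NodeResources:
    def __init__(self, cpus, fpgas, storage_gb, fabric_ports):
        self.free = {"cpus": cpus, "fpgas": fpgas,
                     "storage_gb": storage_gb, "fabric_ports": fabric_ports}
        self.slices = []

    def allocate(self, **request):
        """Grant a slice if every requested resource is available;
        return a slice id, or None if the request cannot be satisfied."""
        if any(self.free.get(k, 0) < v for k, v in request.items()):
            return None
        for k, v in request.items():
            self.free[k] -= v
        self.slices.append(request)
        return len(self.slices) - 1

node = NodeResources(cpus=16, fpgas=4, storage_gb=500, fabric_ports=8)
sid = node.allocate(cpus=4, fpgas=1, storage_gb=100)
print(sid, node.free["cpus"])   # 0 12
```

Interconnecting such slices across nodes is what lets independent virtual networks run experimental protocols side by side on shared hardware.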
|
48 |
Alzheimer's Disease: Next of Kin's Experiences in Connection with Care - A Study of Autobiographies. Karlsson, Sandra, Meholli, Melihate. January 2013 (has links)
Background: Alzheimer's disease primarily affects the elderly but also strikes younger people. It is a type of dementia in which the cerebral cortex changes and its cells gradually die. The disease causes memory loss, and tasks that were once self-evident become difficult for the sufferer. Alzheimer's disease also affects the next of kin to a large degree, as they must take on great responsibility; they are entitled to support from healthcare. Aim: The aim was to highlight next of kin's experiences of the healthcare given to their family members with Alzheimer's disease. Method: The study was based on narratives, in this case the analysis of autobiographies. Five biographies were analyzed in accordance with Dahlborg-Lyckhage. Results: Four categories and eleven subcategories emerged, based on what the next of kin had experienced. The experiences fell into four categories: powerlessness, joy in caring, grief, and lack of trust. These results reveal gaps in knowledge and in the treatment of relatives. To ease the situation for the next of kin, caregivers should, for example, provide information on the course of the disease, offer individual support, and take the next of kin seriously. Conclusion: Alzheimer's disease affects the entire family. It is important that nurses take responsibility by providing information and support to the next of kin so that they can better cope with the situation. The next of kin are an important part of the sufferer's life and influence the course of the disease.
|