661 |
Essays in direct marketing : understanding response behavior and implementation of targeting strategies / Sinha, Shameek, 06 July 2011 (has links)
In direct marketing, understanding how consumers respond to marketing initiatives is a prerequisite for implementing targeting strategies that reach potential as well as existing consumers. Consumer response can take the form of the incidence or timing of purchases, the category or brand chosen, and the volume or amount purchased in each category. Direct marketers seek to understand how past consumer responses, together with the marketer's own past targeting actions, affect current response patterns. Considerable heterogeneity is also present in consumer responses, and its possible sources need to be investigated. With knowledge of consumer response and the corresponding heterogeneity, direct marketers can devise targeting strategies to attract new consumers and retain existing ones.
In the first essay of my dissertation (Chapter 2), I model the response behavior of donors in non-profit charity fund-raising in terms of the timing and volume of their donations. I show that past donations (both incidence and volume) and solicitations for alternative causes by non-profits matter in donor responses, and that the heterogeneity in donation behavior can be explained by individual- and community-level donor characteristics. I also provide a heuristic approach to targeting new donors that classifies donors by the frequency and amount of their donations and then characterizes each donor portfolio with the corresponding donor characteristics.
In the second essay (Chapter 3), I propose a more structural approach to the targeting of customers by direct marketers in the context of customized retail couponing. First, I model customer purchases in a retail setting where brand-choice decisions in a product category depend on pricing, in-store promotions, coupon targeting, and the face values of those coupons. Then, using a utility function for the retailer that trades off net revenue (revenue minus coupon face value) against information gain, I propose a Bayesian decision-theoretic approach to determine optimal customized coupon face values. The optimization algorithm is sequential: past as well as anticipated future customer responses affect the targeted coupon face values, and the direct marketer resolves the trade-off through natural experimentation.
662 |
Methodology for global optimization of computationally expensive design problems / Koullias, Stefanos, 20 September 2013 (has links)
The design of unconventional aircraft requires early use of high-fidelity physics-based tools to search the unfamiliar design space for optimum designs. Current methods for incorporating high-fidelity tools into early design phases for the purpose of reducing uncertainty are inadequate due to the severely restricted budgets that are common in early design as well as the unfamiliar design space of advanced aircraft. This motivates the need for a robust and efficient global optimization algorithm.
This research presents a novel surrogate-model-based global optimization algorithm to efficiently search challenging design spaces for optimum designs. The algorithm constructs a fully Bayesian Gaussian process model from a set of observations and then uses the model to place new observations in promising areas where the global minimum is likely to occur. The algorithm is incorporated into a methodology that reduces failed cases and infeasible designs and provides large reductions in the objective function values of design problems.
Results on four sets of algebraic test problems are presented, and the methodology is applied to an airfoil section design problem and a conceptual aircraft design problem. The method solves more nonlinearly constrained algebraic test problems than state-of-the-art algorithms and obtains the largest reduction in the takeoff gross weight of a notional 70-passenger regional jet among competing design methods.
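A minimal sketch of the observe-model-search loop described above, using scikit-learn's Gaussian process regressor and an expected-improvement acquisition as stand-ins (the algorithm's fully Bayesian treatment of the GP hyperparameters and its constraint handling are not reproduced; the objective here is a cheap synthetic placeholder):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_objective(x):
    # Stand-in for a high-fidelity physics-based analysis.
    return np.sin(3 * x) + 0.5 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(5, 1))   # small initial set of observations
y = expensive_objective(X).ravel()

for _ in range(20):                   # severely restricted evaluation budget
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = np.linspace(-2, 2, 500).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    best = y.min()
    # Expected improvement: large where the model predicts low values or is uncertain.
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_objective(x_next))

print("best design found:", X[np.argmin(y)].item(), "objective value:", y.min())
```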
663 |
Choice Under Uncertainty: Violations of Optimality in Decision Making / Rodenburg, Kathleen, 11 June 2013 (has links)
This thesis investigates how subjects behave in an individual binary-choice decision task with the option to purchase, or observe for free, additional information before reaching a decision. Part 1 is a review intended to sharpen the view of the literature on psychology and economics experiments that test decision tasks involving purchasing and observing information from an imperfect message prior to taking a terminal action. It identifies areas that warrant further research and provides refinements for the experiment conducted in Parts 2 and 3. In Parts 2 and 3, I conduct an experiment testing how subjects behave in such a task. I find that subjects’ behaviour over time converges toward optimal decisions before they observe an imperfect information signal; however, when subjects observe an imperfect signal prior to their terminal choice, there is greater deviation from optimal behaviour. In addition to behaviour reflective of a risk-neutral Bayesian expected-utility (BEU) maximizer, I find status quo bias, over-weighting of the informational value of the message received, and past statistically independent outcomes influencing future choices. Subjects’ willingness to pay (WTP) to use the additional information from an imperfect message service when making a final decision was, on average, below the risk-neutral BEU willingness-to-pay benchmark. Moreover, as the informative value of the message increased, raising the BEU valuation, subjects under-estimated the value of the message signal to a greater degree. Although risk attitudes may have influenced subjects’ WTP decisions, they do not account for the increasingly conservative WTP behaviour as information became more valuable. The findings also suggest that individuals adopt different decision rules depending on personal attributes (i.e., skill set, gender, experience) and on the context and environment in which the decision task is conducted. / SSHRC grant: Social Sciences and Humanities Research Council, via Dr. Bram Cadsby, Professor, Department of Economics, University of Guelph
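For reference, the risk-neutral BEU willingness-to-pay benchmark mentioned above can be sketched as a value-of-information calculation; the numbers below are illustrative, not the experiment's actual payoffs or signal accuracy.

```python
def value_of_message(prior, accuracy, payoff):
    """Risk-neutral BEU value of observing one imperfect binary signal.
    prior    : P(state = A)
    accuracy : P(signal reports the true state)
    payoff   : payoff for choosing the action that matches the state (0 otherwise)
    """
    # Best terminal action without the signal.
    eu_no_info = payoff * max(prior, 1 - prior)

    # With the signal: act on the Bayesian posterior for each possible message.
    p_sig_a = accuracy * prior + (1 - accuracy) * (1 - prior)   # P(signal says A)
    post_a_if_a = accuracy * prior / p_sig_a                    # P(A | signal says A)
    p_sig_b = 1 - p_sig_a
    post_a_if_b = (1 - accuracy) * prior / p_sig_b              # P(A | signal says B)
    eu_info = payoff * (p_sig_a * max(post_a_if_a, 1 - post_a_if_a)
                        + p_sig_b * max(post_a_if_b, 1 - post_a_if_b))
    return eu_info - eu_no_info

# A risk-neutral BEU maximizer should pay at most this much for the message.
print(value_of_message(prior=0.5, accuracy=0.7, payoff=10.0))   # -> 2.0
```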
664 |
Configurable analog hardware for neuromorphic Bayesian inference and least-squares solutions / Shapero, Samuel Andre, 10 January 2013 (has links)
Sparse approximation is a Bayesian inference problem with a wide range of signal processing applications, such as the compressed sensing recovery used in medical imaging. Previous sparse coding implementations relied on digital algorithms whose power consumption and performance scale poorly with problem size, rendering them unsuitable for portable applications and a bottleneck in high-speed applications. A novel analog architecture implementing the Locally Competitive Algorithm (LCA) was designed and programmed onto a Field Programmable Analog Array (FPAA), using floating-gate transistors to set the analog parameters. A network of 6 coefficients was demonstrated to converge to values similar to those of a digital sparse approximation algorithm, but with better power and performance scaling. A rate-encoded spiking algorithm was then developed and shown to converge to values similar to the LCA's. A second novel architecture was designed and programmed on an FPAA, implementing the spiking version of the LCA with integrate-and-fire neurons; a network of 18 neurons converged on values similar to a digital sparse approximation algorithm, with even better performance and power efficiency than the non-spiking network. Novel algorithms were created to increase floating-gate programming speed by more than two orders of magnitude and to reduce programming error from device mismatch. A new FPAA chip was designed and tested that allowed rapid interfacing and additional improvements in accuracy. Finally, a neuromorphic chip was designed, containing 400 integrate-and-fire neurons and capable of converging on a sparse approximation solution in 10 microseconds, over 1000 times faster than the best digital solution.
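A minimal digital simulation of the Locally Competitive Algorithm dynamics that the analog hardware implements (illustrative dictionary, threshold, and time constants; this is a software sketch, not the FPAA design): each node is driven by the signal's correlation with its dictionary element, inhibited by competing active nodes, and passed through a soft threshold.

```python
import numpy as np

def lca(y, Phi, lam=0.1, tau=0.01, dt=0.001, steps=2000):
    """Digital LCA simulation for sparse approximation: min ||y - Phi@a||^2 + lam*||a||_1."""
    b = Phi.T @ y                              # driving input: correlation with dictionary
    G = Phi.T @ Phi - np.eye(Phi.shape[1])     # lateral inhibition between nodes
    u = np.zeros(Phi.shape[1])                 # internal node states
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(steps):
        a = soft(u)                            # thresholded outputs (active coefficients)
        u += (dt / tau) * (b - u - G @ a)      # leaky integration with competition
    return soft(u)

rng = np.random.default_rng(1)
Phi = rng.standard_normal((20, 6))
Phi /= np.linalg.norm(Phi, axis=0)             # unit-norm dictionary columns
a_true = np.array([0.0, 1.2, 0.0, 0.0, -0.8, 0.0])
print(np.round(lca(Phi @ a_true, Phi), 2))     # sparse estimate close to a_true
```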
665 |
Bayesian networks for uncertainty estimation in the response of dynamic structures / Calanni Fraccone, Giorgio M., 07 July 2008 (has links)
The dissertation focuses on estimating the uncertainty associated with stress/strain prediction procedures from dynamic test data used in turbine blade analysis. An accurate prediction of the maximum response levels for physical components during in-field operating conditions is essential for evaluating their performance and life characteristics, as well as for investigating how their behavior critically impacts system design and reliability assessment. Currently, stress/strain inference for a dynamic system is based on the combination of experimental data and results from the analytical/numerical model of the component under consideration. Both modeling challenges and testing limitations, however, contribute to the introduction of various sources of uncertainty within the given estimation procedure, and lead ultimately to diminished accuracy and reduced confidence in the predicted response.
The objective of this work is to characterize the uncertainties present in the current response estimation process and provide a means to assess them quantitatively. More specifically, this research proposes a statistical methodology based on a Bayesian-network representation of the modeling process that allows a statistically rigorous synthesis of modeling assumptions and information from experimental data. Such a framework addresses the problem of multi-directional uncertainty propagation, for which standard techniques for unidirectional propagation from input uncertainty to output variability are ill-suited. Furthermore, it allows newly available test data to be included in the analysis, providing indirect evidence on the parameters of the structure's analytical model and reducing the residual uncertainty in the estimated quantities.
As part of this work, key uncertainty sources (i.e., material and geometric properties, sensor measurement and placement, as well as noise due to data-processing limitations) are investigated, and their impact upon the system response estimates is assessed through sensitivity studies. The results are used to identify the most significant contributors to uncertainty to be modeled within the developed Bayesian inference scheme. Simulated experimentation, statistically equivalent to specified real tests, is also constructed to generate the data necessary to build the appropriate Bayesian network, which is then infused with actual experimental information in order to explain the uncertainty embedded in the response predictions and quantify their inherent accuracy.
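A minimal sketch of the kind of inference such a network supports, using a single uncertain parameter and synthetic numbers rather than the turbine-blade model: a strain measurement from a test updates the belief about a model parameter, and the updated belief is propagated forward to tighten the predicted response at a new operating condition.

```python
import numpy as np

# Prior belief about an uncertain model parameter k (e.g., an effective stiffness factor).
mu0, s0 = 1.0, 0.3
sigma_meas = 0.05                 # strain-gauge measurement noise (std)

# Forward-model stand-in: predicted strain = k * load.
load_test, y_obs = 2.0, 2.3       # one simulated test observation

# Bayesian update of k (conjugate normal-normal with known noise variance).
prec_post = 1 / s0**2 + load_test**2 / sigma_meas**2
mu_post = (mu0 / s0**2 + load_test * y_obs / sigma_meas**2) / prec_post
s_post = np.sqrt(1 / prec_post)

# Propagate forward: predictive strain distribution at a new operating load.
load_new = 3.0
pred_mean = mu_post * load_new
pred_std = np.sqrt((s_post * load_new)**2 + sigma_meas**2)
print(f"k: {mu0}+/-{s0} -> {mu_post:.3f}+/-{s_post:.3f}; "
      f"predicted strain at load {load_new}: {pred_mean:.3f}+/-{pred_std:.3f}")
```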
666 |
Bayesian based risk stratification of atrial fibrillation in coronary artery bypass graft patients / Wiggins, Matthew Corbin, 22 May 2007 (has links)
Roughly thirty percent of coronary artery bypass graft (CABG) patients develop atrial fibrillation (AF) in the five days following surgery, increasing the risk of stroke, prolonging hospital stay by three to four days, and increasing the overall cost of the procedure. Current pharmacologic and nonpharmacologic means of AF prevention are suboptimal, and their side effects, expense, and inconvenience limit their widespread application. An accurate method for identifying patients at high risk for postoperative AF would allow these preventive measures to be focused on the patients in whom their utility would be highest. The main objective of this research was to develop a Bayesian network (BN) that models, predicts, and assigns the risk of atrial fibrillation in CABG patients using retrospective data. A secondary objective was to develop an integrated framework for more advanced methods of feature selection and fusion for medical classification and prediction.
We determined that, for our cohort, the best classifier is a naïve Bayesian network used with features selected by a genetic algorithm. The naïve BN produces reasonable predictions even for patients with missing data points, as occurs in the hospital. This classifier achieves a sensitivity of 0.63 and a specificity of 0.73, with an AUC of 0.74. Furthermore, the system is based on probabilities that are well understood and easily incorporated into a clinical environment. These probabilities can be adjusted through Bayesian statistics to reflect the cardiologist's prior knowledge, allowing physicians to perform online sensitivity analysis and identify the best treatment options.
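A minimal sketch of how such a naïve Bayes risk score degrades gracefully with incomplete records (toy features and probabilities, not the study's fitted model): features that are missing are simply skipped in the likelihood product.

```python
import math

# Illustrative conditional probability tables P(feature value | AF status); not fitted values.
cpt = {
    "age_over_70": {True: {1: 0.55, 0: 0.45}, False: {1: 0.30, 0: 0.70}},
    "prior_af":    {True: {1: 0.25, 0: 0.75}, False: {1: 0.05, 0: 0.95}},
    "long_p_wave": {True: {1: 0.60, 0: 0.40}, False: {1: 0.35, 0: 0.65}},
}
PRIOR_AF = 0.30    # roughly the base rate of postoperative AF after CABG

def af_risk(patient):
    """Posterior P(AF | observed features); missing features are simply skipped."""
    log_odds = math.log(PRIOR_AF / (1 - PRIOR_AF))
    for feature, value in patient.items():
        if value is None or feature not in cpt:
            continue                      # missing data point: no contribution
        log_odds += math.log(cpt[feature][True][value] / cpt[feature][False][value])
    return 1 / (1 + math.exp(-log_odds))

# The P-wave measurement is missing for this patient, yet a risk estimate is still produced.
print(round(af_risk({"age_over_70": 1, "prior_af": 0, "long_p_wave": None}), 3))
```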
Contributions of this research include:
- An accurate, physician-friendly postoperative AF risk stratification system that performs even under missing-data conditions while outperforming the state-of-the-art system,
- A thorough analysis of previously examined and novel pre- and postoperative clinical and ECG features for postoperative AF risk stratification,
- A new methodology for genetic algorithm-built traditional Bayesian network classifiers allowing dynamic structure through novel chromosome, operator, and fitness definitions, and
- An integrated methodology for incorporating doctors' expert knowledge into a probabilistic diagnosis support system.
667 |
Rough set-based reasoning and pattern mining for information filtering / Zhou, Xujuan, January 2008 (has links)
An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by user profiles. Learning to use user profiles effectively is one of the most challenging tasks in developing an IF system: with document selection criteria better grounded in users’ needs, filtering large streams of information can be more efficient and effective. Term-based approaches to learning user profiles have been widely used in the IF community because of their simplicity and directness, and they are relatively well established. However, they have problems dealing with polysemy and synonymy, which often lead to information overload. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving IF effectiveness. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches must also deal with low-frequency patterns. The measures used by data mining techniques (for example, “support” and “confidence”) to learn the profile have turned out to be unsuitable for filtering and can lead to a mismatch problem.
This thesis uses rough set-based (term-based) reasoning and pattern mining as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: topic filtering and pattern mining. The topic filtering stage minimizes information overload by filtering out the most likely irrelevant information based on the user profiles; a novel user-profile learning method and a theoretical model of threshold setting were developed using rough set decision theory. The second stage (pattern mining) addresses the information mismatch problem and is precision-oriented: a new document-ranking function was derived by exploiting the patterns in the pattern taxonomy, so that the most likely relevant documents receive higher scores. Because relatively few documents remain after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results, and the overall performance of the system improves significantly.
The new two-stage information filtering model was evaluated by extensive experiments based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model, state-of-the-art term-based models including BM25 and Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
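A minimal sketch of the two-stage idea with hypothetical thresholds and a simplified pattern-weighting scheme (the thesis's rough-set threshold model and taxonomy-based weights are not reproduced): a cheap topic filter discards clearly irrelevant documents, and the survivors are re-ranked by the weighted term patterns they support.

```python
def topic_filter(docs, profile_terms, threshold=2):
    """Stage 1: keep only documents sharing at least `threshold` profile terms."""
    return [d for d in docs
            if len(profile_terms & set(d["text"].lower().split())) >= threshold]

def pattern_score(doc, patterns):
    """Stage 2: score a document by the weighted term patterns it supports."""
    words = set(doc["text"].lower().split())
    return sum(w for pattern, w in patterns.items() if set(pattern) <= words)

docs = [
    {"id": 1, "text": "Bayesian networks for information filtering of news streams"},
    {"id": 2, "text": "A recipe for lemon cake"},
    {"id": 3, "text": "Pattern mining improves filtering of relevant news"},
]
profile_terms = {"filtering", "pattern", "mining", "news", "relevant"}
patterns = {("pattern", "mining"): 1.5, ("filtering", "news"): 1.0}

survivors = topic_filter(docs, profile_terms)          # document 2 is discarded here
ranked = sorted(survivors, key=lambda d: pattern_score(d, patterns), reverse=True)
print([d["id"] for d in ranked])                        # -> [3, 1]
```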
668 |
Explanation in Bayesian belief networks / Suermondt, Henri Jacques. January 1992 (has links) (PDF)
Thesis (Ph.D.)--Stanford University, 1992. / Includes bibliographical references (leaves 236-249).
669 |
Bayesian generalized linear models for meta-analysis of diagnostic tests / Xing, Yan. Cormier, Janice N., Swint, John Michael, January 2008 (has links)
Thesis (Ph. D.)--University of Texas Health Science Center at Houston, School of Public Health, 2008. / Source: Dissertation Abstracts International, Volume: 69-02, Section: B, page: 0769. Advisers: Claudia Pedroza; Asha S. Kapadia. Includes bibliographical references.
670 |
The value of information updating in new product development / Artmann, Christian. January 1900 (has links)
Originally presented as the author's Thesis (Ph. D.)--WHU, Otto-Beisheim School of Management, Vallendar, Germany. / Includes bibliographical references (p. 195-205) and index.