131 |
Implementing a competing limit increase challenger strategy to a retail-banking segment / Nolan, Derrick, January 2008 (has links)
Today, many financial institutions extending credit rely on automated credit scorecard decision engines to drive the credit strategies used to allocate (application scoring) and manage (behavioural scoring) credit limits. The accuracy and predictive power of these models are meticulously monitored to ensure that they deliver the required separation between good (non-delinquent) and bad (delinquent) accounts.
The strategies associated with the scores produced by the scorecards (champion strategies) are monitored at least quarterly, ensuring that the limit allocated to a customer, with its associated risk, still provides the lender with the best return on its appetite for risk. This monitoring should be used to identify clusters of customers that are not producing optimal returns for the lender. An existing strategy (the champion) that does not deliver the desired output is then challenged with an alternative strategy that may or may not perform better. Candidate clusters should have a relatively low credit risk ranking, be credit hungry, and have the capacity to service the debt. This research project focuses on the management of (behavioural) strategies that govern the ongoing limit increases provided to current account holders. Using a combination of behavioural scores and credit turnover, an optimal recommended or confidential limit is calculated for each customer. Once the new limits are calculated, a sample is randomly selected from the cluster of customers and tested in the operational environment. The implementation of the challenger strategy should ensure that the intended change to the customer's limit is well received by customers; measures that can be used are risk, response, retention, and revenue. The champion and challenger strategies are monitored over a period until a victor (if there is one) can be identified. The challenger strategy is expected to have minimal impact on the customers affected by the experiment, the bank should not experience greater credit risk from the increased limits, and the profit from the challenger should increase the interest revenue earned on the increased limits. Once monitoring has established whether the champion or the challenger strategy has won, the winning strategy is rolled out to the rest of the champion population. / Thesis (Ph.D. (Operational Research))--North-West University, Vaal Triangle Campus, 2009.
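By way of illustration, a minimal sketch of how such a champion/challenger split might be set up follows; the customer field names, the sample fraction, and the toy limit formula are all assumptions for illustration, not the strategy used in the thesis.

```python
import random

def recommended_limit(behavioural_score: float, credit_turnover: float) -> float:
    """Toy recommended-limit rule combining behavioural score and credit
    turnover; the weighting is invented for illustration."""
    return round(credit_turnover * (0.5 + behavioural_score / 1000.0), -2)

def assign_strategy(customers, challenger_fraction=0.1, seed=42):
    """Randomly route a sample of the identified cluster to the challenger
    strategy; everyone else stays on the champion strategy."""
    rng = random.Random(seed)
    assignments = {}
    for cust in customers:
        arm = "challenger" if rng.random() < challenger_fraction else "champion"
        assignments[cust["id"]] = (arm, recommended_limit(cust["score"], cust["turnover"]))
    return assignments

# Hypothetical cluster members; in practice these come from strategy monitoring.
customers = [{"id": 1, "score": 620, "turnover": 15000.0},
             {"id": 2, "score": 710, "turnover": 22000.0}]
print(assign_strategy(customers, challenger_fraction=0.5))

# The four measures named above (risk, response, retention, revenue) would then
# be tracked per arm over the monitoring period to pick the winning strategy.
```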
|
133 |
Estimation of the variation of prices using high-frequency financial data / Ysusi Mendoza, Carla Mariana, January 2005 (has links)
When high-frequency data are available, realised variance and realised absolute variation can be calculated from intra-day prices. In the context of a stochastic volatility model, realised variance and realised absolute variation estimate the integrated variance and the integrated spot volatility respectively. A central limit theory enables filtering and smoothing, using model-based and model-free approaches, to improve the precision of these estimators. When the log-price process involves a finite activity jump process, realised variance estimates the quadratic variation of both the continuous and jump components. Other consistent estimators of integrated variance can be constructed on the basis of realised multipower variation, i.e. realised bipower, tripower and quadpower variation. These objects are robust to jumps in the log-price process; therefore, given adequate asymptotic assumptions, the difference between realised multipower variation and realised variance provides a tool to test for jumps in the process. Realised variance becomes biased in the presence of market microstructure effects, whereas realised bipower, tripower and quadpower variation are more robust in this situation. Nevertheless, there is always a trade-off between bias and variance: bias is due to market microstructure noise when sampling at high frequencies, and variance is due to the asymptotic assumptions when sampling at low frequencies. Subsampling and averaging realised multipower variation reduces this effect, thereby allowing calculations at higher frequencies.
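As a minimal numerical sketch of these estimators, assuming simulated 5-minute returns with one injected jump (all parameters illustrative, not taken from the thesis):

```python
import numpy as np

def realised_variance(returns: np.ndarray) -> float:
    # RV = sum of squared intra-day returns; estimates integrated variance
    # plus the sum of squared jumps when jumps are present.
    return float(np.sum(returns ** 2))

def realised_bipower_variation(returns: np.ndarray) -> float:
    # BV = (pi/2) * sum |r_i||r_{i-1}|; robust to finite-activity jumps, so
    # it estimates the integrated variance of the continuous part alone.
    return float((np.pi / 2) * np.sum(np.abs(returns[1:]) * np.abs(returns[:-1])))

rng = np.random.default_rng(0)
n = 288                                    # 5-minute returns over 24 hours
r = rng.normal(0.0, 0.01 / np.sqrt(n), n)  # continuous (Brownian) component
r[100] += 0.005                            # one jump in the log-price path

rv, bv = realised_variance(r), realised_bipower_variation(r)
jump_part = max(rv - bv, 0.0)              # basis of the RV - BV jump test above
print(f"RV={rv:.3e}  BV={bv:.3e}  estimated jump component={jump_part:.3e}")
```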
|
134 |
Investigations into the Shear Strength Reduction method using distinct element models / Fournier, Mathew, 11 1900 (has links)
This thesis reports a detailed investigation into the use of the Shear Strength Reduction (SSR) method to determine factor of safety values in discontinuum models using the Universal Distinct Element Code. The SSR method depends on the definition of failure within the model and two different criteria were compared: the numerical unbalanced force definition and a more qualitative displacement-monitoring based method. A parametric study was first undertaken, using a simple homogeneous rock slope, with three different joint networks representing common kinematic states. Lessons learned from this study were then applied to a more complex case history used for validation of the SSR method.
The discontinuum models allow for the failure surface to propagate based on constitutive models that better idealize the rockmass than simpler methods such as limit equilibrium (e.g. either method of slices or wedge solutions) and even numerical continuum models (e.g. finite difference, finite element). Joints are explicitly modelled and can exert a range of influences on the SSR result. Simple elasto-plastic models are used for both the intact rock and joint properties. Strain-softening models are also discussed with respect to the SSR method. The results presented highlight several important relationships to consider related to both numerical procedures and numerical input parameters.
The case history was modelled as a typical forward analysis would be undertaken: simple models first, with complexities added incrementally. The results for this case generally depict a rotational failure mode with a reduced factor of safety, relative to a traditional limit equilibrium analysis, due to the presence of joints within the rockmass. Some models with highly persistent, steeply dipping joints were able to capture the actual failure surface. Softening models were employed in order to mimic the generation and propagation of joints through the rockmass in a continuum; however, only discontinuum models using explicitly defined joints were able to capture the correct failure surface.
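A schematic of the strength-reduction loop itself may be useful; the bisection bounds, the Mohr-Coulomb parameters and the stand-in failure check below are assumptions for illustration, not UDEC's actual convergence test.

```python
import math

def reduced_strength(cohesion: float, phi_deg: float, srf: float):
    # Divide the Mohr-Coulomb strength parameters by the trial reduction factor.
    c_trial = cohesion / srf
    phi_trial = math.degrees(math.atan(math.tan(math.radians(phi_deg)) / srf))
    return c_trial, phi_trial

def model_fails(c_trial: float, phi_trial: float) -> bool:
    # Stand-in for a UDEC run: in practice, failure is flagged by the
    # unbalanced-force criterion or by monitored displacements accelerating.
    return c_trial < 20.0 and phi_trial < 30.0  # toy criterion only

def factor_of_safety(cohesion=50.0, phi_deg=40.0, lo=0.5, hi=3.0, tol=0.01):
    # Bisect on the strength reduction factor: the critical value at which
    # the model first fails is taken as the factor of safety.
    while hi - lo > tol:
        srf = 0.5 * (lo + hi)
        if model_fails(*reduced_strength(cohesion, phi_deg, srf)):
            hi = srf   # failed: factor of safety lies below this trial value
        else:
            lo = srf   # stable: factor of safety lies above this trial value
    return 0.5 * (lo + hi)

print(f"Factor of safety ~ {factor_of_safety():.2f}")
```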
|
135 |
Mutation-Selection Balance and the evolution of genetic variance in multiple male sexually-selected pheromones of the vinegar fly Drosophila serrata / Emma Hine, Unknown Date (has links)
The multivariate distribution of genetic variance is key to understanding two fundamental and interrelated processes in evolution: the ability of populations to respond to selection, and the balance of forces that maintain the genetic variance upon which such a response is based. In this thesis, I develop an analytical framework for characterizing the multivariate distribution of genetic variance and how it evolves. I then apply this framework to explore the evolution of genetic variance in multiple sexually-selected traits under artificial selection using the Drosophila serrata experimental system.

An analytical framework for characterizing the multivariate distribution of genetic variance and how it evolves: First, I present a method from the statistical literature to establish the statistical dimensionality of genetic variance in a suite of traits. I evaluate the ability of this and two other methods to predict the correct number and orientation of dimensions of genetic variance by conducting a simulation study for a suite of eight traits. Second, I present a method from the materials science literature that uses multi-linear algebra to characterize the variation among matrices. I show how variation in the multivariate distribution of genetic variance among populations can be analyzed by constructing a fourth-order genetic variance-covariance tensor, and how the spectral decomposition of this tensor reveals independent aspects of change in genetic variance. I use the tensor to explore the variation in the genetic variance of eight traits among nine populations of D. serrata, and show how this variation can be associated with variation in selection pressures to determine whether selection may have affected genetic variance within populations.

The evolution of genetic variance in sexually-selected traits under artificial selection: Female D. serrata display a strong preference for a particular combination of male cuticular hydrocarbons (CHCs). Individually, these pheromones display substantial genetic variance, but the genetic variance is not distributed equally among all phenotypic dimensions. In the specific CHC combination preferred by females, genetic variance is low. This is compatible with the expectation that selection will deplete genetic variance, but is contrary to the typical observation of high levels of genetic variance in individual sexually-selected traits. By artificially selecting on the trait combination preferred by females, I show that male mating success can respond to selection, but that the evolution of the combination of CHCs preferred by females is constrained. I then show that a key prediction of mutation-selection balance (MSB) models that has rarely been observed holds for these traits. Under MSB, genetic variance is expected to be maintained by rare alleles with large effects. Therefore, when a trait that is usually under stabilizing selection is subjected to directional artificial selection, the genetic variance is predicted to increase. I show that genetic variance increases in the CHC combination preferred by females under artificial selection, but not when another combination of the same traits with greater genetic variance is artificially selected. Complex segregation analysis indicated that the observed increase in genetic variance was a consequence of at least one allele of major effect increasing in frequency. This experiment demonstrates the importance of the past history of selection on the nature of genetic variance.
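As a toy numerical illustration of the first step, the distribution of genetic variance across trait combinations can be read off the eigenstructure of a genetic variance-covariance matrix G; the 3x3 matrix below is invented for illustration, not the D. serrata estimate.

```python
import numpy as np

# Invented genetic variance-covariance matrix for three traits.
G = np.array([[1.00, 0.80, 0.75],
              [0.80, 0.90, 0.70],
              [0.75, 0.70, 0.85]])

eigvals, eigvecs = np.linalg.eigh(G)   # eigh returns ascending order
order = np.argsort(eigvals)[::-1]      # re-sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

for i, (lam, vec) in enumerate(zip(eigvals, eigvecs.T), start=1):
    print(f"g{i}: variance={lam:.3f} ({lam / eigvals.sum():.0%}) "
          f"direction={np.round(vec, 2)}")
# Genetic variance typically concentrates in the leading dimension(s); a trait
# combination aligned with the trailing eigenvectors -- like the CHC blend
# preferred by females -- carries little variance and responds poorly to selection.
```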
General conclusion: Mutation-selection balance (MSB) is appealing as an explanation for the maintenance of genetic variance because it is simple and intuitive: the total mutation rate must be sufficiently high to replenish the variation eliminated by selection. However, MSB models seem unable to adequately explain the coexistence of the observed levels of genetic variance and strength of selection on individual traits. I contend that the failure of MSB models to explain these data is not a failure of MSB theory itself; rather, the data that have been used to evaluate MSB models may not be appropriate. It is now clear that there are fewer genetically independent traits than measured phenotypes, and that selection gradients measured for individual traits do not adequately reflect the nature of selection on combinations of traits. In other words, it is not possible to understand the relationship between genetic variance and selection by simply associating median levels of genetic variance and selection collated across studies, as levels of genetic variance are likely to be much lower in trait combinations associated with stronger selection. Together, these observations suggest that we should be looking at the distribution of genetic variance in suites of traits, and the pattern of correlated selection on those same traits, if we are to understand MSB.
|
136 |
Three essays on price formation and liquidity in financial futures markets / Cummings, James Richard, January 2009 (has links)
Doctor of Philosophy / This dissertation presents the results of three empirical studies on price formation and liquidity in financial futures markets. The research covers three related areas: the effect of taxes on the prices of Australian stock index futures; the efficiency of the information transmission mechanism between the cash and futures markets; and the price and liquidity impact of large trades in interest rate and equity index futures markets. An overview of previous research identifies some important gaps in the existing literature that this dissertation aims to resolve for the benefit of arbitrageurs, investment managers, brokers and regulators.
|
137 |
Bootstrapping functional M-estimators / Zhan, Yihui, January 1996 (has links)
Thesis (Ph. D.)--University of Washington, 1996. / Vita. Includes bibliographical references (p. [180]-188).
|
138 |
Analytical and experimental performance comparison of energy detectors for cognitive radios / Ciftci, Selami, January 2008 (has links)
Thesis (M.S.)--University of Texas at Dallas, 2008. / Includes vita. Includes bibliographical references (leaves 62-63).
|
139 |
Determining the analytical figures of merit from LC-MS/MS data / Johnson, Renee Michelle, 02 November 2017 (has links)
Synthetic drugs such as piperazines are among the most commonly abused drugs and are typically consumed by younger populations. Because of their popularity, developing optimized analytical strategies designed to improve detection and interpretation of synthetic piperazines is of interest to the forensic community. To improve the likelihood that a substance of interest is detected, careful evaluation of the mass spectrometry signal is required. However, as with all analytical pursuits, there is a limit below which the substance cannot be detected with certainty; this threshold is commonly referred to as the limit of detection (LOD). Formally, the LOD is the minimum amount of analyte (concentration, mass, number of molecules, etc.) that can be detected at a known confidence level.
The purpose of this research was to use common analytical methods to calculate the LOD and verify the results against previous work at the Boston University forensic toxicology laboratory. The data were previously generated on a liquid chromatography-tandem mass spectrometer (LC-MS/MS) and consisted of signal intensity information, in the form of peak height and peak area, from titrations of eight synthetic piperazines: benzylpiperazine (BZP), 1-(3-chlorophenyl)-piperazine (mCPP), 3-trifluoromethylphenylpiperazine monohydrochloride (TFMPP), methylbenzylpiperazine (MBZP), 1-(4-fluorobenzyl)-piperazine (FBZP), 2,3-dichlorophenylpiperazine (DCPP), para-fluorophenylpiperazine (pFPP) and para-methoxyphenylpiperazine (MeOPP). Generally, the LOD is determined by first evaluating the signal in the absence of analyte and determining the probability that this blank signal crosses the signal threshold. The signal threshold is based upon the false detection rate, α, that the laboratory can withstand for a given interpretation scheme. In instances where only very small levels of false detection can be tolerated, a high threshold (small α) is chosen; in other circumstances, where noise detections can adequately be interpreted, a lower threshold is chosen. In chromatography and radiography the typical one-sided α = 0.003.
The number of molecules of each analyte at each concentration (20 ng/mL, 50 ng/mL, 200 ng/mL, 500 ng/mL, 1000 ng/mL and 2000 ng/mL) was determined and used throughout this work. Peak area signals and ratios versus number of molecules for each analyte were first used to visually inspect the linearity of signal against analyte level. Using internal standards improved linearity, as expected; however, the data suggested that absolute signal intensity was sufficient to compute the LOD for these compounds. Generally accepted methods of calculating the LOD could not be used in this research because the signal from the blank was not detected, most likely due to the sensitivity of the instrument used. This study instead used an extrapolation of the data and a propagation-of-errors method to calculate the LOD, since these do not require the blank signal. For all eight analytes, the calculated LOD was similar to the lowest concentration (20 ng/mL) used when validating the method.
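A minimal sketch of such an extrapolation and propagation-of-errors LOD estimate follows; the calibration numbers and the k = 3 multiplier are assumptions for illustration, not the study's piperazine data.

```python
import numpy as np

# Invented linear calibration: peak area versus concentration.
conc = np.array([20., 50., 200., 500., 1000., 2000.])            # ng/mL
signal = np.array([1.1e4, 2.6e4, 1.05e5, 2.6e5, 5.1e5, 1.02e6])  # peak area

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
s_yx = np.sqrt(np.sum(residuals ** 2) / (len(conc) - 2))  # std error of the fit

# With no measurable blank, the scatter about the fitted line stands in for
# the blank noise; k = 3 corresponds roughly to the small one-sided alpha
# quoted above.
lod = 3 * s_yx / slope
print(f"slope = {slope:.1f} area units per ng/mL, LOD ~ {lod:.1f} ng/mL")
```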
This research should be expanded to include more concentration points and to characterize the plateau effect at higher concentrations. Doing so will show analytical chemists how the LOD can be calculated with high confidence when a blank signal is not available.
|
140 |
Cytochrome P450 2E1/Nickel-Poly(propylene imine) dendrimeric nanobiosensor for pyrazinamide - A first line TB Drug / Zosiwe, Mlandeli Siphelele Ernest, January 2015 (has links)
Magister Scientiae - MSc / The tuberculosis (TB) disease to this day remains one of the world's most prominent killer diseases. Pyrazinamide (PZA) is one of the most commonly prescribed anti-tuberculosis (anti-TB) drugs due to its ability to significantly shorten the TB treatment period from the former nine months to the current six months. However, excess PZA in the body causes hepatotoxicity and damages the liver. This hepatotoxicity, together with the resistance of the bacteria to treatment drugs, poor medication and inappropriate dosing, contributes greatly to the high incidence of TB deaths and of disease due to side effects (such as liver damage). This prompts calls for alternative methods of ensuring reliable dosing of the drug, specific from person to person owing to inter-individual differences in drug metabolism. A novel biosensor system for monitoring the metabolism of PZA was prepared with a Ni-PPI-PPy star copolymer and cytochrome P450 2E1 (CYP2E1) deposited onto a platinum electrode. The nanobiosensor system exhibited enhanced electro-activity, attributed to the catalytic effect of the incorporated star copolymer. The biosensor had a sensitivity of 0.142 µA.nM-1 and a dynamic linear range (DLR) of 0.01 nM - 0.12 nM (1.231 - 7.386 ng/L PZA). The limit of detection of the biosensor was found to be 0.00114 nM (0.14 ng/L) PZA. The HPLC peak concentration (Cmax) of PZA determined 2 h after drug intake is 2.79 - 3.22 ng/L, which is readily detectable with the nanobiosensor as it falls within the dynamic linear range.
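As a quick arithmetic check on the reported units, assuming the standard molar mass of pyrazinamide (C5H5N3O, about 123.11 g/mol):

```python
# nmol/L -> ng/L: multiply by the molar mass in g/mol (ng per nmol is
# numerically equal to g per mol).
M_PZA = 123.11       # g/mol, standard value for pyrazinamide (assumption)
lod_nM = 0.00114     # reported limit of detection in nmol/L

lod_ng_per_L = lod_nM * M_PZA
print(f"LOD = {lod_nM} nM = {lod_ng_per_L:.2f} ng/L")  # ~0.14 ng/L, as reported
```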
|