1 |
Investment and policy decisions involving rural road networks in Saskatchewan: a network design approach. Christensen, Paul Normann, 13 January 2006.
Worldwide, rural road networks form a vital link in the chain that moves goods to markets and people to places. The efficiency of rural road network services is shaped by road-related investment and policy decisions. Reaching good decisions, however, is complicated by the interrelationships among policy, investment, road use, road performance, and rural economies, and by the combinatorial challenge of distributing discrete policy and investment arrangements across a network.

The main objective of this study is to address this complex problem as it pertains to rural road networks in Saskatchewan. Rural roads in Saskatchewan are suffering under increasing volumes of heavy truck traffic, driven principally by recent changes in the grain handling and transportation system. To address this problem, the Saskatchewan Department of Highways and Transportation is considering a range of haul policy and road structure investment options. The question is: what (spatial) arrangement of the available policy and investment options best meets this challenge?

To answer this question, a cost-based standard is incorporated within a network design modeling approach and solved using custom algorithmic strategies. Applied to a case study network, the model determines a demonstrably good arrangement of costly road structure modifications under each considered policy option. The resulting policy-investment combinations are then ranked according to total cost and equivalent net benefit standards.

A number of important findings emerge from this analysis. Policy and investment decisions are linked: the spatial arrangement of road structure modifications is contingent on the haul policy regime in place. Road performance and use characteristics are indeed sensitive to policy and investment decisions. Optimal budget levels computed by the model contradict perceptions that rural road networks in Saskatchewan are grossly under-funded. Despite best intentions, ill-considered policy can actually reduce the net benefits of road provision and use.

Model application and design limitations suggest promising avenues for future research. These include: modeling larger networks in Saskatchewan and beyond; determining optimal road budgets under benefit-cost standards that reflect competing economic needs; employing the model within regional economic planning investigations to forecast road-related implications; and modeling policy endogenously to aid the design of heavy-haul sub-networks and to address questions of network expansion or contraction.
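The ranking step described above can be illustrated with a toy enumeration: for each haul policy, every arrangement of road-structure upgrades across a handful of segments is costed, and the combinations are ranked by total cost. This is a minimal sketch only; the policies, segments, cost figures, and cost functions below are hypothetical and are not the study's actual model, which uses a network design formulation and custom algorithms rather than brute-force enumeration.

```python
# Toy ranking of policy-investment combinations by total cost.
# All names and figures (policies, segments, costs) are hypothetical.
from itertools import product

policies = {"status_quo": 1.00, "primary_weight_only": 0.85}   # road-user cost multipliers
segments = ["A", "B", "C"]                                      # candidate road links
upgrade_cost = {"none": 0.0, "granular": 1.2, "paved": 3.5}     # $M per segment
base_user_cost = {"A": 2.0, "B": 1.5, "C": 2.8}                 # $M present value of road-user cost
user_cost_factor = {"none": 1.0, "granular": 0.8, "paved": 0.55}

results = []
for policy, policy_factor in policies.items():
    for choice in product(upgrade_cost, repeat=len(segments)):  # one upgrade per segment
        agency = sum(upgrade_cost[u] for u in choice)
        user = sum(base_user_cost[s] * user_cost_factor[u] * policy_factor
                   for s, u in zip(segments, choice))
        results.append((agency + user, policy, dict(zip(segments, choice))))

# Rank all policy-investment combinations by total cost and show the best five
for total, policy, arrangement in sorted(results, key=lambda r: r[0])[:5]:
    print(f"{total:6.2f}  {policy:20s}  {arrangement}")
```

Real networks make exhaustive enumeration infeasible, which is why the study turns to a network design formulation and custom algorithmic strategies.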
2 |
Discovering Compact and Informative Structures through Data Partitioning. Fiterau, Madalina, 01 September 2015.
In many practical scenarios, prediction for high-dimensional observations can be performed accurately using only a fraction of the existing features. However, the set of relevant predictive features, known as the sparsity pattern, varies across the data. For instance, features that are informative for a subset of observations might be useless for the rest. In such cases, the dataset can be seen as an aggregation of samples belonging to several low-dimensional sub-models, potentially due to different generative processes. My thesis introduces several techniques for identifying sparse predictive structures and the areas of the feature space where these structures are effective. This information allows the training of models which perform better than those obtained through traditional feature selection. We formalize Informative Projection Recovery, the problem of extracting a set of low-dimensional projections of the data which jointly form an accurate solution to a given learning task. Our solution to this problem is a regression-based algorithm that identifies informative projections by optimizing over a matrix of point-wise loss estimators. It generalizes to a number of machine learning problems, offering solutions to classification, clustering and regression tasks. Experiments show that our method can discover and leverage low-dimensional structure, yielding accurate and compact models. Our method is particularly useful in applications involving multivariate numeric data in which expert assessment of the results is essential. Additionally, we developed an active learning framework which uses the obtained compact models to find unlabeled data deemed worth expert evaluation. For this purpose, we enhance standard active selection criteria with the information encapsulated by the trained model. The advantage of our approach is that the labeling effort is expended mainly on samples which benefit models from the hypothesis class under consideration. The domain experts also benefit from the availability of informative axis-aligned projections at the time of labeling. Experiments show that this results in an improved learning rate over standard selection criteria, both for synthetic data and for real-world data from the clinical domain, while the comprehensible view of the data supports the labeling process and helps preempt labeling errors.
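As one way to make the idea concrete, the sketch below greedily selects axis-aligned two-dimensional projections using a per-sample (point-wise) loss, assigning each observation to the projection that handles it best. The synthetic data, the leave-one-out 1-NN loss, and the greedy selection are illustrative stand-ins; the thesis's actual algorithm is regression-based and optimizes over a matrix of point-wise loss estimators.

```python
# Illustrative selection of axis-aligned 2-D projections by point-wise loss.
# The leave-one-out 1-NN loss, the synthetic data, and the greedy selection are
# stand-ins for the regression-based algorithm described in the abstract.
import numpy as np
from itertools import combinations

def pointwise_loss(X2, y):
    """Per-sample leave-one-out 1-NN misclassification on a 2-D projection."""
    d = np.linalg.norm(X2[:, None, :] - X2[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return (y[d.argmin(axis=1)] != y).astype(float)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] * X[:, 1] > 0).astype(int)        # only features 0 and 1 matter

losses = {pair: pointwise_loss(X[:, list(pair)], y) for pair in combinations(range(6), 2)}

# Greedily pick projections so every point is covered by the projection handling it best
chosen, best = [], np.ones(len(y))
for _ in range(2):
    pair = min(losses, key=lambda p: np.minimum(best, losses[p]).mean())
    chosen.append(pair)
    best = np.minimum(best, losses[pair])
    losses.pop(pair)                           # do not reselect the same projection
print("chosen projections:", chosen, "remaining mean loss:", round(best.mean(), 3))
```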
3 |
Automatic Tuning of Data-Intensive Analytical Workloads. Herodotou, Herodotos, January 2012.
Modern industrial, government, and academic organizations are collecting massive amounts of data ("Big Data") at an unprecedented scale and pace. The ability to perform timely and cost-effective analytical processing of such large datasets in order to extract deep insights is now a key ingredient for success. These insights can drive automated processes for advertisement placement, improve customer relationship management, and lead to major scientific breakthroughs.

Existing database systems are adapting to the new status quo, while large-scale dataflow systems (like Dryad and MapReduce) are becoming popular for executing analytical workloads on Big Data. Ensuring good and robust performance automatically on such systems poses several challenges. First, workloads often analyze a hybrid mix of structured and unstructured datasets stored in nontraditional data layouts. The structure and properties of the data may not be known upfront, and will evolve over time. Complex analysis techniques and rapid development needs necessitate the use of both declarative and procedural programming languages for workload specification. Finally, the space of workload tuning choices is very large and high-dimensional, spanning configuration parameter settings, cluster resource provisioning (spurred by recent innovations in cloud computing), and data layouts.

We have developed a novel dynamic optimization approach that can form the basis for tuning workload performance automatically across different tuning scenarios and systems. Our solution is based on (i) collecting monitoring information in order to learn the run-time behavior of workloads, (ii) deploying appropriate models to predict the impact of hypothetical tuning choices on workload behavior, and (iii) using efficient search strategies to find tuning choices that give good workload performance. The dynamic nature enables our solution to overcome the new challenges posed by Big Data, and also makes it applicable to both MapReduce and database systems. We have developed the first cost-based optimization framework for MapReduce systems for determining the cluster resources and configuration parameter settings needed to meet desired requirements on execution time and cost for a given analytic workload. We have also developed a novel tuning-based optimizer in database systems that collects targeted run-time information, performs optimization, and repeats as needed to perform fine-grained tuning of SQL queries.
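The three-step loop described above (monitor, model, search) can be sketched in a few lines: a what-if cost model predicts run time for a hypothetical configuration, and a simple search probes the tuning space for low-cost settings. The cost model, parameter names, and numbers below are illustrative assumptions, not the profile-based models or search strategies actually developed in the dissertation.

```python
# Toy "model + search" tuning loop: a what-if cost model predicts run time for a
# hypothetical configuration, and random search looks for low-cost settings.
# The model, parameter names, and numbers are illustrative stand-ins.
import random

def predicted_runtime(conf, profile):
    """What-if estimate: too few reducers or too little sort memory inflate run time."""
    base = profile["map_time"] + profile["shuffle_bytes"] / (conf["reducers"] * 50e6)
    spill_penalty = 1.5 if conf["sort_mb"] < profile["spill_threshold_mb"] else 1.0
    return base * spill_penalty

profile = {"map_time": 120.0, "shuffle_bytes": 4e9, "spill_threshold_mb": 200}
space = {"reducers": range(4, 65), "sort_mb": range(100, 401, 50)}

random.seed(1)
best = None
for _ in range(200):                       # random search over the tuning space
    conf = {k: random.choice(list(v)) for k, v in space.items()}
    cost = predicted_runtime(conf, profile)
    if best is None or cost < best[0]:
        best = (cost, conf)
print("predicted best:", best)
```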
4 |
Cost-Based CLEAN Algorithm for Selective RAKE Receivers in UWB Systems. Ke, Chih-chiang, 29 July 2008.
In this thesis, we propose a cost-based CLEAN algorithm to accurately estimate dense multi-path parameters and improve the performance of the selective RAKE receiver in indoor UWB systems. A RAKE receiver can resolve dense multi-path interference provided the multi-path parameters are known. Because weak paths contribute little to system performance, a selective RAKE receiver combines only the strongest multi-path components, reducing the number of fingers and hence the receiver complexity. However, a selective RAKE receiver requires accurate multi-path detection to decide the appropriate number and parameters of its fingers. To improve the performance of the selective RAKE receiver, the main issue addressed in this thesis is detecting the best channel paths with the CLEAN algorithm. The CLEAN algorithm uses the correlation between the received signal and the template signal as the basis for searching paths. If paths are closely spaced, or if one path is much stronger than another, detection errors may occur and degrade receiver performance. The EP-based CLEAN algorithm uses a cost function and evolutionary programming (EP) to search for the multi-path delay times and gain coefficients that minimize the cost function; it achieves accurate multi-path detection and high resolution of adjacent paths, but it performs a time-consuming blind search. In this thesis, a CLEAN algorithm based on the cost function is proposed. The proposed cost-based CLEAN algorithm searches the delay times near the peaks of the cross-correlation for local minima of the cost function, and then uses the CLEAN algorithm to extract autocorrelation components and obtain accurate multi-path detection. Tests on the IEEE 802.15.3a UWB channel models show that, compared with the standard CLEAN algorithm, the proposed cost-based CLEAN algorithm achieves better detection accuracy in multi-path searching and improves the performance of the selective RAKE receiver.
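For reference, the baseline CLEAN iteration that the proposal refines looks roughly like the sketch below: find the strongest correlation peak between the residual signal and the pulse template, record the corresponding delay and gain, subtract that path's contribution, and repeat. The signals are synthetic, and the cost-function search near each peak (the thesis's contribution) is not shown.

```python
# Baseline CLEAN-style extraction: pick the strongest correlation peak, estimate
# the path's delay and gain, subtract its contribution, repeat. Signals are
# synthetic; the cost-based refinement from the thesis is not implemented here.
import numpy as np

def clean(received, template, n_paths=3):
    residual = received.astype(float).copy()
    energy = np.dot(template, template)
    paths = []
    for _ in range(n_paths):
        corr = np.correlate(residual, template, mode="valid")     # lag = candidate delay
        delay = int(np.argmax(np.abs(corr)))
        gain = corr[delay] / energy                               # least-squares path gain
        paths.append((delay, gain))
        residual[delay:delay + len(template)] -= gain * template  # remove detected path
    return paths, residual

# Synthetic two-path channel: delays of 10 and 14 samples, gains 1.0 and -0.6
template = np.array([1.0, 0.8, 0.3, -0.2])
rx = np.zeros(64)
for d, g in [(10, 1.0), (14, -0.6)]:
    rx[d:d + len(template)] += g * template
rx += 0.01 * np.random.default_rng(0).normal(size=rx.size)

paths, _ = clean(rx, template, n_paths=2)
print(paths)   # expected: delays near 10 and 14 with gains near 1.0 and -0.6
```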
5 |
The nature of the relationship between market orientation and performance. French, Mark J., January 2011.
A review of the literature indicates that a universally enhancing relationship between market orientation and performance is not conclusively supported. Recent research suggests that the relationship between marketing investments and profit may be inverted U-shaped, such that there is an optimal level of marketing investment which maximises profit (Mantrala et al., 2007). In this study, it is proposed that market orientation has different curvilinear relationships with different types of performance. Using a performance categorisation suggested by Kirca et al. (2005), it is theorised that market orientation's relationship with revenue-based performance (e.g. sales growth, market share growth) is subject to diminishing returns, such that performance is enhanced at all levels of market orientation but the incremental benefits diminish as market orientation increases. For cost-based performance (e.g. profit, return on sales), it is proposed that the incremental costs of implementing market-oriented activities may exceed the benefits. Thus, cost-based performance may have an inverted U-shaped relationship with market orientation. Three mechanisms by which diminishing returns affect the relationship between market orientation and performance are identified: duplication, contradiction and prioritisation. A review of over 400 papers in the market orientation literature demonstrates that a research gap exists for different curvilinear relationships between market orientation and different types of performance. In particular, an inverted U-shaped relationship has not previously been found between market orientation and profit. A sampling frame was selected to control for both the macro-environment and different performance levels in different industries (Dess and Robinson, 1984). In a sample of 113 UK car dealers operating in the new car market, the hypothesised relationships were tested using both objective and subjective performance measures. The findings relating to objective performance measures support the full inverted U-shaped relationship between market orientation and profit across the observed range of values. The relationship for objective revenue-based performance is more curvilinear, with significant linear and curvilinear components. In highly competitive environments, maximum profit shifts to a higher level of market orientation and overall the relationship is predominantly enhancing. Conversely, in uncompetitive environments profit is maximised at a lower level of market orientation and the relationship becomes detrimental at moderate market orientation levels. In recession, the profit of all new car dealers is reduced and maximum profit occurs at a lower market orientation level. In addition, the relationship between market orientation and sales growth turns negative in a recession. Interestingly, the results for subjective performance are distinctly different from, and sometimes contradict, the objective performance results. In particular, subjective performance predominantly has a positive linear relationship with market orientation.
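Curvilinear hypotheses of this kind are commonly tested by adding a squared market-orientation term to the performance regression; the turning point of the fitted curve then indicates where further increases in market orientation stop paying off. The sketch below uses simulated data and ordinary least squares only; it omits the study's actual measures, controls, and moderators (competitive intensity, recession).

```python
# Testing for an inverted U with a quadratic term: fit profit on MO and MO^2,
# then locate the turning point at MO* = -b1 / (2 * b2). Data are simulated;
# the study's actual measures, controls, and moderators are not reproduced.
import numpy as np

rng = np.random.default_rng(2)
mo = rng.uniform(1, 7, size=113)                       # market orientation scores
profit = 2 + 1.8 * mo - 0.2 * mo**2 + rng.normal(0, 0.5, mo.size)

X = np.column_stack([np.ones_like(mo), mo, mo**2])
b0, b1, b2 = np.linalg.lstsq(X, profit, rcond=None)[0]  # OLS coefficients

print(f"b1 = {b1:.2f}, b2 = {b2:.2f}, turning point MO* = {-b1 / (2 * b2):.2f}")
# b2 < 0 with a turning point inside the observed range is consistent with an inverted U.
```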
6 |
Cost-Based Optimization of Integration Flows. Böhm, Matthias, 02 May 2011.
Integration flows are increasingly used to specify and execute data-intensive integration tasks between heterogeneous systems and applications. There are many different application areas, such as real-time ETL and data synchronization between operational systems. Because of increasing data volumes, highly distributed IT infrastructures, and strict requirements for data consistency and up-to-date query results, many instances of integration flows are executed over time. Due to this high load and to blocking synchronous source systems, the performance of the central integration platform is crucial for the whole IT infrastructure. To meet these high performance requirements, we introduce the concept of cost-based optimization of imperative integration flows, which relies on incremental statistics maintenance and inter-instance plan re-optimization. As a foundation, we introduce the concept of periodical re-optimization, including novel cost-based optimization techniques that are tailor-made for integration flows. Furthermore, we refine periodical re-optimization into on-demand re-optimization in order to overcome the problems of many unnecessary re-optimization steps and of adaptation delays during which optimization opportunities are missed. This approach ensures low optimization overhead and fast workload adaptation.
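One possible reading of the on-demand re-optimization idea is sketched below: a plan statistic is maintained incrementally after every flow instance, and re-optimization is triggered only when that statistic drifts far enough from the value the current plan was optimized for. The monitored statistic (an operator selectivity), the smoothing rule, the drift threshold, and the optimizer stub are all assumptions for illustration, not the techniques defined in the thesis.

```python
# Sketch of an on-demand re-optimization trigger. The monitored statistic (an
# operator selectivity), the smoothing rule, the drift threshold, and the
# optimizer stub are placeholders for illustration only.
class FlowOptimizer:
    def __init__(self, drift_threshold=0.2):
        self.avg_selectivity = None     # incrementally maintained statistic
        self.opt_selectivity = None     # value the current plan was optimized for
        self.n = 0
        self.drift_threshold = drift_threshold

    def after_instance(self, observed_selectivity):
        """Called once per executed flow instance: cheap statistics update."""
        self.n += 1
        if self.avg_selectivity is None:
            self.avg_selectivity = observed_selectivity
        else:  # exponential smoothing keeps per-instance maintenance cost constant
            self.avg_selectivity = 0.9 * self.avg_selectivity + 0.1 * observed_selectivity
        drifted = self.opt_selectivity is None or (
            abs(self.avg_selectivity - self.opt_selectivity) / self.opt_selectivity
            > self.drift_threshold
        )
        if drifted:
            self.reoptimize()

    def reoptimize(self):
        # stand-in for cost-based plan re-optimization
        self.opt_selectivity = self.avg_selectivity
        print(f"re-optimized at instance {self.n} for selectivity {self.opt_selectivity:.2f}")

opt = FlowOptimizer()
for s in [0.50, 0.52, 0.49, 0.20, 0.18, 0.17, 0.16, 0.15]:
    opt.after_instance(s)
```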
7 |
Kostnadshyressättning av statliga kulturfastigheter. En alternativ modell [Cost-based rent setting for state-owned cultural properties: an alternative model]. Boman, Linda, January 2015.
This essay develops and evaluates an alternative model for setting the rents of state-owned special purpose properties. The study is limited to five cultural institutions in Stockholm that are subject to cost-based rent and operate in buildings managed by the National Property Board (Statens Fastighetsverk): Nationalmuseum, Naturhistoriska riksmuseet, Historiska museet, Dramaten, and Operan.
8 |
A comparative analysis of the cost-based and simplified upper limit approaches for calculating analytical threshold in support of forensic DNA short tandem repeat analysis. Gordon, Daniel Bernard, 01 February 2023.
The determination and application of the analytical threshold (AT) is a vital part of the forensic deoxyribonucleic acid (DNA) internal validation process. The AT is the relative fluorescence unit (RFU) signal at which allelic peaks can be confidently distinguished from baseline noise. Several methods of calculating the AT are currently implemented within the forensic DNA community. These methods may utilize DNA-negative sample data, DNA-positive sample data, or both in their calculations. In this study, two of the DNA-positive-based AT calculation techniques were chosen for assessment and comparison: the simplified upper limit approach (ULA) and the cost-based approach. ATs were calculated for each dye channel using a dilution series of three single-source DNA samples ranging from 0.05 to 0.8 ng. The ATs calculated via the cost-based approach consistently exhibited lower values than those determined via the ULA. As a result, the incidence of allelic drop-out at these AT values was also consistently lower, with an equivalent or only marginally increased incidence of baseline noise drop-in. These results indicate that the cost-based approach may be a more effective and practical method of calculating the AT than the ULA, particularly in the analysis of low-template DNA samples.
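The contrast between the two approaches can be sketched numerically: one common form of an upper-limit threshold is the mean plus a coverage factor times the standard deviation of observed noise peaks, whereas a cost-based threshold scans candidate RFU values and picks the one minimizing a weighted sum of drop-out and drop-in events. The simulated peak heights, the coverage factor, and the equal weights below are assumptions; the exact formulations and data used in the study may differ.

```python
# Illustrative comparison of the two AT approaches on simulated peak heights.
# The noise and allele distributions, the coverage factor k, and the equal
# drop-out/drop-in weights are assumptions, not the study's validated values.
import numpy as np

rng = np.random.default_rng(3)
noise = rng.gamma(shape=2.0, scale=8.0, size=2000)          # baseline noise peaks (RFU)
alleles = rng.normal(loc=120, scale=45, size=300).clip(1)   # low-template allelic peaks (RFU)

# Simplified upper limit approach: mean of noise plus k standard deviations
k = 10
at_ula = noise.mean() + k * noise.std(ddof=1)

# Cost-based approach: scan candidate thresholds and weigh drop-out against drop-in
def cost(at, w_dropout=1.0, w_dropin=1.0):
    dropout = np.mean(alleles < at)   # true allelic peaks lost below the threshold
    dropin = np.mean(noise >= at)     # noise peaks surviving the threshold
    return w_dropout * dropout + w_dropin * dropin

candidates = np.arange(1, 301)
at_cost = candidates[np.argmin([cost(a) for a in candidates])]

print(f"ULA AT = {at_ula:.0f} RFU, cost-based AT = {at_cost} RFU")
```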
9 |
EXAMINING CALIFORNIA’S ASSEMBLY BILL 1629 AND THE LONG-TERM CARE REIMBURSEMENT ACT: DID NURSING HOME NURSE STAFFING CHANGE? Krauchunas, Matthew, 13 April 2011.
California’s elderly population over age 85 is estimated to grow 361% by the year 2050. Many of these elders are frail and highly dependent on caregivers, making them more likely to need nursing home care. A 1998 United States Government Accountability Office report identified poor quality of care in California nursing homes. This report spurred multiple Assembly Bills in California designed to increase nursing home nurse staffing, change the state’s Medi-Cal reimbursement methodology, or both. The legislation culminated in Assembly Bill (AB) 1629, signed into law in September 2004, which included the Long-Term Care Reimbursement Act. This legislation changed the state’s Medi-Cal reimbursement from a prospective, flat-rate methodology to a prospective, cost-based methodology and was designed in part to increase nursing home nurse staffing. It is estimated that this change moved California from the bottom 10% of Medicaid nursing home reimbursement rates nationwide to the top 25%. This study analyzed the effect of AB 1629 on a panel of 567 free-standing nursing homes in continuous operation between 2002 and 2007. Resource Dependence Theory was used to construct the conceptual framework. Ordinary least squares (OLS) and first-differencing with instrumental variable estimation procedures were used to test five hypotheses concerning Medi-Cal resource dependence, bed size, competition (including assisted living facilities and home health agencies), resource munificence, and slack resources. Both 15- and 25-mile fixed radii were used as alternative market definitions instead of counties. The OLS results indicated that case-mix-adjusted licensed vocational nurse (LVN) and total nurse staffing hours per resident day increased overall. Nursing homes with the highest Medi-Cal dependence increased only nurse aide (NA) staffing more than nursing homes with the lowest Medi-Cal dependence post AB 1629. The fixed effects with instrumental variable estimation procedure provided marginal support that nursing homes with more home health agency competition, in a 15-mile market, had higher LVN staffing. This estimation procedure also supported that nursing homes with more slack resources (post AB 1629) increased nurse aide and total nurse staffing, while nursing homes located in markets with a greater percentage of residents over the age of 85 had more nurse aide staffing.
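For readers unfamiliar with the estimation strategy, the first-differencing step can be illustrated as below: differencing each facility's staffing and payment variables across adjacent years removes time-invariant facility effects before the staffing response is estimated by least squares. The panel is simulated, the variable names are placeholders, and the instrumental-variable stage and controls used in the study are omitted.

```python
# Sketch of the first-differencing step on a simulated panel: differencing across
# adjacent years removes time-invariant facility effects before estimating the
# staffing response by least squares. Variable names, magnitudes, and the omitted
# instrumental-variable stage are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(4)
n_homes, n_years = 567, 6
facility_effect = rng.normal(0.0, 0.5, size=(n_homes, 1))   # unobserved, time-invariant
rate = 120 + 8 * np.arange(n_years) + rng.normal(0, 5, size=(n_homes, n_years))  # per-diem rate
staffing = 1.5 + 0.02 * rate + facility_effect + rng.normal(0, 0.1, size=(n_homes, n_years))

# First differences across adjacent years wipe out the facility effect
d_staff = np.diff(staffing, axis=1).ravel()
d_rate = np.diff(rate, axis=1).ravel()

X = np.column_stack([np.ones_like(d_rate), d_rate])
beta = np.linalg.lstsq(X, d_staff, rcond=None)[0]
print(f"estimated staffing change per $1 rate increase: {beta[1]:.3f}  (true value: 0.02)")
```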