21 |
Evaluation of the efficacy of different best management practices under current and future climate regimes in Ludlow watershed
Fan, Rong 16 October 2015 (has links)
No description available.
|
22 |
Window-based Cost-effective Auto-scaling Solution with Optimized Scale-in Strategy
Perera, Ashansa January 2016 (has links)
Auto-scaling is a major way of minimizing the gap between the demand for and the availability of computing resources for applications with dynamic workloads. Even though a lot of effort has gone into addressing the auto-scaling requirements of distributed systems, most of the available solutions are application-specific and focus only on fulfilling application-level requirements. Today, with the pay-as-you-go model of cloud computing, cloud providers offer many different price plans, which makes resource price an important decision-making criterion at auto-scaling time. One major step is using spot instances, which are more advantageous in terms of cost for elasticity. However, using spot instances for auto-scaling must be handled carefully to avoid their drawbacks, since spot instances can be terminated at any time by the infrastructure provider. Although some cloud providers such as Amazon Web Services and Google Compute Engine have their own auto-scaling solutions, these do not pursue the goal of cost-effectiveness. In this work, we introduce an auto-scaling solution targeted at middle layers between the cloud and the application, such as Karamel. Our work combines minimizing the cost of the deployment with meeting the demand for resources. Our solution is a rule-based system built on top of resource utilization metrics, as a more general measure of workload. Further, machine terminations and the billing period of instances are taken into account as cloud-source events. Different strategies, such as window-based profiling, dynamic event profiling, and an optimized scale-in strategy, are used to achieve our main goal of providing a cost-effective auto-scaling solution for cloud-based deployments. With the help of our simulation methodology, we explore the parameter space to find the best values under different workloads. Moreover, our cloud-based experiments show that our solution performs much more economically compared to the available cloud-based auto-scaling solutions.
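As an illustration of the window-based, rule-based scaling decision described above, the following sketch shows one way such a policy could look; the metric, thresholds, window length, and billing-aware scale-in condition are assumptions for demonstration, not values taken from the thesis.

```python
from collections import deque

class WindowedAutoScaler:
    """Rule-based scaler over a sliding window of utilization samples (illustrative)."""

    def __init__(self, window_size=6, scale_out_threshold=0.75, scale_in_threshold=0.30):
        self.samples = deque(maxlen=window_size)      # recent CPU utilization samples (0..1)
        self.scale_out_threshold = scale_out_threshold
        self.scale_in_threshold = scale_in_threshold

    def observe(self, cpu_utilization):
        self.samples.append(cpu_utilization)

    def decide(self, minutes_to_billing_boundary):
        """Return 'scale_out', 'scale_in', or 'hold' based on the window average."""
        if len(self.samples) < self.samples.maxlen:
            return "hold"                             # not enough profiling data yet
        avg = sum(self.samples) / len(self.samples)
        if avg > self.scale_out_threshold:
            return "scale_out"
        # Billing-aware scale-in: only release a machine close to the end of its
        # billing period, so already-paid-for capacity is not wasted.
        if avg < self.scale_in_threshold and minutes_to_billing_boundary <= 5:
            return "scale_in"
        return "hold"

scaler = WindowedAutoScaler()
for u in [0.82, 0.85, 0.79, 0.88, 0.90, 0.84]:
    scaler.observe(u)
print(scaler.decide(minutes_to_billing_boundary=20))  # -> scale_out
```

Profiling utilization over a window rather than reacting to single samples smooths out transient spikes, and deferring scale-in to the end of a billing period avoids paying for capacity that is then immediately discarded.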
|
23 |
The optimum prepaid monetary incentives for mail surveys
Jobber, David, Saunders, J., Mitchell, V. 2009 July 1920 (has links)
No description available.
|
24 |
Hälsoekonomiska aspekter av magsäcksoperationer : En litteraturstudie / Health economic aspects of bariatric surgery : A literature review
Gånedahl, Hanna, Viklund, Pernilla January 2012 (has links)
Background: Obesity has increased dramatically over the last two decades and has become a major public health issue. Bariatric surgery has become an increasingly common method for treating morbid obesity. The health economic aspects of bariatric surgery have not yet been studied in Sweden. Aim: The aim of the study was to highlight the health economic aspects of bariatric surgery as an intervention against obesity. Method: The method used was a literature review. Eleven scientific studies were selected, analyzed and compiled from a health economic perspective. Results: Bariatric surgery was a cost-effective intervention for treating obesity compared with no intervention, traditional interventions and medical treatment. The results of the studies varied in time to break-even and in the calculated incremental cost-effectiveness ratio; likely reasons for these differences were the studies' different countries of origin and time perspectives. Conclusion: From a health economic perspective, bariatric surgery is recommended as an intervention against obesity. However, ethical aspects should be considered when society's limited financial resources are distributed between different interventions.
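For reference, the incremental cost-effectiveness ratio mentioned above is conventionally the extra cost per extra unit of health effect gained by surgery over its comparator; the notation below is a standard textbook formulation, not taken from the reviewed studies:

```latex
\[
\mathrm{ICER} = \frac{C_{\text{surgery}} - C_{\text{comparator}}}{E_{\text{surgery}} - E_{\text{comparator}}}
\]
```

Here C denotes total costs and E health effects (for example, quality-adjusted life years); break-even is the point at which accumulated savings offset the up-front cost of surgery.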
|
25 |
Cost-effective cardiology in the new national health system in South Africa : a proposal
Cilliers, Willie 12 1900 (has links)
Thesis (MBA (Business Management))--University of Stellenbosch, 2009. / South Africa is on the verge of major changes in the private medical sector. The government's planned National Health Insurance has far-reaching implications for all role players in the industry, as well as for the general public. This paper looks at the changes made since the ANC government came to power in 1994 and then considers possible models for the new National Health Insurance plan. A proposal for practicing cost-effective cardiology within this new system is made. The data of a pilot project between a private service provider and a managed healthcare company is analysed as the basis of this discussion.
|
26 |
Very Cost Effective Partitions in Graphs
Vasylieva, Inna 01 May 2013 (links)
For a graph G = (V, E) and a set of vertices S, a vertex v in S is said to be very cost effective if it is adjacent to more vertices in V - S than in S.
A bipartition π = {S, V - S} is called very cost effective if both S and V - S are very cost effective sets. Not all graphs have a very cost effective bipartition; for example, the complete graphs of odd order do not. We consider several families of graphs G, including Cartesian products and cactus graphs, to determine whether G has a very cost effective bipartition.
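The definition above translates directly into a check on an adjacency list; the following sketch and its example graph are illustrative assumptions, not code from the thesis.

```python
def is_very_cost_effective_set(adj, S):
    """True if every vertex in S has more neighbors outside S than inside S."""
    S = set(S)
    for v in S:
        inside = sum(1 for u in adj[v] if u in S)
        outside = len(adj[v]) - inside
        if outside <= inside:
            return False
    return True

def is_very_cost_effective_bipartition(adj, S):
    """{S, V - S} is very cost effective if both parts are very cost effective sets."""
    complement = set(adj) - set(S)
    return (is_very_cost_effective_set(adj, S)
            and is_very_cost_effective_set(adj, complement))

# 4-cycle a-b-c-d: the bipartition {a, c} / {b, d} is very cost effective,
# since every vertex has both of its neighbors in the opposite part.
cycle4 = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["a", "c"]}
print(is_very_cost_effective_bipartition(cycle4, {"a", "c"}))  # -> True
```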
|
27 |
AUTOMATED ASSESSMENT FOR THE THERAPY SUCCESS OF FOREIGN ACCENT SYNDROME : Based on Emotional Temperature
Chalasani, Trishala January 2017 (links)
Context. Foreign Accent Syndrome (FAS) is a rare neurological disorder in which, among other symptoms, the patient's emotional speech is affected. As FAS is one of the mildest speech disorders, there has not been much research on cost-effective biomarkers that reflect the recovery of speech competences. Objectives. In this pilot study, we implement the Emotional Temperature biomarker and check its validity for assessing FAS. We compare the results of the implemented biomarker with another biomarker based on global distances for FAS and identify the better one. Methods. To reach the objective, the emotional speech data of two patients at different phases of the treatment are considered. After preprocessing, experiments are performed with various window sizes, and the correctly classified instances observed in automatic recognition are used to calculate the Emotional Temperature. Further, we use the better biomarker for tracking the recovery in the patients' speech. Results. The Emotional Temperature of each patient is calculated and compared with the ground truth and with that of the other biomarker. The Emotional Temperature is calculated to track the emergence of compensatory skills in speech. Conclusions. A biomarker based on the frame view of the speech signal has been implemented. The implementation uses a state-of-the-art feature set and is thus an improved version of the classical Emotional Temperature. The biomarker has been used to automatically assess the recovery of two patients diagnosed with FAS. It has been compared against the global-view biomarker and has advantages over it. It has also been compared to human evaluations and captures the same dynamics.
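As a rough illustration of the idea, the sketch below assumes a simplified Emotional Temperature: the percentage of fixed-size analysis windows that a frame-level classifier labels as emotional. The thesis's exact feature set and computation are not reproduced here.

```python
def emotional_temperature(window_labels):
    """Percentage of analysis windows classified as 'emotional' (simplified ET measure)."""
    if not window_labels:
        raise ValueError("no analysis windows")
    emotional = sum(1 for label in window_labels if label == "emotional")
    return 100.0 * emotional / len(window_labels)

# Tracking recovery: compare sessions recorded at different phases of the therapy.
print(emotional_temperature(["emotional", "neutral", "emotional", "emotional"]))  # -> 75.0
```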
|
28 |
Dynamic Question Ordering: Obtaining Useful Information While Reducing User Burden
Early, Kirstin 01 August 2017 (links)
As data become more pervasive and computing power increases, the opportunity for transformative use of data grows. Collecting data from individuals can be useful to the individuals (by providing them with personalized predictions) and to the data collectors (by providing them with information about populations). However, collecting these data is costly: answering survey items, collecting sensed data, and computing values of interest deplete finite resources of time, battery life, money, etc. Dynamically ordering the items to be collected, based on already known information (such as previously collected items or paradata), can lower the costs of data collection by tailoring the information-acquisition process to the individual. This thesis presents a framework for an iterative dynamic item ordering process that trades off item utility with item cost at data collection time. The exact metrics for utility and cost are application-dependent, and this framework can apply to many domains. The two main scenarios we consider are (1) data collection for personalized predictions and (2) data collection in surveys. We illustrate applications of this framework to multiple problems, ranging from personalized prediction to questionnaire scoring to government survey collection. We compare the data quality and acquisition costs of our method to fixed-order approaches and show that our adaptive process obtains results of similar quality at lower cost.

For the personalized prediction setting, the goal of data collection is to make a prediction based on information provided by a respondent. Since it is possible to give a reasonable prediction with only a subset of items, we are not concerned with collecting all items. Instead, we want to order the items so that the user provides the information that most increases prediction quality while not being too costly to provide. One metric for quality is prediction certainty, which reflects how likely the true value is to coincide with the estimated value. Depending on whether the prediction problem is continuous or discrete, we use prediction interval width or predicted class probability to measure the certainty of a prediction. We illustrate the results of our dynamic item ordering framework on tasks of predicting energy costs, student stress levels, and device identification in photographs, and show that our adaptive process achieves error rates equivalent to a fixed-order baseline with cost savings of up to 45%.

For the survey setting, the goal of data collection is often to gather information from a population, and complete responses from all samples are desired. In this case, we want to maximize survey completion (and the quality of any necessary imputations), so we focus on ordering items to engage the respondent and collect, ideally, all the information we seek, or at least the information that most characterizes the respondent so that imputed values will be accurate. One item utility metric for this problem is the information gained toward a "representative" set of answers from the respondent. Furthermore, paradata collected during the survey process can inform models of user engagement that can influence either the utility metric (e.g., the likelihood the respondent will continue answering questions) or the cost metric (e.g., the likelihood the respondent will break off from the survey). We illustrate the benefit of dynamic item ordering for surveys on two nationwide surveys conducted by the U.S. Census Bureau: the American Community Survey and the Survey of Income and Program Participation.
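A minimal sketch of the kind of greedy utility-versus-cost item selection the framework describes; the function names, the utility-per-cost scoring rule, and the toy items are illustrative assumptions, not the thesis's exact formulation.

```python
def next_item(candidates, utility, cost, asked):
    """Greedily pick the unasked item with the best utility-per-cost trade-off."""
    best, best_score = None, float("-inf")
    for item in candidates:
        if item in asked:
            continue
        score = utility(item, asked) / cost(item)   # utility traded off against acquisition cost
        if score > best_score:
            best, best_score = item, score
    return best

def collect(candidates, utility, cost, stop_when, answer):
    """Iteratively ask items until a stopping criterion (e.g., prediction certainty) is met."""
    asked = {}
    while not stop_when(asked):
        item = next_item(candidates, utility, cost, asked)
        if item is None:
            break
        asked[item] = answer(item)                  # collect the response, then re-rank
    return asked

# Toy usage with made-up utilities and costs:
items = ["income", "household_size", "heating_type"]
util = lambda item, asked: {"income": 3.0, "household_size": 1.5, "heating_type": 1.0}[item]
cost_fn = lambda item: {"income": 2.0, "household_size": 1.0, "heating_type": 1.0}[item]
answers = collect(items, util, cost_fn, stop_when=lambda asked: len(asked) >= 2,
                  answer=lambda item: "<response>")
print(list(answers))  # -> ['income', 'household_size']
```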
|
29 |
Mise en oeuvre matérielle de décodeurs LDPC haut débit, en exploitant la robustesse du décodage par passage de messages aux imprécisions de calcul / Efficient Hardware Implementations of LDPC Decoders, through Exploiting Impreciseness in Message-Passing Decoding Algorithms
Nguyen Ly, Thien Truong 03 May 2017 (links)
The increasing demand for massive data rates in wireless communication systems will require significantly higher processing speed of the baseband signal than conventional solutions provide. This is especially challenging for Forward Error Correction (FEC) mechanisms, since FEC decoding is one of the most computationally intensive baseband processing tasks, consuming a large amount of hardware resources and energy. The conventional approach to increasing throughput is to use massively parallel architectures. In this context, Low-Density Parity-Check (LDPC) codes are recognized as the foremost solution, due to the intrinsic capacity of their decoders to accommodate various degrees of parallelism. They have found extensive applications in modern communication systems, due to their excellent decoding performance, high throughput capabilities, and power efficiency, and have been adopted in several recent communication standards.

This thesis focuses on cost-effective, high-throughput hardware implementations of LDPC decoders, achieved by exploiting the robustness of message-passing decoding algorithms to computing inaccuracies. It aims at providing new approaches to cost/throughput optimization through the use of imprecise computing and storage mechanisms, without jeopardizing the error correction performance of the LDPC code. To do so, imprecise processing within the iterative message-passing decoder is considered in conjunction with the quantization process that provides the finite-precision information to the decoder. Thus, we first investigate a low-complexity, code- and decoder-aware quantizer, which is shown to closely approach the performance of a quantizer with decision levels optimized through exhaustive search, and then propose several imprecise designs of Min-Sum (MS)-based decoders. The proposed imprecise designs aim at reducing the size of the memory and interconnect blocks, which are known to dominate the overall area/delay performance of the hardware design. Several approaches are proposed that allow storing the exchanged messages with a lower precision than that used by the processing units, thus enabling significant reductions of the memory and interconnect blocks, with even better or only slightly degraded error correction performance.

We propose two new decoding algorithms and hardware implementations, obtained by introducing two levels of impreciseness in the Offset MS (OMS) decoding: the Partially OMS (POMS), which performs the offset correction only partially, and the Imprecise Partially OMS (I-POMS), which introduces a further level of impreciseness in the check-node processing unit. FPGA implementation results show that they can achieve a significant throughput increase with respect to the OMS, while providing very close decoding performance, despite the impreciseness introduced in the processing units.

We further introduce a new approach for hardware-efficient LDPC decoder design, referred to as Non-Surjective Finite-Alphabet Iterative Decoders (NS-FAIDs). NS-FAIDs are optimized by Density Evolution for regular and irregular LDPC codes. Optimization results reveal different possible trade-offs between decoding performance and hardware implementation efficiency. To validate the promise of optimized NS-FAIDs in terms of hardware implementation benefits, we propose three high-throughput hardware architectures integrating NS-FAID decoding kernels. Implementation results on both FPGA and ASIC technology show that NS-FAIDs allow significant improvements in terms of both throughput and hardware resource consumption compared to the Min-Sum decoder, with even better or only slightly degraded decoding performance.
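For context, a minimal sketch of the standard Offset Min-Sum check-node update that the POMS and I-POMS variants simplify; the message values and offset below are illustrative, and the partial-offset and imprecise variants themselves are not reproduced here.

```python
def oms_check_node_update(incoming, offset=0.5):
    """Offset Min-Sum check-node update: for each edge, combine the signs of the other
    incoming messages with the smallest of their magnitudes, reduced by a fixed offset."""
    outgoing = []
    for i in range(len(incoming)):
        others = incoming[:i] + incoming[i + 1:]
        sign = 1
        for m in others:
            if m < 0:
                sign = -sign
        magnitude = max(min(abs(m) for m in others) - offset, 0.0)
        outgoing.append(sign * magnitude)
    return outgoing

# Example with three variable-to-check messages (LLRs):
print(oms_check_node_update([2.0, -1.5, 3.0]))  # -> [-1.0, 1.5, -1.0]
```

The offset compensates for the magnitude overestimation of the plain Min-Sum approximation; the imprecise variants trade part of this correction for cheaper processing units.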
|
30 |
Extraction of grape seed to produce a proanthocyanidin rich extract
Chikoto, Havanakwavo January 2004 (links)
The aim of this study was to develop a cost-effective process to produce a grape seed extract of high quality using only non-toxic extractants. When this study was started, no grape seed extract was produced in South Africa. Large quantities were imported to supply the local demand in the human and animal herbal medicine industry. Grape seed extract is mainly used to boost the immune system of humans and animals, based on its antioxidant activity.
Initial work with different extractants established the polarity of the compounds with antioxidant activity. Antioxidant-related activity was determined with five analysis techniques. Parameters such as the type, preparation and pre-treatment of grape seed, the ratio of extractant to grape seed, the composition of the extractant, extraction time, extraction temperature, the interaction between temperature and time, drying temperature, and the subsequent treatment of extracts to remove compounds without antioxidant activity were evaluated. In all cases, the cost implications of the different methods used were kept in mind.
Not only the quality but also the quantity extracted is important in establishing a viable extraction plant. According to the patent literature, most techniques used to date produce yields of 0.5 to 2.5 %. The laboratory product went through five stages of development. The percentage extracted for our five laboratory products decreased from 12.0, 10.1, 6.0 and 5.9 to 5.5 %, whereas the antioxidant activity of our product increased from 30, 55, 67 and 78 to 172 % relative to the best available commercial product.
An important reason for the success of the procedure developed is that we analyzed the different products with sophisticated procedures that gave information about the chemical composition of the extract. From this information, procedures could be developed to increase the yield and activity.
The procedure has been licensed to a private company that is in the process of establishing a factory for the large-scale production of grape seed extract. The detail regarding the procedure is confidential, to protect the intellectual property and the industrial exploitation of the process. / Dissertation (MSc)--University of Pretoria, 2004. / Paraclinical Sciences
|