  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

總額預算制度下醫院所有權結構與營運績效關係之研究

劉惠玲 Unknown Date
所有權結構、支付制度與競爭係影響醫院績效之關鍵因子,本研究援用相關文獻之發現,推論出三項因素對醫院績效之關係,並以我國獨特之總額預算制度為研究對象,蒐集、串連與合併不同來源之資料,實證檢視衝量競爭與所有權結構對醫院營運績效與醫療品質之聯合效果。 台灣於民國91年7月實施醫院總額預算制度後,浮動點值制度之設計為醫院間引入了衝量競爭(即虛假價格競爭),而結算後之點值則係反映出醫院間衝量競爭後之結果,醫院除了需面對支付點值所致之財務衝擊外,尚須面對自全民健保實行後,備受醫院詬病之核減制度之衝擊,因此,本文首先嘗試估算醫院受到核減與支付點值所致之財務衝擊程度。無論是國外或國內之研究,對於不同所有權結構醫院之績效表現是否有差異性,一直無法獲得一致性之結論,除了納入營運效率之績效指標外,本研究亦採用疾病別與醫院層級別之醫療品質指標來檢視不同所有權結構醫院之績效表現。更以考量核減與支付點值所致之財務衝擊程度,取代目前文獻僅以總額前、後之二元變數,評估財務衝擊程度對營運效率、醫療品質與財務績效之影響。最後,則是檢視總額預算制度下,醫院受到之財務衝擊度是否會縮小不同所有權結構醫院之營運效率與醫療品質表現之差距。 實證研究發現,不同所有權結構醫院之營運效率並未有顯著差異,但不同所有權結構醫院在某些疾病別品質指標(子宮肌瘤切除手術之住院超過7日機率與再入院率)與醫院層級品質指標(院內感染率與淨死亡率)表現上則有差異性;且公立或非營利醫院受到核減與支付點值之財務衝擊高於私立醫院,因此不同所有權結構醫院之行為與績效存有某些差異性。台灣的醫院在總額預算制度下,若受到之財務衝擊程度愈大,其營運效率會變差、醫療品質也受到負面之影響、財團法人醫院之醫務利益率與稅後淨利率也會降低,但現金流量比則會增加,故財務衝擊愈大,醫院之績效愈低。若同時考量財務衝擊度對不同所有權結構醫院之營運效率與醫療品質之聯合效果後,可發現財務衝擊雖然不會縮小公立(或非營利)醫院與私立醫院營運效率之差距,卻縮小公立(或非營利)醫院與私立醫院醫療品質之差距,故以台灣資料可部分支持「不同所有權結構醫院績效差距縮小之因素係競爭力量之崛起」之論點。 / Hospital ownership, payment system, and competition are all key drivers of hospital performance. This research infers the associations among these three drivers from the related literature and empirically examines the effects of ownership and of the fictitious price competition induced by the floating point-value system on hospital operational performance and quality of care, by combining and merging data from different sources. The claim deduction rate and the floating point-value system are the two most controversial aspects of the payment system, so I first attempt to estimate the hospital financial pressure precipitated by them. To investigate whether for-profit, not-for-profit, and government hospitals differ in operating performance and quality of care, five diagnosis-level and two hospital-level quality indicators are selected.
Unlike prior research, financial pressure is measured from hospital data rather than a binary variable (pre versus post global budget), and I examine its effect on hospital operational efficiency, quality of care, and financial performance. Finally, I test whether the differences in operational efficiency and quality of care among hospitals with different ownership forms narrow as hospital financial pressure increases. The results show that for-profit, not-for-profit, and government hospitals are far more alike than different in operational efficiency, but ownership affects not only the readmission rate and the rate of stays longer than 7 days for uterine myomectomy, but also two hospital-level quality indicators: the nosocomial infection rate and the hospital mortality rate. I also find that government and not-for-profit hospitals incur higher financial pressure than for-profit hospitals. Given these findings, I conclude that ownership affects hospital behavior and performance in terms of quality of care and of the financial pressure arising from the deduction rate and the floating point-value system. The study also shows that financial pressure adversely affects operational efficiency and quality of care; as financial pressure on not-for-profit (foundation) hospitals increases, their operating margin and after-tax net profit ratio decrease while their cash flow ratio increases. Overall, the claim deduction rate and the global budget have a negative impact on hospital performance. This research further considers the joint effect of financial pressure on the differences in quality of care and operational efficiency between for-profit hospitals and the other two ownership types. My results indicate that hospital financial pressure narrows the difference in quality of care between for-profit and not-for-profit (or government) hospitals, but does not narrow the difference in operational efficiency.
This finding partly supports the argument that increased competition forces not-for-profit (or government) hospitals to become increasingly similar to their for-profit counterparts.
102

Approximations polynomiales rigoureuses et applications / Rigorous Polynomial Approximations and Applications

Joldes, Mioara Maria 26 September 2011
Quand on veut évaluer ou manipuler une fonction mathématique f, il est fréquent de la remplacer par une approximation polynomiale p. On le fait, par exemple, pour implanter des fonctions élémentaires en machine, pour la quadrature ou la résolution d'équations différentielles ordinaires (ODE). De nombreuses méthodes numériques existent pour l'ensemble de ces questions et nous nous proposons de les aborder dans le cadre du calcul rigoureux, au sein duquel on exige des garanties sur la précision des résultats, tant pour l'erreur de méthode que l'erreur d'arrondi. Une approximation polynomiale rigoureuse (RPA) pour une fonction f définie sur un intervalle [a,b] est un couple (P, Delta) formé par un polynôme P et un intervalle Delta, tel que f(x)-P(x) appartienne à Delta pour tout x dans [a,b]. Dans ce travail, nous analysons et introduisons plusieurs procédés de calcul de RPAs dans le cas de fonctions univariées. Nous analysons et raffinons une approche existante à base de développements de Taylor. Puis nous les remplaçons par des approximants plus fins, tels que les polynômes minimax, les séries tronquées de Chebyshev ou les interpolants de Chebyshev. Nous présentons aussi plusieurs applications : une relative à l'implantation de fonctions standard dans une bibliothèque mathématique (libm), une portant sur le calcul de développements tronqués en séries de Chebyshev de solutions d'ODE linéaires à coefficients polynomiaux et, enfin, un processus automatique d'évaluation de fonction à précision garantie sur une puce reconfigurable. / For purposes of evaluation and manipulation, mathematical functions f are commonly replaced by approximation polynomials p. Examples include floating-point implementations of elementary functions, quadrature, and the solving of ordinary differential equations (ODEs). A wide range of numerical methods exists for all of these problems.
We consider the application of such methods in the context of rigorous computing, where we need guarantees on the accuracy of the result, with respect to both the truncation and rounding errors. A rigorous polynomial approximation (RPA) for a function f defined over an interval [a,b] is a pair (P, Delta) where P is a polynomial and Delta is an interval such that f(x)-P(x) belongs to Delta for all x in [a,b]. In this work we analyse and introduce several ways of obtaining RPAs for univariate functions. Firstly, we analyse and refine an existing approach based on Taylor expansions. Secondly, we replace them with better approximations such as minimax approximations, truncated Chebyshev series, or Chebyshev interpolation polynomials. Several applications are presented: one concerning the implementation of standard functions in mathematical libraries (libm), another regarding the computation of truncated Chebyshev series expansions of solutions of linear ODEs with polynomial coefficients, and finally an automatic process for function evaluation with guaranteed accuracy on reconfigurable hardware.
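The RPA definition above, a pair (P, Delta) with f(x)-P(x) in Delta for all x in [a,b], can be illustrated with a minimal self-contained sketch. This is not the thesis's library: the function (exp), the degree, and all names are illustrative, and the enclosure comes from the classical Lagrange remainder rather than the refined Taylor-model or Chebyshev-based arithmetic the abstract describes.

```python
import math

def taylor_rpa_exp(a, b, n):
    """Return (coeffs, delta) such that exp(x) - P(x) lies in delta on [a, b].

    P is the degree-n Taylor polynomial of exp centred at the midpoint;
    the Lagrange remainder bound exp(b) * r**(n+1) / (n+1)! (with r the
    half-width of the interval) gives the enclosing interval delta.
    """
    c = (a + b) / 2.0
    r = (b - a) / 2.0
    # Taylor coefficients of exp around c: exp(c) / k!
    coeffs = [math.exp(c) / math.factorial(k) for k in range(n + 1)]
    bound = math.exp(b) * r ** (n + 1) / math.factorial(n + 1)
    return coeffs, (-bound, bound)

def eval_poly(coeffs, c, x):
    """Evaluate sum_k coeffs[k] * (x - c)**k by Horner's rule."""
    acc = 0.0
    for ck in reversed(coeffs):
        acc = acc * (x - c) + ck
    return acc

if __name__ == "__main__":
    a, b, n = 0.0, 1.0, 8
    coeffs, (lo, hi) = taylor_rpa_exp(a, b, n)
    c = (a + b) / 2.0
    # Sample check: the true error must stay inside delta (up to rounding).
    for i in range(101):
        x = a + (b - a) * i / 100.0
        err = math.exp(x) - eval_poly(coeffs, c, x)
        assert lo - 1e-15 <= err <= hi + 1e-15
    print("remainder bound:", hi)
```

Sampling only sanity-checks the enclosure; the point of rigorous computing is that the bound itself is proved, here by the Lagrange remainder formula, and in the thesis by interval arithmetic carried through every step.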
103

SIMD-aware word length optimization for floating-point to fixed-point conversion targeting embedded processors / Optimisation SIMD de la largeur des mots pour la conversion de virgule flottante en virgule fixe pour des processeurs embarqués

El Moussawi, Ali Hassan 16 December 2016
Afin de limiter leur coût et/ou leur consommation électrique, certains processeurs embarqués sacrifient le support matériel de l'arithmétique à virgule flottante. Pourtant, pour des raisons de simplicité, les applications sont généralement spécifiées en utilisant l'arithmétique à virgule flottante. Porter ces applications sur des processeurs embarqués de ce genre nécessite une émulation logicielle de l'arithmétique à virgule flottante, qui peut sévèrement dégrader la performance. Pour éviter cela, l'application est convertie pour utiliser l'arithmétique à virgule fixe, qui a l'avantage d'être plus efficace à implémenter sur des unités de calcul entier. La conversion de virgule flottante en virgule fixe est une procédure délicate qui implique des compromis subtils entre performance et précision de calcul. Elle permet, entre autres, de réduire la taille des données au coût de dégrader la précision de calcul. Par ailleurs, la plupart de ces processeurs fournissent un support pour le calcul vectoriel de type SIMD (Single Instruction Multiple Data) afin d'améliorer la performance. En effet, cela permet l'exécution d'une opération sur plusieurs données en parallèle, réduisant ainsi le temps d'exécution. Cependant, il est généralement nécessaire de transformer l'application pour exploiter les unités de calcul vectoriel. Cette transformation de vectorisation est sensible à la taille des données ; plus leurs tailles diminuent, plus le taux de vectorisation augmente. Il apparaît donc un compromis entre vectorisation et précision de calcul. Plusieurs travaux ont proposé des méthodologies permettant, d'une part la conversion automatique de virgule flottante en virgule fixe, et d'autre part la vectorisation automatique. Dans l'état de l'art, ces deux transformations sont considérées indépendamment, pourtant elles sont fortement liées.
Dans ce contexte, nous étudions la relation entre ces deux transformations, dans le but d'exploiter efficacement le compromis entre performance et précision de calcul. Ainsi, nous proposons d'abord un algorithme amélioré pour l'extraction de parallélisme SLP (Superword Level Parallelism ; une technique de vectorisation). Puis, nous proposons une nouvelle méthodologie permettant l'application conjointe de la conversion de virgule flottante en virgule fixe et de l'exploitation du SLP. Enfin, nous implémentons cette approche sous forme d'un flot de compilation source-à-source complètement automatisé, afin de valider ces travaux. Les résultats montrent l'efficacité de cette approche, dans l'exploitation du compromis entre performance et précision, vis-à-vis d'une approche classique considérant ces deux transformations indépendamment. / In order to cut down their cost and/or their power consumption, many embedded processors do not provide hardware support for floating-point arithmetic. However, applications in many domains, such as signal processing, are generally specified using floating-point arithmetic for the sake of simplicity. Porting these applications onto such embedded processors requires a software emulation of floating-point arithmetic, which can greatly degrade performance. To avoid this, the application is converted to use fixed-point arithmetic instead. Floating-point to fixed-point conversion involves a subtle tradeoff between performance and precision; it enables the use of narrower data word lengths at the cost of degrading the computation accuracy. Besides, most embedded processors provide support for SIMD (Single Instruction Multiple Data) as a means of improving performance. In fact, this allows the execution of one operation on multiple data in parallel, thus reducing the execution time. However, the application usually has to be transformed in order to take advantage of the SIMD instruction set.
This transformation, known as Simdization, is affected by the data word lengths; narrower word lengths enable a higher SIMD parallelism rate. Hence the tradeoff between precision and Simdization. Much existing work has aimed at providing methodologies for automatic floating-point to fixed-point conversion on the one hand, and for Simdization on the other. In the state of the art, both transformations are considered separately even though they are strongly related. In this context, we study the interactions between these transformations in order to better exploit the performance/accuracy tradeoff. First, we propose an improved SLP (Superword Level Parallelism) extraction algorithm (a Simdization technique). Then, we propose a new methodology to jointly perform floating-point to fixed-point conversion and SLP extraction. Finally, we implement this work as a fully automated source-to-source compiler flow. Experimental results, targeting four different embedded processors, show the validity of our approach in efficiently exploiting the performance/accuracy tradeoff compared to a typical approach, which considers both transformations independently.
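The word-length/accuracy tradeoff at the heart of float-to-fixed conversion can be sketched with plain uniform quantization. This is a toy model, not the thesis's conversion flow; the function names and the one-integer-bit format are arbitrary illustrative choices.

```python
def to_fixed(x, frac_bits, word_bits=16):
    """Quantize x to a signed fixed-point word: round(x * 2**frac_bits),
    saturated to the two's-complement range of word_bits bits."""
    scaled = round(x * (1 << frac_bits))
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return max(lo, min(hi, scaled))

def from_fixed(q, frac_bits):
    """Recover the real value represented by the fixed-point word q."""
    return q / (1 << frac_bits)

if __name__ == "__main__":
    x = 0.7071067811865476  # sqrt(2)/2, an arbitrary test value
    for word in (8, 16, 32):
        frac = word - 2  # keep 1 sign bit and 1 integer bit
        q = to_fixed(x, frac, word)
        err = abs(from_fixed(q, frac) - x)
        # Narrower words give a coarser quantization step, hence larger error.
        print(word, err)
```

Halving the word length doubles the number of lanes that fit in a SIMD register, which is precisely why the word-length and Simdization decisions interact instead of being independent.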
104

Optimální odhad stavu modelu navigačního systému / Optimal state estimation of a navigation model system

Papež, Milan January 2013
This thesis investigates the possibility of using fixed-point arithmetic in inertial navigation systems that use the local-level navigation frame mechanization equations. Two square-root filtering methods, Potter's square root Kalman filter and the UD factorized Kalman filter, are compared with the conventional Kalman filter and its Joseph stabilized form. The effect of rounding errors on Kalman filter optimality and on the conditioning of the covariance matrix or its factors is evaluated for various lengths of the fractional part of the fixed-point computational word. The main contribution of this research lies in the evaluation of the minimal fixed-point word length for the Phi-angle error model with noise statistics corresponding to tactical-grade inertial measurement units.
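The word-length experiments can be pictured with a drastically simplified sketch (hypothetical code, not the Potter or UD filters from the thesis): a scalar Kalman measurement update in which every intermediate result is rounded to f fractional bits, showing how the update degrades as the fixed-point word shrinks.

```python
def fx(x, f):
    """Round x to a fixed-point value with f fractional bits."""
    return round(x * (1 << f)) / (1 << f)

def scalar_kf_step(p, r, f=None):
    """One scalar Kalman measurement update: k = p/(p+r), p' = (1-k)*p.
    If f is given, every intermediate is rounded to f fractional bits,
    mimicking a fixed-point implementation; otherwise double precision."""
    q = (lambda v: fx(v, f)) if f is not None else (lambda v: v)
    k = q(p / (p + r))       # Kalman gain, quantized
    return q((1.0 - k) * p)  # updated variance, quantized

if __name__ == "__main__":
    p, r = 2.0, 0.5  # prior variance and measurement noise (toy values)
    exact = scalar_kf_step(p, r)
    for f in (8, 16, 24):
        approx = scalar_kf_step(p, r, f)
        # The error shrinks as the fractional word grows.
        print(f, abs(approx - exact))
```

In the matrix case the same rounding errors can destroy the symmetry or positive definiteness of the covariance matrix, which is what motivates the square-root (Potter, UD) formulations compared in the thesis.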
105

Deriving a Natural Language Processing Inference Cost Model with Greenhouse Gas Accounting : Towards a sustainable usage of Machine Learning / Härledning av en Kostnadsmodell med växthusgasredovisning angående slutledning inom Naturlig Språkbehandling : Mot en hållbar användning av Maskininlärning

Axberg, Tom January 2022
The interest in using State-Of-The-Art (SOTA) Pre-Trained Language Models (PLMs) in product development is growing. The fact that developers can use PLMs has changed the way reliable models are built, and it is the go-to method for many companies and organizations. Selecting the Natural Language Processing (NLP) model with the highest accuracy is the usual way of deciding which PLM to use. However, with growing concerns about negative climate change, we need new ways of making decisions that consider the impact on our future needs. The solution with the highest accuracy might not be the best choice when other parameters matter, such as sustainable development. This thesis investigates how to calculate an approximate total cost, considering Operating Expenditure (OPEX) and CO2 emissions, for a deployed NLP solution over a given period, specifically the inference phase. We try to predict the total cost from Floating Point Operation (FLOP) counts and test NLP models on a classification task. We further present the tools for making energy measurements and examine FLOP as a metric for predicting costs. Using a bottom-up approach, we investigate the components that affect the cost and measure the energy consumption of different deployed models. By constructing this cost model and testing it against real-life examples, essential information about a given NLP implementation and the relationship between monetary and environmental costs is derived. The literature studies reveal that deriving such a cost model is complex, and the results confirm that approximating energy costs is not a straightforward procedure. Even though a cost model was not feasible to derive with the given resources, this thesis covers the area and shows why it is complex by examining FLOP. / Intresset att använda State-Of-The-Art (SOTA) Pre-Trained Language Model (PLM) i produktutveckling växer.
Det faktum att utvecklare kan använda PLM har förändrat sättet att träna tillförlitliga modeller på och det är den bästa metoden för många företag och organisationer att använda SOTA Naturlig Språkbehandling (NLP). Att välja NLP-modellen med högsta noggrannhet är det vanliga sättet att bestämma vilken PLM som ska användas. Men med växande oro för miljöförändringar behöver vi nya sätt att fatta beslut som kommer att påverka våra framtida behov. Denna avhandling undersöker hur man beräknar en ungefärlig totalkostnad med hänsyn till Operating Expenditure (OPEX) och CO2-utsläpp för en utplacerad NLP-lösning under en given period, dvs. slutledningsfasen. Vi försöker förutspå den totala kostnaden med flyttalsoperationer och testar mot en klassificeringsuppgift. Vi undersöker verktygen för att göra mätningar samt variabeln flyttalsoperationer för att förutspå energiförbrukning.
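The kind of FLOP-based cost model the thesis investigates can be sketched as follows. Every constant here (energy per FLOP, electricity price, grid carbon intensity) is an assumed placeholder, not a figure from the thesis, and the thesis's own conclusion is precisely that such a linear FLOP-to-energy mapping is hard to justify in practice.

```python
def inference_cost(flops_per_query, queries,
                   joules_per_flop=1e-11,   # assumed hardware efficiency
                   price_per_kwh=0.20,      # assumed electricity price (USD)
                   kgco2_per_kwh=0.25):     # assumed grid carbon intensity
    """Rough OPEX and CO2 estimate for serving `queries` inferences,
    assuming energy scales linearly with FLOP count (a fragile assumption,
    as the thesis's measurements suggest)."""
    joules = flops_per_query * queries * joules_per_flop
    kwh = joules / 3.6e6  # 1 kWh = 3.6e6 J
    return {"kWh": kwh, "USD": kwh * price_per_kwh, "kgCO2": kwh * kgco2_per_kwh}

if __name__ == "__main__":
    # Illustrative scale: a BERT-base-sized forward pass is on the order of
    # 2e10 FLOPs; serve one million queries.
    est = inference_cost(2.2e10, 1_000_000)
    print(est)
```

A bottom-up model like this only becomes credible once the joules-per-FLOP factor is replaced by actual energy measurements of the deployed model, which is what the thesis's measurement tooling is for.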
106

Contributions à la vérification formelle d'algorithmes arithmétiques / Contributions to the Formal Verification of Arithmetic Algorithms

Martin-Dorel, Erik 26 September 2012
L'implantation en Virgule Flottante (VF) d'une fonction à valeurs réelles est réalisée avec arrondi correct si le résultat calculé est toujours égal à l'arrondi de la valeur exacte, ce qui présente de nombreux avantages. Mais pour implanter une fonction avec arrondi correct de manière fiable et efficace, il faut résoudre le «dilemme du fabricant de tables» (TMD en anglais). Deux algorithmes sophistiqués (L et SLZ) ont été conçus pour résoudre ce problème, via des calculs longs et complexes effectués par des implantations largement optimisées. D'où la motivation d'apporter des garanties fortes sur le résultat de ces pré-calculs coûteux. Dans ce but, nous utilisons l'assistant de preuves Coq. Tout d'abord nous développons une bibliothèque d'«approximation polynomiale rigoureuse», permettant de calculer un polynôme d'approximation et un intervalle bornant l'erreur d'approximation à l'intérieur de Coq. Cette formalisation est un élément clé pour valider la première étape de SLZ, ainsi que l'implantation d'une fonction mathématique en général (avec ou sans arrondi correct). Puis nous avons implanté en Coq, formellement prouvé et rendu effectif 3 vérifieurs de certificats, dont la preuve de correction dérive du lemme de Hensel que nous avons formalisé dans les cas univarié et bivarié. En particulier, notre «vérifieur ISValP» est un composant clé pour la certification formelle des résultats générés par SLZ. Ensuite, nous nous sommes intéressés à la preuve mathématique d'algorithmes VF en «précision augmentée» pour la racine carré et la norme euclidienne en 2D. Nous donnons des bornes inférieures fines sur la plus petite distance non nulle entre sqrt(x²+y²) et un midpoint, permettant de résoudre le TMD pour cette fonction bivariée. Enfin, lorsque différentes précisions VF sont disponibles, peut survenir le phénomène de «double-arrondi», qui peut changer le comportement de petits algorithmes usuels en arithmétique. 
Nous avons prouvé en Coq un ensemble de théorèmes décrivant le comportement de Fast2Sum avec double-arrondis. / The Floating-Point (FP) implementation of a real-valued function is performed with correct rounding if the output is always equal to the rounding of the exact value, which has many advantages. But to implement a function with correct rounding in a reliable and efficient manner, one has to solve the ``Table Maker's Dilemma'' (TMD). Two sophisticated algorithms (L and SLZ) have been designed to solve this problem, relying on long and complex calculations performed by heavily-optimized implementations. Hence the motivation to provide strong guarantees on these costly pre-computations. To this end, we use the Coq proof assistant. First, we develop a library of ``Rigorous Polynomial Approximation'', allowing one to compute an approximation polynomial and an interval that bounds the approximation error in Coq. This formalization is a key building block for verifying the first step of SLZ, as well as the implementation of a mathematical function in general (with or without correct rounding). Then we have implemented, formally verified, and made effective 3 interrelated certificate checkers in Coq, whose correctness proofs derive from Hensel's lemma, which we have formalized for both the univariate and bivariate cases. In particular, our ``ISValP verifier'' is a key component for formally verifying the results generated by SLZ. Next, we have focused on the mathematical proof of ``augmented-precision'' FP algorithms for the square root and the Euclidean 2D norm. We give tight lower bounds on the minimum non-zero distance between sqrt(x²+y²) and a midpoint, allowing one to solve the TMD for this bivariate function. Finally, the ``double-rounding'' phenomenon can typically occur when several FP precisions are available, and may change the behavior of some usual small FP algorithms.
We have formally verified in Coq a set of results describing the behavior of the Fast2Sum algorithm with double-roundings.
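For context, the Fast2Sum algorithm whose double-rounding behavior is verified in Coq is, in its plain binary64 form, the classical error-free transformation below. This is a sketch of the textbook algorithm (Dekker's theorem: for binary floating point with round-to-nearest and |a| >= |b|, s + t equals a + b exactly), not of the Coq development itself.

```python
def fast2sum(a, b):
    """Fast2Sum: assuming |a| >= |b|, return (s, t) with s = fl(a + b)
    and s + t == a + b exactly, in round-to-nearest binary floating point."""
    s = a + b   # rounded sum
    z = s - a   # the part of b that made it into s (exact under |a| >= |b|)
    t = b - z   # the rounding error of s = a + b (exact)
    return s, t

if __name__ == "__main__":
    # 2**-60 is far below the rounding precision of 1.0, so s absorbs
    # nothing of b and t recovers it exactly.
    s, t = fast2sum(1.0, 2.0 ** -60)
    print(s, t)  # 1.0 and 2**-60
```

The double-rounding results verified in the thesis concern what happens to this exactness when the intermediate operations are first rounded to a wider precision and then to the target one, as on hardware with extended-precision registers.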
