1231

Breaking the Weak Governance Curse: Global Regulation and Governance Reform in Resource-rich Developing Countries

Ferreira, Patricia 11 December 2012 (has links)
There is growing consensus that unless resource-rich developing countries improve their domestic governance systems, rising exploitation of mineral, oil and gas resources may result in long-term adverse developmental outcomes associated with the "resource curse". Despite the consensus, reforms do not abound. This dissertation investigates the obstacles to such reforms, and the mechanisms and strategies that might overcome them. I argue that two trapping mechanisms bind these countries to a "weak governance curse". One is path dependence, which makes a dysfunctional governance path initiated at a past historical juncture resistant to change over time. The other is the rent-seeking behaviour associated with high resource rents, which creates perverse incentives for political and economic actors to resist reform. The Law and Development literature has recently produced a rich body of knowledge on governance reform in developing countries, yet it has largely neglected the potential role of innovative global regulatory mechanisms, beyond development assistance, in this process. I argue that this evolving literature ought to draw on global regulation studies to investigate the interaction between unconventional global regulatory mechanisms and domestic governance reform. In this thesis I analyze whether extraterritorial home-country regulations, such as anti-bribery, anti-money-laundering and securities disclosure regulations, and transnational public-private partnerships, such as the Extractive Industries Transparency Initiative, may offer institutional opportunities for external and internal actors to facilitate policy reform in resource-rich, governance-poor countries. My conclusion is twofold. First, there is reason for cautious optimism about the potential of unconventional global regulatory mechanisms to generate positive feedback effects in domestic governance reform: these mechanisms can open innovative institutional pathways of influence to outsiders and insiders promoting governance reform. Second, rather than searching for a regulatory silver bullet, the most promising way to promote reform in resilient dysfunctional governance systems is to make use of the wide range of conventional and unconventional mechanisms available. A constellation of regulatory instruments allows outside and inside reformers to draw on a different policy mix of available mechanisms, depending on the specific circumstances of a given country at a particular time.
1232

Estimation And Hypothesis Testing In Stochastic Regression

Sazak, Hakan Savas 01 December 2003 (has links) (PDF)
Regression analysis is widely used across many fields, but most researchers rely on classical methods that assume X is nonstochastic and the errors are normally distributed. In real-life problems, however, X is generally stochastic and the errors can be nonnormal. The maximum likelihood (ML) estimation technique, although known for its optimal properties, becomes very problematic when the distribution of X (the marginal part) or of the error (the conditional part) is nonnormal. The modified maximum likelihood (MML) technique, which yields estimators asymptotically equivalent to the ML estimators, makes estimation and hypothesis testing feasible under nonnormal marginal and conditional distributions. In this study we show that MML estimators are highly efficient and robust. Moreover, test statistics based on the MML estimators are considerably more powerful and robust than the commonly used test statistics based on least squares (LS) estimators. Theoretically, MML estimators are asymptotically minimum variance bound (MVB) estimators, and simulation results show that they remain highly efficient even for small sample sizes. In this thesis, the Weibull and Generalized Logistic distributions are used for illustration, and the results given are based on these distributions. As future work, the MML technique can be extended to other types of distributions, and the procedures based on bivariate data can be extended to multivariate data.
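The efficiency loss that motivates MML can be seen in a small simulation. The sketch below is illustrative only: it does not implement the thesis's distribution-specific MML estimators, but contrasts the least-squares slope with a Theil-Sen (median-of-pairwise-slopes) estimator under a stochastic design and heavy-tailed errors; all settings are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_once(n=20, beta=1.0):
    # Stochastic design: X itself is random, as in stochastic regression.
    x = rng.normal(0.0, 1.0, n)
    # Long-tailed (nonnormal) errors, here Student-t with 3 df.
    e = rng.standard_t(3, n)
    y = beta * x + e
    # Least-squares slope.
    b_ls = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    # A simple robust alternative (NOT the thesis's MML): Theil-Sen slope.
    i, j = np.triu_indices(n, 1)
    b_ts = np.median((y[j] - y[i]) / (x[j] - x[i]))
    return b_ls, b_ts

est = np.array([simulate_once() for _ in range(5000)])
# Under heavy-tailed errors the robust slope typically shows smaller variance.
print("LS  slope: mean %.3f  var %.4f" % (est[:, 0].mean(), est[:, 0].var()))
print("T-S slope: mean %.3f  var %.4f" % (est[:, 1].mean(), est[:, 1].var()))
```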
1233

A transaction cost analysis of the informatization of government performance management: the case of the Government Program Management Network (GPMnet) / Information and communication technologies (ICTs) and government performance management: A case study of GPMnet in Taiwan

謝叔芳, Hsieh, Hsu Fang Unknown Date (has links)
Since the wave of government reinvention in the 1980s, performance management and information and communication technologies have become important tools for raising government performance. Against this background, Taiwan completed the integration of the Government Program Management Network (GPMnet) in 2005 to support the execution of performance management operations. Because information technology spans many dimensions and its effects are wide-ranging, it has provoked debate among optimistic, pessimistic, and pragmatic positions, and the effectiveness of its use requires further evaluation. Building on the relevant literature, this study adopts a transaction cost approach: it first uses a questionnaire survey to understand the attitudes and behavioral preferences of GPMnet users, and then draws on interview data to analyze how information and communication technologies increase and decrease the costs of government performance management. The study uses a mixed methodology, collecting and analyzing both quantitative and qualitative data. For the quantitative part, a questionnaire survey with GPMnet users as the unit of analysis yielded 148 valid samples; for the qualitative part, eight GPMnet users were interviewed, selected across the four permission roles (program organizer, supervising agency, joint review, and research and evaluation), to understand the experiences and views of users holding different permissions. Partial least squares was used to analyze the questionnaire data. The results show that perceived transaction costs of using GPMnet have significant negative relationships with attitudes and with subjective system performance, while the hypotheses relating uncertainty, asset specificity, and frequency of use to transaction costs were not supported by the data. The interview data further show that, within the current institutional environment, the facts that different agencies run different information systems, that GPMnet comprises multiple subsystems, and that paper-based processes persist mean that using GPMnet for performance management adds administrative cost burdens. In actual use, the system reduces administrative transaction costs by preserving past data, providing clearly defined fields, transmitting documents over the network, supporting progress control, and proactively disclosing information; at the same time, learning time that does not repay its cost, time-consuming communication, proofreading, information overload, an unfriendly interface, and system instability increase the transaction costs of performance management. Finally, the study recommends that, for research, the observed variables of structural models be designed more carefully and that information system evaluation theory give weight to the cost perspective; for practice, electronic performance management should be implemented fully, and data backups should be maintained in the GPMnet system environment to reduce information overload. / Governments invest much more attention, time, and money in performance management and evaluation in the public sector today than ever before. To better utilize agency program management systems under the Executive Yuan, the Research, Development and Evaluation Commission (RDEC) completed the planning of the "Policy Program Management Information System" (Government Program network, GPMnet), a common service platform created to integrate various policy implementation management information systems and to enhance the performance of different agencies in program management. The performance of GPMnet itself, however, needs to be evaluated. To evaluate the system, this study presents empirical research built on a transaction cost approach, an approach often used to support the idea that information and communication technology benefits the economic system. Data were collected with a mixed methodology, combining quantitative data from 148 users with eight semi-structured interviews. Partial least squares was used to analyze the quantitative data. According to the research findings, information-related problems represent only some of the elements contributing to the transaction costs; these costs also emerge from institutional factors that contribute to their growth. The study of the consequences of ICT design and implementation, based on transaction cost theory, should therefore consider the costs of ICTs.
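A minimal, hypothetical sketch of the kind of partial-least-squares analysis named above, using scikit-learn's PLSRegression; the construct indicators and data are invented stand-ins, not the study's survey items.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n = 148  # same size as the study's valid sample; the content is simulated

# Hypothetical survey indicators: two items loading on a latent
# "perceived transaction cost" construct plus one unrelated item.
latent_cost = rng.normal(size=n)
X = np.column_stack([
    latent_cost + rng.normal(scale=0.5, size=n),  # cost indicator 1
    latent_cost + rng.normal(scale=0.5, size=n),  # cost indicator 2
    rng.normal(size=n),                           # unrelated indicator
])
# Outcome: subjective system performance, negatively related to cost.
y = -0.6 * latent_cost + rng.normal(scale=0.5, size=n)

pls = PLSRegression(n_components=2)
pls.fit(X, y)
print("R^2:", pls.score(X, y))
print("coefficients:", pls.coef_.ravel())  # expect negative weights on cost items
```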
1234

Multi-period optimal asset allocation: an application of generalized least squares

劉家銓 Unknown Date (has links)
This thesis focuses on asset-liability management for the insurance industry and pension funds, extending Huang (2004), which derived the unique solution of the multi-period optimal asset allocation problem analytically. That work leaves two issues open: it allows assets to be sold short, and its model handles only a single initial injection of funds rather than injections over multiple periods; both raise problems in actual market operation. This thesis therefore extends that research to resolve these two issues, so that the model can solve the more general asset-liability management problem. The investment targets are those commonly used by pension funds and insurers: short-term bonds, consols, index-linked gilts (ILG), and equities. Monte Carlo simulation generates 4,000 scenarios of the annual returns of the four assets and the annual growth rate of liabilities under the Wilkie (1995) investment model, and these simulated values are used to find the optimal investment proportions and the amounts of funds to inject. The problem is cast as a quadratic function of the decision variables and solved by generalized least squares (GLS); the great advantages of this method are that the GLS solution is unique and that software solves it very quickly, so it is highly efficient. Two problems are examined: the "single-period injection" problem, in which funds are injected only at the beginning, and the "multi-period injection" problem, in which funds can be injected in several periods during the planning horizon. In both cases the objective function can be written in least-squares form, so besides finding reasonable asset allocations and solving the multi-period injection problem, the thesis emphasizes a fast and accurate method for solving the asset allocation problem. / This paper deals with insurance and pension asset-liability management. Huang (2004) derives a theoretical closed-form solution of multi-period asset allocation. However, two further problems remain in his paper. First, short selling is allowed. Second, multi-period investing is not accommodated. These two restrictions can be serious problems in practice. This paper extends his work and relaxes these two restrictions; in other words, we find a solution of multi-period asset allocation so that we can invest money and change the proportions of investment in each period without short-selling problems. In this paper, we use the standard asset classes held by pension or insurance funds, such as short-term bonds, consols, index-linked gilts and equities. We generate thousands of Monte Carlo simulations of the Wilkie investment model (1995) to predict future asset returns. Furthermore, in order to improve time-efficiency and accuracy, we derive a quadratic objective function and obtain a unique solution using sequential quadratic programming.
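As a toy illustration of the core computational idea (an objective that is quadratic in the decision variables can be solved in a single least-squares call with a unique solution), consider the sketch below. The numbers and model are invented, it covers only a single-period funding decision, and it does not enforce the no-short-selling constraint that is the thesis's actual contribution.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: 4 assets, 4000 simulated one-period gross returns (invented
# numbers standing in for Wilkie-model scenarios), and scenario-specific
# liability growth.
n_scen, n_assets = 4000, 4
R = 1.0 + rng.normal(loc=[0.02, 0.03, 0.035, 0.06],
                     scale=[0.01, 0.04, 0.05, 0.15],
                     size=(n_scen, n_assets))
L = 1.0 + rng.normal(0.03, 0.02, n_scen)  # liability growth per scenario

# Decision variable w: amount placed in each asset at the start.
# Objective: minimize the mean squared asset-liability mismatch, i.e. the
# least-squares problem min ||R w - L||^2, whose unique solution lstsq returns.
w, *_ = np.linalg.lstsq(R, L, rcond=None)
# Note: w may contain negative entries (short positions); precluding them
# is exactly the extension the thesis develops.
print("allocation:", np.round(w, 4), " total funding needed:", round(w.sum(), 4))
```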
1235

Statistical estimation of rare-event current values in circuit design / A study of statistical method on estimating rare event in IC Current

彭亞凌, Peng, Ya Ling Unknown Date (has links)
Rare-probability current values lying 4 to 6 standard deviations beyond the mean are one of the keys to present-day integrated-circuit design quality. As accuracy standards rise, simulating circuit data by the Monte Carlo method becomes too time-consuming to be practical, while the earlier approaches of extrapolation from parametric models or regression analysis suffer because the covariates are hard to collect and shrinking operating voltages bias the estimation of the tail of the current distribution; for these reasons tail current values are difficult to estimate. This thesis therefore introduces statistical methods to improve the estimation of rare-probability current values. The observations are first transformed toward approximate normality with the Box-Cox transformation, improving estimation of the tail of the distribution, and the rare current values are then estimated by weighted regression, in which the explanatory variable is the empirical cumulative probability under a log or z-score transformation and the weights down-weight the bulk of the sample so as to stress the information in the extreme observations. The study also considers the case in which complete covariates can be collected, running the weighted regression with the circuit data as explanatory variables, and additionally adopts extreme value theory as an estimation method. Computer simulation is first used to compare the methods, assuming normal, Student-t, and Gamma populations and taking mean squared error as the criterion; the simulation results confirm the feasibility of the weighted-regression approach. Guided by the simulation results, a sample-screening scheme was chosen for the empirical study, whose data come from a technology company in Hsinchu. The empirical results show that weighted regression combined with the Box-Cox transformation can, from a sample of one hundred thousand observations, accurately estimate extreme current values at left- and right-tail probabilities of 10^(-4), 10^(-5), 10^(-6), and 10^(-7). The estimates are more accurate when the explanatory variable uses the log transformation for the right tail and the z-score transformation for the left tail, and if circuit information can be collected as explanatory variables the left-tail estimates are the most accurate of all. Screening to the tail 1% of the sample and using the entire data set each have pros and cons for the accuracy of the different methods, and both are worth considering. In addition, extreme value theory with a 1% threshold estimates the current values at different voltages stably and moderately well, and tends to be the most accurate for short-range extrapolation. / To obtain the tail distribution of current beyond 4 to 6 sigma is nowadays a key issue in integrated circuit (IC) design, and computer simulation is a popular tool for estimating the tail values. Since creating rare events via simulation is time-consuming, linear extrapolation methods (such as regression analysis) are often applied to enhance efficiency. However, past work has shown that the tail values are likely to behave differently as the operating voltage gets lower. In this study, a statistical method is introduced to deal with the lower-voltage case. The data are evaluated via the Box-Cox (or power) transformation to see if they need to be transformed into normally distributed data, followed by weighted regression to extrapolate the tail values. Specifically, the independent variable is the empirical CDF with a logarithm or z-score transformation, and down-weighting is used in order to stress the information in the extreme observations. In addition to regression analysis, Extreme Value Theory (EVT) is adopted in the research. Computer simulation and data sets from a well-known IC manufacturer in Hsinchu are used to evaluate the proposed method with respect to mean squared error. In the computer simulation, the data are assumed to be generated from the normal, Student-t, or Gamma distribution. For the empirical data, there are 10^8 observations, and tail values with probabilities 10^(-4), 10^(-5), 10^(-6), 10^(-7) are set as the study goal given that only 10^5 observations are available. Compared with the traditional methods and EVT, the proposed method has the best performance in estimating the tail probabilities. If the IC current is produced from a regression equation and the information on the independent variables can be provided, weighted regression achieves the best estimation for the left-tailed rare events. EVT can also produce accurate estimates provided that the tail probabilities to be estimated and the observations available are on a similar scale, e.g., probabilities 10^(-5)~10^(-7) vs. 10^5 observations.
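A rough sketch of the pipeline described above: Box-Cox transformation, then a weighted linear fit on transformed empirical-CDF positions, extrapolated to a rare quantile. The data, weights, and threshold below are invented stand-ins, not the thesis's tuned choices.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(3)

# Stand-in "current" sample: 1e5 draws from a skewed distribution
# (invented; the thesis uses real circuit simulations).
x = rng.gamma(shape=3.0, scale=1.0, size=100_000)

# 1) Box-Cox toward normality (requires positive data).
z, lam = stats.boxcox(x)
z_sorted = np.sort(z)

# 2) Weighted linear fit of transformed quantiles against a z-score axis,
#    using only the upper tail and up-weighting the most extreme points.
p = (np.arange(x.size) + 0.5) / x.size   # plotting positions of sorted data
q = stats.norm.ppf(p)                     # z-score transform of the ECDF
tail = q > stats.norm.ppf(0.99)           # keep the top 1% for the fit
w = q[tail] - q[tail].min() + 1e-3        # heavier weight on the extremes
slope, intercept = np.polyfit(q[tail], z_sorted[tail], 1, w=w)

# 3) Extrapolate the 1 - 1e-6 quantile and invert the Box-Cox transform.
z_rare = intercept + slope * stats.norm.ppf(1 - 1e-6)
x_rare = inv_boxcox(z_rare, lam)
print("extrapolated 1-1e-6 quantile:", round(float(x_rare), 3))
print("true quantile:", round(stats.gamma.ppf(1 - 1e-6, a=3.0), 3))
```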
1236

In silico tools in risk assessment: of industrial chemicals in general and non-dioxin-like PCBs in particular

Stenberg, Mia January 2012 (has links)
Industrial chemicals produced in or imported into the European Union in volumes above 1 tonne annually must be registered under REACH. A common problem with these chemicals is deficient information and a lack of data for assessing the hazards they pose to human health and the environment. Animal studies for the type of toxicological information needed are both expensive and time-consuming, and raise ethical concerns as well. Alternatives to animal testing are therefore sought. REACH has called for an increased use of in silico tools for non-testing data, such as structure-activity relationships (SARs), quantitative structure-activity relationships (QSARs), and read-across. The main objective of the studies underlying this thesis is to explore and refine the use of in silico tools in a risk assessment context for industrial chemicals: in particular, to relate properties of the molecular structure to the toxic effects of the chemical substance, using principles and methods of computational chemistry. The initial study was a survey of all industrial chemicals, from which the Industrial chemical map was created; a part of this map containing chemicals of potential concern was identified. Secondly, the environmental pollutants polychlorinated biphenyls (PCBs) were examined, in particular the non-dioxin-like PCBs (NDL-PCBs). A set of 20 NDL-PCBs was selected to represent the 178 PCB congeners with three to seven chlorine substituents. The selection procedure combined statistical molecular design, for a representative selection, with expert judgement, to include congeners of specific interest. The 20 selected congeners were tested in vitro in as many as 17 different assays. The data from the screening process were turned into interpretable toxicity profiles with multivariate methods, used to investigate potential classes of NDL-PCBs. It was shown that NDL-PCBs cannot be treated as one group of substances with similar mechanisms of action. Two groups of congeners were identified. One group, generally comprising lower-chlorinated congeners with a higher degree of ortho substitution, showed higher potency in more assays (including all neurotoxic assays). A second group included abundant congeners with a similar toxic profile that might contribute to a common toxic burden. To investigate the structure-activity pattern of the PCBs' effect on the dopamine transporter (DAT) in rat striatal synaptosomes, ten additional congeners were selected and tested in vitro. NDL-PCBs were shown to be potent inhibitors of DAT binding. The congeners with the highest DAT-inhibiting potency were tetra- and penta-chlorinated with 2-3 chlorine atoms in ortho-position. The model was not able to distinguish the congeners with activities in the lower μM range, which could be explained by a relatively unspecific response for the lower ortho-chlorinated PCBs. / The European chemicals legislation REACH stipulates that chemicals produced or imported in quantities above 1 tonne per year must be registered and risk-assessed; an estimated 30,000 chemicals are affected. The problem, however, is that data and information are often insufficient for a risk assessment. Effect data have largely come from animal studies, but animal testing is both costly and time-consuming, and the ethical aspect also enters. REACH has therefore called for an investigation of the possibility of using in silico tools to supply the requested data and information.
In silico roughly means "in the computer", and refers to computational models and methods used to obtain information about the properties and toxicity of chemicals. The aim of this thesis is to explore the possibilities and refine the use of in silico tools to generate information for the risk assessment of industrial chemicals. The thesis describes quantitative models, developed with chemometric methods, for predicting the toxic effects of specific chemicals. In the first study (I), 56,072 organic industrial chemicals were examined. Multivariate methods were used to create a map of the industrial chemicals describing their chemical and physical properties, and the map was used for comparisons with known and potential environmentally hazardous chemicals. The best-known environmental pollutants proved to have similar principal properties and clustered in the map; closer study of that part of the map could identify further potentially hazardous substances. Studies two to four (II-IV) focused on the environmental pollutant PCB. Twenty PCBs were selected so that they structurally and physicochemically represented the 178 PCB congeners with three to seven chlorine substituents. The toxicological effects of these 20 PCBs were examined in 17 different in vitro assays, and the toxicological profiles of the tested congeners were established, i.e. which have similar adverse effects and which differ. The toxicological profiles were used for classification of the PCBs. Quantitative models were developed for predicting effects of congeners not yet tested and for gaining further knowledge of the structural properties that produce undesirable effects in humans and the environment, information that can be used in a future risk assessment of non-dioxin-like PCBs. The final study (IV) is a structure-activity study examining the non-dioxin-like PCBs' inhibitory effect on the neurotransmitter dopamine in the brain.
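As a hedged sketch of the "chemical map" idea in study I, the following projects an invented descriptor matrix onto its first two principal components; the real map is built from computed physicochemical descriptors for 56,072 chemicals.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)

# Invented descriptor matrix: rows = chemicals, columns = physicochemical
# descriptors (e.g. logP, molecular weight, polarizability).
n_chem, n_desc = 1000, 12
X = rng.normal(size=(n_chem, n_desc))

# Standardize, then project onto the first two principal components to get
# a 2-D "chemical map" in which similar chemicals lie close together.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
pca = PCA(n_components=2)
coords = pca.fit_transform(Xs)
print("explained variance ratio:", pca.explained_variance_ratio_)

# Chemicals falling near known pollutants in such a map would be flagged
# as candidates of potential concern.
```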
1237

Disturbance monitoring in distributed power systems

Glickman, Mark January 2007 (has links)
Power system generators are interconnected in a distributed network to allow sharing of power. If one of the generators cannot meet the power demand, spare power is diverted from neighbouring generators. However, this approach also allows for propagation of electric disturbances. An oscillation arising from a disturbance at a given generator site will affect the normal operation of neighbouring generators and might cause them to fail. Hours of production time will be lost in the time it takes to restart the power plant. If the disturbance is detected early, appropriate control measures can be applied to ensure system stability. The aim of this study is to improve existing algorithms that estimate the oscillation parameters from acquired generator data to detect potentially dangerous power system disturbances. When disturbances occur in power systems (due to load changes or faults), damped oscillations (or "modes") are created. Modes which are heavily damped die out quickly and pose no threat to system stability. Lightly damped modes, by contrast, die out slowly and are more problematic. Of more concern still are "negatively damped" modes which grow exponentially with time and can ultimately cause the power system to fail. Widespread blackouts are then possible. To avert power system failures it is necessary to monitor the damping of the oscillating modes. This thesis proposes a number of damping estimation algorithms for this task. If the damping is found to be very small or even negative, then additional damping needs to be introduced via appropriate control strategies. This thesis presents a number of new algorithms for estimating the damping of modal oscillations in power systems. The first of these algorithms uses multiple orthogonal sliding windows along with least-squares techniques to estimate the modal damping. This algorithm produces results which are superior to those of earlier sliding window algorithms (that use only one pair of sliding windows to estimate the damping). The second algorithm uses a different modification of the standard sliding window damping estimation algorithm: it exploits the fact that the Signal to Noise Ratio (SNR) within the Fourier transform of practical power system signals is typically constant across a wide frequency range. Accordingly, damping estimates are obtained at a range of frequencies and then averaged. The third algorithm applied to power system analysis is based on optimal estimation theory. It is computationally efficient and gives optimal accuracy, at least for modes which are well separated in frequency.
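A minimal sketch of the basic sliding-window idea on a synthetic single-mode signal (the multiple-orthogonal-window, SNR-averaging, and optimal variants above are not reproduced): the Fourier amplitude of a damped mode measured in successive windows decays exponentially, so a least-squares line through the log-amplitudes yields the damping.

```python
import numpy as np

# Synthetic single-mode signal: x(t) = exp(-sigma t) cos(2 pi f0 t) + noise.
fs, T = 100.0, 20.0                 # sample rate (Hz), record length (s)
sigma_true, f0 = 0.15, 1.0          # damping (1/s) and modal frequency (Hz)
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(5)
x = (np.exp(-sigma_true * t) * np.cos(2 * np.pi * f0 * t)
     + 0.01 * rng.standard_normal(t.size))

# Slide a window along the record; in each position, measure the Fourier
# amplitude at the modal frequency.
win, step = int(4 * fs), int(fs)    # 4 s window, 1 s hop
starts = np.arange(0, t.size - win, step)
amps = []
for s in starts:
    spec = np.abs(np.fft.rfft(x[s:s + win]))
    k = int(round(f0 * win / fs))   # FFT bin of the modal frequency
    amps.append(spec[k])

# The log-amplitude decays linearly with window position; a least-squares
# line through it estimates the damping.
tau = starts / fs
slope, _ = np.polyfit(tau, np.log(amps), 1)
print("estimated damping: %.3f  (true %.3f)" % (-slope, sigma_true))
```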
1238

Modelling water droplet movement on a leaf surface

Oqielat, Moa'ath Nasser January 2009 (has links)
The central aim for the research undertaken in this PhD thesis is the development of a model for simulating water droplet movement on a leaf surface and to compare the model behavior with experimental observations. A series of five papers has been presented to explain systematically the way in which this droplet modelling work has been realised. Knowing the path of the droplet on the leaf surface is important for understanding how a droplet of water, pesticide, or nutrient will be absorbed through the leaf surface. An important aspect of the research is the generation of a leaf surface representation that acts as the foundation of the droplet model. Initially a laser scanner is used to capture the surface characteristics for two types of leaves in the form of a large scattered data set. After the identification of the leaf surface boundary, a set of internal points is chosen over which a triangulation of the surface is constructed. We present a novel hybrid approach for leaf surface fitting on this triangulation that combines Clough-Tocher (CT) and radial basis function (RBF) methods to achieve a surface with a continuously turning normal. The accuracy of the hybrid technique is assessed using numerical experimentation. The hybrid CT-RBF method is shown to give good representations of Frangipani and Anthurium leaves. Such leaf models facilitate an understanding of plant development and permit the modelling of the interaction of plants with their environment. The motion of a droplet traversing this virtual leaf surface is affected by various forces including gravity, friction and resistance between the surface and the droplet. The innovation of our model is the use of thin-film theory in the context of droplet movement to determine the thickness of the droplet as it moves on the surface. Experimental verification shows that the droplet model captures reality quite well and produces realistic droplet motion on the leaf surface. Most importantly, we observed that the simulated droplet motion follows the contours of the surface and spreads as a thin film. In the future, the model may be applied to determine the path of a droplet of pesticide along a leaf surface before it falls from or comes to a standstill on the surface. It will also be used to study the paths of many droplets of water or pesticide moving and colliding on the surface.
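The hybrid Clough-Tocher/RBF scheme itself is not reproduced here, but the RBF ingredient can be sketched with SciPy's RBFInterpolator on invented scattered points standing in for laser-scan data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(6)

# Invented stand-in for laser-scanned leaf points: scattered (x, y) sites
# with heights from a smooth test surface plus scan noise.
xy = rng.uniform(-1, 1, size=(500, 2))
z = np.exp(-4 * (xy ** 2).sum(axis=1)) + 0.005 * rng.standard_normal(500)

# A thin-plate-spline RBF gives a smooth surface; `smoothing` trades
# fidelity to the data against noise rejection.
surf = RBFInterpolator(xy, z, kernel='thin_plate_spline', smoothing=1e-4)

# Evaluate on a grid, e.g. as a basis for tracing a droplet path downhill.
g = np.linspace(-1, 1, 50)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
heights = surf(grid)
print("fitted surface range:", heights.min().round(3), "to", heights.max().round(3))
```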
1239

A rigorous approach to the technical implementation of legally defined marine boundaries

Fraser, Roger W. January 2007 (has links) (PDF)
The management and administration of legally defined marine boundaries in Australia is subject to a variety of political, legal and technical challenges. The purpose of this thesis is to address three of the technical challenges faced in the implementation of marine boundaries which cannot be dealt with by applying conventional land cadastre and land administration principles. The three challenges that are identified and addressed are (i) marine boundary delimitation and positioning uncertainty, (ii) the construction and maintenance of four dimensional marine parcels, and (iii) the modelling and management of marine boundary uncertainty metadata.
1240

Properties and estimation for the generalized exponential distribution

Κάτρης, Χρήστος 12 April 2010 (has links)
We begin with a historical review and a presentation of the two-parameter generalized exponential distribution (distribution function, probability density function, etc.), noting the distribution's basic characteristics. Basic definitions and theorems are then stated, mainly concerning point parametric estimation and Bayesian estimation. The next chapter deals with the analysis of the model and the basic properties of the generalized exponential distribution; special topics, such as survival functions, Fisher information, order statistics, the distribution of the sum, and random number generation, are also studied within the framework of the generalized exponential distribution. Point estimation methods (maximum likelihood, method of moments, percentile estimation, least and weighted least squares, L-moments) are then analyzed and applied to estimate the parameters of the distribution, and the performance of the estimators under the various estimation methods is studied. Bayesian estimation of the parameters follows (under squared-error and LINEX loss functions, respectively), again with conclusions about estimator performance and a comparison with the maximum likelihood estimators. Finally, we present the approximation of an actuarial table via the generalized exponential distribution. / In the beginning, we give a historical review and a presentation of the 2-parameter Generalized exponential distribution (distribution type, probability density function etc.), and we also mention basic characteristics of the distribution. Basic definitions and theorems about point estimation and Bayes estimation are reported. Furthermore, we discuss the analysis of the model and basic properties of the Generalized exponential distribution. Special themes, such as survival functions, Fisher information, order statistics, sum distribution and production of random numbers, are analyzed in the frame of the Generalized exponential distribution. Moreover, we analyze and apply point estimation methods (maximum likelihood, method of moments, percentile estimation, least (and weighted least) squares, method of L-moments) in order to estimate the parameters of the distribution. The performance of the estimators for the different estimation methods is also analyzed. Next, Bayesian estimation of the parameters (under squared error loss and LINEX loss functions) is discussed. We also analyze the performance of the estimators and compare them to the maximum likelihood estimators. Finally, we present the approximation of an actuarial table via the Generalized exponential distribution.
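As a brief sketch of two of the ingredients above, random number generation by inverting the CDF and maximum likelihood estimation, for the generalized exponential distribution F(x) = (1 - exp(-λx))^α; the parameter values below are invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Generalized exponential distribution: F(x) = (1 - exp(-lam*x))**alpha, x > 0.
alpha_true, lam_true = 2.5, 1.2

# Random numbers by inverting the CDF: x = -ln(1 - u**(1/alpha)) / lam.
u = rng.uniform(size=2000)
x = -np.log(1 - u ** (1 / alpha_true)) / lam_true

def negloglik(theta):
    a, lam = theta
    if a <= 0 or lam <= 0:
        return np.inf
    # log f(x) = log a + log lam - lam*x + (a - 1) * log(1 - exp(-lam*x))
    return -np.sum(np.log(a) + np.log(lam) - lam * x
                   + (a - 1) * np.log1p(-np.exp(-lam * x)))

res = minimize(negloglik, x0=[1.0, 1.0], method='Nelder-Mead')
print("MLE alpha, lambda:", np.round(res.x, 3), " true:", (alpha_true, lam_true))
```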
