81

A TOOL FOR PERFORMANCE EVALUATION OF REAL-TIME UNIX OPERATING SYSTEMS

Furht, B., Boujarwah, A., Gluch, D., Joseph, D., Kamath, D., Matthews, P., McCarty, M., Stoehr, R., Sureswaran, R. November 1991 (has links)
International Telemetering Conference Proceedings / November 04-07, 1991 / Riviera Hotel and Convention Center, Las Vegas, Nevada / In this paper we present the REAL/STONE Real-Time Tester, a tool for performance evaluation of real-time UNIX operating systems. The REAL/STONE Real-Time Tester is a synthetic benchmark that simulates a typical real-time environment. The tool performs typical real-time operations: it (a) reads data from an external source and accesses it periodically, (b) processes the data through a number of real-time processes, and (c) displays the final data. This study can help users select the most effective real-time UNIX operating system for a given application.
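
The kind of synthetic workload the abstract describes, a periodic reader feeding a chain of processing stages that ends in an output stage, can be sketched in a few lines. The following is a minimal illustration only, not the REAL/STONE tool itself: the stage count, sampling period, and the trivial "processing" step are invented for the example.

```python
import time
from multiprocessing import Pipe, Process

PERIOD_S = 0.01   # hypothetical 10 ms sampling period
N_STAGES = 3      # hypothetical number of real-time processing stages
N_SAMPLES = 200

def stage(inp, outp):
    """One processing stage: receive an item, transform it, forward it."""
    while True:
        item = inp.recv()
        if item is None:              # shutdown marker
            outp.send(None)
            break
        t0, value = item
        outp.send((t0, value * 2.0))  # stand-in for real processing work

def main():
    # Build a chain of stages connected by one-way pipes.
    pipes = [Pipe(duplex=False) for _ in range(N_STAGES + 1)]
    workers = [Process(target=stage, args=(pipes[i][0], pipes[i + 1][1]))
               for i in range(N_STAGES)]
    for w in workers:
        w.start()

    feed, sink = pipes[0][1], pipes[N_STAGES][0]
    latencies = []
    for i in range(N_SAMPLES):
        feed.send((time.perf_counter(), float(i)))  # periodic "external" read
        t0, _ = sink.recv()                         # final "display" stage
        latencies.append(time.perf_counter() - t0)
        time.sleep(PERIOD_S)

    feed.send(None)   # propagate shutdown through the chain
    sink.recv()
    for w in workers:
        w.join()
    print(f"mean pipeline latency: {1e6 * sum(latencies) / len(latencies):.1f} us")

if __name__ == "__main__":
    main()
```

A real tester of this kind would replace the doubled value with representative computation and would report latency distributions rather than a single mean, since worst-case behaviour is what distinguishes real-time operating systems.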
82

A benchmark for impact assessment of affordable housing

Okehielem, Nelson January 2011 (has links)
There is growing recognition of the significance of benchmarking in the built environment; it is seen as a key driver for measuring success criteria in the sector. In spite of the wide application of this technique in the built environment and other sectors, very little is known of it in the affordable housing sub-sector, and where it has been used, components of housing quality were not considered holistically. This study addresses this deficiency by developing a benchmark for assessing affordable housing quality impact factors. As part of this study, a sample of four affordable housing projects was examined in depth. Two projects each were originally selected from five categories of 'operational quality standards' within the United Kingdom; these samples of ten projects were extracted from a total of 80 identified UK affordable housing projects. An investigative study was conducted on these projects, showing varying impact factors and constituent parameters responsible for their quality. The impact criteria identified in these projects were mapped against a unifying set standard and weighted with a 'relative importance index'. Adopting the quality function deployment (QFD) technique, a quality matrix was developed from these quality standards groupings and their impact factors. An affordable housing quality benchmark and an accompanying toolkit evolved from the resultant quality matrix of the project case studies and from a questionnaire administered to practitioners. Whereas the toolkit was empirically tested for reliability and construct validity, the benchmark was refined through the project case studies.
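
The 'relative importance index' weighting mentioned above is conventionally computed as follows in construction management research; the notation here is an assumption, since the abstract does not spell the formula out.

```latex
\[
\mathrm{RII} \;=\; \frac{\sum_{i=1}^{N} w_i}{A \times N}, \qquad 0 \le \mathrm{RII} \le 1,
\]
```

where $w_i$ is the weight a respondent $i$ assigns to an impact factor on a Likert scale, $A$ is the highest possible weight on that scale, and $N$ is the number of respondents. Factors can then be ranked by their RII before being entered into the QFD quality matrix.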
83

Optimal Portfolio in Outperforming Its Liability Benchmark for a Defined Benefit Pension Plan

李意豐, Yi-Feng Li Unknown Date (has links)
Abstract: This paper analyzes the portfolio problem of a pension fund manager who must maximize the probability of reaching his managerial goal before the worst-case shortfall occurs in a defined benefit pension scheme. The fund ratio process, defined as the ratio between the fund level and its accrued liability benchmark, is controlled so as to maximize the probability that the predetermined target is achieved before the ratio falls below an intolerable boundary. The time-varying opportunity set in our study includes risk-free cash, bonds and a stock index. The problem is formulated in a stochastic control framework and solved through dynamic programming. In this study, the optimal portfolio is characterized by three components: the liability hedging component, the intertemporal hedging component against changes in the opportunity set, and the temporal hedging component minimizing the variation in fund ratio growth. Markov chain approximation methods are employed to approximate the stochastic control solutions numerically. The results show that fund managers should hold large proportions of bonds and that the time horizon plays a crucial role in constructing the optimal portfolio. Keywords: shortfall; defined benefit; liability benchmark; stochastic control; dynamic programming.
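
The objective described in the abstract can be stated compactly; the symbols below are chosen to match the abstract's wording and are an assumption rather than the thesis's own notation.

```latex
\[
F_t = \frac{A_t}{L_t}, \qquad
\max_{\pi}\; \mathbb{P}\!\left( \tau_b < \tau_a \right),
\qquad \tau_x = \inf\{\, t \ge 0 : F_t = x \,\}, \quad a < F_0 < b,
\]
```

where $A_t$ is the fund level, $L_t$ the accrued liability benchmark, $\pi$ the investment policy over cash, bonds and the stock index, $b$ the managerial target for the fund ratio, and $a$ the intolerable shortfall boundary. The dynamic programming step then characterizes the optimal $\pi$ via the Hamilton-Jacobi-Bellman equation for this hitting-probability criterion.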
84

Benchmarking the Resilience of Organisations

Stephenson, Amy Victoria January 2010 (has links)
Our world is more technologically advanced and interdependent, risks are increasingly shared across local, regional and national boundaries and we are more culturally diverse than ever before. As a result, communities are increasingly confronted with emergencies and crises which challenge their social and economic stability. To be resilient, communities rely on services and employment provided by organisations, to enable them to plan for, respond to, and recover from emergencies and crises. However organisational and community resilience are two sides of the same coin; if organisations are not prepared to respond to emergencies and crises, communities too are not prepared. Resilient organisations are also better poised to develop competitive advantage. However despite the potential business and performance rewards of becoming more resilient, organisations struggle to prioritise resilience and to allocate resources to resilience, which could be put to more immediate use. To enable organisations to invest in their resilience, the business case for resilience must be better than the case for new equipment or new staff. This thesis develops a methodology and survey tool for measuring and benchmarking organisational resilience. Previous qualitative case study research is reviewed and operationalised as a resilience measurement tool. The tool is tested on a random sample of Auckland organisations and factor analysis is used to further develop the instrument. The resilience benchmarking methodology is designed to guide organisations’ use of the resilience measurement tool and its incorporation into business-as-usual continuous improvement. Significant contributions of this thesis include a new model of organisational resilience, the resilience measurement tool, and the resilience benchmarking methodology. Together these outputs translate the concept of resilience for organisations and provide information on resilience strengths and weaknesses that enable them to proactively address their resilience and to develop a business case for resilience investment.
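
The instrument-development step mentioned above, factor analysis over survey responses, might look like the following in outline. This is a generic sketch, not the thesis's actual analysis: the sample size, item count, factor count, and the use of scikit-learn are all assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical Likert-scale (1-5) responses: 150 organisations x 20 survey items.
rng = np.random.default_rng(42)
responses = rng.integers(1, 6, size=(150, 20)).astype(float)

# Standardise items so loadings are comparable across questions.
z = (responses - responses.mean(axis=0)) / responses.std(axis=0)

# Extract a small number of latent resilience dimensions (count is illustrative).
fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(z)   # per-organisation factor scores
loadings = fa.components_.T    # item-by-factor loading matrix

# Items loading strongly on the same factor suggest which questions measure
# one underlying resilience dimension and which items can be dropped or reworded.
for j in range(loadings.shape[1]):
    top = np.argsort(-np.abs(loadings[:, j]))[:5]
    print(f"factor {j}: strongest items {top.tolist()}")
```

In instrument refinement, the loading matrix is typically rotated and items with weak or cross-loading patterns are revised before the survey is reused for benchmarking.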
85

Analysis and Experimental Comparison of Graph Databases

Kolomičenko, Vojtěch January 2013 (has links)
In recent years a new type of NoSQL database, called the graph database (GDB), has gained significant popularity due to the increasing need to process and store data in the form of a graph. The objective of this thesis is research on the possibilities and limitations of GDBs and an experimental comparison of selected GDB implementations. For this purpose, the requirements of a universal GDB benchmark have been formulated, and an extensible benchmarking tool, named BlueBench, has been developed.
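
A micro-benchmark of the kind such a tool automates can be outlined as below. This is a sketch in Python rather than BlueBench itself (the abstract does not describe BlueBench at API level); the operation names and timing approach are assumptions, and the operation bodies are placeholders each database under test would implement.

```python
import statistics
import time

def bench(name, op, repeats=10):
    """Time one graph-database operation several times; report the median."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        op()
        times.append(time.perf_counter() - t0)
    print(f"{name}: median {1e3 * statistics.median(times):.2f} ms over {repeats} runs")

# Placeholder operations; each GDB under test supplies real implementations
# (bulk vertex insertion, neighbourhood traversal, shortest-path search).
def insert_vertices(): ...
def traverse_neighbours(): ...
def shortest_path(): ...

for name, op in [("insert", insert_vertices),
                 ("traversal", traverse_neighbours),
                 ("shortest-path", shortest_path)]:
    bench(name, op)
```

Reporting medians over repeated runs, rather than a single measurement, guards against warm-up and caching effects that otherwise dominate comparisons between database engines.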
86

Automatické generování umělých XML dokumentů / Automatic Generation of Synthetic XML Documents

Betík, Roman January 2015 (has links)
The aim of this thesis is to research the current possibilities and limitations of automatic generation of synthetic XML documents. The first part of the work discusses the properties of the most widely used XML data generators and compares them to each other. The next part of the thesis proposes an algorithm for XML data generation which focuses on a subset of the main XML data characteristics (number of elements, number of attributes, fan-out, mixed content etc.). The main target of the algorithm is to generate XML documents using a set of settings which are easy to understand. The last part of the work compares the proposed solution with the existing ones. The comparison focuses on how easy it is to generate XML documents, what structures can be created, and finally the properties of similar XML data created using different XML data generators.
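
A generator with the controllable characteristics the abstract lists (element count, attribute count, fan-out, depth) could be sketched as follows. The parameter names and defaults are invented for illustration; this is not the thesis's algorithm.

```python
import random
import xml.etree.ElementTree as ET

def generate(depth=4, fan_out=3, max_attributes=2, seed=0):
    """Generate a synthetic XML tree with adjustable shape parameters."""
    rng = random.Random(seed)

    def grow(parent, level):
        if level == 0:
            parent.text = f"value-{rng.randint(0, 999)}"    # leaf text content
            return
        for _ in range(rng.randint(1, fan_out)):            # fan-out per element
            child = ET.SubElement(parent, f"elem{level}")
            for a in range(rng.randint(0, max_attributes)):  # attribute count
                child.set(f"attr{a}", str(rng.randint(0, 99)))
            grow(child, level - 1)

    root = ET.Element("root")
    grow(root, depth)
    return ET.ElementTree(root)

tree = generate(depth=3, fan_out=4)
tree.write("synthetic.xml", encoding="utf-8", xml_declaration=True)
print(f"elements generated: {sum(1 for _ in tree.getroot().iter())}")
```

Exposing only a handful of intuitive knobs like these, rather than a full statistical profile, is the "easy to understand settings" trade-off the abstract describes.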
88

Deployment of Performance Evaluation Tools in Industrial Use Case

Täuber, Jiří January 2013 (has links)
Nowadays, software performance is evaluated not only by specialized review companies; it is increasingly becoming common practice for software developers themselves. Companies are often forced to develop and maintain their own tools for measuring the performance of the applications they develop. At the Faculty of Mathematics and Physics, a toolkit for automating software performance evaluation, called BEEN, has been created. This toolkit should significantly ease the management of individual performance measurements, but it is not possible to test it thoroughly in the environment where it was created. The goal of this thesis is to deploy BEEN in the real environment of a commercially oriented company and evaluate the usability of the toolkit for developers. We focus on evaluating both the objective and subjective benefits and drawbacks of this toolkit as observed by unbiased users.
89

Benchmarking Open-Source Tree Learners in R/RWeka

Schauerhuber, Michael, Zeileis, Achim, Meyer, David, Hornik, Kurt January 2007 (has links) (PDF)
The two most popular classification tree algorithms in machine learning and statistics - C4.5 and CART - are compared in a benchmark experiment together with two other more recent constant-fit tree learners from the statistics literature (QUEST, conditional inference trees). The study assesses both misclassification error and model complexity on bootstrap replications of 18 different benchmark datasets. It is carried out in the R system for statistical computing, made possible by means of the RWeka package which interfaces R to the open-source machine learning toolbox Weka. Both algorithms are found to be competitive in terms of misclassification error - with the performance difference clearly varying across data sets. However, C4.5 tends to grow larger and thus more complex trees. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
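
The experimental design above, misclassification error and model complexity on bootstrap replications, translates directly into code. This sketch uses scikit-learn decision trees as stand-ins for the C4.5/CART/QUEST learners benchmarked in the paper (C4.5 has no scikit-learn implementation), so the learners, dataset, and replication count are assumptions.

```python
import numpy as np
from sklearn.base import clone
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)   # stand-in benchmark dataset
rng = np.random.default_rng(1)
n = len(y)

learners = {
    "cart-like": DecisionTreeClassifier(random_state=0),
    "pruned": DecisionTreeClassifier(ccp_alpha=0.01, random_state=0),
}

results = {name: {"err": [], "leaves": []} for name in learners}
for _ in range(25):                           # bootstrap replications
    idx = rng.integers(0, n, size=n)          # draw n rows with replacement
    oob = np.setdiff1d(np.arange(n), idx)     # evaluate on out-of-bag rows
    for name, proto in learners.items():
        tree = clone(proto).fit(X[idx], y[idx])
        results[name]["err"].append(np.mean(tree.predict(X[oob]) != y[oob]))
        results[name]["leaves"].append(tree.get_n_leaves())  # complexity proxy

for name, r in results.items():
    print(f"{name}: error {np.mean(r['err']):.3f}, "
          f"mean leaves {np.mean(r['leaves']):.1f}")
```

Tracking leaf counts alongside error mirrors the paper's two-criterion comparison: two learners can tie on accuracy while one consistently grows larger, more complex trees.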
90

A Benchmark for ASP Systems: Resource Allocation in Business Processes

Havur, Giray, Cabanillas, Cristina, Polleres, Axel 26 November 2018 (has links) (PDF)
The goal of this paper is to benchmark Answer Set Programming (ASP) systems to test their performance when dealing with a complex optimization problem. In particular, the problem tackled is resource allocation in the area of Business Process Management (BPM). Like many other scheduling problems, the allocation of resources and starting times to business process activities is a challenging optimization problem for ASP solvers. Our problem encoding is ASP Core-2 standard compliant and it is realized in a declarative and compact fashion. We develop an instance generator that produces problem instances of different size and hardness with respect to adjustable parameters. By using the baseline encoding and the instance generator, we provide a comparison between the two award-winning ASP solvers clasp and wasp and report the grounding performance of gringo and i-dlv. The benchmark suggests that there is room for improvement concerning both the grounders and the solvers. Fostered by the relevance of the problem addressed, of which several variants have been described in different domains, we believe this is a solid application-oriented benchmark for the ASP community. / Series: Working Papers on Information Systems, Information Business and Operations
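
An instance generator of the kind described, emitting problems of adjustable size and hardness, might look like the sketch below, which writes ASP facts as plain text from Python. The predicate names (activity/1, resource/1, duration/2, canPerform/2) and the hardness parameter are invented for illustration and are not the paper's encoding.

```python
import random

def generate_instance(n_activities, n_resources, max_duration=5,
                      skill_density=0.5, seed=0):
    """Emit ASP facts for a random resource-allocation instance."""
    rng = random.Random(seed)
    facts = []
    for a in range(1, n_activities + 1):
        facts.append(f"activity({a}).")
        facts.append(f"duration({a},{rng.randint(1, max_duration)}).")
    for r in range(1, n_resources + 1):
        facts.append(f"resource({r}).")
    # Hardness knob: the sparser the resource/activity capability matrix,
    # the more constrained (and typically harder) the allocation problem.
    for a in range(1, n_activities + 1):
        qualified = [r for r in range(1, n_resources + 1)
                     if rng.random() < skill_density]
        for r in qualified or [rng.randint(1, n_resources)]:  # keep feasible
            facts.append(f"canPerform({r},{a}).")
    return "\n".join(facts)

print(generate_instance(n_activities=8, n_resources=3, skill_density=0.4))
```

The generated facts would be combined with a fixed problem encoding and fed to a grounder/solver pair such as gringo with clasp or wasp, which is how the paper's comparison is set up.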
