271. Applying Calibration to Improve Uncertainty Assessment. Fondren, Mark E. 16 December 2013.
Uncertainty has a large effect on projects in the oil and gas industry because most aspects of project evaluation rely on estimates. Industry routinely underestimates uncertainty, often significantly, and the tendency to do so is nearly universal. The cost associated with underestimating uncertainty, or overconfidence, can be substantial: studies have shown that moderate overconfidence and optimism can result in expected portfolio disappointment of more than 30%. It has been shown that uncertainty can be assessed more reliably through look-backs and calibration, i.e., comparing actual results to probabilistic predictions over time. While many recognize the importance of look-backs, calibration is seldom practiced in industry. I believe a primary reason for this is a lack of systematic processes and software for calibration.
The primary development of my research is a database application that provides a way to track probabilistic estimates and their reliability over time. The Brier score and its components, mainly calibration, are used for evaluating reliability. The system is general in the types of estimates and forecasts that it can monitor, including production, reserves, time, costs, and even quarterly earnings. Forecasts may be assessed visually, using calibration charts, and quantitatively, using the Brier score. The calibration information can be used to modify probabilistic estimation and forecasting processes as needed to make them more reliable. Historical data may be used to externally adjust future forecasts so they are better calibrated. Three experiments with historical data sets of predicted vs. actual quantities, e.g., drilling costs and reserves, are presented and demonstrate that external adjustment of probabilistic forecasts improves future estimates. Consistent application of this approach and database application over time should improve probabilistic forecasts, resulting in improved company and industry performance.
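As an illustration of the kind of scoring the abstract refers to, the sketch below computes a binary-event Brier score and its Murphy decomposition (reliability, resolution, uncertainty), where the reliability term is the calibration component. It is a minimal sketch, not the thesis's database application; the bin count and the example data are assumptions for illustration.

```python
import numpy as np

def brier_decomposition(forecast_probs, outcomes, n_bins=10):
    """Brier score and its Murphy decomposition for binary events.

    forecast_probs: stated probabilities in [0, 1]; outcomes: 0/1 actuals.
    Returns the Brier score plus reliability (the calibration term),
    resolution, and uncertainty, where BS ~= reliability - resolution + uncertainty.
    """
    p = np.asarray(forecast_probs, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    n = len(o)

    brier = np.mean((p - o) ** 2)
    base_rate = o.mean()

    # Group forecasts into probability bins and compare the stated probability
    # with the observed frequency inside each bin.
    bins = np.clip((p * n_bins).astype(int), 0, n_bins - 1)
    reliability = 0.0  # penalty when stated probabilities drift from observed frequencies
    resolution = 0.0   # reward for separating cases away from the overall base rate
    for b in range(n_bins):
        mask = bins == b
        if not mask.any():
            continue
        weight = mask.sum() / n
        obs_freq = o[mask].mean()
        mean_prob = p[mask].mean()
        reliability += weight * (mean_prob - obs_freq) ** 2
        resolution += weight * (obs_freq - base_rate) ** 2
    uncertainty = base_rate * (1.0 - base_rate)

    return {"brier": brier, "reliability": reliability,
            "resolution": resolution, "uncertainty": uncertainty}

# Toy look-back: ten "90% confident" statements of which only six came true,
# i.e. the overconfidence pattern the abstract describes.
stated = [0.9] * 10
came_true = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]
print(brier_decomposition(stated, came_true))
```

In this toy data the reliability (calibration) term is large because events stated at 90% confidence occur only 60% of the time, which is exactly the kind of signal a look-back database would surface.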
272. The effects of inheritance on the properties of physical storage models in object oriented databases. Willshire, Mary Jane. 12 1900.
No description available.
273. Using Economic Models to Tune Resource Allocations in Database Management Systems. Zhang, Mingyi. 17 November 2008.
Resource allocation in a database management system (DBMS) is a performance management process in which an autonomic DBMS makes resource allocation decisions based on properties such as workload business importance. We propose the use of economic models in a DBMS to guide these resource allocation decisions. An economic model describes the allocation problem in terms of business concepts such as trading, and economic models have been successfully applied to other computer system resource allocation problems.
In this thesis, we present approaches that use economic models to allocate single and multiple DBMS resources, such as main memory buffer pool space and system CPU shares, to workloads running concurrently on a DBMS according to the workloads' business importance policies. We first illustrate how economic models can be used to allocate a single DBMS resource, namely system CPU shares, to competing workloads. We then extend the approach to simultaneously allocate multiple DBMS resources, namely buffer pool memory space and system CPU shares, so that workloads are served according to their business importance and can meet their service level agreements. Experiments are conducted using IBM® DB2® databases to verify the effectiveness of our approach. Thesis (Master, Computing), Queen's University, 2008-11-17.
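The abstract does not give the details of the economic model, so the sketch below only illustrates the general idea of importance-weighted, economics-style allocation; it is an assumption for illustration, not the thesis's actual model. Each workload receives wealth in proportion to its business importance, spends chosen fractions of that wealth bidding on resources, and each resource is divided in proportion to the bids it attracts. The workload names, bid fractions, and capacities are invented for the example.

```python
# Toy importance-weighted "market" for DBMS resources (illustrative only).

def allocate(workloads, resources):
    """workloads: {name: {"importance": x, "bids": {resource: fraction_of_wealth}}}
    resources: {name: capacity}. Returns {resource: {workload: allocated_amount}}."""
    total_importance = sum(w["importance"] for w in workloads.values())
    allocation = {}
    for resource, capacity in resources.items():
        # Bid = wealth (importance share) times the fraction of wealth spent here.
        bids = {name: (w["importance"] / total_importance) * w["bids"].get(resource, 0.0)
                for name, w in workloads.items()}
        total_bid = sum(bids.values()) or 1.0  # avoid division by zero if nobody bids
        allocation[resource] = {name: capacity * bid / total_bid
                                for name, bid in bids.items()}
    return allocation

workloads = {
    "OLTP":      {"importance": 3.0, "bids": {"cpu_shares": 0.7, "buffer_pool_mb": 0.3}},
    "Reporting": {"importance": 1.0, "bids": {"cpu_shares": 0.2, "buffer_pool_mb": 0.8}},
}
resources = {"cpu_shares": 100, "buffer_pool_mb": 4096}
print(allocate(workloads, resources))
```

With these inputs the high-importance OLTP workload receives most of the CPU shares while the reporting workload still obtains a sizeable share of the buffer pool, which is the qualitative behaviour an importance-driven policy aims for.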
274. Data base design for integrated computer-aided engineering. Hatchell, Brian. 05 1900.
No description available.
275. CAD/CAM data base management systems requirements for mechanical parts. Whelan, Peter Timothy. 08 1900.
No description available.
276. Thermal/structural integration through relational database management. Benatar, Gil. 05 1900.
No description available.
277. Towards Privacy Preserving of Forensic DNA Databases. Liu, Sanmin. December 2011.
Protecting the privacy of individuals is critical in forensic genetics. In kinship/identity testing, the DNA profiles in the database that are related to the user's query need to be retrieved, but unrelated profiles must not be revealed to either party. The challenge is that today's DNA databases usually contain millions of DNA profiles, which is too large to query directly with current privacy-preserving cryptosystems. In this thesis, we propose a scalable system to support privacy-preserving queries over a DNA database. A two-phase strategy is designed: the first phase is a Short Tandem Repeat (STR) index tree for quickly fetching candidate profiles from disk; it groups the loci of DNA profiles by matching probability so as to reduce the I/O cost required to find a particular profile. The second phase is an Elliptic Curve Cryptosystem based privacy-preserving matching engine, which performs matching between the candidates and the user's sample. In particular, a privacy-preserving DNA profile matching algorithm is designed that achieves O(n) computing time and communication cost. Experimental results show that our system performs well in terms of query latency, query hit rate, and communication cost. For a database of one billion profiles, it takes 80 seconds to return results to the user.
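The abstract does not detail the index structure, so the sketch below only illustrates the candidate-fetching idea in a simplified, in-memory form: profiles are indexed per locus by allele pair, and the rarest (most discriminating) loci are consulted first so the candidate set shrinks quickly before any expensive matching. The locus names, allele frequencies, and exact-match intersection are assumptions for illustration; the thesis's disk-based STR index tree and its ECC-based privacy-preserving matching engine are not reproduced here.

```python
from collections import defaultdict

class STRIndex:
    """In-memory toy index: locus -> allele pair -> set of profile ids."""

    def __init__(self, allele_frequencies):
        # allele_frequencies: {locus: {allele_pair: match probability}} (assumed known)
        self.freq = allele_frequencies
        self.index = defaultdict(lambda: defaultdict(set))

    def add(self, profile_id, profile):
        # profile: {locus: allele_pair}
        for locus, allele_pair in profile.items():
            self.index[locus][allele_pair].add(profile_id)

    def candidates(self, query):
        # Visit loci in order of increasing match probability (rarest first) so the
        # candidate set shrinks as quickly as possible.
        loci = sorted(query, key=lambda locus: self.freq[locus].get(query[locus], 1.0))
        result = None
        for locus in loci:
            ids = self.index[locus].get(query[locus], set())
            result = ids if result is None else result & ids
            if not result:
                break
        return result or set()

freqs = {"D8S1179": {(13, 14): 0.10, (12, 12): 0.02},
         "TH01":    {(6, 9.3): 0.15, (7, 7): 0.05}}
idx = STRIndex(freqs)
idx.add("P1", {"D8S1179": (13, 14), "TH01": (6, 9.3)})
idx.add("P2", {"D8S1179": (12, 12), "TH01": (7, 7)})
print(idx.candidates({"D8S1179": (13, 14), "TH01": (6, 9.3)}))  # -> {'P1'}
```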
278. Data Structures and Reduction Techniques for Fire Tests. Tobeck, Daniel. January 2007.
To perform fire engineering analysis, data on how an object or group of objects burn is almost always needed. This data should be collected and stored in a logical and complete fashion to allow for meaningful analysis later. This thesis details the design of a new fire test Data Base Management System (DBMS) termed UCFIRE, which was built to overcome the limitations of existing fire test DBMS and was based primarily on the FDMS 2.0 and FIREBASEXML specifications. The UCFIRE DBMS is currently the most comprehensive and extensible DBMS available in the fire engineering community and can store the following test types: Cone Calorimeter, Furniture Calorimeter, Room/Corner Test, LIFT and Ignitability Apparatus Tests.
Any data reduction performed on this fire test data should be done in an entirely mechanistic fashion rather than relying on human intuition, which is subjective. Currently no other DBMS allows for the semi-automation of the data reduction process. A number of pertinent data reduction algorithms were investigated and incorporated into the UCFIRE DBMS. An ASP.NET Web Service (WEBFIRE) was built to reduce the bandwidth required to exchange fire test information between the UCFIRE DBMS and a UCFIRE document stored on a web server.
A number of Mass Loss Rate (MLR) algorithms were investigated and it was found that the Savitzky-Golay filtering algorithm offered the best performance. This algorithm had to be further modified to autonomously filter other noisy events that occurred during the fire tests. It was then evaluated on test data from exemplar Furniture Calorimeter and Cone Calorimeter tests.
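As a minimal sketch of the unmodified Savitzky-Golay approach (not the thesis's modified algorithm), the example below derives a mass loss rate curve from noisy, synthetic load-cell data by taking the filter's smoothed first derivative. The sampling interval, window length, polynomial order, and mass curve are assumptions for illustration.

```python
import numpy as np
from scipy.signal import savgol_filter

dt = 2.0                                     # sampling interval in seconds (assumed)
t = np.arange(0.0, 600.0, dt)
true_mass = 10.0 - 8.0 * (1.0 - np.exp(-t / 200.0))          # synthetic specimen mass, kg
measured = true_mass + np.random.normal(0.0, 0.02, t.size)   # add load-cell noise

# MLR is the negative first derivative of mass with respect to time; asking the
# Savitzky-Golay fit for the derivative smooths and differentiates in one step.
mlr = -savgol_filter(measured, window_length=31, polyorder=3, deriv=1, delta=dt)

print("peak MLR: %.2f g/s" % (1000.0 * mlr.max()))
```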
The LIFT test standard (ASTM E 1321-97a) requires its ignition and flame spread data to be scrutinised but does not state how to do this. To meet these requirements, the fundamentals of linear regression were reviewed and an algorithm to mechanistically scrutinise ignition and flame spread data was developed. This algorithm seemed to produce reasonable results when used on exemplar ignition and flame spread test data.
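The thesis's scrutiny criterion is not given in the abstract, so the sketch below only shows the general shape of a mechanistic, regression-based scrutiny step under an assumed rule: fit a straight line and, while the worst point's standardised residual exceeds a threshold, reject it and refit. The 2-sigma threshold and the synthetic transformed-ignition data are assumptions, not the criterion developed in the thesis.

```python
import numpy as np

def scrutinise(x, y, threshold=2.0):
    """Fit y = a*x + b, iteratively rejecting the worst point while its standardised
    residual exceeds the threshold. Returns slope, intercept, and the kept-point mask."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    keep = np.ones(x.size, dtype=bool)
    while True:
        slope, intercept = np.polyfit(x[keep], y[keep], 1)
        residuals = y[keep] - (slope * x[keep] + intercept)
        std = residuals.std(ddof=2)
        worst = int(np.argmax(np.abs(residuals)))
        # Stop when too few points remain or the worst residual is within the threshold
        # (the small absolute tolerance guards against numerically exact fits).
        if keep.sum() <= 3 or abs(residuals[worst]) <= threshold * std + 1e-12:
            return slope, intercept, keep
        keep[np.flatnonzero(keep)[worst]] = False  # reject the outlier and refit

# Synthetic "transformed ignition" data: x could be incident heat flux and y the
# transformed time to ignition, with one deliberately bad point at x = 40.
flux = [20, 25, 30, 35, 40, 45, 50, 55, 60, 65]
y =    [0.060, 0.075, 0.090, 0.105, 0.085, 0.135, 0.150, 0.165, 0.180, 0.195]
slope, intercept, kept = scrutinise(flux, y)
print("slope: %.4f, rejected points: %s" % (slope, np.flatnonzero(~kept).tolist()))
```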
279. The practice of relationship marketing in hotels. Osman, Hanaa. January 2001.
No description available.
280. The capture of meaning in database administration. Robinson, H. M. January 1988.
No description available.