261 |
Data management on distributed databases /Wah, Benjamin W. January 1900 (has links)
"Revision of the author's thesis, University of California, Berkeley, 1979." / Includes bibliographical references (p. [273]-281) and index.
|
262 |
Evaluation of potential DSS tool for BDF-HQ manpower and operational equipment resource planning /Alhamdan, Ali M. January 2003 (has links) (PDF)
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, June 2003. / Thesis advisor(s): Daniel R. Dolk, Glenn Cook. Includes bibliographical references (p. 113-114). Also available online.
|
263 |
The design and development of a computer-assisted training management system for the chemical industry (Afrikaans) /Botha, Johannes Lodewikus. January 2001 (has links)
Thesis (Ph. D. (Educational Management and Policy Studies))--University of Pretoria, 2001. / Includes bibliographical references.
|
264 |
Processing and management of uncertain information in vague databases /Lu, An. January 2009 (has links)
Includes bibliographical references (p. 147-159).
|
265 |
Skyline queries in database systems /Fu, Gregory Chung Yin. January 2003 (has links)
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003. / Includes bibliographical references (leaves 51-52). Also available in electronic version. Access restricted to campus users.
|
266 |
GANNET: A machine learning approach to document retrieval /Chen, Hsinchun; Kim, Jinwoo. 12 1900 (has links)
Artificial Intelligence Lab, Department of MIS, University of Arizona / Information science researchers have recently turned to new artificial intelligence-based inductive learning techniques, including neural networks, symbolic learning, and genetic algorithms. An overview of these techniques and their use in information science research is provided. The algorithms adopted for GANNET, a hybrid system based on genetic algorithms and neural networks, are presented. GANNET performed concept (keyword) optimization for user-selected documents during information retrieval using genetic algorithms. It then used the optimized concepts to perform concept exploration in a large network of related concepts through the Hopfield net parallel relaxation procedure. Based on a test collection of about 3,000 articles from DIALOG and an automatically created thesaurus, and using Jaccard's score as a performance measure, the experiment showed that GANNET improved Jaccard's scores by about 50% and helped identify the underlying concepts that best describe the user-selected documents.
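A minimal sketch of the two ingredients named in the abstract: Jaccard's score between keyword sets, and a toy genetic-algorithm loop that optimizes a keyword set against user-selected documents. The keyword representation, fitness function, and GA parameters are illustrative assumptions, not taken from GANNET, and the Hopfield-net concept-exploration stage is omitted.

```python
# Sketch only: illustrative GA over keyword sets, not GANNET's actual algorithm.
import random

def jaccard(a, b):
    """Jaccard's score: |A intersect B| / |A union B| between two keyword sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def fitness(candidate, selected_docs):
    """Average Jaccard overlap between a candidate keyword set and the
    keyword sets of the user-selected documents."""
    return sum(jaccard(candidate, d) for d in selected_docs) / len(selected_docs)

def evolve(selected_docs, vocabulary, pop_size=20, generations=50, seed=0):
    """Toy GA loop: keep the fitter half, breed children by sampling from the
    union of two parents, occasionally toggle one keyword as mutation."""
    rng = random.Random(seed)
    vocab = sorted(vocabulary)
    pop = [set(rng.sample(vocab, 3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, selected_docs), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            pool = sorted(p1 | p2)
            child = set(rng.sample(pool, min(3, len(pool))))
            if rng.random() < 0.2:            # mutation: toggle one keyword
                child ^= {rng.choice(vocab)}
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda c: fitness(c, selected_docs))

# Illustrative "user-selected documents", each reduced to its keyword set.
docs = [{"retrieval", "neural", "network"}, {"retrieval", "genetic", "algorithm"}]
vocab = {"retrieval", "neural", "network", "genetic", "algorithm", "database"}
print(evolve(docs, vocab))   # a small keyword set that best overlaps the selection
```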
|
267 |
COPLINK Knowledge Management for Law Enforcement: Text Analysis, Visualization and Collaboration /Atabakhsh, Homa; Schroeder, Jennifer; Chen, Hsinchun; Chau, Michael; Xu, Jennifer J.; Zhang, Jing; Bi, Haidong. January 2001 (has links)
Artificial Intelligence Lab, Department of MIS, University of Arizona / Crime and police report information is rapidly migrating from paper records to automated records management databases. Most mid- and large-sized police agencies have such systems that provide access to information for their own personnel but lack any efficient way to provide that information to other agencies. Criminals show no regard for jurisdictional boundaries and in fact take advantage of the lack of communication across jurisdictions. Federal standards initiatives such as the National Incident Based Reporting System (NIBRS; US Department of Justice 1998) are attempting to provide reporting standards to police agencies to facilitate future reporting and information sharing among agencies as these electronic reporting systems become more widespread.

We integrated platform independence, stability, scalability, and an intuitive graphical user interface to develop the COPLINK system, which is currently being deployed at Tucson Police Department (TPD). User evaluations of the application allowed us to study the impact of COPLINK on law enforcement personnel as well as to identify requirements for improving the system and extending the project. We are currently extending the functionality of COPLINK in several areas, including textual analysis, collaboration, visualization, and geo-mapping.
|
268 |
Incorporating semantic integrity constraints in a database schema /Yang, Heng-li. 11 1900 (has links)
A database schema should consist of structures and semantic integrity constraints. Semantic integrity constraints (SICs) are invariant restrictions on the static states of the stored data and the state transitions caused by the primitive operations: insertion, deletion, or update. Traditionally, database design has been carried out on an ad hoc basis and focuses on structure and efficiency. Although the E-R model is the popular conceptual modelling tool, it contains few inherent SICs. Also, although the relational database model is the popular logical data model, a relational database in fourth or fifth normal form may still represent little of the data semantics. Most integrity checking is distributed to the application programs or transactions. This approach to enforcing integrity via the application software causes a number of problems.

Recently, a number of systems have been developed for assisting the database design process. However, only a few of those systems try to help a database designer incorporate SICs in a database schema. Furthermore, current SIC representation languages in the literature cannot be used to represent precisely the necessary features for specifying declarative and operational semantics of a SIC, and no modelling tool is available to incorporate SICs.

This research solves the above problems by presenting two models and one subsystem. The E-R-SIC model is a comprehensive modelling tool for helping a database designer incorporate SICs in a database schema. It is application domain-independent and suitable for implementation as part of an automated database design system. The SIC Representation model is used to represent precisely these SICs. The SIC elicitation subsystem would verify these general SICs to a certain extent, decompose them into sub-SICs if necessary, and transform them into corresponding ones in the relational model.

A database designer using these two modelling tools can describe more data semantics than with the widely used relational model. The proposed SIC elicitation subsystem can provide more modelling assistance for him or her than current automated database design systems.
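As a concrete illustration of the contrast this abstract draws between integrity enforced in application programs and SICs declared in the schema, here is a minimal sketch using SQLite from Python: one static-state SIC as a CHECK constraint and one state-transition SIC as a trigger. The table, constraint, and trigger are illustrative assumptions, not taken from the thesis.

```python
# Sketch only: a static-state SIC (CHECK) and a transition SIC (trigger)
# declared in the schema, so the DBMS enforces them on every operation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Static-state SIC declared in the schema: salary must be non-negative.
CREATE TABLE employee (
    emp_id  INTEGER PRIMARY KEY,
    salary  REAL NOT NULL CHECK (salary >= 0)
);

-- State-transition SIC on update: a salary may not be cut by more than half.
CREATE TRIGGER salary_cut_limit
BEFORE UPDATE OF salary ON employee
WHEN NEW.salary < OLD.salary * 0.5
BEGIN
    SELECT RAISE(ABORT, 'salary reduction exceeds 50 percent');
END;
""")

conn.execute("INSERT INTO employee VALUES (1, 1000.0)")
try:
    conn.execute("UPDATE employee SET salary = 100.0 WHERE emp_id = 1")
except sqlite3.DatabaseError as e:
    # The DBMS, not the application program, rejects the violating transition.
    print("rejected:", e)
```

Declaring the restriction once in the schema subjects every application path through the DBMS to it, which is the point of moving SICs out of scattered application code.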
|
269 |
Geometric searching with spacefilling curves /Nulty, William Glenn. 08 1900 (has links)
No description available.
|
270 |
Applying Calibration to Improve Uncertainty Assessment /Fondren, Mark E. 16 December 2013 (has links)
Uncertainty has a large effect on projects in the oil and gas industry, because most aspects of project evaluation rely on estimates. Industry routinely underestimates uncertainty, often significantly, and the tendency to underestimate uncertainty is nearly universal. The cost associated with underestimating uncertainty, or overconfidence, can be substantial. Studies have shown that moderate overconfidence and optimism can result in expected portfolio disappointment of more than 30%. It has been shown that uncertainty can be assessed more reliably through look-backs and calibration, i.e., comparing actual results to probabilistic predictions over time. While many recognize the importance of look-backs, calibration is seldom practiced in industry. I believe a primary reason for this is a lack of systematic processes and software for calibration.
The primary development of my research is a database application that provides a way to track probabilistic estimates and their reliability over time. The Brier score and its components, mainly calibration, are used for evaluating reliability. The system is general in the types of estimates and forecasts it can monitor, including production, reserves, time, costs, and even quarterly earnings. Forecasts may be assessed visually, using calibration charts, and quantitatively, using the Brier score. The calibration information can be used to modify probabilistic estimation and forecasting processes as needed to make them more reliable. Historical data may be used to externally adjust future forecasts so they are better calibrated. Three experiments with historical data sets of predicted vs. actual quantities, e.g., drilling costs and reserves, are presented and demonstrate that external adjustment of probabilistic forecasts improves future estimates. Consistent application of this approach and database application over time should improve probabilistic forecasts, resulting in improved company and industry performance.
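A worked sketch of the reliability measure the abstract names: the Brier score over probabilistic forecasts compared with 0/1 outcomes, plus a simple calibration summary per stated probability. The forecasts and outcomes below are made-up illustrative data, not results from the thesis.

```python
# Sketch only: Brier score and a calibration table for probabilistic forecasts.

def brier_score(forecasts, outcomes):
    """Mean squared difference between stated probabilities and 0/1 outcomes.
    0.0 is perfect; always forecasting 50% earns 0.25."""
    n = len(forecasts)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / n

def calibration_table(forecasts, outcomes):
    """Group forecasts by stated probability and report the observed frequency
    of the event in each group; well-calibrated forecasts match their group."""
    groups = {}
    for f, o in zip(forecasts, outcomes):
        groups.setdefault(f, []).append(o)
    return {p: sum(obs) / len(obs) for p, obs in sorted(groups.items())}

# Illustrative look-back data: stated probabilities vs. what actually happened.
stated = [0.9, 0.9, 0.9, 0.9, 0.7, 0.7, 0.7, 0.5, 0.5, 0.5]
actual = [1,   1,   0,   0,   1,   0,   0,   1,   0,   0]

print("Brier score:", round(brier_score(stated, actual), 3))
print("calibration:", calibration_table(stated, actual))
# Events called "90% likely" happened only 50% of the time: the overconfidence
# pattern the thesis describes, made visible by a look-back.
```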
|