1 |
Konzeptioneller Entwurf und prototypische Implementierung einer Sicherheitsarchitektur für die Java-Data-Objects-Spezifikationen (Conceptual design and prototype implementation of a security architecture for the Java Data Objects specification). Merz, Matthias. January 2007 (has links)
Also published as: doctoral dissertation, University of Mannheim, 2007.
|
2 |
以XML資料庫為儲存體的Java永續物件之研究 / Persistent Java Data Objects on XML Databases. Hou, Yu-Cheng (侯語政). Unknown Date (has links)
Object persistence is a requirement that application designers face constantly. Traditionally, developers have had to convert objects into a format the database can accept and then store them in the backend database. This forces developers to work with two data models at once: besides the object model used by the application, they must also handle the data model used by the backend database, such as relational tables, as well as the mapping between the two. This increases development complexity and makes the system harder to maintain. Java Data Objects (JDO), a newer persistence technology, offers a standard framework that handles object persistence on the developer's behalf, so that applications can be developed against a pure object model. At the same time, the rise of XML technologies has accelerated the use of XML documents for data exchange and storage, and databases dedicated to storing XML documents have become increasingly common. Our research in this thesis is to realize JDO by serializing Java objects as XML documents and using XML databases as the persistent repositories that store the resulting documents.
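To make the JDO side of this concrete, the sketch below persists a plain Java object through the standard javax.jdo API (PersistenceManagerFactory, makePersistent). It is a minimal illustration, not the thesis's implementation: the XML-backed factory class name and the connection URL are assumed placeholders, and the @PersistenceCapable annotation reflects later JDO revisions (early JDO releases declared persistent classes in XML metadata files instead).

```java
import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.Transaction;
import javax.jdo.annotations.PersistenceCapable;

// A domain class the application works with purely as a Java object;
// the JDO implementation decides how to serialize it to the backing store.
@PersistenceCapable
class Customer {
    String name;
    String email;
    Customer(String name, String email) { this.name = name; this.email = email; }
}

public class JdoPersistenceSketch {
    public static void main(String[] args) {
        // Hypothetical configuration: the factory class and connection URL below
        // are placeholders standing in for a JDO-to-XML mapping layer.
        Properties props = new Properties();
        props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                "example.xmljdo.XmlPersistenceManagerFactory");   // assumed name
        props.setProperty("javax.jdo.option.ConnectionURL",
                "xmldb:exist://localhost:8080/db/customers");      // assumed URL

        PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
        PersistenceManager pm = pmf.getPersistenceManager();
        Transaction tx = pm.currentTransaction();
        try {
            tx.begin();
            pm.makePersistent(new Customer("Alice", "alice@example.org"));
            tx.commit();   // the object is written to the configured backend
        } finally {
            if (tx.isActive()) tx.rollback();
            pm.close();
        }
    }
}
```

The point the abstract makes is visible here: the application code touches only the object model, while the JDO implementation decides how the object is serialized into the backing store, in this case an XML database.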
|
3 |
A heuristic information retrieval study : an investigation of methods for enhanced searching of distributed data objects exploiting bidirectional relevance feedback. Petratos, Panagiotis. January 2004 (has links)
The primary aim of this research is to investigate methods of improving the effectiveness of current information retrieval systems. This aim is pursued through several supporting objectives. A foundational objective is to introduce a novel bidirectional, symmetrical fuzzy logic theory which may prove valuable to information retrieval, including internet searches of distributed data objects. A further objective is to design, implement and apply the novel theory in an experimental information retrieval system called ANACALYPSE, which automatically computes the relevance of a large number of unseen documents from expert relevance feedback on a small number of documents read. A further objective is to define the methodology used in this work as an experimental information retrieval framework consisting of multiple tables, including various formulae which allow a plethora of syntheses of similarity functions, term weights, relative term frequencies, document weights, bidirectional relevance feedback and history-adjusted term weights. The evaluation of bidirectional relevance feedback reveals a better correspondence between the system's ranking of documents and users' preferences than feedback-free system ranking. The assessment of similarity functions reveals that the Cosine and Jaccard functions perform significantly better than the DotProduct and Overlap functions. The evaluation of history tracking of the documents visited from a root page reveals better system ranking of documents than tracking-free information retrieval. The assessment of stemming reveals that retrieval performance remains unaffected, while stop-word removal does not appear to be beneficial and can sometimes be harmful. The overall evaluation of the experimental system, compared against a leading-edge commercial information retrieval system and against the expert's gold standard of judged relevance using established statistical correlation methods, reveals enhanced information retrieval effectiveness.
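The Cosine and Jaccard functions the abstract compares are standard similarity measures over term-weight vectors. The sketch below shows generic versions of both in Java; it works on plain term-weight maps and does not reproduce ANACALYPSE's own weighting schemes, bidirectional feedback or history adjustments.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Standard Cosine and Jaccard similarity over term-weight vectors;
// a generic sketch, not the exact formulation used in the thesis.
public class Similarity {

    // Cosine: dot product of the two weight vectors divided by their norms.
    static double cosine(Map<String, Double> d1, Map<String, Double> d2) {
        double dot = 0, norm1 = 0, norm2 = 0;
        for (double w : d1.values()) norm1 += w * w;
        for (double w : d2.values()) norm2 += w * w;
        for (Map.Entry<String, Double> e : d1.entrySet()) {
            Double w2 = d2.get(e.getKey());
            if (w2 != null) dot += e.getValue() * w2;
        }
        return (norm1 == 0 || norm2 == 0) ? 0 : dot / (Math.sqrt(norm1) * Math.sqrt(norm2));
    }

    // Set-based Jaccard: shared terms over all distinct terms of both documents.
    static double jaccard(Map<String, Double> d1, Map<String, Double> d2) {
        Set<String> union = new HashSet<>(d1.keySet());
        union.addAll(d2.keySet());
        Set<String> intersection = new HashSet<>(d1.keySet());
        intersection.retainAll(d2.keySet());
        return union.isEmpty() ? 0 : (double) intersection.size() / union.size();
    }

    public static void main(String[] args) {
        Map<String, Double> query = new HashMap<>();
        query.put("fuzzy", 0.8); query.put("retrieval", 0.6);
        Map<String, Double> doc = new HashMap<>();
        doc.put("retrieval", 0.9); doc.put("feedback", 0.4);
        System.out.printf("cosine=%.3f jaccard=%.3f%n", cosine(query, doc), jaccard(query, doc));
    }
}
```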
|
4 |
Konzeptioneller Entwurf und prototypische Implementierung einer Sicherheitsarchitektur für die Java Data Objects-Spezifikation (Conceptual design and prototype implementation of a security architecture for the Java Data Objects specification). Merz, Matthias. January 1900 (has links)
Also published as: doctoral dissertation, Mannheim, 2007. Includes bibliographical references.
|
5 |
Temporální rozšíření pro Java Data Objects / A Temporal Extension for Java Data Objects. Horčička, Jakub. January 2012 (has links)
The content of this thesis is divided into five parts. First, the basic principles, data models and some temporal database languages are introduced. The next chapter describes techniques for persistently storing data objects in the Java programming language. The following chapters present the main ideas and key concepts of the Java Data Objects standard and a draft of a temporal extension to this standard; the final chapter details the actual implementation.
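The abstract does not spell out the design of the temporal extension, so the following Java sketch is only an assumed illustration of the general idea: a JDO persistent class carries valid-time fields (validFrom and validTo, both hypothetical names), and a JDOQL query restricts results to the versions valid at a given instant.

```java
import java.util.Date;
import java.util.List;
import javax.jdo.PersistenceManager;
import javax.jdo.Query;
import javax.jdo.annotations.PersistenceCapable;

// Illustrative valid-time versioning layered on a plain JDO persistent class.
// A common pattern, not necessarily the design chosen in the thesis.
@PersistenceCapable
class Employee {
    String name;
    double salary;
    Date validFrom;   // start of the period in which this version holds
    Date validTo;     // end of the period (null = still current)
}

class TemporalQueries {
    // Fetch the versions of all employees that are valid at the given instant.
    @SuppressWarnings("unchecked")
    static List<Employee> snapshotAt(PersistenceManager pm, Date instant) {
        Query q = pm.newQuery(Employee.class,
                "validFrom <= t && (validTo == null || validTo > t)");
        q.declareParameters("java.util.Date t");
        return (List<Employee>) q.execute(instant);
    }
}
```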
|
6 |
Contributions to fuzzy object comparison and applications : similarity measures for fuzzy and heterogeneous data and their applications. Bashon, Yasmina Massoud. January 2013 (has links)
This thesis makes an original contribution to knowledge in the field of data object comparison, where the objects are described by attributes of fuzzy or heterogeneous (numeric and symbolic) data types. Many real-world database systems and applications require information management components that provide support for managing such imperfect and heterogeneous data objects. For example, with new online information made available from various sources in semi-structured, structured or unstructured representations, new information usage and search algorithms must allow for data collections that may contain objects/records with different types of data (fuzzy, numerical and categorical) for the same attributes. New similarity approaches are presented in this research to support such data comparison. A generalisation of both geometric and set-theoretical similarity models has enabled the proposal of the new similarity measures presented in this thesis, which handle the vagueness (the fuzzy data type) within data objects. A framework of new and unified similarity measures for comparing heterogeneous objects described by numerical, categorical and fuzzy attributes has also been introduced. Examples are used to illustrate, compare and discuss the applications and efficiency of the proposed approaches to heterogeneous data comparison.
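As a rough illustration of comparing objects with mixed attribute types, the sketch below computes per-attribute similarities (range-normalized difference for numeric values, exact match for categories, and a distance between triangular fuzzy numbers for fuzzy values) and averages them. The individual formulas are simple stand-ins chosen for the example, not the measures defined in the thesis.

```java
// Per-attribute similarities for heterogeneous data, averaged into an overall
// object similarity. The concrete formulas are illustrative stand-ins.
public class HeterogeneousSimilarity {

    // Numeric attribute: 1 minus range-normalized absolute difference.
    static double numeric(double a, double b, double min, double max) {
        return max == min ? 1.0 : 1.0 - Math.abs(a - b) / (max - min);
    }

    // Categorical attribute: exact-match similarity.
    static double categorical(String a, String b) {
        return a.equals(b) ? 1.0 : 0.0;
    }

    // Fuzzy attribute modeled as a triangular fuzzy number (left, peak, right):
    // similarity from the mean normalized distance between the three parameters.
    static double fuzzy(double[] a, double[] b, double min, double max) {
        double range = max - min;
        if (range == 0) return 1.0;
        double d = (Math.abs(a[0] - b[0]) + Math.abs(a[1] - b[1]) + Math.abs(a[2] - b[2]))
                / (3 * range);
        return 1.0 - d;
    }

    public static void main(String[] args) {
        // Two "person" objects: age (numeric), city (categorical), height described
        // fuzzily as "about 175 cm" vs "about 180 cm".
        double sAge = numeric(30, 35, 0, 100);
        double sCity = categorical("Bradford", "Leeds");
        double sHeight = fuzzy(new double[]{170, 175, 180},
                               new double[]{175, 180, 185}, 100, 220);
        double overall = (sAge + sCity + sHeight) / 3.0;
        System.out.printf("age=%.2f city=%.2f height=%.2f overall=%.2f%n",
                sAge, sCity, sHeight, overall);
    }
}
```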
|
7 |
Contributions to fuzzy object comparison and applications. Similarity measures for fuzzy and heterogeneous data and their applications. Bashon, Yasmina M. January 2013 (has links)
This thesis makes an original contribution to knowledge in the field of data object comparison, where the objects are described by attributes of fuzzy or heterogeneous (numeric and symbolic) data types. Many real-world database systems and applications require information management components that provide support for managing such imperfect and heterogeneous data objects. For example, with new online information made available from various sources in semi-structured, structured or unstructured representations, new information usage and search algorithms must allow for data collections that may contain objects/records with different types of data (fuzzy, numerical and categorical) for the same attributes. New similarity approaches are presented in this research to support such data comparison. A generalisation of both geometric and set-theoretical similarity models has enabled the proposal of the new similarity measures presented in this thesis, which handle the vagueness (the fuzzy data type) within data objects. A framework of new and unified similarity measures for comparing heterogeneous objects described by numerical, categorical and fuzzy attributes has also been introduced. Examples are used to illustrate, compare and discuss the applications and efficiency of the proposed approaches to heterogeneous data comparison. / Libyan Embassy
|
8 |
A Dredging Knowledge-Base Expert System for Pipeline Dredges with Comparison to Field Data. Wilson, Derek Alan. 2010 December 1900 (has links)
A Pipeline Analytical Program and Dredging Knowledge-Base Expert System (DKBES) determine a pipeline dredge's production and resulting cost and schedule. Pipeline dredge engineering is a complex and dynamic process necessary to maintain navigable waterways. Dredge engineers use pipeline engineering and slurry transport principles to determine the production rate of a pipeline dredge system. Engineers then use cost engineering factors to determine the expense of the dredge project.

Previous work in engineering incorporated an object-oriented expert system to determine the cost and scheduling of mid-rise building construction, where data objects represent the fundamental elements of the construction process within the program execution. A previously developed dredge cost estimating spreadsheet program, which uses hydraulic engineering and slurry transport principles, determines the performance metrics of a dredge pump and pipeline system. This study focuses on combining hydraulic analysis with the functionality of an expert system to determine the performance metrics of a dredge pump and pipeline system and its resulting schedule.

Field data from the U.S. Army Corps of Engineers pipeline dredge Goetz and several contract daily dredge reports show how accurately the DKBES can predict pipeline dredge production. Real-time dredge instrumentation data from the Goetz compares the accuracy of the Pipeline Analytical Program to actual dredge operation. Comparison of the Pipeline Analytical Program to pipeline daily dredge reports shows how accurately the Pipeline Analytical Program can predict a dredge project's schedule over several months. Both of these comparisons determine the accuracy and validity of the Pipeline Analytical Program and DKBES as they calculate the performance metrics of the pipeline dredge project.

The results of the study show that the Pipeline Analytical Program compared closely to the Goetz field data where only pump and pipeline hydraulics affected the dredge production. Results from the dredge projects show that the Pipeline Analytical Program underestimated actual long-term dredge production. Study results identified key similarities and differences between the DKBES and the spreadsheet program in terms of cost and scheduling. The study then draws conclusions based on these findings and offers recommendations for further use.
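As a simplified illustration of how a production-rate estimate feeds a schedule and cost figure, the sketch below converts an assumed dredging volume and production rate into operating hours, project days and cost. All numbers and cost factors are placeholders, not values from the DKBES, the spreadsheet program or the Goetz field data.

```java
// Simplified production-to-schedule/cost conversion for a pipeline dredge project.
// Every figure here is an assumed placeholder used only to show the relationship.
public class DredgeScheduleSketch {
    public static void main(String[] args) {
        double dredgeVolumeCy = 500_000;     // cubic yards to remove (assumed)
        double productionCyPerHr = 600;      // estimated production rate (assumed)
        double effectiveHoursPerDay = 20;    // working hours after downtime (assumed)
        double hourlyOperatingCost = 2_500;  // cost per operating hour (assumed)

        double operatingHours = dredgeVolumeCy / productionCyPerHr;
        double projectDays = operatingHours / effectiveHoursPerDay;
        double projectCost = operatingHours * hourlyOperatingCost;

        System.out.printf("hours=%.0f days=%.1f cost=%.0f%n",
                operatingHours, projectDays, projectCost);
    }
}
```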
|