1 |
On detecting and repairing inconsistent schema mappings / Ho, Terence Cheung-Fai, 11 1900
A huge amount of data flows around the Internet every second, but for that data to be useful at its destination, it must be presented in a way the target system can readily interpret. Current data exchange technologies can rearrange the structure of data to suit expectations at the target. However, the data may carry semantics (e.g. knowing the title of a book determines its number of pages) that are violated after translation. Such semantics are expressed as integrity constraints (ICs) in a database. Currently, there is no guarantee that exchanged data conforms to the target's ICs. As a result, existing applications (e.g. user queries) that assume those semantics no longer function correctly. Current constraint repair techniques deal with the data only after it has been translated, and thus take no account of the integrity constraints at the source. Moreover, such repair methods usually add, delete, or modify data, which may yield incomplete or false data. We instead consider the constraints of both the source and target schemas; together with the mapping, this lets us efficiently detect which constraint is violated and suggest ways to correct the mapping.
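To make the failure mode concrete, here is a minimal Python sketch of the kind of check involved: a mapping translates source tuples into a target schema carrying the functional dependency title -> pages, and the check reports tuples that violate it. The relations, attribute names, and mapping are illustrative assumptions, not the thesis's actual algorithm.

```python
# Minimal sketch (assumed names, not the thesis's algorithm): check whether a
# target functional dependency still holds after a schema mapping is applied.
from collections import defaultdict

# Source tuples: two editions of the same book, each with its own page count.
source_books = [
    {"book_title": "Data Exchange", "edition": 1, "page_count": 320},
    {"book_title": "Data Exchange", "edition": 2, "page_count": 355},
]

def mapping(t):
    """Hypothetical mapping into the target schema. It drops 'edition',
    which is exactly what makes the target constraint break."""
    return {"title": t["book_title"], "pages": t["page_count"]}

def fd_violations(tuples, lhs, rhs):
    """Group tuples by `lhs` and report groups with more than one `rhs`
    value, i.e. violations of the functional dependency lhs -> rhs."""
    values = defaultdict(set)
    for t in tuples:
        values[t[lhs]].add(t[rhs])
    return {k: v for k, v in values.items() if len(v) > 1}

target = [mapping(t) for t in source_books]
print(fd_violations(target, "title", "pages"))
# {'Data Exchange': {320, 355}} -- the source data was consistent, so the
# sensible repair is to the mapping (keep 'edition'), not to the data.
```

The abstract's argument carries through here: deleting or altering tuples would discard true information, whereas correcting the mapping preserves it.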
|
2 |
Production efficiency in a custom joinery workshop and options for increasing it (Efektivita výroby v zakázkové truhlářské dílně a možnosti jejího zvýšení) / Urubek, Lukáš, January 2015
This thesis proposes ways to increase the efficiency of furniture production in a small company. The aim is to summarize the current situation with respect to the new production programme and to recommend the selection and purchase of new machines. The new machines are placed in the floor plan in several variants, and the best variant is chosen. A further aim is to prepare a budget of expenses and revenues for both the existing and the new machine park in order to determine the effectiveness of the investment.
|
3 |
Discovering web page communities for web-based data management / Hou, Jingyu, January 2002
The World Wide Web is a rich source of information and continues to expand in size and complexity. Mainly because data on the web lacks rigid and uniform data models or schemas, managing web data and retrieving information effectively and efficiently has become a challenging problem. Discovering web page communities, which capture features of the web and of web-based data in order to find intrinsic relationships among the data, is one effective way to address this problem. A web page community is a set of web pages that has its own logical and semantic structure. In this work, we concentrate on web data in web page format and exploit hyperlink information to discover (construct) web page communities. Three main kinds of web page community are studied: the first consists of hub and authority pages; the second is composed of web pages relevant to a given page (URL); and the last is a community with hierarchical cluster structures. To analyse hyperlinks, we establish a mathematical framework, in particular a matrix-based framework, to model them. Within this framework, hyperlink analysis rests on a solid mathematical basis and its results are reliable.

For the community consisting of hub and authority pages, we focus on eliminating noise pages from the page source under consideration, so as to obtain a page source of better quality and, in turn, improve the quality of the resulting communities. We propose an innovative noise-page elimination algorithm based on the hyperlink matrix model and matrix operations, in particular the singular value decomposition (SVD). The algorithm exploits hyperlink information among the web pages, reveals page relationships at a deeper level, and numerically defines thresholds for noise-page elimination. Experimental results show the effectiveness and feasibility of the algorithm, which could also be used on its own in web-based data management systems to filter unnecessary pages and reduce management costs.
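For orientation, the sketch below shows the kind of matrix computation such a framework supports: the classic HITS iteration, which computes hub and authority scores from a hyperlink adjacency matrix. The four-page link graph is made up, and this is the standard textbook algorithm, not the thesis's SVD-based noise-elimination method.

```python
# Classic HITS iteration on a hyperlink adjacency matrix (textbook version,
# not the thesis's algorithm); the four-page link graph is made up.
import numpy as np

# A[i, j] = 1 if page i links to page j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0],
              [0, 1, 1, 0]], dtype=float)

hubs = np.ones(A.shape[0])
auths = np.ones(A.shape[0])
for _ in range(50):                  # power iteration until (near) convergence
    auths = A.T @ hubs               # good authorities are linked from good hubs
    hubs = A @ auths                 # good hubs link to good authorities
    auths /= np.linalg.norm(auths)
    hubs /= np.linalg.norm(hubs)

print("authority scores:", auths.round(3))  # page 2 dominates: most linked-to
print("hub scores:", hubs.round(3))         # pages 0 and 3 link to both authorities
```

The converged authority and hub vectors are the principal right and left singular vectors of A, which is precisely the bridge to SVD-based analysis of the link matrix.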
To construct a community consisting of pages relevant to a given page (URL), we propose two hyperlink-based relevant-page-finding algorithms. The first derives from extended co-citation analysis of web pages; it is intuitive and easy to implement. The second uses linear algebra to reveal deeper relationships among the pages and to identify relevant pages more precisely and effectively. The page-source construction for these two algorithms prevents the results from being affected by malicious hyperlinks on the web. Experimental results show the feasibility and effectiveness of both algorithms, and the results could be used to enhance web search by caching the relevant pages for certain searched pages.

To cluster web pages into a community with hierarchical cluster structures, we propose an innovative page similarity measurement that incorporates hyperlink transitivity and page importance (weight). Based on this measurement, two types of hierarchical clustering algorithm are proposed. The first improves the conventional K-means algorithm; it is effective in improving page clustering but is sensitive to the predefined similarity thresholds. The second type is matrix-based hierarchical clustering; two such algorithms are proposed in this work, one that takes cluster overlapping into consideration and one that does not. The matrix-based algorithms require no predefined similarity thresholds, are independent of the order in which pages are presented, and produce stable clustering results. They exploit intrinsic relationships among web pages within a uniform matrix framework, limit the influence of human interference on the clustering procedure, and are easy to implement in applications. Experiments show the effectiveness of the new similarity measurement and of the proposed algorithms in improving web page clustering.

To apply the above algorithms better in practice, we generalize web page discovery as a special case of information retrieval and present a visualization system prototype, together with the technical details of its algorithm design, to support information retrieval based on linear algebra. The visualization algorithms carry over smoothly to web applications. Finally, since XML is a standard for data representation and exchange on the Internet, we extend the research to this important class of web data by proposing an object representation model (ORM) for XML. A set of transformation rules and algorithms transforms XML data (DTDs, and XML documents with or without a DTD) into this model, which encapsulates the elements of XML data together with methods for manipulating them. A DTD-Tree is also defined to describe the logical structure of a DTD; it can serve as an application programming interface (API) for processing DTDs, for example when transforming a DTD document into the ORM. With this data model, the semantic meaning of the tags (elements) in XML data can be used for further research in XML data management and information retrieval, such as community construction for XML data.
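As a simplified stand-in for the clustering pipeline described above, the sketch below derives a link-based page similarity from plain co-citation counts and feeds it to off-the-shelf hierarchical clustering. The thesis's own measurement additionally incorporates hyperlink transitivity and page weights; the five-page graph is made up.

```python
# Sketch: co-citation similarity (pages are similar if many pages link to
# both) fed into standard agglomerative clustering. A simplified stand-in
# for the abstract's similarity measurement, not the thesis's algorithm.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# A[i, j] = 1 if page i links to page j.
A = np.array([[0, 1, 1, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 1],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0]], dtype=float)

cocitation = A.T @ A                   # entry (i, j): pages citing both i and j
np.fill_diagonal(cocitation, 0.0)
sim = cocitation / max(cocitation.max(), 1.0)  # normalise into [0, 1]

dist = 1.0 - sim                       # turn similarity into a distance
np.fill_diagonal(dist, 0.0)
tree = linkage(squareform(dist, checks=False), method="average")
print(fcluster(tree, t=2, criterion="maxclust"))  # a cluster label per page
```

Because the cut threshold is applied to the finished dendrogram rather than baked into the clustering itself, this matrix-based style avoids the order sensitivity that the abstract criticises in K-means-like approaches.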
|
4 |
Indexing collections of XML documents with arbitrary links / Sayed, Awny Abd el-Hady Ahmed, unknown date (PDF)
Essen, University of Duisburg-Essen, Informatik und Wirtschaftsinformatik, Diss., 2005, Duisburg.
|
5 |
Indexing collections of XML documents with arbitrary links / Sayed, Awny Abd el-Hady Ahmed, January 2005 (PDF)
Duisburg, Essen, Univ. Duisburg-Essen, Informatik und Wirtschaftsinformatik, Diss., 2005
|
6 |
ATool: typography as a source of text structure (ATool - Typographie als Quelle der Textstruktur) / Meyer, Oliver, January 2006
Techn. Hochsch., Diss., 2005, Aachen.
|
7 |
Modern coating systems for finishing cabinet furniture (Moderní nátěrové systémy určené pro dokončování skříňového nábytku) / Čtvrtník, Jan, January 2015
This thesis deals with the finishing of cabinet furniture using modern coating systems. The theoretical part analyses the current state of the art in finishing furniture parts and the basic types of coatings, and assesses the factors affecting the quality of finishes on furniture intended for interiors. It also covers defects in the coating film and their causes. A further outcome of the thesis is an evaluation of paints suitable for finishing cabinet furniture. Samples are made from beech wood and from chipboard covered with beech veneer; waterborne, polyurethane, and acrylic coatings were selected as the test paints. The samples then undergo a series of tests, on the basis of which the suitability of each paint is evaluated.
|
8 |
The effect of the adhesive used and of climatic conditions on the strength of a veneer-to-particleboard glued joint (Vliv použitého lepidla a klimatických podmínek na pevnost lepeného spoje dýha - DTD) / Šudřich, Pavel, January 2011
No description available.
|