191

Conceptual model builder

Lin, Chia-Yang 01 January 2004 (has links)
Whenever a new database system is designed, an entity-relationship (ER) diagram is needed to present the structure of the database. A graphically well-arranged ER diagram makes it easy to understand the entities, attributes, domains, primary keys, foreign keys, constraints, and relationships inside a database, which makes this data-modeling tool an ideal choice for companies and developers.
192

Modeling cadastral spatial relationships using an object-oriented information structure

Kjerne, Daniel 01 January 1987 (has links)
This thesis identifies a problem in the current practice for storage of locational data of entities in the cadastral layer of a land information system (LIS), and presents as a solution an information model that uses an object-oriented paradigm.
193

CircularTrip and ArcTrip: effective grid access methods for continuous spatial queries

Cheema, Muhammad Aamir, Computer Science & Engineering, Faculty of Engineering, UNSW January 2007 (has links)
A k nearest neighbor (kNN) query q retrieves the k objects that lie closest to the query point q among a given set of objects P. With the availability of inexpensive location-aware mobile devices, the continuous monitoring of such queries has gained a lot of attention, and many methods have been proposed for continuously monitoring kNNs in highly dynamic environments. Multiple continuous queries require real-time results, and both the objects and the queries issue frequent location updates. The most popular spatial index, the R-tree, is not suitable for continuous monitoring of these queries because it handles frequent updates inefficiently. Recently, the interest of the database community has been shifting towards grid-based indexes for continuous queries because of their simplicity and efficient update handling. For kNN queries, the order in which the cells of the grid are accessed is very important. In this research, we present two efficient and effective grid access methods, CircularTrip and ArcTrip, which ensure that the number of cells visited for any continuous kNN query is minimal. Our extensive experimental study demonstrates that the CircularTrip-based continuous kNN algorithm outperforms existing approaches in both efficiency and space requirement. Moreover, we show that CircularTrip and ArcTrip can be used for many other variants of nearest neighbor queries, such as constrained nearest neighbor queries, farthest neighbor queries, and (k + m)-NN queries. All the algorithms presented for these queries preserve the property that they visit the minimum number of cells for each query, and their space requirement is low. Our proposed techniques are flexible and efficient and can be used to answer any query that is a hybrid of the above-mentioned queries. For example, our algorithms can easily be used to efficiently monitor a (k + m) farthest neighbor query in a constrained region, with the flexibility that the spatial conditions constraining the region can be changed by the user at any time.
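The cell-visiting idea behind such grid access methods can be sketched as follows: visit cells ring by ring around the query point and stop once no unvisited ring can contain an object closer than the current k-th neighbor. This is a minimal Python illustration, not the thesis's CircularTrip algorithm itself: it uses square (Chebyshev) rings rather than a true circular sweep, and `grid` (a dict from cell coordinates to (id, position) pairs), `cell_size`, and `max_rings` are assumptions invented for the sketch.

```python
import heapq
import math

def circular_knn(grid, cell_size, q, k, max_rings=1000):
    """Visit grid cells ring by ring around the query point q, keeping
    the k nearest objects seen so far; stop when no unvisited ring can
    hold anything closer than the current k-th neighbor."""
    qc = (int(q[0] // cell_size), int(q[1] // cell_size))
    best = []  # min-heap of (-dist, obj_id); ids assumed orderable, e.g. ints
    for radius in range(max_rings):
        ring_lb = max(0.0, (radius - 1) * cell_size)  # least possible dist to this ring
        if len(best) == k and ring_lb > -best[0][0]:
            break  # farther rings cannot improve the answer
        for cell in ring_cells(qc, radius):
            for obj, pos in grid.get(cell, []):
                d = math.dist(q, pos)
                if len(best) < k:
                    heapq.heappush(best, (-d, obj))
                elif d < -best[0][0]:
                    heapq.heapreplace(best, (-d, obj))
    return sorted((-nd, o) for nd, o in best)  # [(dist, obj), ...]

def ring_cells(center, r):
    """All cells at Chebyshev distance r from the center cell."""
    cx, cy = center
    if r == 0:
        return [(cx, cy)]
    return [(cx + dx, cy + dy)
            for dx in range(-r, r + 1)
            for dy in range(-r, r + 1)
            if max(abs(dx), abs(dy)) == r]
```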
194

E-model: event-based graph data model theory and implementation

Kim, Pilho 06 July 2009 (has links)
The necessity of managing disparate data models is increasing within all IT areas. Emerging hybrid relational-XML systems are being developed in this context to support both relational and XML data models. However, there are ever-growing needs for adequate data models for text and multimedia applications, which require proper storage, and the capability of such models to coexist and collaborate with other data models is as important as it is for a relational-XML hybrid. This work proposes a new data model, named E-model, that supports rich relations and reflects the dynamic nature of information. The E-model introduces abstract data-typing objects and rules of relation that support: (1) the notion of time in object definition and relation, (2) multiple-type relations, (3) complex schema modeling methods using a relational directed acyclic graph, and (4) interoperation with popular data models. To implement the E-model prototype, extensive data-operation APIs have been developed on top of relational databases; in processing dynamic queries, the prototype achieves an order-of-magnitude speed improvement compared with popular data models. Based on these APIs, a new language named EML is proposed. EML extends the SQL-89 standard with various E-model features: (1) unstructured queries, (2) unified object namespaces, (3) temporal queries, (4) ranking orders, (5) path queries, and (6) semantic expansions. With its rich relations and flexible structure, the E-model system can interoperate with popular data models to support complex data models: it can act as a stand-alone database server, provide materialized views for interoperation with other data models, or co-exist with established database systems as a centralized online archive or a proxy database server. Because the current prototype is implemented on top of a relational database, it benefits significantly from established database engines in application development.
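As a rough illustration of two of the features listed above (multiple-type relations and the notion of time attached to relations) layered over a relational backend, here is a hypothetical miniature in Python/SQLite. The schema and all names are invented for this sketch and are not the E-model or EML API.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE objects (id INTEGER PRIMARY KEY, name TEXT, type TEXT);
CREATE TABLE relations (
    src        INTEGER REFERENCES objects(id),
    dst        INTEGER REFERENCES objects(id),
    rel_type   TEXT,  -- multiple-type relations between the same objects
    valid_from TEXT,  -- the notion of time attached to each relation
    valid_to   TEXT
);
""")
conn.executemany("INSERT INTO objects VALUES (?, ?, ?)", [
    (1, "paper-42", "document"),
    (2, "alice", "person"),
])
conn.executemany("INSERT INTO relations VALUES (?, ?, ?, ?, ?)", [
    (2, 1, "authored",  "2009-01-01", "9999-12-31"),
    (2, 1, "annotated", "2009-06-01", "2009-07-01"),
])

# Temporal query: which relations between the two objects were valid
# on 2009-06-15?  (ISO date strings compare correctly as text.)
rows = conn.execute("""
    SELECT o1.name, r.rel_type, o2.name
    FROM relations r
    JOIN objects o1 ON o1.id = r.src
    JOIN objects o2 ON o2.id = r.dst
    WHERE ? BETWEEN r.valid_from AND r.valid_to
""", ("2009-06-15",)).fetchall()
print(rows)  # [('alice', 'authored', 'paper-42'), ('alice', 'annotated', 'paper-42')]
```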
196

Fatiamento de malhas triangulares: teoria e experimentos / Slicing triangle meshes: theory and experiments

Gregori, Rodrigo Mello Mattos Habib 29 August 2014 (has links)
Additive manufacturing, also known as 3D printing, is a process based on the addition of successive layers to produce a physical object. The data for producing this object come from a three-dimensional geometric model, usually represented by a triangle mesh. One of the main procedures in the production process is to slice the triangle mesh and generate a sequence of contours, which represent the layers of the object. There are many strategies for slicing triangle meshes; however, most of the literature focuses on issues such as the quality of the model, specific improvements in the slicing process, and memory usage, while few works approach the problem from an algorithmic-complexity perspective. Current algorithms for this problem run in O(n² + k²) or O(n² + n log n k) time; the algorithm proposed in this dissertation runs in O(nK) for an input with n triangles and k planes, where K is the average number of planes cutting each triangle in that specific input. This is asymptotically the best that can be achieved under certain fairly common assumptions. The proposed algorithm, called Slicing by Stabbing (SS; Fatiamento por Estocada, FE, in the Portuguese original), is compared both theoretically and experimentally against known methods from the literature, and the results show considerable improvement in execution time.
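A minimal Python sketch of the stabbing idea for uniformly spaced slicing planes: each triangle is matched directly to the contiguous range of planes it spans, so the total work is proportional to the number of triangle-plane incidences, O(nK), rather than testing all n triangles against all k planes. This illustrates the general technique only, not the thesis's exact SS algorithm, and it ignores degenerate vertex-on-plane cases.

```python
import math

def slice_mesh(triangles, z0, dz, nplanes):
    """Slicing by stabbing: each triangle touches only the planes between
    its lowest and highest vertex, so it is visited O(K) times on average."""
    layers = [[] for _ in range(nplanes)]
    for tri in triangles:  # tri is a tuple of three (x, y, z) vertices
        zmin = min(v[2] for v in tri)
        zmax = max(v[2] for v in tri)
        first = max(0, math.ceil((zmin - z0) / dz))
        last = min(nplanes - 1, math.floor((zmax - z0) / dz))
        for i in range(first, last + 1):
            seg = triangle_plane_segment(tri, z0 + i * dz)
            if seg is not None:
                layers[i].append(seg)  # contour segment for layer i
    return layers

def triangle_plane_segment(tri, z):
    """The segment where a triangle crosses the plane Z = z: the (up to
    two) points where its edges cross the plane."""
    pts = []
    for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
        if (a[2] - z) * (b[2] - z) < 0:  # edge strictly crosses the plane
            t = (z - a[2]) / (b[2] - a[2])
            pts.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return tuple(pts) if len(pts) == 2 else None
```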
198

Esquemas de aproximação em multinível e aplicações / Multilevel approximation schemes and applications

Castro, Douglas Azevedo, 1982- 12 December 2011 (has links)
Advisors: Sônia Maria Gomes, Jorge Stolfi / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: The goal of this thesis is to develop algorithms based on innovative meshes and functional bases using multiscale techniques for function approximation and the solution of differential-equation problems. For certain classes of problems, one can increase the efficiency of multiscale algorithms by using hierarchical adaptive bases, associated with meshes whose resolution varies according to the local features of the phenomenon being modeled. In this approach, at each level of the hierarchy the details (the differences between the approximation at that level and that of the next coarser level) can be used as indicators of regions that need more or less refinement. In this way, in regions where the solution is smooth it suffices to use elements of the less refined levels of the hierarchy, while the maximum refinement is used only where the solution has sharp variations.
We consider two classes of formulations for multiscale representations, depending on the bases used: dyadic splines and wavelets. The first approach uses approximation spaces consisting of spline functions defined over a mesh hierarchy whose resolution depends on the level. The other approach uses tools from wavelet analysis for multiresolution representations of cell averages. The focus is on the development of algorithms based on sampled d-dimensional data on dyadic meshes, stored in a binary tree structure. Adaptivity comes from interrupting the refinement in regions of the domain where the details between two consecutive levels are sufficiently small. This representation greatly simplifies access to the data and can be used in any dimension. We use these techniques to build an adaptive finite volume method on dyadic grids for the solution of differential problems. We analyze the performance of the method in terms of memory compression and CPU time, comparing it with a reference scheme that uses a uniform mesh at the maximum refinement level. In these tests, we confirmed the efficiency of the adaptive method for various numerical flux formulas and various choices of the thresholding parameters. / Doctorate in Applied Mathematics
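The thresholding idea can be illustrated with a minimal 1D Haar-style sketch in Python. The thesis itself works with dyadic splines and wavelets for d-dimensional data stored in a binary tree; the function below is an invented simplification, not its algorithm.

```python
import numpy as np

def adaptive_multiresolution(cell_avgs, eps):
    """Haar-style multiresolution of 1D cell averages (length a power of
    two): build the dyadic hierarchy of averages, then keep only the
    inter-level details whose magnitude reaches the threshold eps."""
    levels = [np.asarray(cell_avgs, dtype=float)]
    while len(levels[-1]) > 1:
        fine = levels[-1]
        levels.append(0.5 * (fine[0::2] + fine[1::2]))  # parent = mean of children
    kept = {}
    for lvl in range(len(levels) - 1):
        fine, coarse = levels[lvl], levels[lvl + 1]
        details = fine[0::2] - coarse  # left child minus parent; right = parent - detail
        for j, d in enumerate(details):
            if abs(d) >= eps:
                kept[(lvl, j)] = d     # significant detail: keep refinement here
    return levels[-1][0], kept         # root average plus sparse details

root, details = adaptive_multiresolution(
    [1.0, 1.1, 1.0, 1.2, 5.0, 5.1, 1.0, 1.0], eps=0.08)
print(root, details)  # details survive only near the jump; smooth regions are pruned
```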
199

Future of asynchronous transfer mode networking

Hachfi, Fakhreddine Mohamed 01 January 2004 (has links)
Asynchronous Transfer Mode (ATM) was considered the ideal carrier for high-bandwidth applications such as video on demand and multimedia e-learning. ATM emerged commercially at the beginning of the 1990s. It was designed to provide differentiated quality of service, at speeds of up to 100 Gbps, for both real-time and non-real-time applications. The turn of the decade saw a variety of competing technologies being developed. This project analyzes these technologies, compares them to Asynchronous Transfer Mode, and assesses the future of ATM.
200

Automated image classification via unsupervised feature learning by K-means

Karimy Dehkordy, Hossein 09 July 2015 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Research on image classification has grown rapidly in the field of machine learning, and many methods have already been implemented for it. Among all these methods, the best results have been reported by neural-network-based techniques. One of the most important steps in automated image classification is feature extraction, which includes two parts: feature construction and feature selection. Many methods for feature extraction exist, but the best ones are related to deep-learning approaches such as network-in-network or deep convolutional network algorithms. Deep learning finds successively higher levels of abstraction, each built from the previous level, by stacking multiple hidden layers. The two main problems with deep-learning approaches are speed and the number of parameters that must be configured: small changes or poor selection of parameters can alter the results completely or even make them worse. Tuning these parameters is usually impractical for users without powerful hardware, because one must repeatedly run the algorithm and adjust the parameters according to the results obtained, a process that can be very time consuming. This thesis attempts to address the speed and configuration issues of traditional deep-network approaches: some traditional methods of unsupervised learning are used to build an automated image-classification approach that takes less time both to configure and to run.
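A sketch of the general K-means feature-learning recipe (in the spirit of Coates et al.: sample patches, normalize, cluster, then encode images against the learned codebook). This is an assumed simplification for illustration, not the thesis's exact pipeline; the random `images` array is a stand-in for a real dataset.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_patches(images, patch, n_patches, rng):
    """Sample and normalize random patch x patch windows from grayscale images."""
    H, W = images.shape[1:]
    out = np.empty((n_patches, patch * patch))
    for i in range(n_patches):
        img = images[rng.integers(len(images))]
        y = rng.integers(H - patch + 1)
        x = rng.integers(W - patch + 1)
        p = img[y:y + patch, x:x + patch].ravel()
        out[i] = (p - p.mean()) / (p.std() + 1e-8)  # per-patch contrast normalization
    return out

def encode(image, centroids, patch, stride):
    """Feature vector for one image: normalized histogram of nearest
    centroids ('visual words') over a dense grid of patches."""
    feats = np.zeros(len(centroids))
    H, W = image.shape
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            p = image[y:y + patch, x:x + patch].ravel()
            p = (p - p.mean()) / (p.std() + 1e-8)
            feats[np.argmin(((centroids - p) ** 2).sum(axis=1))] += 1
    return feats / feats.sum()

rng = np.random.default_rng(0)
images = rng.random((100, 32, 32))  # stand-in for a real grayscale dataset
codebook = KMeans(n_clusters=64, n_init=4).fit(
    extract_patches(images, patch=6, n_patches=10000, rng=rng)).cluster_centers_
features = encode(images[0], codebook, patch=6, stride=2)  # input to any classifier
```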
