91 |
PDF document search within a very large database. Wang, Lizhong, January 2017.
A digital search engine, which takes a search request from the user and returns a matching result, is indispensable for people accustomed to using the Internet. At the same time, the PDF document format is accepted by more and more people and has become widely used because of its convenience and effectiveness, to the point that traditional libraries have already started to be replaced by digital ones. Combining these two factors, a document-based search engine that can query a digital document database with an input file is urgently needed. This thesis is a software development project that aims to design and implement a prototype of such a search engine and to propose potential optimization methods for Loredge. The research falls into two main categories: prototype development and optimization analysis. It involves an analytical study of sample documents provided by Loredge and a multi-perspective performance analysis. The prototype consists of reading, preprocessing and similarity measurement. The reading part reads in a PDF file using the imported Java library Apache PDFBox. The preprocessing part processes the in-read document and generates a document fingerprint. The similarity measurement is the final stage, which measures the similarity between the input fingerprint and all the document fingerprints in the database. The optimization analysis balances resource consumption in terms of response time, accuracy rate and memory consumption. According to the performance analysis, the shorter the document fingerprint, the better the search program performs. Moreover, a permanent feature database and a similarity-based filtration mechanism are proposed to further optimize the program. This project lays a solid foundation for further work on the document-based search engine by providing a feasible prototype and sufficient relevant experimental data. The study concludes that future work should focus mainly on improving the effectiveness of database access, which involves data-entry labeling and search-algorithm optimization.
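The abstract names the pipeline stages (PDFBox reading, fingerprint generation, similarity measurement) without giving code. The following minimal Java sketch illustrates one way such a pipeline can look: the term-frequency fingerprint and the cosine similarity measure are illustrative assumptions, not the thesis's actual fingerprint format, and only the PDFBox text-extraction call reflects the library actually named.

```java
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;

import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class PdfSearchSketch {

    // Reading stage: extract plain text from a PDF with Apache PDFBox.
    static String readPdf(File file) throws IOException {
        try (PDDocument doc = PDDocument.load(file)) {
            return new PDFTextStripper().getText(doc);
        }
    }

    // Preprocessing stage: build a fingerprint; here a simple lower-cased
    // term-frequency map (an assumed fingerprint format, for illustration).
    static Map<String, Integer> fingerprint(String text) {
        Map<String, Integer> tf = new HashMap<>();
        for (String token : text.toLowerCase().split("[^\\p{L}\\p{N}]+")) {
            if (!token.isEmpty()) {
                tf.merge(token, 1, Integer::sum);
            }
        }
        return tf;
    }

    // Similarity stage: cosine similarity between two fingerprints
    // (assumed measure; any vector similarity could be substituted).
    static double similarity(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, normA = 0, normB = 0;
        for (Map.Entry<String, Integer> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
            normA += e.getValue() * e.getValue();
        }
        for (int v : b.values()) {
            normB += v * v;
        }
        return (normA == 0 || normB == 0) ? 0 : dot / Math.sqrt(normA * normB);
    }
}
```

A query would then compute the fingerprint of the input file once and rank all stored fingerprints by this similarity score.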
|
92 |
Kosinová a sinová věta na střední škole / Cosine and sine theorem at the secondary school. Zenkl, David, January 2016.
This thesis is concerned with a constructivist approach to introducing the cosine and sine theorems at the secondary school. The aim was to develop recommendations for teaching based on the idea of teaching the cosine and sine theorems in a motivating way. This approach draws on the available literature and builds on experience from my own teaching of the topic. By motivating teaching, I mean an approach that is consistent with the principles of constructivism and emphasizes pupils' active learning. Current textbooks for secondary schools were analyzed from a mathematical and didactic point of view; the aim of this analysis was to describe how the topic is elaborated in publications available to teachers and to get inspiration for my own approach. My own teaching approach was based on the theory of generic models and was implemented in two classes of a secondary grammar school. Data collected while teaching the cosine and sine theorems (video recordings of lessons, field notes from teaching and pupil artifacts) were analyzed qualitatively. The thesis describes the teaching in detail, with an emphasis on the key phases of the discovery of the two theorems, and follows pupils' involvement in this process closely. Where teaching did not work as planned, possible reasons are identified and...
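For reference, the two theorems the teaching sequence builds toward, stated in their standard form for a triangle with sides a, b, c opposite angles α, β, γ and circumradius R (added here only for context, not taken from the thesis):

```latex
% Law of cosines
c^2 = a^2 + b^2 - 2ab\cos\gamma
% Law of sines
\frac{a}{\sin\alpha} = \frac{b}{\sin\beta} = \frac{c}{\sin\gamma} = 2R
```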
|
93 |
Compression d'images dans les réseaux de capteurs sans fil / Image compression in Wireless Sensor Networks. Makkaoui, Leila, 26 November 2012.
The increasing development of wireless camera sensor networks today allows a wide variety of applications with different objectives and constraints. However, the common problem across all sensor network applications remains the vulnerability of sensor nodes because of their limited hardware resources, the most restrictive being energy. Indeed, the wireless technologies available in this type of network are usually short-range and low-power, and the hardware resources (CPU, battery) are also limited. A twofold objective must therefore be met: an energy-efficient solution that still delivers good image quality at the receiver. This thesis concentrates mainly on the study and evaluation of compression methods dedicated to transmission over wireless camera sensor networks. We propose a new image compression method that decreases the energy consumption of sensor nodes and thus extends the network lifetime. We evaluate its implementation through experiments on a real camera sensor platform in order to validate our proposals, measuring aspects such as the amount of memory required by the software implementation of our algorithms, their energy consumption and their execution time. We also present synthesis results for the proposed compression chain on FPGA and ASIC targets.
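The abstract does not name the transform used in the proposed compression chain, so the Java sketch below is only a generic illustration of the kind of low-complexity, block-based transform coding commonly evaluated on camera sensor nodes: a 2-D DCT-II of an 8x8 pixel block followed by a zonal truncation that keeps only low-frequency coefficients. Both the transform and the zonal cut-off are assumptions for illustration, not the method proposed in the thesis.

```java
public class BlockDctSketch {

    static final int N = 8; // block size

    // 2-D DCT-II of an 8x8 block (direct O(N^4) form, for clarity not speed).
    static double[][] dct2d(double[][] block) {
        double[][] out = new double[N][N];
        for (int u = 0; u < N; u++) {
            for (int v = 0; v < N; v++) {
                double sum = 0;
                for (int i = 0; i < N; i++) {
                    for (int j = 0; j < N; j++) {
                        sum += block[i][j]
                             * Math.cos((2 * i + 1) * u * Math.PI / (2.0 * N))
                             * Math.cos((2 * j + 1) * v * Math.PI / (2.0 * N));
                    }
                }
                double cu = (u == 0) ? Math.sqrt(1.0 / N) : Math.sqrt(2.0 / N);
                double cv = (v == 0) ? Math.sqrt(1.0 / N) : Math.sqrt(2.0 / N);
                out[u][v] = cu * cv * sum;
            }
        }
        return out;
    }

    // Zonal truncation: keep only coefficients with u + v < cutoff, reducing
    // the amount of data to transmit (assumed energy-saving step).
    static double[][] keepLowFrequencies(double[][] coeffs, int cutoff) {
        double[][] out = new double[N][N];
        for (int u = 0; u < N; u++) {
            for (int v = 0; v < N; v++) {
                out[u][v] = (u + v < cutoff) ? coeffs[u][v] : 0.0;
            }
        }
        return out;
    }
}
```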
|
94 |
Sparse Fast Trigonometric Transforms. Bittens, Sina Vanessa, 13 June 2019.
No description available.
|
95 |
Nuoxus - um modelo de caching proativo de conteúdo multimídia para Fog Radio Access Networks (F-RANs) / Nuoxus - a proactive multimedia content caching model for Fog Radio Access Networks (F-RANs). Costa, Felipe Rabuske, 28 February 2018.
It is estimated that by the year 2020 about 50 billion mobile devices will be connected to wireless networks and that 78% of the data traffic generated by such devices will be multimedia content. These estimates drive the development of the fifth generation of mobile networks (5G). One of the most recently proposed architectures, named Fog Radio Access Network (F-RAN), gives the components located at the edge of the network processing power and storage capacity to handle network activities. One of the main problems of this architecture is the intense data traffic on its centralized communication channel, the fronthaul, which connects the antennas (F-APs) to the external network. In this context, this work presents Nuoxus, a multimedia content caching model for F-RANs that aims to mitigate this problem. By storing content in the network nodes closest to the user, the number of concurrent accesses to the fronthaul is reduced, and with it one of the main contributors to communication latency. Nuoxus can run on any network node that has storage and processing capacity, and becomes responsible for managing that node's content cache. Its content replacement policy uses the similarity of requests between the child nodes and the rest of the network as a factor in deciding how relevant it is to keep the requested content in the cache. Furthermore, using the same process, Nuoxus proactively suggests that child nodes with a high degree of similarity also cache the content, anticipating a possible future access. An analysis of the state of the art shows that no other work exploits the request history to cache content proactively in multi-layer architectures for wireless networks without relying on a centralized component for caching coordination and prediction. To demonstrate the efficiency of the model, a prototype was developed using the ns-3 simulator. The results show that Nuoxus reduced network latency by about 29.75%. In addition, compared with other caching strategies, the number of cache hits at the network components increased by 53.16% relative to the strategy with the second-best result.
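The abstract describes the policy only in words: the relevance of caching an item at a node depends on how similar that node's request history is to those of its siblings, and highly similar siblings are proactively told to cache the item too. The Java sketch below fills in the unspecified details with assumptions (per-content request-count vectors, cosine similarity, a fixed threshold), so the metric and class names are illustrative rather than Nuoxus's actual design.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ProactiveCachingSketch {

    // Request history of one edge node: content id -> number of requests.
    static class NodeHistory {
        final String nodeId;
        final Map<String, Integer> requests = new HashMap<>();
        NodeHistory(String nodeId) { this.nodeId = nodeId; }
        void recordRequest(String contentId) { requests.merge(contentId, 1, Integer::sum); }
    }

    // Cosine similarity between two request histories (assumed metric).
    static double similarity(NodeHistory a, NodeHistory b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Integer> e : a.requests.entrySet()) {
            dot += e.getValue() * b.requests.getOrDefault(e.getKey(), 0);
            na += e.getValue() * e.getValue();
        }
        for (int v : b.requests.values()) nb += v * v;
        return (na == 0 || nb == 0) ? 0 : dot / Math.sqrt(na * nb);
    }

    // When 'requester' fetches 'contentId', suggest caching it at every
    // sibling whose request history is similar enough (assumed threshold).
    static List<String> siblingsToPreCache(NodeHistory requester,
                                           List<NodeHistory> siblings,
                                           String contentId,
                                           double threshold) {
        List<String> targets = new ArrayList<>();
        for (NodeHistory sibling : siblings) {
            if (!sibling.nodeId.equals(requester.nodeId)
                    && similarity(requester, sibling) >= threshold) {
                targets.add(sibling.nodeId);
            }
        }
        return targets;
    }
}
```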
|
96 |
Atribuindo significado ao seno e cosseno utilizando o software Cabri-Géomètre / Attributing meaning to sine and cosine using the software Cabri-Géomètre. Martins, Vera Lúcia de Oliveira Ferreira, 22 May 2003.
The objective of this work is to introduce the concepts of sine and cosine in a coordinated form, starting from right-angled triangles, passing through the trigonometric cycle and ending with the graphs of the corresponding functions, aiming to provide conditions that enable students to attribute meaning to these concepts. To this end, a teaching sequence comprising seven activities was devised in order to investigate whether students in the 2nd year of Ensino Médio (high school), who had already studied trigonometry in right-angled triangles and in the trigonometric cycle, would use this knowledge, during the teaching sequence and with the help of the software Cabri-Géomètre, to construct the graphs of the sine and cosine functions. The design and analysis of the teaching sequence are based on elements of the tool-object dialectic and the notion of the interaction between frameworks of Régine Douady. The activities were administered to a group of 16 students from a state school in the centre of the city of São Paulo during 2002. In the problem-solving processes developed to answer the proposed questions, and through the results obtained, the software Cabri-Géomètre proved effective, helping the students to associate the concepts already studied for the right-angled triangle and the trigonometric cycle with the sine and cosine functions. The results also indicate that most students perceived that the sine and cosine studied in the right-angled triangle do not differ from those studied in the trigonometric cycle and, moreover, that the sine curve and cosine curve faithfully portray these concepts.
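The coordination the sequence aims at, from the right-angled triangle to the trigonometric cycle to the graphs, can be summarised in one standard line (added for context, not taken from the thesis): for an acute angle θ the ratio definitions agree with the coordinates of the corresponding point on the unit circle, and the unit-circle definitions extend sine and cosine to every real argument.

```latex
\sin\theta = \frac{\text{opposite}}{\text{hypotenuse}}, \quad
\cos\theta = \frac{\text{adjacent}}{\text{hypotenuse}}, \qquad
P(\theta) = (\cos\theta,\ \sin\theta) \ \text{on the unit circle, for all } \theta \in \mathbb{R}.
```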
|
97 |
Contour Based 3D Biological Image Reconstruction and Partial Retrieval. Li, Yong, 28 November 2007.
Image segmentation is one of the most difficult tasks in image processing. Segmentation algorithms are generally based on searching for a region whose pixels share similar gray-level intensity and satisfy a set of defined criteria. However, the segmented region cannot be used directly for partial image retrieval. In this dissertation, a Contour Based Image Structure (CBIS) model is introduced. In this model, images are divided into several objects defined by their bounding contours. The bounding-contour structure allows individual object extraction, and partial object matching and retrieval, from a standard CBIS image structure. The CBIS model allows 3D objects to be represented by their bounding contours, which makes it suitable for parallel implementation, particularly since extracting contour features and matching them for 3D images requires heavy computation. This computational burden becomes worse for images with high resolution and large contour density. We therefore designed two parallel algorithms: the Contour Parallelization Algorithm (CPA) and the Partial Retrieval Parallelization Algorithm (PRPA). Both algorithms considerably improved the performance of CBIS for contour shape matching as well as for partial image retrieval. To improve the effectiveness of CBIS in segmenting images with inhomogeneous backgrounds, we used the phase-congruency invariant features of Fourier transform components to highlight object boundaries prior to extracting their contours. The contour matching process has also been improved by constructing a fuzzy contour matching system that allows unbiased matching decisions. Further improvements have been achieved through the use of a contour-tailored Fourier descriptor that provides translation and rotation invariance; it proves suitable for general contour shape matching where translation, rotation and scaling invariance are required. For images that are hard to classify by object contours, such as bacterial images, we define a multi-level cosine transform to extract texture features for image classification. The low-frequency Discrete Cosine Transform coefficients and Zernike moments derived from the images are used to train a Support Vector Machine (SVM), generating multiple classifiers.
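The abstract mentions a contour-tailored Fourier descriptor with translation and rotation invariance but does not reproduce it. The Java sketch below shows the textbook Fourier-descriptor construction those properties usually come from: contour points treated as complex numbers, a DFT, the DC term dropped for translation invariance, magnitudes taken for rotation and starting-point invariance, and normalisation by the first harmonic for scale invariance. This is the standard construction, not the thesis's tailored descriptor.

```java
public class FourierDescriptorSketch {

    // Standard Fourier descriptor of a closed contour given as (x[k], y[k]).
    // Direct O(N^2) DFT for clarity; an FFT would be used in practice.
    // Requires numDescriptors < x.length and a non-degenerate contour.
    static double[] describe(double[] x, double[] y, int numDescriptors) {
        int n = x.length;
        double[] re = new double[n];
        double[] im = new double[n];
        for (int u = 0; u < n; u++) {
            for (int k = 0; k < n; k++) {
                double angle = -2.0 * Math.PI * u * k / n;
                // Contour point treated as complex number z_k = x_k + i*y_k.
                re[u] += x[k] * Math.cos(angle) - y[k] * Math.sin(angle);
                im[u] += x[k] * Math.sin(angle) + y[k] * Math.cos(angle);
            }
        }
        double scale = Math.hypot(re[1], im[1]); // first-harmonic magnitude
        double[] descriptor = new double[numDescriptors];
        for (int u = 1; u <= numDescriptors; u++) {
            // Skip u = 0 (translation), use magnitudes (rotation / start point),
            // divide by |F(1)| (scale).
            descriptor[u - 1] = Math.hypot(re[u], im[u]) / scale;
        }
        return descriptor;
    }
}
```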
|
98 |
A Spatially-filtered Finite-difference Time-domain Method with Controllable Stability Beyond the Courant Limit. Chang, Chun, 19 July 2012.
This thesis introduces spatial filtering, a technique to extend the time step size beyond the conventional stability limit of the Finite-Difference Time-Domain (FDTD) method, at the expense of transforming field nodes between the spatial domain and the discrete spatial-frequency domain and removing undesired spatial-frequency components at every FDTD update cycle. The spatially-filtered FDTD method is demonstrated, through theory and numerical examples, to be almost as accurate as and more efficient than the conventional FDTD method. The thesis then combines spatial filtering with an existing subgridding scheme to form the spatially-filtered subgridding scheme. The spatially-filtered subgridding scheme is more efficient than existing subgridding schemes because it allows the time step size used in the dense mesh to be larger than the dense-mesh CFL limit. However, trade-offs between accuracy and efficiency are required in complicated structures.
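The trade described in the abstract can be made concrete with the standard von Neumann stability condition for the Yee scheme (the textbook analysis, added for context and not taken from the thesis). Stability requires, for every spatial-frequency component (k_x, k_y, k_z) that the grid actually carries,

```latex
(c\,\Delta t)^2 \left[
  \frac{\sin^2(k_x \Delta x / 2)}{\Delta x^2} +
  \frac{\sin^2(k_y \Delta y / 2)}{\Delta y^2} +
  \frac{\sin^2(k_z \Delta z / 2)}{\Delta z^2}
\right] \le 1 .
```

With all representable frequencies present, the worst case (each sine term equal to one) gives the usual Courant limit Δt ≤ 1/(c√(1/Δx² + 1/Δy² + 1/Δz²)); if a spatial filter removes the components above a chosen cutoff at every update, the bracketed maximum shrinks and a proportionally larger Δt remains stable.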
|
100 |
Hybrid 2D and 3D face verification. McCool, Christopher Steven, January 2007.
Face verification is a challenging pattern recognition problem. The face is a biometric that, as humans, we know can be recognised. However, the face is highly deformable and its appearance alters significantly when the pose, illumination or expression changes. These changes in appearance are most notable for texture images, or two-dimensional (2D) data, but the underlying structure of the face, the three-dimensional (3D) data, is not changed by pose or illumination variations. Over the past five years, methods have been investigated to combine 2D and 3D face data to improve the accuracy and robustness of face verification. Much of this research has examined the fusion of a 2D verification system and a 3D verification system, known as multi-modal classifier score fusion. These verification systems usually compare two feature vectors (two image representations), a and b, using distance- or angular-based similarity measures. However, this does not provide the most complete description of the features being compared, as the distances describe at best the covariance of the data, or the second-order statistics (for instance Mahalanobis-based measures). A more complete description would be obtained by describing the distribution of the feature vectors. However, feature distribution modelling is rarely applied to face verification because a large number of observations is required to train the models, and this amount of data is usually unavailable. This research therefore examines two methods for overcoming this data limitation: 1. using holistic difference vectors of the face, and 2. dividing the 3D face into Free-Parts. Permutations of the holistic difference vectors are formed so that more observations are obtained from a set of holistic features. On the other hand, by dividing the face into parts and considering each part separately, many observations are obtained from each face image; this approach is referred to as the Free-Parts approach. The extra observations from these two techniques are used to perform holistic feature distribution modelling and Free-Parts feature distribution modelling, respectively. It is shown that feature distribution modelling of these features leads to an improved 3D face verification system and an effective 2D face verification system. Using these two feature distribution techniques, classifier score fusion is then examined. Classifier score fusion attempts to combine complementary information from multiple classifiers. This complementary information can be obtained in two ways: by using different algorithms to represent the same face data, for instance the 2D face data (multi-algorithm fusion), or by capturing the face data with different sensors, for instance capturing 2D and 3D face data (multi-modal fusion). Multi-algorithm fusion is approached as combining verification systems that use holistic features and local features (Free-Parts), and multi-modal fusion examines the combination of 2D and 3D face data using all of the investigated techniques. The results of the fusion experiments show that multi-modal fusion leads to a consistent improvement in performance. This is attributed to the fact that the fused data are collected by two different sensors, a camera and a laser scanner. In deriving the multi-algorithm and multi-modal algorithms, a consistent framework for fusion was developed.
The consistent fusion framework, developed from the multi-algorithm and multi-modal experiments, is used to combine multiple algorithms across multiple modalities. This fusion method, referred to as hybrid fusion, is shown to provide improved performance over either fusion system on its own. The experiments show that the final hybrid face verification system reduces the False Rejection Rate from 8.59% for the best 2D verification system and 4.48% for the best 3D verification system to 0.59% for the hybrid verification system, at a False Acceptance Rate of 0.1%.
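The abstract describes score fusion only at the level of "combine complementary classifier scores". As a minimal illustration, the Java sketch below shows one common way to do this, z-score normalisation of each classifier's output followed by a weighted sum and a threshold decision. The normalisation, the weights and the threshold are assumptions for illustration; the thesis's actual fusion rule is not specified in the abstract.

```java
public class ScoreFusionSketch {

    // Per-classifier normalisation statistics, e.g. estimated on a development
    // set of genuine and impostor scores (z-norm is an assumed choice).
    static class ZNorm {
        final double mean, std;
        ZNorm(double mean, double std) { this.mean = mean; this.std = std; }
        double apply(double score) { return (score - mean) / std; }
    }

    // Weighted-sum fusion of normalised scores from several classifiers,
    // e.g. 2D holistic, 2D Free-Parts, 3D holistic, 3D Free-Parts.
    static double fuse(double[] scores, ZNorm[] norms, double[] weights) {
        double fused = 0;
        for (int i = 0; i < scores.length; i++) {
            fused += weights[i] * norms[i].apply(scores[i]);
        }
        return fused;
    }

    // Final accept/reject decision at an operating threshold chosen to meet
    // a target false acceptance rate (the threshold is an assumption here).
    static boolean accept(double fusedScore, double threshold) {
        return fusedScore >= threshold;
    }
}
```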
|