71

Raízes e perspectivas do urbanismo meridional português: a arte urbana dos aglomerados portugueses de influência mediterrânica [Roots and perspectives of southern Portuguese urbanism: the urban art of Portuguese settlements of Mediterranean influence]

Dias, Francisco da Silva January 2000 (has links)
No description available.
72

Fast Fourier transform for option pricing: improved mathematical modeling and design of an efficient parallel algorithm

Barua, Sajib 19 May 2005 (has links)
The Fast Fourier Transform (FFT) is used in many scientific and engineering applications, and its use for pricing financial derivatives has been gaining momentum in recent years. In this thesis, (i) we improve a recently proposed FFT model for pricing financial derivatives in order to design an efficient parallel algorithm. The improved mathematical model bridges the gap between quantitative approaches to the option pricing problem and the practical implementation of such approaches on modern computer architectures, and we prove that it produces accurate option values. (ii) We develop a parallel FFT algorithm based on the classical Cooley-Tukey algorithm and improve it with a data swapping technique that brings data closer to the respective processors, greatly reducing communication overhead and improving the performance of the parallel algorithm. We tested the new algorithm on a 20-node SunFire 6800 high-performance computing system and compared it with the traditional Cooley-Tukey algorithm. Option values are calculated for various strike prices, with the strike-price spacing chosen to ensure fine-grid integration for the FFT computation and to maximize the number of strikes lying in the desired region of the stock price. Compared to the traditional Cooley-Tukey algorithm, the algorithm with data swapping performs better by more than 15% for large data sizes. In a rapidly changing marketplace, these improvements can matter a great deal to an investor or financial institution, because faster results offer a competitive advantage. / October 2004
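The abstract above builds its parallel algorithm on the classical Cooley-Tukey FFT. For reference, a minimal serial radix-2 Cooley-Tukey sketch in Python is given below; it illustrates only the classical algorithm named in the abstract, not the thesis's parallel data-swapping variant, and the function name and NumPy check are illustrative assumptions.

```python
import numpy as np

def cooley_tukey_fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (assumes len(x) is a power of two)."""
    n = len(x)
    if n == 1:
        return x
    even = cooley_tukey_fft(x[0::2])          # FFT of even-indexed samples
    odd = cooley_tukey_fft(x[1::2])           # FFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

# Quick sanity check against NumPy's reference FFT on random data.
x = np.random.rand(1024).astype(complex)
assert np.allclose(cooley_tukey_fft(x), np.fft.fft(x))
```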
73

Akyaka After 25 Years: Spatial And Conceptual Re-readings In Architectural Discourse

Batirbek, Canay 01 February 2010 (has links) (PDF)
This study aims to explore Akyaka's self-generated practice and its route of progress through definitions of place. Akyaka, an unconventional body characterized by Nail Çakirhan's traditional type of house, winner of the Aga Khan Award in 1983, has been ignored by conventional architectural discourse, and this ignorance prevents learning from it. The research therefore focuses on Akyaka's distinctive story of taking its references from a place and producing a place of its own, outside the boundaries of the profession. Akyaka is examined from several different aspects: protection of the coastal region, architectural representation, the Aga Khan Award for Architecture, Turkish architecture's identity quests, tourism's agendas, the continuity of tradition and its controversy with the modern, the sustainability of locality, and pastiche in architecture. The town is listened to in terms of its geographical, socio-cultural and architectural/architectonic bases. In this context, whether this unrecognized formation has something to say after 25 years is studied by introducing the place phenomenon reproduced by the settlement as a field of discussion.
74

A Local Expansion Approach for Continuous Nearest Neighbor Queries

Liu, Ta-Wei 16 June 2008 (has links)
Queries on spatial data commonly concern a certain range or area, for example queries related to intersections, containment and nearest neighbors. The Continuous Nearest Neighbor (CNN) query is one kind of nearest neighbor query; for example, people may want to know where the gas stations are along a highway between a starting position and an ending position. Because there is no total ordering of spatial proximity among spatial objects, the space-filling curve (SFC) approach has been proposed to preserve spatial locality. Chen and Chang have proposed efficient SFC-based algorithms for nearest neighbor queries, so a CNN query can be answered in a centralized system by performing a sequence of individual nearest neighbor queries with one of their algorithms. However, the search ranges of these nearest neighbor queries can overlap, and the queries may access several of the same pages on disk, resulting in many redundant disk accesses. On the other hand, Zheng et al. have proposed an algorithm based on the Hilbert curve for the CNN query in the wireless broadcast environment, which contains two phases. In the first phase, the algorithm designs a search range to find candidate objects; in the second phase, it uses heuristics to filter the candidate objects for the final answer. However, Zheng et al.'s algorithm may check some data blocks twice or check some useless data blocks, again resulting in redundant disk accesses. Therefore, in this thesis, to avoid these disadvantages of the first phase of Zheng et al.'s algorithm, we propose a local expansion approach based on the Peano curve for the CNN query in the centralized system. In the first phase, we determine the search range that contains all candidate objects: we first calculate the route between the starting point and the ending point, then move forward one block at a time from the starting point towards the ending point, locally expanding the search range to find candidate objects. In the second phase, we use the heuristics of Zheng et al.'s algorithm to filter the candidate objects for the final answer. Based on this approach, we propose two algorithms: the forward moving (FM) algorithm, which assumes that each object lies at the center of a block, and the forward moving* (FM*) algorithm, which allows each object to lie anywhere in a block. Our local expansion approach avoids the duplicated checks of Zheng et al.'s algorithm and determines a search range with higher accuracy. Our simulation results show that the FM and FM* algorithms outperform Zheng et al.'s algorithm in terms of both accuracy and processing time.
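For readers unfamiliar with space-filling curves, the sketch below shows how a two-dimensional location can be mapped to a one-dimensional key that roughly preserves spatial locality. It uses the Morton (Z-order) key, a related curve chosen here only for brevity; it is not the Peano-curve machinery or the FM/FM* algorithms of the thesis, and the function and variable names are illustrative assumptions.

```python
def morton_key(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y into a single Z-order (Morton) key.

    Cells that are close in 2-D space tend to receive nearby keys, which is
    the locality property that space-filling-curve indexes rely on.
    """
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)       # x bit goes to an even position
        key |= ((y >> i) & 1) << (2 * i + 1)   # y bit goes to an odd position
    return key

# Sorting grid cells by their Morton key linearizes 2-D space, so a search
# can expand along a 1-D ordering instead of scanning the whole plane.
cells = [(3, 5), (3, 6), (10, 2), (4, 5)]
print(sorted(cells, key=lambda c: morton_key(*c)))
```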
75

A scalable metric learning based voting method for expression recognition

Wan, Shaohua 09 October 2013 (has links)
In this work, we propose a facial expression classification method that uses metric learning-based k-nearest neighbor (kNN) voting. To classify a facial expression from frontal face images accurately, we first learn a distance metric from training data that characterizes the structure of the feature space, then use this metric to retrieve the nearest neighbors from the training dataset, and finally output the classification decision accordingly. An expression is represented as a fusion of face shape and texture. This representation is obtained by registering a face image with a landmarking shape model and extracting Gabor features from local patches around the landmarks; it achieves robustness and effectiveness by using an ensemble of local patch feature detectors at a global shape level. A naive implementation of metric learning-based kNN would incur a time complexity proportional to the size of the training dataset, which precludes the method from being used with very large datasets. To scale to potentially larger databases, an approach similar to that in [24] is used to obtain an approximate yet efficient ML-based kNN voting scheme based on Locality Sensitive Hashing (LSH). A query example is hashed directly to the bucket of a pre-computed hash table where candidate nearest neighbors can be found, so there is no need to search the entire database for nearest neighbors. Experimental results on the Cohn-Kanade database and the Moving Faces and People database show that both ML-based kNN voting and its LSH approximation outperform the state of the art, demonstrating the superiority and scalability of our method. / text
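As background for the LSH step described above, here is a minimal sketch of random-hyperplane locality-sensitive hashing, a standard LSH family for cosine similarity. It is shown only to illustrate how a query is hashed directly to a bucket of candidate neighbors; it is not the specific scheme of [24], and the dimensions, names and random data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_key(x: np.ndarray, planes: np.ndarray) -> int:
    """Random-hyperplane LSH: one sign bit per hyperplane, packed into an int."""
    bits = (planes @ x) > 0
    return int("".join("1" if b else "0" for b in bits), 2)

dim, n_planes = 64, 12
planes = rng.standard_normal((n_planes, dim))   # shared hash function
train = rng.standard_normal((5000, dim))        # stand-in for training features

# Pre-compute the hash table over the training set.
table: dict[int, list[int]] = {}
for idx, vec in enumerate(train):
    table.setdefault(lsh_key(vec, planes), []).append(idx)

# A query is hashed straight to one bucket; only those candidates are scored,
# so there is no need to scan the whole training set.
query = rng.standard_normal(dim)
candidates = table.get(lsh_key(query, planes), [])
```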
76

Ανάπτυξη τεχνικής αύξησης της αξιοπιστίας των κρυφών μνημών πρώτου επιπέδου βασισμένη στη χωρική τοπικότητα των μπλοκ μνήμης [Development of a technique for increasing the reliability of first-level caches based on the spatial locality of memory blocks]

Μαυρόπουλος, Μιχαήλ 16 May 2014 (has links)
In this thesis we address the problem of the reliability of first-level data and instruction cache memories. The high integration density and high operating frequency of modern integrated circuits have led to significant reliability problems, caused either by manufacturing defects or by the aging of the integrated circuits. We first evaluate the performance degradation of first-level caches when permanent faults appear, for different integration technologies. We then present a new technique for mitigating the impact of these faults, based on predicting the spatial locality of the memory blocks brought into the first-level caches. The technique is evaluated using an architecture-level simulator. / In this thesis we work on the problem of reliability of first-level data and instruction cache memories. Technology scaling is affecting the reliability of ICs due to increases in static and dynamic variations as well as wear-out failures. First, we estimate the impact of permanent faults on first-level faulty caches. Then we propose a methodology to mitigate this negative impact of defective bits; our methodology is based on predicting the spatial locality of the blocks incoming to the cache memory. Finally, using cycle-accurate simulation, we show that our approach offers significant benefits in cache performance.
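To illustrate what "spatial locality of a memory block" means in this context, the sketch below measures, from an address trace, what fraction of each cache block's words is actually touched. This is only a simple way of quantifying the signal such a predictor could use; it is not the thesis's prediction or fault-mitigation mechanism, and the block size, word size and function name are illustrative assumptions.

```python
from collections import defaultdict

BLOCK_SIZE = 64   # bytes per cache block (illustrative assumption)
WORD_SIZE = 8     # bytes per word (illustrative assumption)

def words_touched_per_block(address_trace):
    """For each memory block, record which words inside it are accessed.

    The fraction of a block that is actually used is one simple measure of
    its spatial locality, the kind of signal a locality predictor could learn.
    """
    touched = defaultdict(set)
    for addr in address_trace:
        block = addr // BLOCK_SIZE
        word = (addr % BLOCK_SIZE) // WORD_SIZE
        touched[block].add(word)
    return {block: len(words) / (BLOCK_SIZE // WORD_SIZE)
            for block, words in touched.items()}

# Example: a streaming access touches most of its blocks, a sparse one does not.
trace = list(range(0, 256, 8)) + [1024, 2048, 4096]
print(words_touched_per_block(trace))
```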
77

Application of locality sensitive hashing to feature matching and loop closure detection

Shahbazi, Hossein Unknown Date
No description available.
78

Cost-effective and privacy-conscious cloud service provisioning: architectures and algorithms

Palanisamy, Balaji 27 August 2014 (has links)
Cloud computing represents a recent paradigm shift that enables users to share and remotely access high-powered computing resources (both infrastructure and software/services) contained in off-site data centers, allowing a more efficient use of hardware and software infrastructures. This growing trend, combined with the demands of Big Data and Big Data analytics, is driving the rapid evolution of datacenter technologies towards more cost-effective, consumer-driven, privacy-conscious and technology-agnostic solutions. This dissertation takes a systematic approach to developing system-level techniques and algorithms that tackle the challenges of large-scale data processing in the Cloud and of scaling and delivering privacy-aware services with anytime-anywhere availability. We analyze the key challenges in effective provisioning of Cloud services in the context of MapReduce-based parallel data processing, considering cost-effectiveness, performance guarantees and user privacy, and we develop a suite of solution techniques, architectures and models to support cost-optimized and privacy-preserving service provisioning in the Cloud. At the cloud resource provisioning tier, we develop Cura, a utility-driven MapReduce cloud resource planning and management system that allocates resources to jobs cost-optimally. While existing services require users to select a number of complex cluster and job parameters and then use those potentially sub-optimal per-job configurations, Cura achieves global resource optimization in the cloud by minimizing cost and maximizing resource utilization. We also address resource management and job scheduling for large-scale parallel data processing in the Cloud in the presence of the networking and storage bottlenecks commonly experienced in Cloud data centers. We develop Purlieus, a self-configurable, locality-based data and virtual machine management framework that enables MapReduce jobs to access their data, including all input, output and intermediate data, either locally or from close-by nodes, achieving significant improvements in job response time. We then extend our cloud resource management framework to support privacy-preserving data access and efficient privacy-conscious query processing. Concretely, we propose and implement VNCache, an efficient solution for MapReduce analysis of cloud-archived log data for privacy-conscious enterprises. Through a seamless data streaming and prefetching model in VNCache, Hadoop jobs begin execution as soon as they are launched, without requiring any a priori downloading. At the cloud consumer tier, we develop mix-zone based techniques for delivering anonymous cloud services to mobile users on the move through Mobimix, a novel road-network mix-zone based framework that enables real-time, location-based service delivery without disclosing the content or location privacy of the consumers.
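To make the data-locality idea described above concrete, the sketch below shows a generic greedy scheduler that places each map task on a node that already holds a replica of its input block, falling back to the least-loaded node otherwise. It is a simplified illustration of locality-aware placement in general, with hypothetical data structures and node/block names; it is not the Purlieus algorithm itself.

```python
from collections import defaultdict

def place_tasks(tasks, replicas, nodes):
    """Greedy locality-aware placement.

    tasks    : list of (task_id, input_block) pairs
    replicas : dict mapping input_block -> set of nodes holding a replica
    nodes    : list of all node names
    Returns a dict task_id -> node, preferring nodes that store the task's data.
    """
    load = defaultdict(int)
    placement = {}
    for task_id, block in tasks:
        local = replicas.get(block, set())
        # Prefer the least-loaded node that holds the data; otherwise any node.
        candidates = sorted(local or nodes, key=lambda n: load[n])
        chosen = candidates[0]
        placement[task_id] = chosen
        load[chosen] += 1
    return placement

# Hypothetical example: three nodes, two input blocks replicated twice each.
replicas = {"b1": {"n1", "n2"}, "b2": {"n2", "n3"}}
tasks = [("t1", "b1"), ("t2", "b2"), ("t3", "b1"), ("t4", "b2")]
print(place_tasks(tasks, replicas, ["n1", "n2", "n3"]))
```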
