241 |
A content dissemination framework for vehicular networking. Leontiadis, I. January 2010
Vehicular Networks are a peculiar class of wireless mobile networks in which vehicles are equipped with radio interfaces and are therefore able to communicate with fixed infrastructure (if available) or with other vehicles. Content dissemination has a number of potential applications in vehicular networking, including advertising, traffic warnings, parking notifications and emergency announcements. This thesis addresses two possible dissemination strategies: i) push-based dissemination, which aims to proactively deliver information to a group of vehicles based on their interests and how closely the content matches them, and ii) pull-based dissemination, which allows vehicles to explicitly request custom information. Our dissemination framework takes into consideration information that is available only in vehicular networks: the geographical data produced by the navigation system. With its aid, a vehicle's mobility pattern becomes predictable. This information is exploited to deliver the content efficiently to where it is needed. Furthermore, we use the navigation system to automatically filter information that might be relevant to the vehicle. Our framework has been designed and implemented in .NET C# and Microsoft MapPoint. It was tested using a small number of vehicles in the area of Cambridge, UK. Moreover, to verify the correctness of our protocols, we further evaluated them in a large-scale network simulation over a number of realistic, trace-based vehicular scenarios. Finally, we built a test-case application to show that vehicles can benefit from such a framework. In this application every vehicle collects and disseminates road traffic information. Vehicles that receive this information can individually evaluate the traffic conditions and take an alternative route, if needed. To evaluate this approach, we collaborated with UCLA's Network Research Lab (NRL) to build a simulator that combines network simulation with dynamic mobility emulation. When our dissemination framework is used, drivers can considerably reduce their trip times.
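As an illustration of the kind of route-based relevance filtering described above, the following sketch checks whether a content item's target area intersects a vehicle's planned route. The target and radius fields, the slack margin and the distance test are illustrative assumptions, not the protocol implemented in the thesis (which was built in .NET C# and MapPoint).

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def relevant_to_route(content, route_waypoints, slack_km=0.5):
    """A content item (with hypothetical 'target' and 'radius_km' fields) is
    considered relevant if any waypoint of the planned route falls within its
    target area, plus a small slack margin."""
    return any(
        haversine_km(wp, content["target"]) <= content["radius_km"] + slack_km
        for wp in route_waypoints
    )

# Example: a parking notification near one of the route's waypoints.
route = [(52.2053, 0.1218), (52.2100, 0.1300), (52.2150, 0.1420)]  # Cambridge area
notice = {"target": (52.2101, 0.1305), "radius_km": 0.3}
print(relevant_to_route(notice, route))  # True
```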
|
242 |
Bandwidth-aware distributed ad-hoc grids in deployed wireless sensor networks. Rondini, E. January 2010
Nowadays, cost-effective sensor networks can be deployed as a result of a plethora of recent engineering advances in wireless technology, storage miniaturisation, consolidated microprocessor design, and sensing technologies. Whilst sensor systems are becoming relatively cheap to deploy, two issues arise in their typical realisations: (i) the types of low-cost sensors often employed are capable of limited resolution and tend to produce noisy data; (ii) network bandwidths are relatively low and the energetic costs of using the radio to communicate are relatively high. To reduce the transmission of unnecessary data, there is a strong argument for performing local computation. However, this can require greater computational capacity than is available on a single low-power processor. Traditionally, such a problem has been addressed by using load balancing: fragmenting processes into tasks and distributing them amongst the least loaded nodes. However, the act of distributing tasks, and any subsequent communication between them, imposes a geographically defined load on the network. Because of the shared broadcast nature of the radio channels and MAC layers in common use, any communication within an area will be slowed by additional traffic, delaying the computation and reporting that rely on the availability of the network. In this dissertation, we explore the tradeoff between the distribution of computation, needed to enhance the computational abilities of networks of resource-constrained nodes, and the network traffic that results from that distribution. We devise an application-independent distribution paradigm and a set of load distribution algorithms to allow computationally intensive applications to be collaboratively computed on resource-constrained devices. We then empirically investigate the effect of network traffic information on distribution performance, devising bandwidth-aware task offload mechanisms that combine nodes' computational capabilities with local network conditions, and we examine the impact that making informed offload decisions has on system performance. The highly deployment-specific nature of radio communication means that simulations capable of producing validated, high-quality results are extremely hard to construct. Consequently, to produce meaningful results, our experiments have used empirical analysis based on a network of motes located at UCL, running a variety of I/O-bound, CPU-bound and mixed tasks. Using this setup, we have established that even relatively simple load sharing algorithms can improve performance over a range of different artificially generated scenarios, with more or less timely contextual information. In addition, we have taken a realistic application, based on location estimation, and implemented it across the same network, with results that support the conclusions drawn from the artificially generated traffic.
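A minimal sketch of a bandwidth-aware offload decision of the sort described above: a node compares the cost of running a task locally with the cost of shipping it to a neighbour, combining the neighbour's CPU speed with the currently observed link bandwidth. The cost model and node fields are illustrative assumptions, not the algorithms evaluated on the UCL mote network.

```python
def offload_target(local, neighbours, task_cycles, task_bytes):
    """Decide where to run a task: locally, or on the neighbour with the lowest
    estimated completion time. A neighbour's completion time combines compute
    time (cycles / CPU speed) with the time needed to ship the task's data over
    the currently observed link bandwidth. All fields are hypothetical."""
    def local_cost(node):
        return task_cycles / node["cpu_hz"]

    def remote_cost(node):
        transfer_s = (task_bytes * 8) / max(node["bandwidth_bps"], 1)
        return transfer_s + task_cycles / node["cpu_hz"]

    best = min(neighbours, key=remote_cost, default=None)
    if best is not None and remote_cost(best) < local_cost(local):
        return best["id"]
    return local["id"]

local = {"id": "mote-0", "cpu_hz": 8e6}
neighbours = [
    {"id": "mote-1", "cpu_hz": 16e6, "bandwidth_bps": 60e3},
    {"id": "mote-2", "cpu_hz": 16e6, "bandwidth_bps": 250e3},
]
print(offload_target(local, neighbours, task_cycles=4e6, task_bytes=2_000))  # mote-2
```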
|
243 |
Patch-based models for visual object classes. Aghajanian, J. January 2011
This thesis concerns models for visual object classes that exhibit a reasonable amount of regularity, such as faces, pedestrians, cells and human brains. Such models are useful for making “within-object” inferences such as determining an object's individual characteristics and establishing its identity. For example, the model could be used to predict the identity of a face, the pose of a pedestrian or the phenotype of a cell, or to segment parts of a human brain. Existing object modelling techniques have several limitations. First, most current methods have targeted the above tasks individually using object-specific representations; therefore, they cannot be applied to other problems without major alterations. Second, most methods have been designed to work with small databases which do not contain the variations in pose, illumination, occlusion and background clutter seen in ‘real world’ images. Consequently, many existing algorithms fail when tested on unconstrained databases. Finally, the complexity of the training procedure in these methods makes it impractical to use large datasets. In this thesis, we investigate patch-based models for object classes. Our models are capable of exploiting very large databases of objects captured in uncontrolled environments. We represent the test image with a regular grid of patches from a library of images of the same object class. All the domain-specific information is held in this library: we use one set of images of the object class to help draw inferences about others. In each experimental chapter we investigate a different within-object inference task. In particular, we develop models for classification, regression, semantic segmentation and identity recognition. In each task, we achieve results that are comparable to or better than the state of the art. We conclude that the patch-based representation can be successfully used for the above tasks and shows promise for other applications such as generation and localization.
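The representation step can be sketched as follows: the test image is divided into a regular grid of patches, and each patch is matched against a library of patches drawn from other images of the same class. This is only an illustrative nearest-patch sketch; the thesis builds probabilistic within-object inference on top of such a representation.

```python
import numpy as np

def best_library_patches(test_image, library_patches, patch=8):
    """Divide the test image into a regular grid of patch x patch blocks and,
    for each block, record the index of the closest patch (by sum of squared
    differences) in a library of flattened patches taken from other images of
    the same object class. This is only the representation step."""
    h, w = test_image.shape
    grid = {}
    for y in range(0, h - h % patch, patch):
        for x in range(0, w - w % patch, patch):
            block = test_image[y:y + patch, x:x + patch].ravel()
            dists = ((library_patches - block) ** 2).sum(axis=1)
            grid[(y // patch, x // patch)] = int(np.argmin(dists))
    return grid

rng = np.random.default_rng(0)
image = rng.random((32, 32))
library = rng.random((500, 64))                     # 500 candidate 8x8 patches
print(len(best_library_patches(image, library)))    # 16 grid cells
```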
|
244 |
Innovative boundary integral and hybrid methods for diffuse optical imaging. Elisee, J. P. January 2011
Diffuse Optical Imaging (DOI), the study of the propagation of near-infrared (NIR) light in biological media, is an emerging method in medical imaging. State-of-the-art DOI is non-invasive, versatile and reasonably inexpensive. In Diffuse Optical Tomography (DOT), the adaptation of numerical methods such as the Finite Element Method (FEM) and, more recently, the Boundary Element Method (BEM) has allowed the treatment of complex problems, even for in vivo functional three-dimensional imaging. This work is the first attempt to combine these two methods in DOT. The BEM-FEM is designed to tackle problems involving layered turbid media. It focuses on the region of interest by restricting the reconstruction to it; all other regions are treated as piecewise-constant in a surface-integral approach. We validated the model in concentric spheres and found that it compared well with an analytical result. We then performed functional imaging of the neonate's motor cortex in vivo, in a reconstruction restricted to the brain, with both FEM and BEM-FEM. Another use of the BEM in DOI is also outlined. NIR Spectroscopy (NIRS) devices are used in particular for brain monitoring and Diffuse Optical Cortical Mapping (DOCM). Unfortunately, they are very often accompanied by rudimentary analysis of the data, and the three-dimensional nature of the problem is missed. The BEM DOCM developed in the current work represents an improvement, especially since a topographical representation of a motor activation in the cortex is clearly reconstructed in vivo. In the interest of computational speed, an acceleration technique for the BEM has been developed. The Fast Multipole Method (FMM), which is based on the decomposition of the Green's function on a basis of Bessel and Hankel functions, speeds up the evaluation of the BEM matrix and the calculation of the solutions.
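A standard form of the Bessel and Hankel decomposition that the FMM builds on, written here for a three-dimensional Helmholtz-type kernel with |x| > |y|, is given below as background; it is a textbook expansion, not a formula quoted from the thesis.

```latex
\frac{e^{ik|\mathbf{x}-\mathbf{y}|}}{4\pi|\mathbf{x}-\mathbf{y}|}
  = \frac{ik}{4\pi}\sum_{n=0}^{\infty} (2n+1)\,
    j_n\!\left(k|\mathbf{y}|\right)\, h_n^{(1)}\!\left(k|\mathbf{x}|\right)\,
    P_n\!\left(\hat{\mathbf{x}}\cdot\hat{\mathbf{y}}\right),
  \qquad |\mathbf{x}| > |\mathbf{y}|,
```

where j_n and h_n^{(1)} are spherical Bessel and Hankel functions and P_n are Legendre polynomials. Truncating this series and translating the expansions between clusters of boundary elements is what makes the fast evaluation of the BEM matrix possible.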
|
245 |
Cost-effective resource management for distributed computing. Mohd Nazir, M. A. N. January 2011
Current distributed computing and resource management infrastructures (e.g., Cluster and Grid) suffer from a wide variety of problems related to resource management, including scalability bottlenecks, resource allocation delays, limited quality-of-service (QoS) support, and a lack of cost-aware and service level agreement (SLA) mechanisms. This thesis addresses these issues by presenting a cost-effective resource management solution which introduces the possibility of managing geographically distributed resources in resource units that are under the control of a Virtual Authority (VA). A VA is a collection of resources controlled, but not necessarily owned, by a group of users or an authority representing a group of users. It leverages the fact that different resources in disparate locations will have varying usage levels. By creating smaller divisions of resources called VAs, users are given the opportunity to choose between a variety of cost models, and each VA can rent resources from resource providers when necessary, or can potentially rent out its own resources when underloaded. Resource management is simplified since the user and the owner of a resource recognize only the VA, because all permissions and charges are associated directly with the VA. The VA is controlled by a 'rental' policy which is supported by a pool of resources that the system may rent from external resource providers. As far as scheduling is concerned, the VA is independent of competitors and can instead concentrate on managing its own resources. As a result, the VA offers scalable resource management with minimal infrastructure and operating costs. We demonstrate the feasibility of the VA through a practical implementation of a prototype system and illustrate its quantitative advantages through extensive simulations. We then perform a cost-benefit analysis of current distributed resource infrastructures to demonstrate the potential cost benefit of such a VA system, and propose a costing model for evaluating the cost effectiveness of the VA approach, using an economic approach that captures the revenues generated from applications and the expenses incurred from renting resources. Based on our costing methodology, we present rental policies that can potentially offer effective mechanisms for running distributed and parallel applications without a heavy upfront investment and without the cost of maintaining idle resources. Using real workload trace data, we test the effectiveness of our proposed rental approaches. Finally, we propose an extension to the VA framework that promotes long-term negotiations and rentals based on service level agreements or long-term contracts. Based on the extended framework, we present new SLA-aware policies and evaluate them using real workload traces to demonstrate their effectiveness in improving rental decisions.
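A toy sketch of a cost-aware rental decision in the spirit of the VA's costing model: rent external resources only if the revenue generated by the applications they would run exceeds the rental expense. The job fields, greedy packing and price are illustrative assumptions, not the rental policies evaluated in the thesis.

```python
def should_rent(queued_jobs, rental_price_per_hour, hours):
    """Decide whether renting 'hours' of external capacity pays off: pack the
    most profitable queued jobs (hypothetical 'revenue' and 'cpu_hours' fields)
    into that capacity and compare their revenue with the rental cost."""
    revenue, used = 0.0, 0.0
    for job in sorted(queued_jobs, key=lambda j: j["revenue"] / j["cpu_hours"], reverse=True):
        if used + job["cpu_hours"] <= hours:
            revenue += job["revenue"]
            used += job["cpu_hours"]
    cost = rental_price_per_hour * hours
    return revenue > cost, revenue - cost

jobs = [{"revenue": 4.0, "cpu_hours": 2.0}, {"revenue": 1.0, "cpu_hours": 3.0}]
print(should_rent(jobs, rental_price_per_hour=0.5, hours=4))  # (True, 2.0)
```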
|
246 |
Power efficient communications in low power ad hoc radio networks. Greenhalgh, A. P. January 2012
In this thesis we investigate the feasibility of using information overheard by wireless devices to reduce their overall energy consumption for communications. Specifically, we investigate the hypothesis: "It is more efficient in terms of energy consumption to constrain the transmission power based upon a combination of received signal strength with a minimally extended MAC, than to utilise an unchanged MAC and full power". We investigate the hypothesis in the context of an ad hoc wireless network comprising devices that use low power radio systems. We consider two different low power radio systems: a standard 802.11 system and a custom low power radio device from Philips Research Labs. We examine in detail the energy consumption of the Philips low power radio device in its three modes of operation: transmission, reception and idle. From this, we propose a generic framework for power measurement and illustrate the technique with a case study. Specifically, this technique identifies the three modes in a trace of the energy consumption of the low power radio device, and uses this information to accurately extract the consumption figures for the different modes of operation. Using our measurements and the energy consumption parameters for an 802.11 radio device, we examine in simulation the complex behaviour that emerges from an energy-aware system using a simple transmission power control algorithm that exploits overheard MAC-level information to reduce devices' energy consumption. We evaluate this simple algorithm using the two radio systems and show that, in spite of the complexity, energy savings can be obtained using a scheme that takes advantage of overheard information.
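A minimal sketch of the kind of transmission power control the hypothesis describes: the path loss implied by an overheard frame (assuming the minimally extended MAC carries the transmit power that was used) is combined with the receiver sensitivity to pick the lowest power level that should still reach the neighbour. The sensitivity, margin and power levels below are illustrative values, not those of the 802.11 or Philips radios.

```python
def choose_tx_power(overheard_rssi_dbm, tx_power_used_dbm, rx_sensitivity_dbm=-92,
                    margin_db=6, power_levels_dbm=(-10, -5, 0, 5, 10, 15)):
    """Pick the lowest available transmit power that should still reach the
    neighbour. The overheard frame was sent at tx_power_used_dbm (assumed to be
    carried by the extended MAC) and received at overheard_rssi_dbm, so the
    link loss is their difference; add a fade margin on top of sensitivity."""
    path_loss_db = tx_power_used_dbm - overheard_rssi_dbm
    needed_dbm = rx_sensitivity_dbm + path_loss_db + margin_db
    for level in sorted(power_levels_dbm):
        if level >= needed_dbm:
            return level
    return max(power_levels_dbm)   # fall back to full power

# A frame sent at 15 dBm was overheard at -70 dBm -> ~85 dB of path loss.
print(choose_tx_power(overheard_rssi_dbm=-70, tx_power_used_dbm=15))  # 0 dBm
```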
|
247 |
The quality of probabilistic search in unstructured distributed information retrieval systems. Fu, R. January 2012
Searching the Web is critical to the Web's success. However, the frequency of searches, together with the size of the index, means that a single computer cannot cope with the computational load. Consequently, a variety of distributed architectures have been proposed. Commercial search engines, such as Google, usually use an architecture where the index is distributed but centrally managed over a number of disjoint partitions. This centralized architecture has a high capital and operating cost that presents a significant barrier preventing any new competitor from entering the search market. The dominance of a few Web search giants brings concerns about the objectivity of search results and the privacy of the user. A promising solution to eliminate the high cost of entry is to conduct the search on a peer-to-peer (P2P) architecture. Peer-to-peer architectures offer a more geographically dispersed arrangement of machines that are not centrally managed. This has the benefit of not requiring an expensive centralized server facility. However, the lack of centralized management can complicate the communication process, and the storage and computational capabilities of peers may be much less than those of nodes in a commercial search engine. P2P architectures are commonly categorized into two broad classes, structured and unstructured. Structured architectures guarantee that the entire index is searched for a query, but suffer high communication costs during retrieval and maintenance. In comparison, unstructured architectures do not guarantee that the entire index is searched, but require less maintenance cost and are more robust to attacks. In this thesis we study the quality of probabilistic search in an unstructured distributed network, since such a network has the potential for developing a low cost and robust large scale information retrieval system. Search in an unstructured distributed network is a challenge, since a single machine normally can only store a subset of documents, and a query is only sent to a subset of machines, due to limitations on computational and communication resources. Thus, IR systems built on such a network do not guarantee that a query finds the required documents in the collection, and the search has to be probabilistic and non-deterministic. The search quality is measured by a new metric called accuracy, defined as the fraction of documents retrieved by a constrained, probabilistic search compared with those that would have been retrieved by an exhaustive search. We propose a mathematical framework for modeling search in an unstructured distributed network, and present a non-deterministic distributed search architecture called Probably Approximately Correct (PAC) search. We provide formulas to estimate the search quality based on different system parameters, and show that PAC can achieve good performance when using the same amount of resources as a centrally managed, deterministic distributed information retrieval system. We also study the effects of node selection in a centralized PAC architecture. We theoretically and empirically analyze the search performance across query iterations, and show that the search accuracy can be improved by caching well-performing nodes in a centralized PAC architecture. Experiments on a real document collection and query log support our analysis. We then investigate the effects of different document replication policies in a PAC IR system.
We show that the traditional square-root replication policy is not optimum for maximizing accuracy, and give an optimality criterion for accuracy. A non-uniform distribution of documents improves the retrieval performance of popular documents at the expense of less popular documents. To compensate for this, we propose a hybrid replication policy consisting of a combination of uniform and non-uniform distributions. Theoretical and experimental results show that such an arrangement significantly improves the accuracy of less popular documents at the expense of only a small degradation in accuracy averaged over all queries. We finally explore the effects of query caching in the PAC architecture. We empirically analyze the search performance of queries being issued from a query log, and show that the search accuracy can be improved by caching the top-k documents on each node. Simulations on a real document collection and query log support our analysis.
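The accuracy metric, together with a back-of-envelope estimate of the coverage a probabilistic search can expect, can be sketched as follows. The expected-coverage formula assumes each node holds an independent uniform random sample of the collection, which is a simplifying assumption of this sketch rather than a result quoted from the thesis.

```python
def accuracy(probabilistic_results, exhaustive_results, k=None):
    """Accuracy as defined in the thesis: the fraction of the documents an
    exhaustive search would have returned that the constrained probabilistic
    search also returned (optionally restricted to the top k)."""
    truth = set(exhaustive_results[:k] if k else exhaustive_results)
    found = set(probabilistic_results[:k] if k else probabilistic_results)
    return len(truth & found) / len(truth) if truth else 1.0

def expected_coverage(collection_size, docs_per_node, nodes_queried):
    """Probability that any given relevant document sits on at least one
    queried node, assuming each node independently samples the collection
    uniformly at random (an assumption of this sketch)."""
    miss_one = 1.0 - docs_per_node / collection_size
    return 1.0 - miss_one ** nodes_queried

print(accuracy(["d1", "d3", "d9"], ["d1", "d2", "d3", "d4"]))   # 0.5
print(round(expected_coverage(1_000_000, 10_000, 200), 3))       # ~0.866
```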
|
248 |
Doppler-aided single-frequency real-time kinematic satellite positioning in the urban environment. Bahrami, M. January 2011
Real-Time Kinematic (RTK) is one of the most precise Global Navigation Satellite Systems (GNSS) positioning technologies, with which users can obtain centimetre-level relative positioning accuracy in real time. Routinely, expensive and dedicated dual-frequency, geodetic-quality receivers are used to provide RTK positioning. However, a myriad of industrial and engineering applications (e.g., utility services, automated continuous monitoring of ground subsidence and deformation of man-made structures, robotics, intelligent transportation systems, agriculture, etc.) demand small-size, cost-effective and highly accurate GNSS positioning. This encourages the use of easily available low-cost single-frequency receivers with carrier-phase tracking and output capabilities, and hence the potential of those receivers to provide RTK positioning is examined in this thesis. A novel and effective Doppler-aided epoch-by-epoch processing technique is devised and developed to increase single-frequency RTK positioning availability in GPS/GNSS-challenged environments. The technique utilises raw Doppler frequency shift measurements to smooth code pseudoranges and also uses a new integer ambiguity estimation and validation technique. Doppler-smoothing of pseudoranges is motivated by both the continual availability and the centimetre-level precision, even in difficult urban canyons, of receiver-generated raw Doppler frequency shift measurements. The influence of Doppler-smoothed pseudoranges on both the positioning and the carrier-phase integer ambiguity resolution is investigated. It is shown that in urban areas the proposed Doppler-smoothing technique is more robust and effective than traditional carrier-smoothing of pseudoranges (e.g., the Hatch filter). Static and kinematic trials confirm that this technique improves the precision of code-based absolute and relative positioning in urban areas, typically of the order of 40-50%. In the experimental trials carried out, Doppler-smoothing of pseudoranges also demonstrated improvements (close to 15%) in the ambiguity resolution success rate in instantaneous RTK for short baselines (approximately 7 km), where the probability of fixing ambiguities to correct integer values is dominated by the relatively imprecise code pseudoranges. Furthermore, to increase the success rate of integer ambiguity resolution (and hence the RTK availability), a single-frequency, epoch-by-epoch ambiguity resolution technique is introduced. Experimental results suggest that the new technique, combined with Doppler-smoothing, offers a large improvement (> 30%) in fixing the 'correct' integer ambiguities in a single epoch for single-frequency users, compared to conventional ambiguity resolution methods.
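A sketch of Doppler-aided smoothing in the style of a Hatch filter, where the filter is propagated with the range rate implied by the raw Doppler measurement instead of carrier-phase differences. The window length, weights and sign convention are illustrative assumptions, not the thesis's implementation.

```python
def doppler_smooth(pseudoranges, dopplers, dt=1.0, wavelength=0.1903, window=20):
    """Hatch-style smoothing of code pseudoranges, propagated with the
    Doppler-derived range rate: rho_dot ~= -lambda * f_doppler, with the GPS L1
    wavelength ~0.1903 m. Weights, window and sign convention are illustrative."""
    smoothed = [pseudoranges[0]]
    for k in range(1, len(pseudoranges)):
        n = min(k + 1, window)
        # Predict the current range from the previous smoothed value plus the
        # integrated Doppler over the epoch (trapezoidal average of the rates).
        delta = -wavelength * 0.5 * (dopplers[k] + dopplers[k - 1]) * dt
        predicted = smoothed[-1] + delta
        smoothed.append(pseudoranges[k] / n + predicted * (n - 1) / n)
    return smoothed

# Noisy code measurements around a range closing at ~100 m/s.
truth = [20_000_000.0 - 100.0 * t for t in range(5)]
noisy = [r + e for r, e in zip(truth, (1.5, -2.0, 0.8, -1.1, 2.3))]
dopp = [100.0 / 0.1903] * 5   # Hz; sign chosen to match the delta above
print([round(s - t, 2) for s, t in zip(doppler_smooth(noisy, dopp), truth)])
# residual errors shrink as the filter averages out the code noise
```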
|
249 |
Support for flexible and transparent distributed computing. Liu, H. January 2010
Modern distributed computing developed from the traditional supercomputing community, rooted firmly in the culture of batch management. The field has therefore been dominated by queuing-based resource managers and workflow-based job submission environments, in which static resource demands need to be determined and reserved prior to launching executions. This has made it difficult to support resource environments (e.g. Grid, Cloud) where both the available resources and the resource requirements of applications may be dynamic and unpredictable. This thesis introduces a flexible execution model in which the compute capacity can be adapted to fit the needs of applications as they change during execution. Resource provision in this model is based on a fine-grained, self-service approach instead of the traditional one-time, system-level model. The thesis introduces a middleware-based Application Agent (AA) that provides a platform for applications to dynamically interact and negotiate resources with the underlying resource infrastructure. We also consider the issue of transparency, i.e., hiding the provision and management of the distributed environment, which is key to attracting the public to use the technology. The AA not only replaces the user-controlled process of preparing and executing an application with a transparent, software-controlled process, it also hides the complexity of selecting the right resources to ensure execution QoS. This service is provided by an On-line Feedback-based Automatic Resource Configuration (OAC) mechanism cooperating with the flexible execution model. The AA constantly monitors utility-based feedback from the application during execution and is thus able to learn its behaviour and resource characteristics. This allows it to automatically compose the most efficient execution environment on the fly and satisfy any execution requirements defined by users. Two policies are introduced to supervise the information learning and resource tuning in the OAC. The Utility Classification policy classifies hosts according to their historical performance contributions to the application; according to this classification, the AA chooses high-utility hosts and withdraws low-utility hosts to configure an optimum environment. The Desired Processing Power Estimation (DPPE) policy dynamically configures the execution environment according to the estimated desired total processing power needed to satisfy users' execution requirements. Through the introduction of flexibility and transparency, a user is able to run a dynamic or normal distributed application anywhere with optimised execution performance, without managing distributed resources. Based on the standalone model, the thesis further introduces a federated resource negotiation framework as a step towards an autonomous multi-user distributed computing world.
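A toy version of the Utility Classification step: hosts are ranked by their historical utility contribution, the top fraction is kept and the bottom fraction is withdrawn. The utility definition and the thresholds are illustrative assumptions, not the policy as specified in the thesis.

```python
def classify_hosts(history, high_frac=0.25, low_frac=0.25):
    """Rank hosts by their historical utility contribution to the application
    (here, simply the mean of the recorded feedback values per host) and mark
    the top fraction to keep and the bottom fraction to withdraw."""
    scores = {h: sum(v) / len(v) for h, v in history.items() if v}
    ranked = sorted(scores, key=scores.get, reverse=True)
    n_high = max(1, int(len(ranked) * high_frac))
    n_low = max(1, int(len(ranked) * low_frac))
    return {"keep": ranked[:n_high], "withdraw": ranked[-n_low:]}

history = {
    "hostA": [0.9, 0.8, 0.95],
    "hostB": [0.4, 0.5],
    "hostC": [0.1, 0.2, 0.15],
    "hostD": [0.7, 0.65],
}
print(classify_hosts(history))  # keep hostA, withdraw hostC
```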
|
250 |
Evaluating collaborative filtering over time. Lathia, N. K. January 2010
Recommender systems have become essential tools for users to navigate the plethora of content in the online world. Collaborative filtering—a broad term referring to the use of a variety, or combination, of machine learning algorithms operating on user ratings—lies at the heart of recommender systems’ success. These algorithms have been traditionally studied from the point of view of how well they can predict users’ ratings and how precisely they rank content; state of the art approaches are continuously improved in these respects. However, a rift has grown between how filtering algorithms are investigated and how they will operate when deployed in real systems. Deployed systems will continuously be queried for personalised recommendations; in practice, this implies that system administrators will iteratively retrain their algorithms in order to include the latest ratings. Collaborative filtering research does not take this into account: algorithms are improved and compared to each other from a static viewpoint, while they will be ultimately deployed in a dynamic setting. Given this scenario, two new problems emerge: current filtering algorithms are neither (a) designed nor (b) evaluated as algorithms that must account for time. This thesis addresses the divergence between research and practice by examining how collaborative filtering algorithms behave over time. Our contributions include: 1. A fine grained analysis of temporal changes in rating data and user/item similarity graphs that clearly demonstrates how recommender system data is dynamic and constantly changing. 2. A novel methodology and time-based metrics for evaluating collaborative filtering over time, both in terms of accuracy and the diversity of top-N recommendations. 3. A set of hybrid algorithms that improve collaborative filtering in a range of different scenarios. These include temporal-switching algorithms that aim to promote either accuracy or diversity; parameter update methods to improve temporal accuracy; and re-ranking a subset of users’ recommendations in order to increase diversity. 4. A set of temporal monitors that secure collaborative filtering from a wide range of different temporal attacks by flagging anomalous rating patterns. We have implemented and extensively evaluated the above using large-scale sets of user ratings; we further discuss how this novel methodology provides insight into dimensions of recommender systems that were previously unexplored. We conclude that investigating collaborative filtering from a temporal perspective is not only more suitable to the context in which recommender systems are deployed, but also opens a number of future research opportunities.
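The temporal evaluation methodology can be sketched as a walk-forward loop: retrain on all ratings seen so far and measure error only on the ratings that arrive in the next window. The RMSE metric, window length and baseline model below are illustrative, whereas the thesis also defines time-based diversity metrics.

```python
def temporal_evaluation(ratings, train_fn, predict_fn, window_days=7):
    """Walk forward through the rating timestamps, retraining on everything
    seen so far and evaluating on the ratings in the next window. 'ratings'
    is a list of (user, item, value, timestamp_days) tuples."""
    ratings = sorted(ratings, key=lambda r: r[3])
    start, end = ratings[0][3], ratings[-1][3]
    results, t = [], start + window_days
    while t <= end:
        train = [r for r in ratings if r[3] < t]
        test = [r for r in ratings if t <= r[3] < t + window_days]
        if train and test:
            model = train_fn(train)
            errs = [(predict_fn(model, u, i) - v) ** 2 for u, i, v, _ in test]
            results.append((t, (sum(errs) / len(errs)) ** 0.5))
        t += window_days
    return results

def train_fn(train):            # trivial baseline: global mean rating so far
    return sum(v for _, _, v, _ in train) / len(train)

def predict_fn(model, user, item):
    return model

data = [("u1", "i1", 4, 0), ("u2", "i1", 3, 3), ("u1", "i2", 5, 8), ("u2", "i2", 2, 16)]
print(temporal_evaluation(data, train_fn, predict_fn))  # per-window RMSE
```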
|