
Dynamic probabilistic routing discovery and broadcast schemes for high mobility ad-hoc networks

Bani Khalaf, Mustafa January 2013 (has links)
Mobile Ad-hoc Networks (MANETs) have lately come to be widely used in everyday applications, and their usability and capability have attracted the interest of both commercial organizations and research communities. The Vehicular Ad-hoc Network (VANET) has recently emerged as a promising application of MANETs; it is designed to offer a high level of safety to drivers in order to minimize the number of road accidents. Broadcast communication in MANETs and VANETs is essential for a wide range of important services, such as propagating safety messages and Route REQuest (RREQ) packets. Routing is one of the most challenging issues in MANETs and VANETs, and it requires highly efficient broadcast schemes. The primitive and widely deployed method of implementing broadcast is simple 'flooding': each node floods the network with the message it has received, in order to guarantee that every other node in the network is reached. Although flooding is simple and reliable, it consumes a great deal of network resources, since it swamps the network with redundant packets, leading to collisions, contention and heavy competition for access to the shared wireless medium. This phenomenon is well known in MANETs and is called the Broadcast Storm Problem. The first contribution of this thesis is the design and development of an efficient distributed route discovery scheme, implemented on a probabilistic basis, to suppress the broadcast storm problem. The proposed scheme, called the Probabilistic Distributed Route Discovery scheme (PDRD), prioritizes the routing operation at each node with respect to different network parameters such as the number of duplicate packets and the local and global network density. The performance of PDRD has been examined in MANETs in terms of a number of important metrics, such as the number of RREQ rebroadcasts and RREQ collisions.
Experimental results confirm the superiority of the proposed scheme over its counterparts, including the Hybrid Probabilistic-Based Counter (HPC) scheme and the Simple Flooding (SF) scheme. The second contribution of this thesis is to tackle the problem of frequent link breakages in MANETs. High-mobility nodes often suffer frequent link breakages, which potentially leads to re-discovery of the same routes. Although different probabilistic solutions have been suggested to optimize routing in MANETs, to the best of our knowledge they have not focused on the problem of frequent link breakages and link stability. Unlike other existing probabilistic solutions, this thesis proposes a new Velocity Aware-Probabilistic (VAP) route discovery scheme, which can exclude unstable nodes from constructing routes between source and destination. The main idea behind the proposed scheme is to use velocity vector information to distinguish stable from unstable nodes; a proper rebroadcast probability and timer are then set dynamically according to node stability. Simulation results confirm that the new scheme performs much better in terms of end-to-end delay, RREQ rebroadcast number and link stability. Routing in VANETs is particularly critical and challenging in terms of the number of broken links and packet overheads, mainly because of the high speed of vehicles and their differing directions of movement. A large number of routing protocols, such as Ad-hoc On-demand Distance Vector (AODV) and Dynamic Source Routing (DSR), have been proposed to deal with routing in MANETs. However, these protocols are not efficient and cannot be applied directly in the VANET context because of its different characteristics. Finally, this thesis proposes new probabilistic and timer-based probabilistic routing schemes to improve routing in VANETs. The main aim of the proposed schemes is to set up the most stable routes so as to avoid any possible link breakage.
These schemes also enhance the overall network performance by suppressing the broadcast storm problem, which occurs during the route discovery process. They also make the AODV protocol suitable and applicable for VANETs. Simulation results show the benefit of the new routing schemes in terms of a number of metrics such as RREQ rebroadcast number, link stability and end-to-end delay.
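The probabilistic suppression idea at the heart of schemes like PDRD can be sketched as follows. This is a minimal illustration only: the weighting of the duplicate count and neighbourhood density, and the probability floor, are assumptions for the sketch, not the thesis's actual PDRD formula.

```python
import random

def rebroadcast_probability(duplicates, local_density, avg_density):
    """Probability that a node rebroadcasts an incoming RREQ.

    More overheard duplicates and a denser-than-average neighbourhood
    both suggest the packet is already well covered, so the probability
    drops; a small floor keeps sparse regions forwarding.
    """
    base = 1.0 / (1 + duplicates)
    density_factor = min(1.0, avg_density / max(local_density, 1))
    return max(0.1, base * density_factor)

def should_rebroadcast(duplicates, local_density, avg_density, rng=random):
    """Make the randomised forwarding decision for one received RREQ."""
    return rng.random() < rebroadcast_probability(duplicates, local_density, avg_density)
```

A node that has heard no duplicates in an average-density neighbourhood forwards with probability 1, while one that has heard several duplicates in a dense cluster forwards only rarely, which is the behaviour that suppresses the broadcast storm.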

Binary matrix for pedestrian tracking in infrared images

Grama, Keshava January 2013 (has links)
The primary goal of this thesis is to present a robust, low-computational-cost pedestrian tracking system for use with thermal infra-red images. Pedestrian tracking comprises two distinct image analysis tasks: pedestrian detection and path tracking. This thesis focuses on benchmarking existing pedestrian tracking systems and using this benchmark to evaluate the proposed pedestrian detection and path tracking algorithms. The first part of the thesis describes the imaging system and the image dataset collected for evaluating pedestrian detection and tracking algorithms. The texture content of the images from the imaging system is evaluated using Fourier maps; following this, the locations at which the dataset was collected are described. The second part of the thesis focuses on the detection and tracking system. To evaluate the performance of the tracking system, a time-per-target metric is described and shown to work with existing tracking systems. A new pedestrian detection algorithm is proposed, based on a binary matrix dynamically constrained using potential target edges and the expected aspect ratio of pedestrians. Results show that the proposed algorithm is effective at detecting pedestrians in infrared images while being less resource-intensive than existing algorithms. The proposed tracking system uses deformable, dynamically updated codebook templates to track pedestrians in an infrared image sequence. Results show that this tracker performs as well as existing tracking systems in terms of accuracy, but requires fewer resources.
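The aspect-ratio filtering idea behind such a detector can be sketched as follows. The intensity threshold, the ratio bounds and the simple flood-fill labelling are illustrative assumptions, not the thesis's exact binary-matrix construction.

```python
import numpy as np

def detect_pedestrians(ir_frame, threshold=200, min_ratio=1.5, max_ratio=4.0):
    """Threshold a thermal frame into a binary matrix, label connected
    hot regions with a flood fill, and keep only regions whose
    height/width ratio looks like an upright pedestrian."""
    binary = ir_frame >= threshold
    visited = np.zeros_like(binary, dtype=bool)
    boxes = []
    rows, cols = binary.shape
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] and not visited[r, c]:
                # Flood-fill one connected hot region, tracking its bounding box.
                stack = [(r, c)]
                visited[r, c] = True
                rmin = rmax = r
                cmin = cmax = c
                while stack:
                    y, x = stack.pop()
                    rmin, rmax = min(rmin, y), max(rmax, y)
                    cmin, cmax = min(cmin, x), max(cmax, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and binary[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                h, w = rmax - rmin + 1, cmax - cmin + 1
                if min_ratio <= h / w <= max_ratio:  # taller than wide: keep
                    boxes.append((rmin, cmin, rmax, cmax))
    return boxes
```

A tall hot region (e.g. 6 pixels high by 3 wide) passes the ratio test, while a wide one (3 by 6) is rejected, which is how the aspect-ratio constraint discards vehicles and other non-pedestrian heat sources cheaply.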

Correlation of affiliate performance against web evaluation metrics

Miehling, Mathew J. January 2014 (has links)
Affiliate advertising is changing the way that people do business online. Retailers now offer incentives to third-party publishers for advertising goods and services on their behalf in order to capture more of the market. Online advertising spending has already overtaken that of traditional advertising in all other channels in the UK and is slated to do so worldwide as well [1]. In this highly competitive industry, the livelihood of a publisher is intrinsically linked to their web site's performance. Understanding the strengths and weaknesses of a web site is fundamental to improving its quality and performance. However, the definition of performance may vary between different business sectors, or even between different sites in the same sector. In the affiliate advertising industry, the measure of performance is generally linked to the fulfilment of advertising campaign goals, which often equates to the ability to generate revenue or brand awareness for the retailer. This thesis explores the correlation of web site evaluation metrics with the business performance of a company within an affiliate advertising programme. To explore this correlation, an automated evaluation framework was built to examine a set of web sites from an active online advertising campaign. A purpose-built web crawler examined over 4,000 sites from the advertising campaign in approximately 260 hours, gathering data to be used in the examination of URL similarity, URL relevance, search engine visibility, broken links, broken images and presence on a blacklist. The gathered data was used to calculate a score for each of the features, which were then combined to create an overall HealthScore for each publisher. The evaluated metrics focus on the categories of domain and content analysis. From the performance data available, it was possible to calculate the business performance of the 234 active publishers using the number of sales and click-throughs they achieved.
When the HealthScores and performance data were compared, the HealthScore was able to predict the publisher's performance with 59% accuracy.
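Combining per-feature scores into a single HealthScore can be sketched as follows. The feature names and the equal-weight default are assumptions for the sketch; the thesis's actual weighting is not reproduced here.

```python
def health_score(features, weights=None):
    """Combine per-feature scores (each normalised to [0, 1]) into one
    HealthScore via a weighted mean.

    `features` maps feature names (e.g. 'broken_links', 'seo_visibility')
    to scores; equal weights are used unless `weights` is supplied.
    """
    if weights is None:
        weights = {name: 1.0 for name in features}
    total = sum(weights[name] for name in features)
    return sum(features[name] * weights[name] for name in features) / total
```

The resulting scalar can then be rank-correlated against each publisher's sales and click-through figures, which is the comparison the 59% accuracy figure summarises.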

An exploration of professional identity in the information technology sector

Smith, Sally January 2016 (has links)
At present the Information Technology profession appears to be dogged by high-profile project failure, high graduate unemployment rates, employers unable to recruit suitable staff and a professional body under attack. It is not even clear that IT can be considered a profession when compared with other occupational groups in which professional bodies regulate entry and employers demand professional status from their employees. There are some advantages in belonging to a recognised profession, including external recognition and status, and, consequently, disadvantages in not belonging. To find out more about the nature of professional identity as experienced in the workplace, this study was designed to explore how IT professionals in leadership roles self-identify. Professional identity is defined as a coherent self-conception based on skills, abilities, experiences and identification with a profession. The underlying identity theories accept a complex picture of multiple identities, with identity commitment and salience affecting behaviour in different contexts. This study explored the nature of professional identity construction and adaptation for experienced IT professionals. As a previously unexplored group in a relatively new profession, the life narrative technique was used to identify factors in the construction and adaptation of identity, with insights drawn over the course of a working life. The findings revealed that participants constructed organisational, technical skills-based and leadership identities, but there was little identification with the IT profession, as would have been in evidence, for example, through membership of the British Computer Society or developmental interactions with prototypical IT professionals. Analysis of the data uncovered mechanisms which could explain the lack of identification with the IT profession, including the rate of technological change and an underpowered professional body.
The findings were evaluated and a set of recommendations for stakeholders in a strong and stable IT sector was framed, including encouraging employers to endorse chartered status and careful consideration of the ongoing review of computing course accreditation.

Logic synthesis and optimisation using Reed-Muller expansions

McKenzie, Lynn Mhairi January 1995 (has links)
This thesis presents techniques and algorithms which may be employed to represent, generate and optimise particular categories of Exclusive-OR Sum-Of-Products (ESOP) forms. The work documented herein concentrates on two types of Reed-Muller (RM) expressions, namely, Fixed Polarity Reed-Muller (FPRM) expansions and KROnecker (KRO) expansions (a category of mixed polarity RM expansions). Initially, the theory of switching functions is comprehensively reviewed. This includes descriptions of various types of RM expansion and ESOP forms. The structure of Binary Decision Diagrams (BDDs) and Reed-Muller Universal Logic Module (RM-ULM) networks are also examined. Heuristic algorithms for deriving optimal (sub-optimal) FPRM expansions of Boolean functions are described. These algorithms are improved forms of an existing tabular technique [1]. Results are presented which illustrate the performance of these new minimisation methods when evaluated against selected existing techniques. An algorithm which may be employed to generate FPRM expansions from incompletely specified Boolean functions is also described. This technique introduces a means of determining the optimum allocation of the Boolean 'don't care' terms so as to derive equivalent minimal FPRM expansions. The tabular technique [1] is extended to allow the representation of KRO expansions. This new method may be employed to generate KRO expansions from either an initial incompletely specified Boolean function or a KRO expansion of different polarity. Additionally, it may be necessary to derive KRO expressions from Boolean Sum-Of-Products (SOP) forms where the product terms are not minterms. A technique is described which forms KRO expansions from disjoint SOP forms without first expanding the SOP expressions to minterm forms. Reed-Muller Binary Decision Diagrams (RMBDDs) are introduced as a graphical means of representing FPRM expansions. RMBDDs are analogous to the BDDs used to represent Boolean functions.
Rules are detailed which allow the efficient representation of the initial FPRM expansions, and an algorithm is presented which may be employed to determine an optimum (sub-optimum) variable ordering for the RMBDDs. The implementation of RMBDDs as RM-ULM networks is also examined. This thesis concludes with a review of the algorithms and techniques developed during this research project. The value of these methods is discussed and suggestions are made as to how improved results could have been obtained. Additionally, areas for future work are proposed.
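The expansions reviewed above can be illustrated with the standard positive-polarity Reed-Muller transform, computed with a fast butterfly over the truth table. This is textbook material rather than the thesis's tabular technique; a fixed-polarity expansion for any other polarity can be obtained by first complementing the chosen input variables (i.e. permuting the truth table).

```python
def reed_muller_coefficients(truth_table):
    """Positive-polarity Reed-Muller (PPRM) transform.

    Maps a Boolean truth table of length 2**n (entries 0/1, index bits
    encoding the inputs) to the coefficient vector of its XOR
    sum-of-products over uncomplemented variables. The in-place XOR
    butterfly mirrors a fast transform and is its own inverse.
    """
    c = list(truth_table)
    n = len(c)
    step = 1
    while step < n:
        for start in range(0, n, 2 * step):
            for i in range(start, start + step):
                c[i + step] ^= c[i]  # fold lower half into upper half
        step *= 2
    return c
```

For example, XOR of two variables ([0, 1, 1, 0]) yields coefficients for x0 and x1 only, while AND ([0, 0, 0, 1]) yields the single product term x0·x1, matching the familiar expansions f = x0 XOR x1 and f = x0x1.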

Solving vehicle routing problems using multiple ant colonies and deterministic approaches

Sa'adah, Samer January 2007 (has links)
In the vehicle routing problem with time windows (VRPTW), there are two main objectives: the primary objective is to reduce the number of vehicles; the secondary one is to minimise the total distance travelled by all vehicles. This thesis describes experiments with multiple ant colony and deterministic approaches. It starts by explaining how a double ant colony system (DACS) with two colonies gains an advantage from using pheromone trails and the XCHNG local search, and from its ability to tackle multiple-objective problems like the VRPTW. It also shows how such a DACS system lacks vital components that would make its performance comparable and competitive with that of the well-known VRPTW algorithms. The inclusion of components such as a triple-move local search, a push-forward and push-backward strategy (PFPBS), a hybrid local search (HLS) and a variant of the 2-Opt move therefore improves the results very significantly compared to not using them. Furthermore, the thesis draws attention to an interesting discovery: if a DACS system uses ants that are more deterministic, that system can outperform another DACS system with pheromone ants. This discovery motivated the author to investigate a number of SI1-like deterministic approaches, most of which depend on capturing under-constrained tours and removing them with a removal heuristic that uses the hybrid local search (HLS). Some of these SI1-like approaches show signs of improving the average, best-case and worst-case performance of DACS systems on some problem sets when merged with such systems. This casts some doubt on whether the use of pheromone trails, a distinctive feature of multiple ant-colony systems in the research literature, is really as advantageous as is sometimes claimed.
Experiments are conducted on the 176 problem instances with 100, 200 and 400 customers from Solomon [1] and Gehring and Homberger [2] and [3]. The results shown in this thesis are comparable and competitive with those obtained by other state-of-the-art approaches.
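The contrast between pheromone-driven and more deterministic ants can be sketched with the standard ACS pseudo-random-proportional choice rule: raising the exploitation parameter q0 makes ants more deterministic. The parameter names and values here are illustrative, not those used in the thesis.

```python
import random

def choose_next(current, unvisited, pheromone, distance,
                alpha=1.0, beta=2.0, q0=0.9, rng=random):
    """ACS-style next-customer choice.

    With probability q0 the ant greedily picks the best-scoring customer
    (deterministic behaviour); otherwise it samples proportionally to
    pheromone**alpha * (1/distance)**beta (pheromone-driven behaviour).
    `pheromone` and `distance` are nested dicts keyed by node pairs.
    """
    scores = {j: (pheromone[current][j] ** alpha) *
                 ((1.0 / max(distance[current][j], 1e-9)) ** beta)
              for j in unvisited}
    if rng.random() < q0:
        return max(scores, key=scores.get)     # exploit: greedy choice
    total = sum(scores.values())
    r = rng.uniform(0, total)                  # explore: roulette wheel
    acc = 0.0
    for j, s in scores.items():
        acc += s
        if acc >= r:
            return j
    return j
```

Setting q0 = 1.0 removes the roulette-wheel branch entirely, giving fully deterministic ants; sweeping q0 from low to high is one simple way to probe the thesis's observation that more deterministic ants can outperform pheromone ants.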

Analysis and optimization of data storage using enhanced object models in the .NET framework

Tandon, Ashish January 2007 (has links)
The purpose of this thesis is to benchmark and analyse database performance using Microsoft COM+, one of the most commonly used frameworks for developing component-based applications. A prototype application, written in Microsoft Visual C#, was used to benchmark database performance on the Microsoft .NET Framework versions 2.0 and 3.0, using data volumes ranging from low (100 rows) to high (10,000 rows) with five or ten concurrent user connections. Different application types (COM+, non-COM+ and plain .NET) were compared across the different data volumes and user counts on .NET Framework 2.0 and 3.0. Results were collected and analysed using operating-system performance counters together with Microsoft .NET class libraries, which also help in collecting system-level performance information. These results can help developers, stakeholders and management decide on the right technology to use in conjunction with a database. The experiments conducted in this project show a substantial gain in the performance, scalability and availability of component-based applications that use Microsoft COM+ features such as object pooling, application pooling, role-based security, transaction isolation and constructor enabling. The outcome of this project is that a Microsoft COM+ component-based application provides optimised database performance when using SQL Server: there is a performance gain of at least 10% in the COM+ application compared to the non-COM+ application. COM+ service features, however, come at a performance penalty: performance differences of around 15%, 20% and 35% were observed between the plain COM+ application and applications using role-based security, constructor enabling and transaction isolation respectively.
The COM+ application shows a performance gain of around 15% and 45% on low and medium data volumes on .NET Framework 2.0 in comparison to 3.0, while the COM+ server application on .NET Framework 3.0 shows a significant gain of around 10% on a high data volume. This suggests that high-volume applications work better with Framework 3.0 than with 2.0 on SQL Server. The application-type results show that the COM+ component-based application outperforms the non-COM+ and plain .NET applications: the performance difference in favour of the COM+ application on low and medium data volumes was around 20% and 30%, while the plain .NET application performed best on high data volumes, with a gain of around 10%. Broadly similar results were obtained in tests conducted on MS Access, where the COM+ application running under .NET Framework 2.0 performed best against the non-COM+ and plain .NET applications on low and medium data volumes, and the .NET Framework 3.0 COM+ application performed best on high data volumes.
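The shape of such a volume-based benchmark can be sketched as follows. SQLite stands in for SQL Server and MS Access so the sketch runs anywhere, and the row counts mirror the thesis's low-to-high volumes; absolute timings are of course not comparable to the thesis results.

```python
import sqlite3
import time

def benchmark_inserts(row_counts=(100, 1000, 10000)):
    """Time a bulk insert and verify a full-table count at each data
    volume, returning {rows: {"seconds": elapsed, "rows": count}}."""
    results = {}
    for n in row_counts:
        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
        rows = [(i, "x" * 64) for i in range(n)]
        t0 = time.perf_counter()
        con.executemany("INSERT INTO t VALUES (?, ?)", rows)
        con.commit()
        fetched = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
        results[n] = {"seconds": time.perf_counter() - t0, "rows": fetched}
        con.close()
    return results
```

Running the same harness against each application type and framework version, and comparing elapsed times per volume, is the structure behind percentage comparisons like those reported above.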

Managing complex taxonomic data in an object-oriented database

Raguenaud, Cedric January 2002 (has links)
This thesis addresses the problem of multiple overlapping classifications in object-oriented databases through the example of plant taxonomy. These multiple overlapping classifications are independent simple classifications that share information (nodes and leaves), therefore overlap. Plant taxonomy was chosen as the motivational application domain because taxonomic classifications are especially complex and have changed over long periods of time, therefore overlap in a significant manner. This work extracts basic requirements for the support of multiple overlapping classifications in general, and in the context of plant taxonomy in particular. These requirements form the basis on which a prototype is defined and built. The prototype, an extended object-oriented database, is extended from an object-oriented model based on ODMG through the provision of a relationship management mechanism. These relationships form the main feature used to build classifications. This emphasis on relationships allows the description of classifications orthogonal to the classified data (for reuse and integration of the mechanism with existing databases and for classification of non co-operating data), and allows an easier and more powerful management of semantic data (both within and without a classification). Additional mechanisms such as integrity constraints are investigated and implemented. Finally, the implementation of the prototype is presented and is evaluated, from the point of view of both usability and expressiveness (using plant taxonomy as an application), and its performance as a database system. This evaluation shows that the prototype meets the needs of taxonomists.
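The relationship-centric idea, classifications described by relationship objects kept orthogonal to the shared taxon data, can be sketched as follows. The class and method names are illustrative assumptions, not the prototype's actual ODMG extension.

```python
class Node:
    """A taxon (node or leaf) that can be shared by many classifications."""
    def __init__(self, name):
        self.name = name

class Classification:
    """A named hierarchy described purely by parent-of relationships,
    so the same Node can occupy different positions in different,
    overlapping classifications without being duplicated."""
    def __init__(self, name):
        self.name = name
        self.parent_of = {}  # child Node -> parent Node

    def add_child(self, parent, child):
        self.parent_of[child] = parent

    def ancestors(self, node):
        """Walk the relationship chain from node up to this
        classification's root."""
        out = []
        while node in self.parent_of:
            node = self.parent_of[node]
            out.append(node)
        return out
```

Two classifications can then share the same leaf while assigning it different parents, which is precisely the overlap the prototype has to represent: the relationships live in each Classification, never in the shared Node.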

The home workshop : a method for investigating the home

Baillie, Lynne January 2002 (has links)
No description available.

Case-based reasoning and evolutionary computation techniques for FPGA programming

Job, Dominic Edward January 2001 (has links)
A problem in Software Reuse (SR) is finding a software component appropriate to a given requirement. At present this is done by manually browsing through large libraries, which is very time-consuming and therefore expensive. Further, if the component is not the same as, but merely similar to, a requirement, the component must be adapted to meet the requirement. This browsing and adaptation require a skilled user who can comprehend library entries and foresee their application. It is expensive to train users and to produce these documented libraries. The specialised software design domain chosen in this thesis is that of Field Programmable Gate Array (FPGA) programs. FPGAs are user-programmable microchips that have many applications, including encryption and control. This thesis is concerned with a specific technique for FPGA programming that uses Evolutionary Computing (EC) techniques to synthesize FPGA programs. EC techniques are based on natural systems such as the life cycle of living organisms or the formation of crystalline structures, and can generate solutions to problems without the need for a complete understanding of the problem. EC has been used to create software programs, and can be used as a knowledge-lean approach for generating libraries of software solutions. However, EC techniques produce solutions without documentation, and to automate SR it has been shown to be essential to understand the knowledge in the software library. In this thesis, techniques for automatically documenting EC-produced solutions are illustrated. It is also helpful to understand the principles at work in the reuse process. On examination of large collections of evolved programs, it is shown that these programs contain reusable modules. Further, it is shown that by studying series of similar software components, principles of scale can be deduced.
Case Based Reasoning (CBR) is a problem solving method that reuses old solutions to solve new problems and is an effective method of automatically reusing software libraries. These techniques enable automated creation, documentation and reuse of a software library. This thesis proposes that CBR is a feasible method for the reuse of EC designed FPGA programs. It is shown that EC synthesised FPGA programs can be documented, reused, and adapted to solve new problems, using automated CBR techniques.
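The retrieve step of such a CBR loop can be sketched as follows. The case representation (feature sets) and the Jaccard similarity are illustrative assumptions; the thesis's actual case structure and similarity measure for evolved FPGA programs are not reproduced here.

```python
def jaccard(a, b):
    """Feature-overlap similarity between two requirement/spec sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def retrieve(case_library, requirement, similarity=jaccard):
    """Return the stored case whose spec is most similar to the new
    requirement; the retrieved solution is then adapted and, if it
    works, stored back as a new case (the rest of the CBR cycle)."""
    return max(case_library, key=lambda case: similarity(case["spec"], requirement))
```

Given a library of evolved components tagged with their documented features, a new requirement retrieves the closest prior solution as the starting point for adaptation, which is the automation the thesis argues replaces manual library browsing.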
