361

An architecture for mobile communications in hazardous situations and physical disasters

Soulahakis, Alexander January 2007
Hazardous environmental conditions have always been a threat to human lives around the globe. Human society has seen some of its worst disasters caused by accidents, natural phenomena, or even deliberate human action. The existing infrastructure can guarantee that there are hospitals, markets, mass transportation networks, sophisticated communications networks and much more, covering every need from a home user to an enterprise. Unfortunately, this infrastructure has proven unstable in the face of rapid environmental change. Sophisticated networks, as well as the buildings that support them, can become useless in seconds in the event of a natural phenomenon such as an earthquake, fire or flood, or worse, a well-organised terrorist attack. The major problems identified are inadequate network capacity, equipment vulnerable to physical phenomena, and disaster-recovery methodologies that require time and workforce to apply. Modern telecommunication systems are designed cost-effectively, to support as many users as possible using minimum equipment, but they cannot support users in hazardous environments. In response, we present a novel architecture based on a rapidly deployable, infrastructure-independent network. The proposed network can provide mobile subscribers with messaging and voice services in hazardous environments at the time of the event. Similar studies rely on infrastructure, as they require extra hardware to be deployed. The novelty of our research is that we combine 802.11 and GSM to form a rapidly deployable, infrastructure-independent network. The proposed architecture has two modes of operation: messaging only, or voice. This solution benefits from the advantages of a deployable, infrastructure-independent ad hoc network, which can recover quickly from errors and survive in hazardous, dynamic environments. In addition, we benefit from GSM technology by reusing already implemented functions such as encoding/decoding for voice transmission. Combining these two technologies, we can deploy a network that satisfies the challenges mentioned above: 802.11 handles connectivity and data transfer, while GSM is responsible for bit-error correction of voice calls and a number of other functions such as messaging and identification. The proposed architecture has been designed and simulated in order to evaluate the network. The evaluation was separated into two phases, testing the messaging and voice capabilities of the network respectively to investigate their performance. In the evaluation we examine the factors affecting the network in a hazardous environment and compare it to other approaches and similar networks. The results show that the messaging service concept is valid, as the system can operate in hazardous environments. The voice capabilities of the system have been shown to work, but further work is needed to maximise the performance and reliability of the network. The new architecture can form the basis for the next generation of emergency telecommunication services.
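As an illustration of the messaging mode's core mechanism, the sketch below shows duplicate-suppressing message flooding over an ad hoc topology, the basic relay primitive an infrastructure-independent 802.11 network of this kind depends on. This is a minimal sketch, not the thesis's implementation; the Node class and its fields are invented for the example.

```python
# Illustrative only: duplicate-suppressing flooding, the basic relay
# primitive of an infrastructure-independent ad hoc messaging mode.
import itertools

class Node:
    _seq = itertools.count()  # shared sequence counter (hypothetical)

    def __init__(self, node_id):
        self.node_id = node_id
        self.neighbours = []   # Nodes directly reachable over the radio
        self.seen = set()      # (origin, seq) pairs already relayed

    def originate(self, text):
        """Create a new message and flood it into the network."""
        self.receive({"origin": self.node_id, "seq": next(Node._seq), "text": text})

    def receive(self, msg):
        key = (msg["origin"], msg["seq"])
        if key in self.seen:   # drop duplicates so the flood terminates
            return
        self.seen.add(key)
        self.deliver(msg)
        for peer in self.neighbours:  # re-broadcast to everyone in range
            peer.receive(msg)

    def deliver(self, msg):
        print(f"{self.node_id} got '{msg['text']}' from {msg['origin']}")

# Three nodes in a line: a -- b -- c; a's message reaches c via b.
a, b, c = Node("a"), Node("b"), Node("c")
a.neighbours, b.neighbours, c.neighbours = [b], [a, c], [b]
a.originate("medical help needed at grid ref 42")
```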
362

A robust region-adaptive digital image watermarking system

Song, Chunlin January 2012
Digital image watermarking techniques have drawn the attention of researchers and practitioners as a means of protecting copyright in digital images. The technique involves a subset of information-hiding technologies, which work by embedding information into a host image without perceptually altering its appearance. Despite progress in digital image watermarking technology, the main objectives of the majority of research in this area remain improvements in the imperceptibility and robustness of the watermark to attacks. Watermark attacks are often deliberately applied to a watermarked image in order to remove or destroy any watermark signals in the host data; the attack aims to disable the copyright protection that watermarking technology offers. Our research in the area of watermark attacks identified a number of different types, which can be classified into categories including removal attacks, geometry attacks, cryptographic attacks and protocol attacks. Our research also found that both pixel-domain and transform-domain watermarking techniques share similar levels of sensitivity to these attacks. The experiment conducted to analyse the effects of different attacks on watermarked data led us to conclude that each attack affects the high- and low-frequency parts of the watermarked image spectrum differently. Furthermore, the findings showed that the effects of an attack can be alleviated by using a watermark image with a frequency spectrum similar to that of the host image. The results of this experiment led us to a hypothesis that would be proven by applying a watermark embedding technique which takes all of the above phenomena into account. We call this technique 'region-adaptive watermarking'. Region-adaptive watermarking is a novel embedding technique in which the watermark data is embedded in different regions of the host image. The embedding algorithms use discrete wavelet transforms, and a combination of discrete wavelet transforms and singular value decomposition, respectively. The technique derives from the earlier hypothesis that the robustness of a watermarking process can be improved by using watermark data whose frequency spectrum is not too dissimilar to that of the host data. To facilitate this, the technique utilises dual watermarking technologies and embeds parts of the watermark images into selected regions of the host image. Our experiments show that the technique improves the robustness of the watermark data to image-processing and geometric attacks, thus validating the earlier hypothesis. In addition to improving the robustness of the watermark to attacks, we can also show a novel use for the region-adaptive watermarking technique as a means of detecting whether certain types of attack have occurred. This is a unique feature of our watermarking algorithm, which separates it from other state-of-the-art techniques. The watermark detection process uses coefficients derived from the region-adaptive watermarking algorithm in a linear classifier. The experiment conducted to validate this feature shows that, on average, 94.5% of all watermark attacks can be correctly detected and identified.
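To make the embedding idea concrete, the following is a minimal single-band DWT watermarking sketch using NumPy and PyWavelets. It is a generic additive embed into the HL subband, not the author's dual DWT / DWT-SVD region-adaptive algorithm; the function names and the strength parameter alpha are assumptions for illustration.

```python
# A generic one-band DWT embed, not the thesis's region-adaptive scheme.
import numpy as np
import pywt

def embed_watermark(host, mark, alpha=0.05):
    """Embed a +/-1 watermark into the HL subband of a one-level DWT."""
    LL, (LH, HL, HH) = pywt.dwt2(host.astype(float), "haar")
    mark = np.resize(mark, HL.shape)            # fit watermark to subband size
    HL_marked = HL + alpha * mark * np.abs(HL)  # strength scaled by coefficient magnitude
    return pywt.idwt2((LL, (LH, HL_marked, HH)), "haar")

def watermark_residual(original, suspect, alpha=0.05):
    """Recover the embedded signal by comparing HL subbands (non-blind detection)."""
    _, (_, HL_o, _) = pywt.dwt2(original.astype(float), "haar")
    _, (_, HL_s, _) = pywt.dwt2(suspect.astype(float), "haar")
    return (HL_s - HL_o) / (alpha * np.abs(HL_o) + 1e-9)  # correlate vs. candidates

host = np.random.randint(0, 256, (64, 64))
mark = np.sign(np.random.randn(32 * 32))        # +/-1 watermark bits
marked = embed_watermark(host, mark)
```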
363

An architecture to support virtual Concurrent Engineering

Hanneghan, Martin January 1998
No description available.
364

An investigation into autonomic middleware control services to support distributed self-adaptive software

Badr, Nagwa Lotfy January 2003
No description available.
365

The use of computer technology and constructivism to enhance visualisation skills in mathematics education

Malabar, Ian January 2003
No description available.
366

Sessions-based misbehaviour detection framework for wireless mobile ad hoc networks

Fahad, Tarag January 2007
There has been tremendous growth over the past decade in the use of wireless communication. As the cost of wireless access drops, wireless communications could replace wired links in many settings. Today, travelling laptop users access the Internet in a variety of places, including their homes and public spaces such as airports. Mobile wireless ad hoc networks (MANETs) are one such type of wireless network, with many useful applications including wireless sensor networks, now used in many civilian and environmental application areas. In mobile ad hoc networks, nodes act as both routers and terminals; because they lack routing infrastructure, they have to cooperate to communicate. Misbehaviour means deviation from regular routing and forwarding, and is caused by either selfish or malicious nodes. In both cases, the impact of misbehaviour on a MANET proves detrimental, decreasing the performance and fairness of the network and, in the extreme case, resulting in a non-functional network. In this thesis we have addressed the requirements that a node-misbehaviour detection solution for MANETs should satisfy. Existing solutions for detecting node misbehaviour in MANETs were shown to fail to meet all of these requirements. The main direction of our work has been to look for an effective approach that can satisfy them. The result is a novel, low-cost framework entitled the Sessions-based Misbehaviour Detection Framework (SMDF). It consists of three components: the detection component, the decision component and the isolation component. We analysed and evaluated the proposed schemes by simulation. By comparing our results to those of other mechanisms in the literature, we showed that our solution has low cost in terms of communication overhead, the lowest false-positive rate and the highest true-positive detection rate. The evaluation also showed that our solution has a lower energy consumption rate and is scalable. Finally, we present a series of proposals for future research raised by this work, such as tackling detection complications in hybrid ad hoc network environments.
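The sketch below illustrates the detect/decide/isolate pipeline with a toy watchdog-style forwarding-ratio monitor. The thresholds, counters and class names are illustrative assumptions; SMDF's actual detection scheme is defined in the thesis, not by this code.

```python
# Toy illustration of a detect/decide/isolate pipeline; not SMDF itself.
from collections import defaultdict

FORWARD_THRESHOLD = 0.8   # assumed: below this forwarding ratio a node is suspect
MIN_OBSERVATIONS = 20     # assumed: avoid judging a neighbour on too few packets

class MisbehaviourMonitor:
    def __init__(self):
        self.asked = defaultdict(int)      # packets a neighbour was asked to forward
        self.overheard = defaultdict(int)  # packets we overheard it actually forward
        self.isolated = set()

    def observe(self, node, forwarded):
        """Detection: record one forwarding opportunity for a neighbour."""
        self.asked[node] += 1
        if forwarded:
            self.overheard[node] += 1
        self._decide(node)

    def _decide(self, node):
        """Decision: flag persistent under-forwarding."""
        n = self.asked[node]
        if n >= MIN_OBSERVATIONS and self.overheard[node] / n < FORWARD_THRESHOLD:
            self.isolated.add(node)        # isolation: stop routing through it

    def next_hops(self, candidates):
        """Prefer routes that avoid isolated nodes."""
        return [c for c in candidates if c not in self.isolated]
```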
367

DIEGESIS : a multi-agent Digital Interactive Storytelling framework using planning and re-planning techniques

Goudoulakis, E. January 2014
In recent years, the field of Digital Interactive Storytelling (DIS) has become very popular both in academic circles and in the gaming industry, where stories are becoming a unique selling point. Academic research on DIS focuses on the search for techniques that allow the creation of systems that can dynamically generate interesting stories which are non-linear and can change at runtime as a consequence of a player's actions, leading to different story endings. To reach this goal, DIS systems usually employ Artificial Intelligence planning and re-planning algorithms as part of their solution. There is a lack of algorithms created specifically for DIS purposes: most DIS systems use generic algorithms, and they do not usually assess if and why a given algorithm is the best solution for their purposes. Additionally, there is no unified way (e.g. in the form of a selection of metrics) to evaluate such systems and algorithms. To address these issues and to provide new solutions to the DIS field, we performed a review of related DIS systems and algorithms, and based on a critical analysis of that work we designed and implemented a novel multi-agent DIS framework called DIEGESIS, which includes, among other novel aspects, two new DIS-focused planning and re-planning algorithms. To ensure that our framework and its algorithms met the specifications we set, we created a large-scale evaluation scenario modelling the story of Troy, derived from Homer's epic poem the "Iliad", which we used to perform a number of evaluations based on metrics that we chose and consider valuable for the DIS field. This collection of requirements and evaluations could be used in the future by other DIS systems as a unified test-bed for the analysis and evaluation of such systems.
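For readers unfamiliar with the planning machinery such systems build on, here is a minimal STRIPS-style forward-search planner. It uses plain breadth-first search rather than the thesis's DIS-specific algorithms, and the Troy-flavoured facts and actions are invented for illustration.

```python
# Minimal STRIPS-style planner via BFS; not DIEGESIS's algorithms.
from collections import deque

def plan(initial, goal, actions):
    """Find a sequence of actions turning `initial` into a state containing `goal`.

    States are frozensets of facts; each action is a tuple
    (name, preconditions, add_effects, del_effects), all sets of facts.
    """
    start = frozenset(initial)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                 # all goal facts hold
            return steps
        for name, pre, add, dele in actions:
            if pre <= state:              # action applicable
                nxt = frozenset((state - dele) | add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None  # no plan found; a DIS engine would re-plan here

# Tiny Troy-flavoured example (facts and actions invented for illustration).
actions = [
    ("build_horse", {"at_troy"},     {"horse_built"},  set()),
    ("hide_inside", {"horse_built"}, {"inside_walls"}, set()),
    ("open_gates",  {"inside_walls"}, {"gates_open"},  set()),
]
print(plan({"at_troy"}, {"gates_open"}, actions))
# -> ['build_horse', 'hide_inside', 'open_gates']
```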
368

Development of virtual network computing (VNC) environment for networking and enhancing user experience

Al-Malki, Dana Mohammed January 2006
Virtual Network Computing (VNC) is a thin client developed by RealVNC Ltd (formerly of Olivetti Research Ltd / AT&T Labs Cambridge); because it can be used as a collaborative environment, it has been chosen as the basis of this research. The purpose of this thesis is to investigate and develop a VNC-based environment over the network, and to improve the Quality of Experience (QoE) of VNC for networked groups by incorporating videoconferencing with VNC and by enhancing QoE in mobile environments, where the network status is far from ideal and prone to disconnection. This thesis investigates the operation of VNC in different environments and scenarios, such as wireless environments, by examining user and device mobility and ways to sustain a seamless connection while in motion. As part of the study I also surveyed the groups that deploy VNC, such as universities, research groups, laboratories and virtual laboratories. In addition, I identified the features and security measures needed to create a secure VNC environment, by pinpointing the points of strength and weakness in VNC as opposed to popular thin clients and remote-control applications, and by analysing VNC against several security measures. Furthermore, it is reasonable to say that the success of any scheme that attempts to deliver desirable levels of Quality of Service (QoS) for an effective application on the future Internet must be based not only on the progress of technology but on users' requirements. For instance, collaborative environments have not yet reached the desired expectations of their users, since they are not capable of handling unexpected events such as the sudden disconnection of a nomadic user engaged in an ongoing collaborative session, which breaks the social dynamics of the group collaborating in that session. I therefore concluded that knowing the social dynamics of an application's users as a group, and their requirements and expectations of a successful experience, can lead an application designer to exploit technology to autonomously support the initiation and maintenance of social interaction. Moreover, I successfully developed a VNC-based environment for networked groups that facilitates the administration of different remote VNC sessions, along with a prototype that runs videoconferencing in parallel with VNC to improve users' QoE. The last part of the thesis is concerned with designing a framework to improve and assess the QoE of all users in a collaborative environment, especially in the presence of nomadic clients with their frequent disconnections. I designed a conceptual algorithm called Improved Collaborative Quality of Experience (IC-QoE), which aims to eliminate frustration and improve the QoE of users in a collaborative session in the case of disconnections, examined its use and benefits in real-world scenarios such as research teams, and implemented a prototype to present its concepts. Finally, I designed a framework to suggest ways to evaluate this algorithm.
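The disconnection-tolerance idea behind IC-QoE can be sketched as a session manager that buffers a nomadic user's missed updates and replays them on reconnection, so the group session survives the drop. The class and method names below are hypothetical; the thesis specifies IC-QoE as a conceptual algorithm, not as this code.

```python
# Hypothetical sketch of disconnection tolerance; not the IC-QoE algorithm itself.
import time

class CollaborativeSession:
    def __init__(self):
        self.online = {}    # user -> last-seen timestamp
        self.pending = {}   # user -> updates missed while disconnected

    def update(self, sender, payload):
        """Broadcast an update; queue it for anyone currently disconnected."""
        for user in self.pending:
            self.pending[user].append(payload)
        for user in self.online:
            self.deliver(user, payload)

    def disconnect(self, user):
        self.online.pop(user, None)
        self.pending[user] = []            # start buffering for this user

    def reconnect(self, user):
        for payload in self.pending.pop(user, []):
            self.deliver(user, payload)    # replay what was missed
        self.online[user] = time.time()

    def deliver(self, user, payload):
        print(f"-> {user}: {payload}")
```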
369

A distributed, compact routing protocol for the Internet

Jakma, Paul January 2016
The Internet has grown in size at rapid rates since BGP records began, and continues to do so. This has raised concerns about the scalability of the current BGP routing system, as the routing state at each router in a shortest-path routing protocol grows at a supra-linear rate as the network grows. The concerns are that the memory capacity of routers will not be able to keep up with demand, and that the growth of the Internet will become ever more constrained as more and more of the world seeks the benefits of being connected. Compact routing schemes, where the routing state grows only sub-linearly relative to the growth of the network, could solve this problem and ensure that router memory is not a bottleneck to Internet growth. These schemes trade away shortest-path routing for scalable memory state, by allowing some paths a certain amount of bounded "stretch". The most promising such scheme is Cowen Routing, which can provide scalable, compact routing state for Internet routing while still providing shortest-path routing to nearly all other nodes, with only slightly stretched paths to a very small subset of the network. Currently, there is no fully distributed form of Cowen Routing that would be practical for the Internet. This dissertation describes a fully distributed and compact protocol for Cowen Routing, using the k-core graph decomposition. Previous compact routing work showed that the k-core graph decomposition is useful for Cowen Routing on the Internet, but no distributed form existed. This dissertation gives a distributed k-core algorithm optimised to be efficient on dynamic graphs, along with proofs of its correctness. The performance and efficiency of this distributed k-core algorithm are evaluated on large Internet AS graphs, with excellent results. The dissertation then goes on to describe a fully distributed and compact Cowen Routing protocol, comprising: a landmark selection process using the k-core algorithm, with mechanisms to ensure compact state at all times, including at bootstrap; a local cluster routing process, with mechanisms for policy application and control of cluster sizes, again ensuring that state remains compact at all times; and a landmark routing process with a prioritisation mechanism for announcements that ensures compact state at all times.
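For reference, the standard centralised k-core peeling that the thesis's distributed, dynamic algorithm generalises looks like this: repeatedly remove vertices whose residual degree falls below k. The sketch below is only that sequential baseline, not the distributed protocol.

```python
# Sequential k-core peeling baseline; the thesis contributes a distributed version.
def core_numbers(adj):
    """Return the core number of every vertex in an undirected graph.

    `adj` maps vertex -> set of neighbours.
    """
    degree = {v: len(ns) for v, ns in adj.items()}
    core = {}
    remaining = set(adj)
    k = 0
    while remaining:
        # peel every vertex whose residual degree has fallen to k or below
        peel = [v for v in remaining if degree[v] <= k]
        if not peel:
            k += 1          # nothing left to peel at this k; move to the next core
            continue
        for v in peel:
            core[v] = k
            remaining.discard(v)
            for u in adj[v]:
                if u in remaining:
                    degree[u] -= 1
    return core

# A triangle with a pendant vertex: the triangle forms the 2-core.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(core_numbers(adj))   # core numbers: d -> 1; a, b, c -> 2
```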
370

Data structures for SIMD logic simulation

Kabiri Chimeh, Mozhgan January 2016
Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques: using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aims of these experiments are to:
• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains can be made from multicore parallelism (section 5.3.2).
• Show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (section 5.3.5).
To evaluate ZSIM, two types of test circuit were used:
1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators.
2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits, and allowed testing of a range of very large circuits, larger than those for which open source files were available.
The experimental results show that with SIMD acceleration and multicore parallelism, ZSIM gained a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives comparable simulation performance to the IBM Blue Gene supercomputer at very much lower cost. The experimental results have shown that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles and manages the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself; when targeting GPUs, however, explicit cache management in the program increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines, and the same architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.
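The data-structure idea underpinning the SIMD gains can be sketched as a structure-of-arrays gate table evaluated one topological level at a time, with a vectorised gather per input operand. The NumPy sketch below illustrates the layout only; it is not ZSIM, and the encoding (one bit per net, four gate types) is an assumption for the example.

```python
# Structure-of-arrays gate evaluation sketch; illustrative, not ZSIM.
import numpy as np

AND, OR, XOR, NOT = 0, 1, 2, 3

def eval_level(values, op, in_a, in_b):
    """Evaluate one topological level of gates over the net-value array at once.

    values: uint8 array of current net values (one bit per net here; a real
            simulator would pack many test vectors into each machine word).
    op, in_a, in_b: parallel arrays describing the gates of this level.
    """
    a = values[in_a]                      # SIMD-style gather of first inputs
    b = values[in_b]                      # gather of second inputs
    out = np.empty_like(a)
    out[op == AND] = (a & b)[op == AND]   # masked per-type evaluation, no branches
    out[op == OR]  = (a | b)[op == OR]
    out[op == XOR] = (a ^ b)[op == XOR]
    out[op == NOT] = (1 - a)[op == NOT]   # second input ignored for NOT
    return out

# Nets 0..3 are primary inputs; one level computes (0 AND 1), (2 OR 3), (0 XOR 3).
values = np.array([1, 0, 1, 1], dtype=np.uint8)
op   = np.array([AND, OR, XOR])
in_a = np.array([0, 2, 0])
in_b = np.array([1, 3, 3])
print(eval_level(values, op, in_a, in_b))   # [0 1 0]
```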
To conclude, the two main achievements are as follows. The primary achievement of this work was proving that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range previously publicly available, based on prior work showing that the synthesis technique is valid.
