411

Representation and decision making in the immune system

McEwan, Chris January 2010 (has links)
The immune system has long been attributed cognitive capacities such as "recognition" of pathogenic agents; "memory" of previous infections; "regulation" of a cavalry of detector and effector cells; and "adaptation" to a changing environment and evolving threats. Ostensibly, in preventing disease the immune system must be capable of discriminating states of pathology in the organism; identifying causal agents or "pathogens"; and correctly deploying lethal effector mechanisms. What is more, these behaviours must be learnt insomuch as the paternal genes cannot encode the pathogenic environment of the child. Insights into the mechanisms underlying these phenomena are of interest, not only to immunologists, but to computer scientists pushing the envelope of machine autonomy. This thesis approaches these phenomena from the perspective that immunological processes are inherently inferential processes. By considering the immune system as a statistical decision maker, we attempt to build a bridge between the traditionally distinct fields of biological modelling and statistical modelling. Through a mixture of novel theoretical and empirical analysis we assert the efficacy of competitive exclusion as a general principle that benefits both. For the immunologist, the statistical modelling perspective allows us to better determine that which is phenomenologically sufficient from the mass of observational data, providing quantitative insight that may offer relief from existing dichotomies. For the computer scientist, the biological modelling perspective results in a theoretically transparent and empirically effective numerical method that is able to finesse the trade-off between myopic greediness and intractability in domains such as sparse approximation, continuous learning and boosting weak heuristics. Together, we offer this as a modern reformulation of the interface between computer science and immunology, established in the seminal work of Perelson and collaborators, over 20 years ago.
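As a purely illustrative aside (not code from the thesis), competitive exclusion of the kind invoked above is often realised numerically with a replicator-style update: candidate components compete for weight, and weakly supported components are driven towards extinction. The fitness values and the specific update rule below are assumptions for illustration only.

```python
import numpy as np

def competitive_exclusion(fitness, steps=2000, lr=0.1, tol=1e-6):
    """Replicator-style update: components with above-average fitness grow
    in weight, the rest decay towards extinction (competitive exclusion)."""
    w = np.full(len(fitness), 1.0 / len(fitness))    # uniform initial weights
    for _ in range(steps):
        avg = w @ fitness                            # population-average fitness
        w = w * (1 + lr * (fitness - avg))           # grow/shrink relative to average
        w = np.clip(w, 0.0, None)
        w /= w.sum()                                 # keep weights on the simplex
        if w.max() > 1 - tol:                        # one component has excluded the rest
            break
    return w

# Toy example: four candidate detectors with invented fitness scores.
print(competitive_exclusion(np.array([0.9, 0.5, 0.8, 0.2])))
```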
412

A comparative study of in-band and out-of-band VOIP protocols in layer 3 and layer 2.5 environments

Pallis, George January 2010 (has links)
For more than a century the classic circuit-switched telephony of the PSTN (Public Switched Telephone Network) has dominated the world of phone communications (Varshney et al., 2002). The alternative solution of VoIP (Voice over Internet Protocol), or Internet telephony, has nevertheless increased its share dramatically over the years. Originally adopted by computer enthusiasts, it has become a major research area in both the academic community and industry (Karapantazis and Pavlidou, 2009). Many VoIP technologies have therefore emerged in order to offer telephony services. However, the performance of these VoIP technologies is a key issue for the sound quality that end-users receive, and where sound quality is concerned the PSTN still stands as the benchmark. Against this background, the aim of this project is to evaluate different VoIP signalling protocols in terms of their key performance metrics and the impact of security and packet transport mechanisms on them. To reach this aim, in-band and out-of-band VoIP signalling protocols are reviewed, along with the existing security techniques that protect phone calls and the network protocols that relay voice over packet-switched systems. In addition, the various methods and tools used to carry out performance measurements are examined, together with the open-source Asterisk VoIP platform. The findings of the literature review are then used to design and implement a novel experimental framework, which is employed for the evaluation of the in-band and out-of-band VoIP signalling protocols with respect to their key performance metrics. The major limitation of this framework is the lack of fine-grained clock synchronisation, which would be required to achieve ultra-precise measurements; valid results are nevertheless still extracted. These results show that in-band signalling protocols are highly optimised for VoIP telephony and outperform out-of-band signalling protocols in certain key areas. Furthermore, the use of VoIP-specific security mechanisms introduces only a minor overhead, whereas the use of Layer 2.5 protocols instead of Layer 3 routing protocols does not improve the performance of the VoIP signalling protocols.
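As an illustrative aside (not part of the thesis), the key performance metrics named above — latency, jitter and packet loss — are typically derived from timestamped RTP packets. The minimal Python sketch below uses the interarrival-jitter estimator from RFC 3550; the input trace is invented.

```python
def rtp_metrics(packets):
    """packets: list of (seq, send_time, recv_time) tuples in seconds.
    Returns mean one-way latency, RFC 3550 interarrival jitter, and loss ratio."""
    latencies = [recv - send for _, send, recv in packets]
    jitter = 0.0
    for (s0, t0, r0), (s1, t1, r1) in zip(packets, packets[1:]):
        d = abs((r1 - r0) - (t1 - t0))        # transit-time difference between packets
        jitter += (d - jitter) / 16.0         # RFC 3550 smoothed jitter estimator
    expected = packets[-1][0] - packets[0][0] + 1
    loss = 1.0 - len(packets) / expected
    return sum(latencies) / len(latencies), jitter, loss

# Hypothetical trace: (sequence number, send timestamp, receive timestamp); seq 3 is lost.
trace = [(1, 0.000, 0.020), (2, 0.020, 0.041), (4, 0.060, 0.085)]
print(rtp_metrics(trace))
```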
413

Rate based IPS for DDoS

Flandrin, Flavien January 2010 (has links)
Nowadays almost every organisation is connected to the Internet, and an ever larger share of the world's population has access to it. The Internet has simplified communication: it is now easy to hold a conversation with people anywhere in the world. This popularity, however, also brings new threats such as viruses, worms, Trojans and denial-of-service attacks. In response, companies have developed new security systems to help protect networks. The most common security tools used by companies, and even by home users, are firewalls, antivirus software and, increasingly, Intrusion Detection Systems (IDS). These alone are not enough, so a newer class of security system, the Intrusion Prevention System (IPS), has been created and is steadily gaining popularity. An IPS can be described as a blend of a firewall and an IDS: it uses the detection capability of an IDS and the response capability of a firewall. Two main types of IPS exist, the Network-based Intrusion Prevention System (NIPS) and the Host-based Intrusion Prevention System (HIPS). The first is deployed in front of critical resources such as a web server, while the second is installed inside a host and therefore protects only that host. Different methodologies are used to evaluate IPSs, but all of them have been produced by vendors or by organisations specialising in the evaluation of security devices; no standard IPS evaluation methodology exists. A standard methodology would permit systems to be benchmarked objectively, so that results could be compared across systems. This thesis reviews different evaluation methodologies for IPSs. Because of the lack of documentation around them, IDS evaluation methodologies are also analysed, which helps inform the creation of an IPS evaluation methodology. The evaluation of such security systems is a vast area, so this thesis focuses on a particular type of threat: Distributed Denial of Service (DDoS). The evaluation methodology centres on the capacity of an IPS to handle this threat. The produced methodology is capable of generating realistic background traffic alongside attacking traffic in the form of DDoS attacks. Four different DDoS attacks are used to carry out the evaluation of a chosen IPS. The evaluation metrics are packet loss, which is evaluated in two different ways because of the selected IPS, together with the time to respond to the attack, the available bandwidth, the latency, the reliability, the CPU load and the memory load. All experiments were conducted in a real environment to ensure that the results are as realistic as possible. The IPS selected to exercise the methodology is the popular open-source Snort, set up on a Linux machine. The results show that the system is effective in handling a DDoS attack, but once the malicious traffic rate reaches 6,000 pps Snort starts to drop malicious and legitimate packets indiscriminately. They also show that the IPS could only handle traffic below 1 Mbps. The conclusion shows that the produced methodology permits the mitigation capability of an IPS to be evaluated. The limitations of the methodology are also explained; a key limitation is the inability to aggregate the background traffic with the attacking traffic.
Furthermore, the thesis outlines interesting future work, such as automating the evaluation procedure to simplify the evaluation of IPSs.
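The "rate based" prevention idea in the title can be illustrated with a short, purely hypothetical sketch (not the thesis's implementation and not Snort's internals): a per-source token bucket that starts dropping packets once a source exceeds a configured packets-per-second budget, the same regime in which the abstract reports Snort beginning to drop traffic indiscriminately.

```python
import time
from collections import defaultdict

class RateLimiter:
    """Per-source token bucket: each source may send up to `rate` packets
    per second, with short bursts of up to `burst` packets."""
    def __init__(self, rate=6000.0, burst=6000.0):
        self.rate, self.burst = rate, burst
        self.tokens = defaultdict(lambda: burst)     # available tokens per source
        self.last = defaultdict(time.monotonic)      # last time each source was seen

    def allow(self, src):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        self.tokens[src] = min(self.burst,
                               self.tokens[src] + (now - self.last[src]) * self.rate)
        self.last[src] = now
        if self.tokens[src] >= 1.0:
            self.tokens[src] -= 1.0
            return True      # forward the packet
        return False         # drop it: the source exceeded its rate budget

limiter = RateLimiter(rate=6000.0)
print(limiter.allow("10.0.0.1"))   # True until the source's budget is exhausted
```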
414

Performance evaluation of virtualization with cloud computing

Pelletingeas, Christophe January 2010 (has links)
Cloud computing has been the subject of much research, which shows that it can reduce hardware costs, cut energy consumption and allow servers to be used more efficiently. Many servers today are used inefficiently because they are under-utilized, and the use of cloud computing combined with virtualization has been one solution to this under-utilization. However, virtualized performance under cloud computing cannot match native performance. The aim of this project was to study the performance of virtualization with cloud computing. To meet this aim, previous research in this area was first reviewed, outlining the different types of cloud toolkit as well as the different ways of virtualizing machines, and examining the open-source solutions available for implementing a private cloud. The findings of the literature review were used to design the experiments and to choose the tools used to implement a private cloud. In the design and implementation, experiments were set up to evaluate the performance of public and private clouds. The results obtained through these experiments characterise the performance of the public cloud and show that the virtualization of Linux gives better performance than the virtualization of Windows. This is explained by the fact that Linux uses paravirtualization while Windows uses HVM. The evaluation of performance on the private cloud permitted the comparison of native performance with paravirtualization and HVM; paravirtualization was found to have performance very close to native, in contrast to HVM. Finally, the cost of the different solutions and their advantages are presented.
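As an illustrative sketch only (the benchmark figures below are assumptions, not results from the thesis), the comparison the abstract describes amounts to expressing each virtualized result as overhead relative to the native baseline:

```python
def overhead(native, virtualized):
    """Percentage slowdown of a virtualized benchmark result relative to the
    native baseline (a higher score is assumed to mean better performance)."""
    return 100.0 * (native - virtualized) / native

# Hypothetical throughput scores for one benchmark (arbitrary units).
native, paravirt, hvm = 1000.0, 960.0, 780.0
print(f"paravirtualization overhead: {overhead(native, paravirt):.1f}%")   # ~4.0%
print(f"HVM overhead:                {overhead(native, hvm):.1f}%")        # ~22.0%
```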
415

Analysis of electromagnetic wave propagation using 3D finite-difference time-domain methods with parallel processing

Buchanan, William J. January 1996 (has links)
The 3D Finite-Difference Time-Domain (FDTD) method simulates structures in the time domain using a direct form of Maxwell's curl equations. This method has the advantage over other simulation methods that it does not rely on empirical approximations. Unfortunately, it requires large amounts of memory and long simulation times. This thesis applies parallel processing to the method so that simulation times are greatly reduced. Parallel processing, though, has the disadvantage that simulation programs need to be segmented so that each processor handles a separate part of the simulation. Another disadvantage is that each processor must communicate with neighbouring processors to report its state; for large processor arrays this can result in a large overhead in simulation time. Two main methods of parallel processing are discussed: Transputer arrays and workstations clustered over a local area network (LAN). These have been chosen for their relative cheapness and widespread availability. The results presented apply to the simulation of a microstrip antenna and to the propagation of electrical signals in a printed circuit board (PCB). Microstrip antennas are relatively difficult to simulate in the time domain because they have resonant pulses; methods that reduce this problem are discussed in the thesis. The thesis contains a novel analysis of the parallel processing showing, using equations, tables and graphs, the optimum array size for a given inter-processor communication speed and iteration time. This can easily be applied to any processing system. Background material on the 3D FDTD method and microstrip antennas is also provided. From the work on parallelising the 3D FDTD method, a novel technique for the simulation of the Finite-Element (FE) method is also discussed.
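The trade-off behind the "optimum array size" analysis can be illustrated with a simple hedged model (the model and the numbers below are assumptions for illustration, not the thesis's equations): per-iteration computation falls as work is divided among more processors, while communication serialised over a shared LAN grows with the processor count, so an optimum array size exists.

```python
import math

def iteration_time(p, work, comm):
    """Toy model of one FDTD iteration on p processors sharing a LAN:
    computation divides across processors, while boundary exchanges are
    serialised on the shared medium and therefore grow with p."""
    return work / p + comm * (p - 1)

def optimum_processors(work, comm, p_max=64):
    # Exhaustive search; analytically the optimum lies near sqrt(work / comm).
    return min(range(1, p_max + 1), key=lambda p: iteration_time(p, work, comm))

work, comm = 2.0, 0.01            # hypothetical seconds of compute and per-link comms
p_best = optimum_processors(work, comm)
print(p_best, math.isclose(p_best, round(math.sqrt(work / comm)), abs_tol=1))
```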
416

Efficient routing and communication algorithms for wireless mesh networks

Zhao, Liang January 2011 (has links)
No description available.
417

Ontology based knowledge formulation and an interpretation engine for intelligent devices in pervasive environments

Kosek, Anna January 2011 (has links)
Ongoing device miniaturization makes it possible to manufacture very small devices; therefore more of them can be embedded in one space. Pervasive computing concepts, envisioning computers distributed in a space and hidden from users' sight, presented by Weiser in 1991, are becoming more realistic and feasible to implement. A technology supporting pervasive computing and Ambient Intelligence also needs to follow miniaturization. The Ambient Intelligence domain was mainly focused on supercomputers with large computation power and it is now moving towards smaller devices, with limited computation power, and takes inspiration from distributed systems, ad-hoc networks and emergent computing. The ability to process knowledge, understand network protocols, adapt and learn is becoming a required capability of fairly small and energy-frugal devices. This research project consists of two main parts. The first part of the project has created a context-aware generic knowledgebase interpretation engine that enables autonomous devices to pervasively manage smart spaces using Communicating Sequential Processes as the underlying design methodology. In the second part a knowledgebase containing all the information that is needed for a device to cooperate, make decisions and react was designed and constructed. The interpretation engine is designed to be suitable for devices from different vendors, as it enables semantic interoperability based on the use of ontologies. The knowledge that the engine interprets is drawn from an ontology, and the model of the chosen ontology is fixed in the engine. This project has investigated, designed and built a prototype of the knowledgebase interpretation engine. Functional testing was performed using a simulation implemented in JCSP. The implementation simulates many autonomous devices running in parallel, communicating using a broadcast-based protocol, self-organizing into sub-networks and reacting to users' requests. The main goal of the project was to design and investigate the knowledge interpretation engine, determine the number of functions that the engine performs, to enable hardware realisation, and investigate the knowledgebase represented with the use of RDF triples and the chosen ontology model. This project was undertaken in collaboration with NXP Semiconductor Research Eindhoven, The Netherlands.
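A minimal sketch (the device names and ontology terms are invented, not drawn from the thesis or from the NXP collaboration) of what interpreting RDF-style triples in a small device knowledgebase can look like:

```python
# A tiny in-memory knowledgebase of (subject, predicate, object) triples,
# in the spirit of RDF; the vocabulary below is invented for illustration.
KB = {
    ("lamp1", "rdf:type",     "ex:Actuator"),
    ("lamp1", "ex:locatedIn", "ex:LivingRoom"),
    ("lamp1", "ex:provides",  "ex:Lighting"),
    ("pir1",  "rdf:type",     "ex:Sensor"),
    ("pir1",  "ex:locatedIn", "ex:LivingRoom"),
}

def query(kb, s=None, p=None, o=None):
    """Return all triples matching the given pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in kb
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Which devices in the living room can provide lighting?
in_room  = {s for s, _, _ in query(KB, p="ex:locatedIn", o="ex:LivingRoom")}
lighting = {s for s, _, _ in query(KB, p="ex:provides",  o="ex:Lighting")}
print(in_room & lighting)   # {'lamp1'}
```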
418

Success factors for organisational information systems development projects : a Scottish suppliers' perspective

Irvine, Robert John January 2013 (has links)
Organisational information systems development (OISD) projects have long been associated with failure. Not surprisingly, the cost of these failures is enormous. Yet, despite numerous studies, understanding of real-world projects is limited. In particular, little is known about the way in which various factors affect the success of OISD projects. Prior research has focussed on OISD projects from an in-house or client perspective, and the views of the supplier have largely been ignored. By investigating OISD project success factors from the supplier perspective, this doctoral study helps address this gap. Based on an empirical investigation drawn from data collected from Scottish IS/IT solution suppliers, this research identifies and analyses 20 success factors for supplier-based OISD projects, and a range of more detailed, inter-related sub-factors related to each of the twenty. The work confirms the importance of many factors identified in the extant literature. A number of additional factors not previously identified are also exposed. Important differences between supplier and client perspectives are revealed. The findings also develop a variety of factors that have merited scant treatment in the OISD project success factor literature. The means by which OISD project success factors propagate their influences to affect project success was also investigated. This is revealed to be a complex phenomenon comprising billions of causal chains interacting with a few million causal loops. The propagation process is performed by a sizeable network of factors, the topology of which seems to reflect the complexities of real-world OISD projects. Hence, the network is used to propose a new theory for success factors that contributes new insight into the behaviour of these projects. The research also reveals that supplier-based OISD projects are oriented more towards project success than project management success and that OISD project success criteria are far more than simply measures of success. Indeed, the overall conclusion of this thesis is that the concept of OISD project success factors is far more complicated than has been previously articulated.
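As a hedged, purely illustrative aside (the factor network below is invented, not data from the study), the kind of propagation structure the abstract describes — causal chains and loops among success factors — can be explored by walking a directed graph of factors:

```python
# Hypothetical miniature factor network: factor -> factors it influences.
GRAPH = {
    "top_management_support": ["user_involvement", "resourcing"],
    "user_involvement":       ["requirements_quality"],
    "resourcing":             ["requirements_quality", "team_morale"],
    "requirements_quality":   ["project_success"],
    "team_morale":            ["user_involvement", "project_success"],  # closes a loop
}

def causal_chains(graph, node, target, path=()):
    """Yield every acyclic causal chain from `node` to `target`."""
    path = path + (node,)
    if node == target:
        yield path
        return
    for nxt in graph.get(node, []):
        if nxt not in path:                  # avoid traversing loops forever
            yield from causal_chains(graph, nxt, target, path)

chains = list(causal_chains(GRAPH, "top_management_support", "project_success"))
print(len(chains), chains[0])
```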
419

A semantic framework for unified cloud service search, recommendation, retrieval and management

Fang, Daren January 2015 (has links)
Cloud computing (CC) is a revolutionary paradigm for consuming Information and Communication Technology (ICT) services. However, while trying to find the optimal services, many users feel confused because service information is inadequately described. Although some efforts have been made in the semantic modelling, retrieval and recommendation of cloud services, existing practices work effectively only in certain restricted scenarios, for example when dealing with basic, non-interactive service specifications. Meanwhile, various service management tasks are usually performed individually for diverse cloud resources across distinct service providers. This results in significantly decreased effectiveness and efficiency when implementing such tasks. Fundamentally, this is due to the lack of a generic service management interface that enables unified service access and manipulation regardless of provider or resource type. To address the above issues, the thesis proposes a semantic-driven framework, which integrates two main novel specification approaches, known as agility-oriented and fuzziness-embedded cloud service semantic specifications, and cloud service access and manipulation request operation specifications. These enable comprehensive service specification by capturing in-depth cloud concept details and their interactions, even across multiple service categories and abstraction levels. Utilising the specifications as a CC knowledge foundation, a unified service recommendation and management platform is implemented. Based on considerable experimental data collected on real-world cloud services, the approaches demonstrate notable effectiveness in service search, retrieval and recommendation tasks, whilst the platform shows outstanding performance across a wide range of service access, management and interaction tasks. Furthermore, the framework includes two sets of innovative specification processing algorithms specifically designed to serve advanced CC tasks: while the fuzzy rating and ontology evolution algorithms establish a means of collaborative cloud service specification, the service orchestration reasoning algorithms reveal a promising means of dynamic service composition.
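A hedged illustration (the term names, membership functions and ratings below are assumptions, not the thesis's algorithms) of the fuzzy-rating idea mentioned above: user ratings of a service attribute are mapped onto fuzzy membership grades in linguistic terms and aggregated into a profile.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms for a "reliability" rating on a 0-10 scale.
TERMS = {"poor": (-1, 0, 5), "fair": (2, 5, 8), "good": (5, 10, 11)}

def fuzzy_profile(ratings):
    """Average membership of the collected ratings in each linguistic term."""
    return {term: sum(triangular(r, *abc) for r in ratings) / len(ratings)
            for term, abc in TERMS.items()}

print(fuzzy_profile([6, 7, 9, 4]))   # e.g. a mixture of 'fair' and 'good'
```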
420

Quantum error correction codes

Babar, Zunaira January 2015 (has links)
Quantum parallel processing techniques are capable of solving certain complex problems at a substantially lower complexity than their classical counterparts. From the perspective of telecommunications, this quantum-domain parallel processing provides a plausible solution for achieving full-search based multi-stream detection, which is vital for future gigabit-wireless systems. The peculiar laws of quantum mechanics have also spurred interest in absolutely secure quantum-based communication systems. Unfortunately, quantum decoherence imposes a hitherto insurmountable impairment on the practical implementation of quantum computation as well as on quantum communication systems, which may be overcome with the aid of efficient error correction codes. In this thesis, we design error correction codes for the quantum domain, which is an intricate journey from the realm of classical channel coding theory to that of the Quantum Error Correction Codes (QECCs). Since quantum-based communication systems are capable of supporting the transmission of both classical and quantum information, we initially focus our attention on the code design for entanglement-assisted classical communication over the quantum depolarizing channel. We conceive an Extrinsic Information Transfer (EXIT) chart aided near-capacity classical-quantum code design, which invokes a classical Irregular Convolutional Code (IRCC) and a Unity Rate Code (URC) in conjunction with our proposed soft-decision aided SuperDense Code (SD). Hence, it is referred to as an 'IRCC-URC-SD' arrangement. The proposed scheme is intrinsically amalgamated both with 2-qubit as well as 3-qubit SD coding protocols and it is benchmarked against the corresponding entanglement-assisted classical capacity. Since the IRCC-URC-SD scheme is a bit-based design, it incurs a capacity loss. As a further advance, we design a symbol-based concatenated code, referred to as a symbol-based 'CC-URC-SD', which relies on a single-component classical Convolutional Code (CC). Additionally, for the sake of reducing the associated decoding complexity, we also investigate the impact of the constraint length of the convolutional code on the achievable performance. Our initial designs, namely IRCC-URC-SD and CC-URC-SD, exploit redundancy in the classical domain. By contrast, QECCs relying on quantum-domain redundancy are indispensable for conceiving a quantum communication system supporting the transmission of quantum information and also for quantum computing. Therefore, we next provide insights into the transformation from the family of classical codes to the class of quantum codes known as 'Quantum Stabilizer Codes' (QSC), which invoke classical syndrome decoding. Particularly, we detail the underlying quantum-to-classical isomorphism, which facilitates the design of meritorious families of QECCs from the known classical codes. We further study the syndrome decoding techniques operating over classical channels, which may be exploited for decoding QSCs. In this context, we conceive a syndrome-based block decoding approach for the classical Turbo Trellis Coded Modulation (TTCM), whose performance is investigated for transmission over an Additive White Gaussian Noise (AWGN) channel as well as over an uncorrelated Rayleigh fading channel. Pursuing our objective of designing efficient QECCs, we next consider the construction of Hashing-bound-approaching concatenated quantum codes.
In this quest, we appropriately adapt the conventional non-binary EXIT charts for Quantum Turbo Codes (QTCs) by exploiting the intrinsic quantum-to-classical isomorphism. We further demonstrate the explicit benefit of our EXIT-chart technique for achieving a Hashing-bound-approaching code design. We also propose a generically applicable structure for Quantum Irregular Convolutional Codes (QIRCCs), which can be dynamically adapted to a specific application scenario with the aid of the EXIT charts. More explicitly, we provide a detailed design example by constructing a 10-subcode QIRCC and use it as an outer code in a concatenated quantum code structure for evaluating its performance. Working further in the direction of iterative code structures, we survey Quantum Low Density Parity Check (QLDPC) codes from the perspective of code design as well as in terms of their decoding algorithms. Furthermore, we propose a radically new class of high-rate row-circulant Quasi-Cyclic QLDPC (QC-QLDPC) codes, which can be constructed from arbitrary row-circulant classical QC LDPC matrices. We also conceive a modified non-binary decoding algorithm for homogeneous Calderbank-Shor-Steane (CSS)-type QLDPC codes, which is capable of alleviating the problems imposed by the unavoidable length-4 cycles. Our modified decoder outperforms the state-of-the-art decoders in terms of their Word Error Rate (WER) performance, despite imposing a reduced decoding complexity. Finally, we intricately amalgamate our modified decoder with the classic Uniformly-ReWeighted Belief Propagation (URW-BP) for the sake of achieving further performance improvement.
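A small hedged sketch of the classical-to-quantum bridge the abstract describes: the CSS construction builds a quantum stabilizer code from two classical codes whose parity-check matrices satisfy an orthogonality condition. The snippet below checks that condition for the classical [7,4] Hamming code, which yields the well-known 7-qubit Steane code (illustrative only; this is not code from the thesis).

```python
import numpy as np

# Parity-check matrix of the classical [7,4] Hamming code (over GF(2)).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# CSS construction: use the same code for both the X- and Z-type checks
# (the Hamming code contains its dual, so Hx @ Hz.T = 0 holds over GF(2)).
Hx, Hz = H, H
orthogonal = np.all((Hx @ Hz.T) % 2 == 0)
print("CSS condition satisfied:", orthogonal)   # True -> the [[7,1,3]] Steane code

# Classical-style syndrome decoding: a single Z error on qubit 5 is flagged
# by the X-type stabilizers whose support contains that qubit.
error = np.zeros(7, dtype=int); error[4] = 1
syndrome = (Hx @ error) % 2
print("syndrome:", syndrome)                    # [1 0 1] -> column 5 of H, locating the error
```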
