281

Verification and validation of security protocol implementations

O'Shea, Nicholas January 2010
Security protocols are important and widely used because they enable secure communication to take place over insecure networks. Over the years, numerous formal methods have been developed to assist protocol designers by analysing models of these protocols to determine their security properties. Beyond the design stage, however, developers rarely employ formal methods when implementing security protocols. This may result in implementation flaws, often leading to security breaches. This dissertation contributes to the study of security protocol analysis by advancing the emerging field of implementation analysis. Two tools are presented which together translate between Java and the LySa process calculus. Elyjah translates Java implementations into formal models in LySa. In contrast, Hajyle generates Java implementations from LySa models. These tools and the accompanying LySa verification tool perform rapid static analysis and have been integrated into the Eclipse Development Environment. The speed of the static analysis allows these tools to be used at compile time without disrupting a developer's workflow. This allows us to position this work in the domain of practical software tools supporting working developers. As many of these developers may be unfamiliar with modelling security protocols, a suite of tools for the LySa process calculus is also provided. These tools are designed to make LySa models easier to understand and manipulate. Additional tools are provided for performance modelling of security protocols. These allow both the designer and the implementor to predict and analyse the overall time taken for a protocol run to complete. Elyjah was among the very first tools to provide a method of translating between implementation and formal model, and the first to use either Java as the implementation language or LySa as the modelling language. To the best of our knowledge, the combination of Elyjah and Hajyle represents the first and so far only system that provides translation from both code to model and back again.
282

Internet congestion control for variable-rate TCP traffic

Biswas, Md. Israfil January 2011
The Transmission Control Protocol (TCP) has been designed for reliable data transport over the Internet. The performance of TCP is strongly influenced by its congestion control algorithms, which limit the amount of traffic a sender can transmit based on end-to-end estimates of the available capacity. These algorithms proved successful in environments where application rate requirements can be easily anticipated, as is the case for traditional bulk data transfer or interactive applications. However, an important new class of Internet applications has emerged that exhibits significant variations of transmission rate over time. Variable-rate traffic poses a new challenge for congestion control, especially for applications that need to share the limited capacity of a bottleneck over a long-delay Internet path (e.g., paths that include satellite links). This thesis first analyses the TCP performance of bursty applications that do not send data continuously, but generate data in bursts separated by periods in which little or no data is sent. Simulation analysis shows that standard TCP methods do not provide efficient support for bursty applications that produce variable-rate traffic, especially over long-delay paths. Although alternative forms of congestion control, such as TCP-Friendly Rate Control and the Datagram Congestion Control Protocol, have been proposed, they have not achieved widespread deployment. Therefore, many current applications that rely upon the User Datagram Protocol are not congestion controlled. The use of non-standard or proprietary methods decreases the effectiveness of Internet congestion control and poses a threat to Internet stability. Solutions are therefore needed to allow bursty applications to use TCP. Chapter three evaluates Congestion Window Validation (CWV), an IETF experimental specification that was proposed to improve support for bursty applications over TCP. It concludes that CWV is too conservative to support many bursty applications and does not provide an incentive to encourage use by application designers. Instead, application designers often avoid generating variable-rate traffic by padding idle periods, which has been shown to waste network resources. CWV is therefore shown not to provide an acceptable solution for variable-rate traffic. In response to this shortfall, a new modification to TCP, TCP-JAGO, is proposed. This allows variable-rate traffic to restart quickly after an inactive (i.e., idle) period and to effectively utilise available network resources while sending at a lower rate than the available rate (i.e., during an application-limited period). The analysis in Chapter five shows that TCP-JAGO provides faster convergence to a steady-state rate and improves throughput by utilising the network more efficiently. TCP-JAGO is also shown to provide an appropriate response when congestion is experienced after restart. Variable-rate TCP traffic can also be impacted by the initial window (IW) algorithm at the start or during the restart of a session. Chapter six considers this problem, where TCP has no prior indication of the network state. A recent proposal for a larger IW is analysed, and the issues and advantages of using a large IW over a range of scenarios are discussed. The thesis concludes by presenting recommendations to improve TCP support for bursty applications. This also provides an incentive for application designers to choose TCP for variable-rate traffic.
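The core problem the thesis addresses can be seen in how a standard TCP sender handles an idle period. The sketch below (Python, with illustrative constants; it is not TCP-JAGO or the thesis code) shows the conventional restart-after-idle rule, under which the congestion window built up before a pause is discarded and a bursty sender must slow-start again.

```python
# Minimal sketch of standard TCP restart-after-idle behaviour (RFC 5681 style).
# Constants and names are illustrative assumptions, not the thesis' mechanism.

INITIAL_WINDOW = 3  # restart window in segments (assumed value)

def cwnd_after_idle(cwnd_segments: int, idle_time: float, rto: float) -> int:
    """Congestion window after an idle period.

    A standard sender that has been idle for longer than one retransmission
    timeout (RTO) collapses its window to the restart window, discarding the
    capacity estimate it had built up before the pause.
    """
    if idle_time > rto:
        return INITIAL_WINDOW
    return cwnd_segments

# Example: a bursty sender had grown its window to 40 segments, then paused 2 s.
print(cwnd_after_idle(40, idle_time=2.0, rto=0.6))  # -> 3: must slow-start again
```

This is the conservative behaviour that CWV relaxes only partially, and that motivates a mechanism such as TCP-JAGO for restarting quickly after idle or application-limited periods.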
283

Embedded monitors for detecting and preventing intrusions in cryptographic and application protocols.

Joglekar, Sachin P. 12 1900
There are two main approaches for intrusion detection: signature-based and anomaly-based. Signature-based detection employs pattern matching to match attack signatures with observed data, making it ideal for detecting known attacks. However, it cannot detect unknown attacks for which no signature is available. Anomaly-based detection builds a profile of normal system behavior to detect known and unknown attacks as behavioral deviations. Its drawback, however, is a high false alarm rate. In this thesis, we describe our anomaly-based intrusion detection system (IDS) designed for detecting intrusions in cryptographic and application-level protocols. Our system has several unique characteristics, such as the ability to monitor cryptographic protocols and application-level protocols embedded in encrypted sessions, a very lightweight monitoring process, and the ability to react to protocol misuse by modifying the protocol response directly.
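To make the anomaly-based idea concrete, the toy detector below learns a profile of message-type transitions from normal protocol runs and scores a new session by how many of its transitions were never observed. It is a generic Python illustration of the approach described above, not the embedded monitor built in the thesis; the handshake message names are placeholders.

```python
# Toy anomaly detector: profile normal protocol-message transitions, then flag
# sessions whose transitions fall outside that profile.
from collections import Counter

def build_profile(normal_sessions: list[list[str]]) -> Counter:
    """Count message-type bigrams observed during normal protocol runs."""
    profile = Counter()
    for session in normal_sessions:
        profile.update(zip(session, session[1:]))
    return profile

def anomaly_score(session: list[str], profile: Counter) -> float:
    """Fraction of transitions in this session never seen in the normal profile."""
    transitions = list(zip(session, session[1:]))
    if not transitions:
        return 0.0
    unseen = sum(1 for t in transitions if profile[t] == 0)
    return unseen / len(transitions)

normal = [["ClientHello", "ServerHello", "KeyExchange", "Finished"]] * 50
profile = build_profile(normal)
# A run that skips the key exchange deviates from the profile and scores high.
print(anomaly_score(["ClientHello", "ServerHello", "Finished"], profile))  # 0.5
```

The false-alarm trade-off mentioned above shows up directly here: any legitimate but previously unseen transition would also raise the score.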
284

Optimization of resources allocation for H.323 endpoints and terminals over VoIP networks

27 January 2014
M.Phil. (Electrical & Electronic Engineering)

Without any doubt, the entire range of voice and TV signals will migrate to the packet network. The universal addressing mode of the Internet Protocol (IP) and the framing structure of Ethernet as a network interface are the main reasons behind the success of TCP/IP and Ethernet as packet network and network access mechanisms. Unfortunately, the very success of the Internet has become a problem for real-time traffic such as voice, prompting further studies in the domain of teletraffic engineering; this, together with the lack of a resource reservation mechanism in Ethernet, a serious shortcoming for a switching mechanism, raises substantial challenges for such a migration. In that context, the ITU-T has released a series of Recommendations under the umbrella of H.323 to guarantee the required Quality of Service (QoS) for such services. Although utilisation by itself is not a good measure of traffic or QoS, we propose, on the one hand, a multiplexing scheme with a queuing solution that takes into account the positive correlations of the packet arrival process experienced at the multiplexer input, with the aim of optimising buffer and bandwidth utilisation, and, on the other hand, an ITU-T H.323 Endpoint and Terminal configuration that can sustain such a multiplexing scheme. To justify our approach, we consider the solutions of queueing models from M/M/1 up to G/G/1 based on Kolmogorov's analysis. Our solution, the diffusion approximation, is the limit of the fluid process, which has seen little use as a queuing solution in the networking domain. Driven by the results of the fluid method and the Gaussian distribution that results from the diffusion approximation, applying the asymptotic (central-limit) properties of Maximum Likelihood Estimation (MLE) allowed us to capture the fluctuations and therefore filter out the positive correlations in the queue system. The result is a queue system able to serve a traffic intensity of 1 erlang (100% of the transmission link capacity) without extra delay, with a queue length that is 60% of the buffer utilisation of the ordinary Poisson queue.
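For reference, the textbook M/M/1 formulas below show why sustaining a traffic intensity of 1 erlang without extra delay is a strong claim: under Poisson assumptions, queue length and delay grow without bound as utilisation approaches 1. This is standard queueing theory used here only as a baseline, not the diffusion/fluid analysis developed in the thesis.

```python
# Baseline M/M/1 behaviour: L = rho / (1 - rho), W = 1 / (mu - lambda).
def mm1_metrics(rho: float, service_rate: float):
    """Mean number in system and mean time in system for an M/M/1 queue."""
    if rho >= 1.0:
        return float("inf"), float("inf")   # unstable at or above 1 erlang
    mean_in_system = rho / (1.0 - rho)
    mean_delay = 1.0 / (service_rate * (1.0 - rho))
    return mean_in_system, mean_delay

for rho in (0.5, 0.9, 0.99):
    L, W = mm1_metrics(rho, service_rate=1000.0)  # e.g. 1000 packets/s service
    print(f"rho={rho}: L={L:.1f} packets, W={W * 1000:.2f} ms")
```

At rho = 0.99 the mean delay is already 100 times the service time, which is the behaviour the proposed multiplexing scheme aims to avoid.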
285

Managing near field communication (NFC) payment applications through cloud computing

Pourghomi, Pardis January 2014
Near Field Communication (NFC) is a short-range radio communication technology that enables users to exchange data between devices. NFC provides contactless data transmission between smart phones, Personal Computers (PCs), Personal Digital Assistants (PDAs) and similar devices. It enables a mobile phone to act as both identification and a credit card for customers. The NFC chip can act as a reader as well as a card, and can also be used to design symmetric protocols. Having several parties involved in the NFC ecosystem, without a common standard, affects the security of this technology, as all of the parties claim access to the client's information (e.g. bank account details). The dynamic relationships between the parties in an NFC transaction make them partners that sometimes share their access permissions to the applications running in the service environment. Each party can only access its own part of the process, as the parties are not fully aware of each other's rights and access permissions. This lack of shared knowledge makes the management and ownership of the NFC ecosystem very complicated. To address this issue, a security module called the Secure Element (SE) is designed to be the basis of NFC security. However, there are still security issues with SE personalization, management, ownership and architecture that are exploitable by attackers and delay the adoption of NFC payment technology. Reorganizing and describing what is required for the success of this technology has motivated us to extend the current NFC ecosystem models to accelerate the development of this business area. One technology that can be used to ensure secure NFC transactions is cloud computing, which offers a wide range of advantages compared with the use of the SE as a single entity in an NFC-enabled mobile phone. We believe cloud computing can solve many issues regarding NFC application management. Therefore, in the first contribution of this thesis we propose a new payment model called the "NFC Cloud Wallet". This model demonstrates a reliable structure for an NFC ecosystem that satisfies the requirements of an NFC payment during the development process in a systematic, manageable, and effective way.
286

Using human interactive security protocols to secure payments

Chen, Bangdao January 2012
We investigate using Human Interactive Security Protocols (HISPs) to secure payments. We start our research by conducting extensive investigations into the payment industry. After interacting with different payment companies and banks, we present two case studies: online payment and mobile payment. We show how to adapt HISPs for payments by establishing the reverse authentication method. In order to properly and thoroughly evaluate different payment examples, we establish two attack models which cover the most commonly seen attacks against payments. We then present our own payment solutions, which aim at solving the most urgent security threats revealed in our case studies. Demonstration implementations are also provided to show the advantages of our approach. Finally, we show how to extend the use of HISPs to other domains.
287

Operational benefit of implementing Voice over Internet Protocol (VoIP) in a tactical environment

Lewis, Rosemary 06 1900
Approved for public release; distribution is unlimited.

In this thesis, Voice over Internet Protocol (VoIP) technology will be explored and a recommendation on the operational benefit of VoIP will be provided. A network model will be used to demonstrate the improvement in voice end-to-end delay achieved by implementing quality of service (QoS) controls. An overview of VoIP requirements will be covered and recommended standards will be reviewed. A clear definition of a Battle Group will be presented and an overview of current analog RF voice technology will be explained. A comparison of RF voice technology and VoIP will be modeled using OPNET Modeler 9.0.

Lieutenant, United States Navy
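As context for the end-to-end delay results such a model produces, the back-of-the-envelope budget below adds up the usual one-way delay components of a VoIP call. All component values are illustrative assumptions; the commonly cited planning target is the roughly 150 ms one-way limit recommended in ITU-T G.114.

```python
# Illustrative one-way VoIP delay budget (all values assumed, in milliseconds).
delay_ms = {
    "codec and packetization (e.g. 20 ms voice payload per packet)": 25.0,
    "serialization and switching": 5.0,
    "propagation across the path": 30.0,
    "queuing under congestion (the part QoS controls reduce)": 40.0,
    "de-jitter buffer at the receiver": 40.0,
}
total = sum(delay_ms.values())
verdict = "within" if total <= 150 else "over"
print(f"one-way delay: {total:.0f} ms ({verdict} the ~150 ms G.114 target)")
```

Shrinking the queuing term is exactly what the QoS controls in the network model are intended to achieve.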
288

Reliable user datagram protocol (RUDP).

Thammadi, Abhilash January 1900
Master of Science / Department of Computing and Information Sciences / Gurdip Singh

As network bandwidth and delay increase, TCP becomes inefficient. Data-intensive applications over high-speed networks need a new transport protocol to support them. This project describes a general-purpose, high-performance data transfer protocol as an application-level solution. The protocol, Reliable UDP-based data transfer, works above UDP and adds reliability. It provides reliability to applications using the Sliding Window protocol (Selective Repeat). UDP itself uses a simple transmission model without handshaking techniques for providing reliability or ordering of packets; it therefore provides an unreliable service, and datagrams may arrive out of order, appear duplicated, or go missing without notice. Reliable UDP uses both positive and negative acknowledgements to guarantee data reliability. Both simulation and implementation results have shown that Reliable UDP provides reliable data transfer. This report describes the details of the Reliable UDP protocol, with simulation and implementation results and analysis.
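The sliding-window idea the project builds on can be sketched in a few lines. The sender below keeps a window of unacknowledged packets over a UDP socket, retransmits only the ones whose per-packet timer has expired, and slides the window as positive acknowledgements arrive. Packet format, window size, and timeouts are illustrative assumptions, not the project's actual protocol, and the matching receiver (which would also send negative acknowledgements) is omitted.

```python
# Minimal Selective Repeat sender over UDP (sketch; receiver not shown).
import socket
import struct
import time

WINDOW = 8                    # sender window size in packets (assumed)
TIMEOUT = 0.2                 # per-packet retransmission timeout in seconds (assumed)
HEADER = struct.Struct("!I")  # 4-byte sequence number prefix

def send_reliable(sock: socket.socket, addr, chunks: list[bytes]) -> None:
    base = 0                                  # lowest unacknowledged sequence number
    sent_at: dict[int, float] = {}            # seq -> time of last transmission
    acked: set[int] = set()
    sock.settimeout(0.05)
    while base < len(chunks):
        # (Re)transmit any unacked packet inside the window whose timer expired.
        for seq in range(base, min(base + WINDOW, len(chunks))):
            if seq not in acked and time.time() - sent_at.get(seq, 0.0) > TIMEOUT:
                sock.sendto(HEADER.pack(seq) + chunks[seq], addr)
                sent_at[seq] = time.time()
        # Collect any positive acknowledgements (4-byte sequence numbers) that arrived.
        try:
            data, _ = sock.recvfrom(HEADER.size)
            acked.add(HEADER.unpack(data)[0])
        except socket.timeout:
            pass
        while base in acked:                  # slide the window forward
            base += 1
```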
289

Design of a practical voice over internet protocol network for the multi user enterprise

Loubser, Jacob Bester 06 1900
Thesis (M. Tech. Engineering: Electrical) -- Vaal University of Technology.

This dissertation discusses the design and implementation of a voice over Internet protocol (VoIP) system for the multi-user enterprise. It is limited to small-to-medium enterprises, of which the Vaal University of Technology is an example. Voice communications over existing Internet protocol networks are governed by standards, and to develop such a system it is necessary to have a thorough understanding of these standards. Two such standards, namely the International Telecommunication Union's H.323 and the Internet Engineering Task Force's SIP, were evaluated and compared in terms of their complexity, extensibility and scalability, as well as the services they offer. Based on these criteria it was decided to implement a SIP system. A SIP network consists of application software acting as clients and servers, as well as components such as proxy, redirect, registrar and location servers, which allow users of the network to call each other on the data network. Gateways enable users of the network to call regular public switched telephone network numbers. A test network containing all the hardware and software components was set up in the laboratory. This was done to understand the installation and configuration options of the different software components and to determine their suitability and interoperability. This network was then migrated to the network of the Vaal University of Technology, which allowed selected users to test and use it. Bandwidth use is a major point of contention, and calculations and measurements showed that the codec used during a voice call is the determining factor. The SIP system is in daily use, and users report excellent audio quality between soft phones, between soft phones and normal telephones, and even to cellular phones.
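To illustrate the client/registrar interaction such a SIP deployment relies on, the sketch below constructs a minimal SIP REGISTER request and sends it to a registrar over UDP. User names, addresses, and tags are placeholders, and a real client would also handle authentication challenges (401/407) and parse the full response; this is not the software used in the dissertation.

```python
# Minimal SIP REGISTER over UDP (illustrative; no authentication handling).
import socket
import uuid

def sip_register(user: str, domain: str, registrar: tuple[str, int]) -> str:
    local_ip = socket.gethostbyname(socket.gethostname())
    request = (
        f"REGISTER sip:{domain} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {local_ip}:5060;branch=z9hG4bK{uuid.uuid4().hex[:8]}\r\n"
        f"Max-Forwards: 70\r\n"
        f"From: <sip:{user}@{domain}>;tag={uuid.uuid4().hex[:8]}\r\n"
        f"To: <sip:{user}@{domain}>\r\n"
        f"Call-ID: {uuid.uuid4().hex}@{local_ip}\r\n"
        f"CSeq: 1 REGISTER\r\n"
        f"Contact: <sip:{user}@{local_ip}:5060>\r\n"
        f"Expires: 3600\r\n"
        f"Content-Length: 0\r\n\r\n"
    )
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(2.0)
        sock.sendto(request.encode(), registrar)
        reply, _ = sock.recvfrom(4096)          # e.g. "SIP/2.0 200 OK" on success
        return reply.decode(errors="replace").splitlines()[0]

# Example call (placeholder registrar address):
# print(sip_register("alice", "example.org", ("192.0.2.10", 5060)))
```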
290

Prospective Estimation of Radiation Dose and Image Quality for Optimized CT Performance

Tian, Xiaoyu January 2016
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].

Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.

As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a responsibility to optimize the amount of radiation dose for CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of the radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models that prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models in clinical practice by developing an organ-based dose monitoring system and an image-based noise-addition software tool for protocol optimization.

More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current conditions. The study effectively modeled anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distributions. The dependence of organ dose coefficients on patient size and scanner models was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date, with representative age, weight percentile, and body mass index (BMI) ranges.

With effective quantification of organ dose under constant tube current conditions, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with dose estimates based on Monte Carlo simulations in which the TCM function is explicitly modeled.

Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, so the patient's major body landmark information was extracted from the scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on the CT protocol and patient size, as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.

With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.

Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise-addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance, to achieve protocol optimization.

Chapter 8 concludes the thesis with a summary of the accomplished work and a discussion of future research.

Dissertation
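The image-based noise-addition step in Chapter 7's strategy can be illustrated with the simplified sketch below: quantum noise scales roughly as the inverse square root of dose, so a reduced-dose image can be emulated by adding zero-mean noise that supplies the missing variance. This version adds white Gaussian noise in image space and ignores the scanner's noise texture and local attenuation, which the thesis method accounts for; it is a conceptual illustration, not the validated tool.

```python
# Simplified dose-reduction simulation: add noise so total noise ~ 1/sqrt(dose).
import numpy as np

def simulate_reduced_dose(image_hu: np.ndarray, sigma_full: float, dose_fraction: float) -> np.ndarray:
    """Emulate a scan at `dose_fraction` of the acquired dose.

    image_hu      : full-dose CT image (Hounsfield units)
    sigma_full    : measured quantum noise (HU standard deviation) at full dose
    dose_fraction : target dose relative to the acquisition, 0 < f <= 1
    """
    if not 0.0 < dose_fraction <= 1.0:
        raise ValueError("dose_fraction must be in (0, 1]")
    # Missing noise, added in quadrature: sigma_added^2 = sigma_full^2 * (1/f - 1).
    sigma_added = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    noise = np.random.default_rng(0).normal(0.0, sigma_added, size=image_hu.shape)
    return image_hu + noise

# Example: emulate a half-dose image from a slice whose full-dose noise is 12 HU.
slice_hu = np.zeros((512, 512))                     # stand-in for a real slice
half_dose = simulate_reduced_dose(slice_hu, sigma_full=12.0, dose_fraction=0.5)
print(round(half_dose.std(), 1))  # ~12 HU of added noise; combined with the original
                                  # 12 HU this gives ~17 HU total, as expected at half dose
```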
