  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

ECC Video: An Active Second Error Control Approach for Error Resilience in Video Coding

Du, Bing Bing January 2003 (has links)
Supporting video communication in mobile environments has long been an objective of telecommunication network engineers, and it has become a basic requirement of third-generation mobile communication systems. This dissertation explores the possibility of optimizing the utilization of scarce shared radio channels for live video transmission over a GSM (Global System for Mobile Communications) network, and of realizing error-resilient video communication under unfavorable channel conditions, especially in mobile radio channels. The main contribution is the adoption of an SEC (Second Error Correction) approach using ECC (Error Correction Coding) based on a punctured convolutional coding scheme, to cope with residual errors at the application layer and enhance the error resilience of a compressed video bitstream. The approach is developed further for improved performance in different circumstances, with additional enhancements involving Intra Frame Relay and Interleaving, and with the combination of the approach with packetization. Simulation results of applying the various techniques to the test video sequences Akiyo and Salesman are presented and analyzed for performance comparison with the conventional video coding standard. The proposed approach shows consistent improvements under these conditions. For instance, to cope with random residual errors, the simulation results show that when the residual BER (Bit Error Rate) reaches 10^-4, the video output reconstructed from a bitstream protected by the standard resynchronization approach is of unacceptable quality, while the proposed scheme can deliver a video output that is completely error free in a more efficient way. When the residual BER reaches 10^-3, the standard approach fails to deliver a recognizable video output, while the SEC scheme can still correct all the residual errors with a modest bit-rate increase.
In bursty residual error conditions, the proposed scheme also outperforms the resynchronization approach. Future work to extend the scope and applicability of the research is suggested in the last chapter of the thesis.
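The SEC approach above rests on punctured convolutional coding. As a rough sketch (not the thesis's actual code: the constraint length, generators and puncturing pattern below are invented for illustration), a rate-1/2 encoder can be punctured to rate 2/3 by deleting every fourth coded bit:

```python
# Illustrative rate-1/2 convolutional encoder, constraint length 3,
# generators g0 = 111, g1 = 101 (octal 7, 5) -- assumed parameters.
def conv_encode(bits):
    """Encode a list of 0/1 bits at rate 1/2."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out.append(bin(state & 0b111).count("1") % 2)  # output of g0 = 111
        out.append(bin(state & 0b101).count("1") % 2)  # output of g1 = 101
    return out

def puncture(coded, pattern=(1, 1, 1, 0)):
    """Keep coded bits where the repeating pattern is 1 (rate 1/2 -> 2/3)."""
    return [c for i, c in enumerate(coded) if pattern[i % len(pattern)]]

coded = conv_encode([1, 0, 1, 1])      # 4 info bits -> 8 coded bits
sent = puncture(coded)                 # -> 6 transmitted bits, rate 2/3
assert coded == [1, 1, 1, 0, 0, 0, 0, 1]
assert sent == [1, 1, 1, 0, 0, 0]
```

Puncturing lets one mother code serve several code rates, which is how such schemes trade bit-rate increase against residual-error protection.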
82

Efeitos da aprotinina em crianças com cardiopatia congênita acianogênica operadas com circulação extracorpórea / Effects of aprotinin in children with acyanogenic congenital heart disease submitted to correction with extracorporeal circulation

Cesar Augusto Ferreira 22 November 2006 (has links)
Introduction. Aprotinin seems to reduce the need for transfusion, the inflammatory process and myocardial damage after extracorporeal circulation (ECC). Material and Methods. A prospective randomized study was conducted on children aged 30 days to 4 years submitted to correction of acyanogenic congenital heart disease with ECC and divided into two groups: Control (n=9) and Aprotinin (n=10). In the Aprotinin Group the drug was administered immediately before ECC, and the systemic inflammatory response and hemostatic and multiorgan dysfunctions were analyzed on the basis of clinical and biochemical markers. Differences were considered significant when P<0.05. Results. The groups were similar regarding demographic and intraoperative variables, except for greater hemodilution in the Aprotinin Group. The drug had no benefit regarding time of mechanical pulmonary ventilation, length of stay in the postoperative ICU and in hospital, use of inotropic drugs, or renal function. The partial arterial oxygen pressure/inspired oxygen fraction ratio (PaO2/FiO2) was significantly reduced 24 h after surgery in the Control Group. Platelet concentration was preserved with the use of Aprotinin, whereas thrombocytopenia occurred in the Control Group from the beginning of ECC. Blood loss was similar in the two groups. Significant leukopenia was observed in the Aprotinin Group during ECC, followed by leukocytosis.
Tumor necrosis factor alpha (TNF-α), interleukins (IL)-6, IL-8 and IL-10, the IL-6/IL-10 ratio, cardiac troponin I (cTnI), creatine kinase MB fraction (CK-MB), glutamic-oxaloacetic transaminase (GOT) and the amino-terminal fraction of B-type natriuretic peptide (NT-proBNP) did not differ significantly between groups. The postoperative IL-6/IL-10 ratio increased significantly in the Control Group. Post-ECC blood lactate concentration and metabolic acidosis were more intense in the Aprotinin Group. There were no complications with the use of Aprotinin. Conclusion. Aprotinin did not minimize the clinical manifestations or serum markers of the systemic inflammatory and myocardial response, but it quantitatively preserved the platelets.
83

Low Overhead Soft Error Mitigation Methodologies

Prasanth, V January 2012 (has links) (PDF)
CMOS technology scaling is bringing new challenges to designers in the form of new failure modes. The challenges include long-term reliability failures and particle-strike-induced random failures. Studies have shown that, increasingly, the largest contributor to device reliability failures will be soft errors. Due to reliability concerns, the adoption of soft error mitigation techniques is on the increase, and as these techniques are increasingly adopted, the area and performance overhead incurred in their implementation also becomes pertinent. This thesis addresses the problem of providing low-cost soft error mitigation. The main contributions of this thesis are: (i) a new delayed capture methodology for low overhead soft error detection; (ii) the adoption of Error Control Coding (ECC) in the delayed capture methodology for correction of single event upsets; (iii) an analysis of the impact of different derating factors in reducing the hardware overhead incurred by the above implementations; and (iv) a hardware-software co-design approach for reliability, based upon critical component identification determined by the application executing on the hardware (as against standalone hardware analysis). This thesis first surveys existing soft error mitigation techniques and their associated limitations. It then proposes the delayed capture methodology as a low overhead soft error detection technique. Delayed capture is an enhancement of the Razor flip-flop methodology: the parity for a set of flip-flops is calculated at their inputs and at their outputs, and the input parity is latched on a second clock that is delayed with respect to the functional clock by more than the soft error pulse width. This requires one extra flip-flop for each set of flip-flops, whereas the Razor flip-flop methodology requires an additional flip-flop for every functional flip-flop.
Due to the skew between the clocks, either the parity flip-flop or the functional flip-flop will capture the effect of a transient, and hence an error can be detected by comparing the output parity with the latched input parity. Fault injection experiments are performed to evaluate the benefits and limitations of the proposed approach. The limitations include soft error detection escapes and the lack of error correction capability. Different cases of soft error detection escapes are analyzed; they are attributed mainly to a Single Event Upset (SEU) causing multiple flip-flops within a group to be in error. The error space due to SEUs is analyzed, and an intelligent flip-flop grouping method using graph-theoretic formulations is proposed such that no SEU can cause multiple flip-flops within a group to be in error. Once an error occurs, leaving the correction to the application may not be desirable, so the proposed delayed capture methodology is extended to replace parity codes with codes of higher redundancy that enable correction. The hardware overhead of the proposed methodology is analyzed, and an area saving of about 15% is obtained when compared to an existing soft error mitigation methodology with equivalent coverage. The impact of different derating factors in determining the hardware overhead of the soft error mitigation methodology is then analyzed, considering electrical derating and timing derating information. The area overhead of a circuit implementing the delayed capture methodology is analyzed with the derating factors considered both standalone and in combination. Results indicate that in some circuits a combination of these derating factors yields optimal results, while in others a single factor considered standalone does; this is due to the dependency of the solution on the heuristic nature of the algorithms used.
About 23% area savings are obtained by employing these derating factors for a more optimal grouping of flip-flops. A new paradigm of hardware-software co-design for reliability is finally proposed. It is based on application derating, in which the application/firmware code is profiled to identify the critical components that must be guarded against soft errors; this identification is based on the ability of the application software to tolerate certain errors in hardware. An algorithm to identify critical components in the control logic based on fault injection is developed. Experimental results indicate that for a safety-critical automotive application, only 12% of the sequential logic elements were found to be critical. This approach provides a framework for investigating how software methods can complement hardware methods to provide a reduced-hardware solution for soft error mitigation.
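The parity comparison at the heart of the delayed capture idea, and the detection-escape case it must guard against, can be sketched in a few lines. The group size and upset scenarios below are illustrative assumptions, not the thesis's RTL:

```python
# Sketch: input parity of a flip-flop group is latched on a delayed clock;
# a mismatch with the output parity flags a transient in that group.
def parity(bits):
    """Even parity (XOR reduction) of a list of 0/1 values."""
    p = 0
    for b in bits:
        p ^= b
    return p

def detect_seu(latched_inputs, captured_outputs):
    """Flag an error when latched input parity and output parity differ."""
    return parity(latched_inputs) != parity(captured_outputs)

# A single upset flips one captured bit and is caught ...
inputs = [1, 0, 1, 1]
outputs = [1, 0, 0, 1]          # bit 2 flipped by a particle strike
assert detect_seu(inputs, outputs)

# ... but two flips in the same group cancel in the parity and escape --
# the detection-escape case the thesis addresses by grouping flip-flops
# so that no single SEU can upset two members of one group.
outputs2 = [0, 0, 0, 1]         # bits 0 and 2 both flipped
assert not detect_seu(inputs, outputs2)
```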
84

Predicting the Longevity of DVDR Media by Periodic Analysis of Parity, Jitter, and ECC Performance Parameters

Wells, Daniel Patrick 14 July 2008 (has links) (PDF)
For the last ten years, DVD-R media have played an important role in the storage of large amounts of digital data throughout the world. During this time it was assumed that the DVD-R was as long-lasting and stable as its predecessor, the CD-R. Several reports have surfaced over the last few years questioning the DVD-R's ability to live up to its claims of archival-quality life spans. These reports have shown a wide range of longevity between the different brands: while some DVD-Rs may remain readable for many years, others may fail early and unexpectedly. Compounding this problem is the lack of information available for consumers to judge the quality of the media they own. While the industry works on devising a standard for labeling the quality of future media, it is currently up to consumers to pay close attention to their own DVD-R archives and work diligently to prevent data loss. This research shows that, through accelerated aging and the use of logistic regression analysis on data collected through periodic monitoring of disc read-back errors, it is possible to accurately predict unrecoverable failures in the test discs. The study analyzed various measurements of PIE errors, PIE8 Sum errors, POF errors and jitter data from three areas of the disc: the whole disc, the region of the disc where it first failed, and the last half of the disc. From this data, five unique predictive equations were produced, each with the ability to predict disc failure. In conclusion, the relative value of these equations for end-of-life predictions is discussed.
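The modeling step described above (logistic regression on periodically sampled read-back error measurements) can be sketched as follows. The toy data, feature scaling and training parameters are invented for illustration; the study's five actual predictive equations are not reproduced here:

```python
# Sketch: fit P(disc fails) = sigmoid(w*x + b) to an error measurement x
# (e.g. a scaled PIE sum after accelerated aging) by gradient descent.
import math

def fit_logistic(xs, ys, lr=0.1, steps=5000):
    """Fit weight w and bias b on 1-D features xs with 0/1 labels ys."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x      # gradient of the log-loss wrt w
            gb += (p - y)          # gradient of the log-loss wrt b
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Predicted failure probability for measurement x."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Invented "scaled PIE sum" readings with observed end-of-life labels.
pie = [0.2, 0.35, 0.5, 1.2, 2.0, 2.6, 3.0, 4.1]
fail = [0, 0, 0, 0, 1, 1, 1, 1]
w, b = fit_logistic(pie, fail)
assert predict(w, b, 0.3) < 0.5 < predict(w, b, 3.5)
```

In the study itself, several such error and jitter measurements are combined per disc region, yielding one predictive equation per feature set.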
85

Investigation of Integrated Decoupling Methods for MIMO Antenna Systems. Design, Modelling and Implementation of MIMO Antenna Systems for Different Spectrum Applications with High Port-to-Port Isolation Using Different Decoupling Techniques

Salah, Adham M.S. January 2019 (has links)
Multiple-Input-Multiple-Output (MIMO) antenna technology refers to an antenna with multiple radiators at both the transmitter and receiver ends. It is designed to increase the data rate in wireless communication systems by achieving multiple channels occupying the same bandwidth in a multipath environment. The main drawback associated with this technology is the coupling between the radiating elements; a MIMO antenna system merely acts as an antenna array if this coupling is high. For this reason, strong decoupling between the radiating elements should be achieved in order to utilize the benefits of MIMO technology. The main objectives of this thesis are to investigate and implement several printed MIMO antenna geometries with integrated decoupling approaches for WLAN, WiMAX, and 5G applications. MIMO antenna performance has been reported in terms of scattering parameters, envelope correlation coefficient (ECC), total active reflection coefficient (TARC), channel capacity loss (CCL), diversity gain (DG), antenna efficiency, antenna peak gain and antenna radiation patterns. Three new 2×2 MIMO array antennas are proposed, covering dual and multiple spectrum bandwidths for WLAN (2.4/5.2/5.8 GHz) and WiMAX (3.5 GHz) applications. These designs employ a combination of defected ground structure (DGS) and neutralization line methods to reduce the coupling caused by the surface current in the ground plane and between the radiating antenna elements. The minimum achieved isolation between the MIMO antennas is found to be better than 15 dB and in some bands exceeds 30 dB. The matching impedance is improved and the correlation coefficient values achieved for all three antennas are very low. In addition, the diversity gains over all spectrum bands are very close to the ideal value (DG = 10 dB). The fourth proposed MIMO antenna is a compact dual-band MIMO antenna operating at WLAN bands (2.4/5.2/5.8 GHz).
The antenna structure consists of two concentric double square rings radiating elements printed symmetrically. A new method is applied which combines the defected ground structure (DGS) decoupling method with five parasitic elements to reduce the coupling between the radiating antennas in the two required bands. A metamaterial-based isolation enhancement structure is investigated in the fifth proposed MIMO antenna design. This MIMO antenna consists of two dual-band arc-shaped radiating elements working in WLAN and Sub-6 GHz 5th generation (5G) bands. The antenna placement and orientation decoupling method is applied to improve the isolation in the second band while four split-ring resonators (SRRs) are added between the radiating elements to enhance the isolation in the first band. All the designs presented in this thesis have been fabricated and measured, with the simulated and measured results agreeing well in most cases. / Higher Committee for Education Development in Iraq (HCED)
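The envelope correlation coefficient reported throughout these designs is commonly estimated from measured S-parameters. A minimal sketch using the standard closed-form approximation for lossless two-port antennas follows; the sample S-parameter values are invented for illustration:

```python
# Sketch: ECC of a two-port MIMO antenna from its S-parameters, using
# ECC = |S11* S12 + S21* S22|^2
#       / ((1 - |S11|^2 - |S21|^2)(1 - |S12|^2 - |S22|^2)).
def ecc_from_s(s11, s21, s12, s22):
    """Envelope correlation coefficient (lossless-antenna approximation)."""
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = ((1 - abs(s11) ** 2 - abs(s21) ** 2)
           * (1 - abs(s12) ** 2 - abs(s22) ** 2))
    return num / den

# Well-matched, well-isolated ports give an ECC far below the common
# acceptability limit of 0.5 used in MIMO antenna evaluation.
ecc = ecc_from_s(0.1 + 0.05j, 0.03 - 0.02j, 0.03 - 0.02j, 0.1 + 0.05j)
assert ecc < 0.01
```

This S-parameter formula neglects radiation-pattern effects and antenna losses, which is why measured far-field ECC is also reported for fabricated prototypes.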
86

Using Digital Microscopy to Evaluate Enamel Defects in Young Children: A Novel Method

Baxter, Richard Turner 23 December 2014 (has links)
No description available.
87

Elliptic curve cryptosystem over optimal extension fields for computationally constrained devices

Abu-Mahfouz, Adnan Mohammed 08 June 2005 (has links)
Data security will play a central role in the design of future IT systems. The PC has been a major driver of the digital economy. Recently, there has been a shift towards IT applications realized as embedded systems, because they have proved to be good solutions for many applications, especially those which require data processing in real time. Examples include security for wireless phones, wireless computing, pay-TV, and copy protection schemes for audio/video consumer products and digital cinemas. Most of these embedded applications will be wireless, which makes the communication channel vulnerable. The implementation of cryptographic systems presents several requirements and challenges. For example, the performance of algorithms is often crucial, and guaranteeing security is a formidable challenge: encryption algorithms need to run at the transmission rates of the communication links, at speeds that are achieved through custom hardware devices. Public-key cryptosystems such as RSA, DSA and DSS have traditionally been used to accomplish secure communication via insecure channels. Elliptic curves are the basis for a relatively new class of public-key schemes, and it is predicted that elliptic curve cryptosystems (ECCs) will replace many existing schemes in the near future. The main reason for the attractiveness of ECC is that significantly smaller parameters can be used in ECC than in other competitive systems, but with equivalent levels of security. The benefits of a smaller key size include faster computations and reductions in processing power, storage space and bandwidth. This makes ECC ideal for constrained environments where resources such as power, processing time and memory are limited.
The implementation of ECC requires several choices, such as the type of the underlying finite field, algorithms for implementing the finite field arithmetic, the type of the elliptic curve, algorithms for implementing the elliptic curve group operation, and elliptic curve protocols. Many of these selections may have a major impact on overall performance. In this dissertation a finite field from a special class called the Optimal Extension Field (OEF) is chosen as the underlying finite field for implementing ECC. OEFs utilize the fast integer arithmetic available on modern microcontrollers to produce very efficient results without resorting to multiprecision operations or arithmetic using polynomials of large degree. This dissertation discusses the theoretical and implementation issues associated with the development of this finite field in a low-end embedded system. It also presents various improvement techniques for OEF arithmetic. The main objectives of this dissertation are to:
- implement the functions required to perform the finite field arithmetic operations;
- implement the functions required to generate an elliptic curve and to embed data on that elliptic curve;
- implement the functions required to perform the elliptic curve group operation.
All of these functions constitute a library that could be used to implement any elliptic curve cryptosystem. In this dissertation the library is implemented on an 8-bit Atmel AVR microcontroller. / Dissertation (MEng (Computer Engineering))--University of Pretoria, 2006. / Electrical, Electronic and Computer Engineering / unrestricted
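The elliptic curve group operation named in the objectives can be sketched with affine point addition and doubling over a toy prime field; note this is an illustration only, since the dissertation's arithmetic runs over Optimal Extension Fields, not a small prime field:

```python
# Sketch: the EC group operation on y^2 = x^3 + Ax + B over GF(P),
# using the toy curve y^2 = x^3 + 2x + 2 over GF(17) for illustration.
P = 17
A, B = 2, 2

def ec_add(p1, p2):
    """Add two affine points; None represents the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                        # Q + (-Q) = O
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (lam * lam - x1 - x2) % P
    y3 = (lam * (x1 - x3) - y1) % P
    return (x3, y3)

G = (5, 1)                                   # a point on the toy curve
assert (1 * 1) % P == (5 ** 3 + 2 * 5 + 2) % P   # G satisfies the equation
assert ec_add(G, G) == (6, 3)                # 2G
assert ec_add((6, 3), G) == (10, 6)          # 3G
```

The modular inversions (`pow(..., -1, P)`, Python 3.8+) are exactly the finite-field operations the library objectives above target; in an OEF they are replaced by extension-field arithmetic tuned to the microcontroller's word size.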
88

Smart card fault attacks on public key and elliptic curve cryptography

Ling, Jie January 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Blömer, Otto, and Seifert presented a fault attack on elliptic curve scalar multiplication called the Sign Change Attack, which causes a fault that changes the sign of the accumulation point. As the use of a sign bit for an extended integer is highly unlikely, this appears to be a highly selective manipulation of the key stream. In this thesis we describe two plausible fault attacks on a smart card implementation of elliptic curve cryptography. King and Wang designed a new attack, called the counter fault attack, by attacking the scalar multiple of a discrete-log cryptosystem, and then successfully generalized this approach to a family of attacks. By implementing King and Wang's scheme on RSA, we successfully attacked RSA keys of a variety of sizes. Further, we generalized the attack model to an attack on any implementation that uses NAF and wNAF key representations.
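The NAF key representation targeted by the generalized attack can be computed with the standard textbook recoding; the sketch below is illustrative and is not code from the thesis:

```python
# Sketch: non-adjacent form (NAF) recoding of a scalar. Every positive
# integer has a unique NAF with digits in {-1, 0, 1} and no two adjacent
# nonzero digits, which reduces the number of point additions in
# scalar multiplication.
def naf(k):
    """Return the NAF digits of k > 0, least significant digit first."""
    digits = []
    while k > 0:
        if k % 2 == 1:
            d = 2 - (k % 4)       # d in {-1, 1}, chosen so (k - d) % 4 == 0
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits

# 7 = 8 - 1, so its NAF is (-1, 0, 0, 1) least-significant-first:
assert naf(7) == [-1, 0, 0, 1]
# The non-adjacency property holds for any scalar:
assert all(not (a and b) for a, b in zip(naf(1234), naf(1234)[1:]))
```

Fault attacks on NAF/wNAF implementations exploit precisely this digit structure, which is why an attack generalizing across such representations matters.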
89

Elliptic Curve Cryptography for Lightweight Applications.

Hitchcock, Yvonne Roslyn January 2003 (has links)
Elliptic curves were first proposed as a basis for public key cryptography in the mid-1980s. They provide public key cryptosystems based on the difficulty of the elliptic curve discrete logarithm problem (ECDLP), which is so called because of its similarity to the discrete logarithm problem (DLP) over the integers modulo a large prime. One benefit of elliptic curve cryptosystems (ECCs) is that they can use a much shorter key length than other public key cryptosystems to provide an equivalent level of security. For example, 160-bit ECCs are believed to provide about the same level of security as 1024-bit RSA. Also, the level of security provided by an ECC increases faster with key size than for integer-based discrete logarithm (DL) or RSA cryptosystems. ECCs can also provide a faster implementation than RSA or DL systems, and use less bandwidth and power. These issues can be crucial in lightweight applications such as smart cards. In the last few years, ECCs have been included or proposed for inclusion in internationally recognized standards. Thus elliptic curve cryptography is set to become an integral part of lightweight applications in the immediate future. This thesis presents an analysis of several important issues for ECCs on lightweight devices. It begins with an introduction to elliptic curves and the algorithms required to implement an ECC. It then gives an analysis of the speed, code size and memory usage of various possible implementation options. Enough detail is presented to enable an implementer to choose for implementation those algorithms which give the greatest speed whilst conforming to the code size and RAM restrictions of a particular lightweight device. Recommendations are made for new functions to be included on coprocessors for lightweight devices to support ECC implementations. Another issue of concern for implementers is the side-channel attacks that have recently been proposed.
Such attacks obtain information about the cryptosystem by measuring side-channel information, such as power consumption and processing time, which is then used to break implementations that have not incorporated appropriate defences. A new method of defence to protect an implementation from the simple power analysis (SPA) method of attack is presented in this thesis. It requires 44% fewer additions and 11% more doublings than the commonly recommended defence of performing a point addition in every loop of the binary scalar multiplication algorithm. The algorithm forms a contribution to the current range of possible SPA defences, offering good speed with low memory usage. Another topic of paramount importance to ECCs for lightweight applications is whether the security of fixed curves is equivalent to that of random curves. Because lightweight devices are unable to generate secure random curves, fixed curves are used in such devices. These curves provide the additional advantage of requiring less bandwidth, code size and processing time. However, it is intuitively obvious that a large precomputation to aid in breaking the elliptic curve discrete logarithm problem (ECDLP) can be made for a fixed curve, which would be unavailable for a random curve. Therefore, it would appear that fixed curves are less secure than random curves, but quantifying the loss of security is much more difficult. The thesis examines fixed curve security taking this observation into account, and includes a definition of equivalent security and an analysis of a variation of Pollard's rho method in which computations from solutions of previous ECDLPs can be used to solve subsequent ECDLPs on the same curve. A lower bound on the expected time to solve such ECDLPs using this method is presented, as well as an approximation of the expected time remaining to solve an ECDLP when a given size of precomputation is available.
It is concluded that adding a total of 11 bits to the size of a fixed curve provides an equivalent level of security compared to random curves. The final part of the thesis deals with proofs of security of key exchange protocols in the Canetti-Krawczyk proof model. This model is used because it offers the advantage of a modular proof with reusable components. Firstly, a password-based authentication mechanism and its security proof are discussed, followed by an analysis of the use of the authentication mechanism in key exchange protocols. The Canetti-Krawczyk model is then used to examine secure tripartite (three-party) key exchange protocols. Tripartite key exchange protocols are particularly suited to ECCs because of the availability of bilinear mappings on elliptic curves, which allow more efficient tripartite key exchange protocols.
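The binary scalar multiplication loop mentioned above, together with the commonly recommended add-in-every-loop SPA defence that the thesis's new method improves on, can be sketched generically. An additive integer group stands in for the elliptic curve group so the sketch is testable; on a real device `add`/`dbl` would be EC point operations:

```python
# Sketch: left-to-right binary scalar multiplication and its SPA-hardened
# "add-always" variant. The key-dependent branch in the plain version is
# what simple power analysis observes.
def scalar_mult(k, p, add, dbl, zero):
    """Plain double-and-add: the conditional add leaks the key bits."""
    r = zero
    for bit in bin(k)[2:]:
        r = dbl(r)
        if bit == "1":            # executed only for 1-bits: SPA leak
            r = add(r, p)
    return r

def scalar_mult_add_always(k, p, add, dbl, zero):
    """Perform an add every iteration and discard the result for 0-bits."""
    r = zero
    for bit in bin(k)[2:]:
        r = dbl(r)
        t = add(r, p)             # performed unconditionally
        r = t if bit == "1" else r
    return r

add = lambda a, b: a + b          # stand-in group operation
dbl = lambda a: 2 * a
assert scalar_mult(13, 5, add, dbl, 0) == 65
assert scalar_mult_add_always(13, 5, add, dbl, 0) == 65
```

The add-always defence uniformizes the operation sequence at the cost of one dummy addition per 0-bit; the thesis's contribution trades some of those additions for extra doublings.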
