101

Error resilience for video coding services over packet-based networks

Zhang, Jian, Electrical Engineering, Australian Defence Force Academy, UNSW January 1999 (has links)
Error resilience is an important issue when coded video data is transmitted over wired and wireless networks. Errors can be introduced by network congestion, mis-routing and channel noise. These transmission errors can result in bit errors being introduced into the transmitted data or in packets of data being completely lost. Consequently, the quality of the decoded video is degraded significantly. This thesis describes new techniques for minimising this degradation. To verify video error resilience tools, it is first necessary to consider the methods used to carry out experimental measurements. For most audio-visual services, streams of both audio and video data need to be transmitted simultaneously on a single channel. The inclusion of the impact of multiplexing schemes, such as MPEG-2 Systems, in error resilience studies is also an important consideration. It is shown that error resilience measurements that include the effect of the Systems Layer differ significantly from those based only on the Video Layer. Two major issues of error resilience are investigated in this thesis: resynchronisation after error detection, and error concealment. Results for resynchronisation using small slices, adaptive slice sizes and macroblock resynchronisation schemes are provided. These measurements show that the macroblock resynchronisation scheme achieves the best performance, although it is not included in the MPEG-2 standard. The performance of the adaptive slice size scheme, however, is similar to that of the macroblock resynchronisation scheme, and this approach is compatible with the MPEG-2 standard. The most important contribution of this thesis is a new concealment technique, Decoder Motion Vector Estimation (DMVE), which improves the decoded video quality significantly. The technique exploits the temporal redundancy between the current and previous frames, and the correlation between lost macroblocks and their surrounding pixels: motion estimation is applied again, in the decoder, to search the previous picture for a match to the lost macroblocks, much as the encoder does during encoding. The integration of DMVE with small slices, adaptive slice sizes or macroblock resynchronisation is also evaluated, giving an overview of the performance of the individual techniques compared with the combined ones. Results show that high performance can be achieved by integrating DMVE with an effective resynchronisation scheme, even at high cell loss rates. The results of this thesis demonstrate clearly that the MPEG-2 standard is capable of providing a high level of error resilience, even in the presence of high loss. The key to this performance is appropriate tuning of encoders and effective concealment in decoders.
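The boundary-matching search at the heart of DMVE can be illustrated compactly. The following Python sketch, not taken from the thesis, conceals one lost macroblock in a greyscale frame by searching the previous frame for the candidate whose one-pixel outer boundary best matches the correctly decoded pixels around the loss; the function name, 16x16 block size and ±8-pixel search window are illustrative assumptions.

```python
import numpy as np

def dmve_conceal(prev_frame, cur_frame, x, y, mb=16, search=8):
    """Conceal the lost macroblock at (x, y) in cur_frame by searching
    prev_frame for the block whose 1-pixel outer boundary best matches
    the correctly decoded pixels surrounding the loss (boundary matching)."""
    h, w = cur_frame.shape          # greyscale frames; lost block not on border

    def boundary(frame, bx, by):
        top = frame[by - 1, bx - 1:bx + mb + 1]
        bot = frame[by + mb, bx - 1:bx + mb + 1]
        left = frame[by:by + mb, bx - 1]
        right = frame[by:by + mb, bx + mb]
        return np.concatenate([top, bot, left, right]).astype(np.float64)

    target = boundary(cur_frame, x, y)
    best_cost, best = np.inf, (x, y)
    for dy in range(-search, search + 1):       # motion search, as an encoder
        for dx in range(-search, search + 1):   # would do, but in the decoder
            cx, cy = x + dx, y + dy
            if cx < 1 or cy < 1 or cx + mb >= w or cy + mb >= h:
                continue
            cost = np.abs(boundary(prev_frame, cx, cy) - target).sum()
            if cost < best_cost:
                best_cost, best = cost, (cx, cy)
    cx, cy = best
    return prev_frame[cy:cy + mb, cx:cx + mb]   # copy best-matching block
```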
102

Perceptually motivated speech recognition and mispronunciation detection

Koniaris, Christos January 2012 (has links)
This doctoral thesis is the result of research in two fields of speech technology: speech recognition and mispronunciation detection. Although the two areas are clearly distinguishable, the proposed approaches share a common hypothesis based on psychoacoustic processing of speech signals. The conjecture is that the human auditory periphery provides a relatively good separation of different sound classes. Hence, it is possible to use recent findings from psychoacoustic perception, together with mathematical and computational tools, to model the auditory sensitivity to small changes in the speech signal. The performance of an automatic speech recognition system depends strongly on the front-end representation. If the extracted features do not include all relevant information, the performance of the classification stage is inherently suboptimal. The work described in Papers A, B and C is motivated by the fact that humans perform better at speech recognition than machines, particularly in noisy environments. The goal is to use knowledge of human perception in the selection and optimization of speech features for speech recognition. These papers show that maximizing the similarity between the Euclidean geometry of the features and the geometry of the perceptual domain is a powerful tool for selecting or optimizing features, and experiments with a practical speech recognizer confirm the validity of the principle. An approach to improving mel-frequency cepstrum coefficients (MFCCs) through offline optimization is also shown. The method has three advantages: i) it is computationally inexpensive; ii) it does not use the auditory model directly, thus avoiding its computational cost; and iii) importantly, it provides better recognition performance than traditional MFCCs in both clean and noisy conditions. The second task concerns automatic pronunciation error detection. The research, described in Papers D, E and F, is motivated by the observation that almost all native speakers perceive, relatively easily, the acoustic characteristics of their own language when it is produced by speakers of that language. Small variations within a phoneme category, sometimes different for different phonemes, do not significantly change the perception of the language's own sounds. Several methods are introduced, based on similarity measures between the Euclidean space spanned by the acoustic representations of the speech signal and the Euclidean space spanned by an auditory model output, to identify the problematic phonemes for a given speaker. The methods are tested on groups of speakers of different languages and evaluated against a theoretical linguistic study, showing that they can capture many of the problematic phonemes that speakers of each language mispronounce. Finally, a listening test on the same dataset verifies the validity of these methods. / European Union FP6-034362 research project ACORNS / Computer-Animated language Teachers (CALATea)
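As a rough illustration of the geometry-matching principle in Papers A-C (the papers' actual objective and optimization are more elaborate), the following Python sketch compares the pairwise Euclidean distances of a candidate feature set against those of an auditory-model output for the same frames; all names are illustrative assumptions.

```python
import numpy as np

def pairwise_dists(X):
    # Euclidean distance matrix between the rows of X.
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.sqrt(np.maximum(d2, 0.0))

def geometry_mismatch(features, percepts):
    """Relative discrepancy between the Euclidean geometry of a feature
    set (e.g. MFCC frames) and that of an auditory-model output for the
    same frames; smaller means the feature space better mirrors the
    perceptual domain."""
    df = pairwise_dists(features)
    dp = pairwise_dists(percepts)
    df /= df.mean()   # compare distance *patterns*, not absolute scales
    dp /= dp.mean()
    return np.linalg.norm(df - dp) / np.linalg.norm(dp)

# A feature transform could then be selected or optimized to minimise
# this mismatch, without invoking the auditory model at run time.
```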
103

中文資訊擷取結果之錯誤偵測 / Error Detection on Chinese Information Extraction Results

鄭雍瑋, Cheng, Yung-Wei Unknown Date (has links)
Given a targeted subject and a text collection, information extraction (IE) techniques make it possible to populate a database in which each record is a subject instance documented in the text collection. However, even with state-of-the-art IE techniques, IE results are expected to contain errors. Manual error detection and correction are labor-intensive and time-consuming, and this validation cost remains a major obstacle to the deployment of practical IE applications with high validity requirements. In this thesis, we propose two error detection methods: a string-graph-structure method and a string-feature-based method. The former uses a graph structure to compare the characters within each record and the relations between characters; a score is then computed for each record via a formula, and this score is used to judge whether the record is erroneous. The latter uses string features to describe the surface characteristics of each string; the SVM and C4.5 machine learning methods are then applied (C4.5 inducing a decision tree) to classify each record as valid or invalid. The difference between the two methods is that the former works directly on the raw string content and incorporates the graph-theoretic notions of node position and neighboring nodes, whereas the latter first transforms the raw string into feature values and then classifies them. In our experiments, we use the IE results of government personnel directives (the presidential office's personnel appointment and dismissal gazettes) as test data. The experiments verify that the string-graph-structure and string-feature-based methods can effectively detect invalid IE values. The contribution of this work is to reduce validation cost and improve the quality of IE results, providing both analytical and empirical evidence to support the effective enhancement of IE result usability.
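A toy version of the string-graph idea conveys the flavour of the first method; the thesis's graph model and scoring formula are richer than this. In the Python sketch below, a string is modelled as its character set plus adjacency edges, and a candidate extraction is scored against known-good reference strings; the names and example strings are illustrative.

```python
def char_graph(s):
    """Model a string as nodes (its characters) plus directed edges
    between adjacent characters."""
    return set(s), {(a, b) for a, b in zip(s, s[1:])}

def graph_score(candidate, references):
    """Score a candidate extraction against known-good reference strings;
    a low score suggests an invalid (erroneous) extraction."""
    cn, ce = char_graph(candidate)
    best = 0.0
    for ref in references:
        rn, re = char_graph(ref)
        node_sim = len(cn & rn) / max(len(cn | rn), 1)
        edge_sim = len(ce & re) / max(len(ce | re), 1)
        best = max(best, 0.5 * node_sim + 0.5 * edge_sim)
    return best

refs = ["總統令", "行政院令"]
print(graph_score("總統令", refs))   # 1.0: matches a reference exactly
print(graph_score("統總亂", refs))   # low: shared characters, wrong structure
```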
104

Online Bit Flip Detection for In-Memory B-Trees on Unreliable Hardware

Kolditz, Tim, Kissinger, Thomas, Schlegel, Benjamin, Habich, Dirk, Lehner, Wolfgang 25 August 2022 (has links)
Hardware vendors constantly decrease the feature sizes of integrated circuits to obtain better performance and energy efficiency. Due to cosmic rays, low voltage, or heat dissipation, hardware -- both processors and memory -- becomes more and more unreliable as error rates increase. From a database perspective, bit flip errors in main memory will become a major challenge for modern in-memory database systems, which keep all their enterprise data in volatile, unreliable main memory. Although existing hardware error control techniques like ECC-DRAM are able to detect and correct memory errors, their detection and correction capabilities are limited. Moreover, hardware error correction faces major drawbacks in terms of acquisition cost, additional memory utilization, and latency. In this paper, we argue that slightly increasing data redundancy at the right places, by incorporating context knowledge, already increases error detection significantly. We use the B-Tree -- a widespread index structure -- as an example and propose various techniques for online error detection, thus increasing its overall reliability. In our experiments, we found that our techniques can detect more errors in less time on commodity hardware than non-resilient B-Trees running in an ECC-DRAM environment. Our techniques can easily be adapted to other data structures and are a first step towards resilient database systems that can cope with unreliable hardware.
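The paper's general direction -- a little extra redundancy checked on every access -- can be sketched in a few lines. The Python illustration below uses a generic per-node CRC rather than the paper's context-aware techniques, guarding a simplified leaf node so that a bit flip is detected online at the next lookup; the class and field names are illustrative.

```python
import zlib

class LeafNode:
    """Simplified B-Tree leaf whose keys are guarded by a CRC, so a bit
    flip in memory is caught on the next access rather than silently
    corrupting query results."""
    def __init__(self, keys):
        self.keys = list(keys)
        self.crc = self._checksum()

    def _checksum(self):
        return zlib.crc32(b"".join(k.to_bytes(8, "little") for k in self.keys))

    def lookup(self, key):
        if self._checksum() != self.crc:   # online bit flip detection
            raise RuntimeError("bit flip detected in node")
        return key in self.keys

node = LeafNode([3, 7, 11])
print(node.lookup(7))        # True
node.keys[1] ^= 1 << 4       # simulate a bit flip in main memory
node.lookup(7)               # raises RuntimeError
```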
105

Error detection in blood work : A comparison of self-supervised deep learning-based models / Felupptäckning i blodprov : En jämförelse av självbevakade djupinlärningsmodeller

Vinell, Paul January 2022 (has links)
Errors in medical testing may cause serious problems and have the potential to severely hurt patients. There are many machine learning methods for discovering such errors; however, because errors are rare, it is difficult to collect enough examples to learn from them. It is therefore important to focus on methods that do not require human labeling. This study presents a comparison of neural-network-based models for the detection of analytical errors in blood tests containing five markers of cardiovascular health. The results show that error detection in blood tests using deep learning is a promising preventative mechanism. It is also shown that it is beneficial to take a multivariate approach to error detection, so that the model examines several blood tests at once. There may also be benefits to looking at multiple health markers simultaneously, although this benefit is more pronounced when looking at individual blood tests. The comparison shows that a supervised approach significantly outperforms outlier detection methods on error detection. Given the effectiveness of the supervised model, there is reason to study it further and potentially employ deep-learning-based error detection to reduce the risk of errors.
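A minimal supervised baseline in the spirit of the study can be sketched as follows. The data here are synthetic -- errors are simulated by corrupting one of the five markers, so labels come for free and no human labeling is needed -- and the architecture and library choices are illustrative assumptions, not the thesis's models.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Five cardiovascular markers per blood test; analytical errors are
# simulated by shifting one marker by several standard deviations.
normal = rng.normal(0.0, 1.0, size=(2000, 5))
errors = normal.copy()
which = rng.integers(0, 5, size=len(errors))
errors[np.arange(len(errors)), which] += rng.choice([-4.0, 4.0], size=len(errors))

X = np.vstack([normal, errors])
y = np.r_[np.zeros(len(normal)), np.ones(len(errors))]   # 1 = erroneous test

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```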
106

Exploring Machine Learning Solutions in the Context of OCR Post-Processing of Invoices / Utforskning av Maskininlärningslösningar för Optisk Teckenläsningsefterbehandling av Fakturor

Dwyer, Jacob, Bertse, Sara January 2022 (has links)
Large corporations receive and send large volumes of invoices containing various fields detailing a transaction, such as VAT, due date, and total amount. One common way to automate invoice processing is optical character recognition (OCR), a technology that automatically reads characters from scanned images. One problem with invoices is that there is no universal layout standard, which creates difficulties when processing data from invoices with different layouts. This thesis examines common errors in the output of Azure's Form Recognizer general document model, and the ways in which machine learning (ML) can be used to address them by providing error detection as a first step: classifying OCR output as correct or incorrect. To this end, an analysis of common errors was made based on OCR output from 70 real invoices, and a Bidirectional Encoder Representations from Transformers (BERT) model was fine-tuned for the classification task. The results show that the two most common OCR errors are (i) extra words showing up in a field and (ii) words missing from a field; together, these two error types account for 51% of OCR errors. For correctness classification, the BERT Transformer model yielded an F-score of 0.982 on fabricated data. On real invoice data, the initial model yielded an F-score of 0.596; after additional fine-tuning, the F-score rose to 0.832. The results of this thesis show that ML, while not entirely reliable, may be a viable first step in the assessment and correction of OCR errors for invoices.
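The correctness-classification step can be sketched with the Hugging Face transformers API. The model name, the "field name: value" input format, and the toy labels below are assumptions made for illustration; the thesis fine-tuned a BERT model on real invoice fields.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical setup: each example is an OCR-extracted field rendered as
# "field name: value"; label 1 = correct, 0 = incorrect.
name = "bert-base-multilingual-cased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

batch = tok(["total amount: 1 024,50 SEK", "due date: Banana 32"],
            padding=True, truncation=True, return_tensors="pt")
labels = torch.tensor([1, 0])

out = model(**batch, labels=labels)
out.loss.backward()                    # one fine-tuning step's backward pass
print(out.logits.softmax(-1))          # per-example correctness probabilities
```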
107

Evaluation of Digital Twin implementations in Facility Management - A systematic review

Espania Slioa, Adoar January 2022 (has links)
Digital twins have attracted increasing interest in recent years, with articles being published at a growing rate between 2018 and 2020. With digital twins, it is possible to achieve efficient and responsive planning and control of facility management. A digital twin has been implemented by JTH for some of the rooms in a corridor. A structured literature study is performed to bridge the knowledge gap; the aim is to review the scientific literature on digital twins in facility management and to assess different digital twin concepts in that domain. The method used is a mixed qualitative-quantitative systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The systematic review defines digital twins in facility management, identifies categories and applications of digital twins in facility management, and shows how digital twins can be used to evaluate building performance and room experience.
108

Investigation of Increased Mapping Quality Generated by a Neural Network for Camera-LiDAR Sensor Fusion / Ökning av kartläggningskvalitet genom att använda ett neuralt nätverk för fusion av kamera och LiDAR data

Correa Silva, Joan Li Guisell, Jönsson, Sofia January 2021 (has links)
This study's aim was to investigate the mapping part of Simultaneous Localisation And Mapping (SLAM) in indoor environments containing error sources relevant to two types of sensors. The sensors used were an Intel Realsense depth camera and an RPlidar Light Detection And Ranging (LiDAR) sensor. Both cameras and LiDARs are frequently used as exteroceptive sensors in SLAM. Cameras typically struggle with strong light in the environment, and LiDARs struggle with reflective surfaces. Therefore, this study investigated the possibility of using a neural network to detect errors in either sensor's data caused by the mentioned error sources. The network identified which sensor produced erroneous data, and the sensor fusion algorithm momentarily excluded that sensor's data, improving the mapping quality when possible. The quantitative results showed no significant difference in mean squared error or structural similarity between the final maps generated with and without the network, when compared to the ground truth. However, the qualitative analysis showed some advantages of using the network: many of the camera's errors were filtered out, leading to more accurate continuous mapping than without the network implemented. The conclusion was that a neural network can, to a limited extent, recognise errors in the sensors' data, but only the camera data benefited from the proposed solution. The study also produced important implementation findings, which are presented. Recommended future work includes neural network optimisation, sensor selection, and sensor fusion implementation.
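The gating step -- excluding whichever sensor the network flags -- is simple to sketch. The Python fragment below assumes a per-timestep error flag from a separately trained classifier and averages per-cell depth estimates when both sensors are trusted; all names are illustrative, and the thesis's fusion algorithm is more involved.

```python
import numpy as np

def fuse_depths(camera_depth, lidar_depth, error_flag):
    """Fuse per-cell depth estimates, momentarily excluding the sensor
    that the error-detection network flagged for this timestep.
    error_flag: 0 = both trusted, 1 = camera erroneous, 2 = lidar erroneous."""
    if error_flag == 1:
        return lidar_depth
    if error_flag == 2:
        return camera_depth
    return 0.5 * (camera_depth + lidar_depth)   # simple average when both ok

cam = np.array([1.0, 2.0, 2.5])
lid = np.array([1.1, 2.1, 2.4])
print(fuse_depths(cam, lid, 0))   # fused map cells
print(fuse_depths(cam, lid, 1))   # camera flagged: fall back to LiDAR
```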
109

Detección concurrente de errores en el flujo de ejecución de un procesador / Concurrent Error Detection in the Execution Flow of a Processor

Rodríguez Ballester, Francisco 02 May 2016 (has links)
Thesis by compendium of publications / [EN] Incorporating error detection mechanisms is a key element in the design of fault-tolerant systems. In many such systems, the detection of an error (whether temporary or permanent) triggers a series of actions or activates elements pursuing one of these objectives: continuation of system operation despite the error, system recovery, stopping the system in a safe state, etc. These objectives ultimately aim to improve characteristics of the system in question such as reliability, safety, and availability. One such error detection element is a watchdog processor, which monitors the system processor and checks that no errors occur during program execution. The main drawback of existing proposals in this area, and the one that prevents their more widespread use, is the loss of performance and the increased memory consumption suffered by the monitored system. In this thesis a new technique to embed signatures is proposed, called ISIS (Interleaved Signature Instruction Stream), which embeds the watchdog signatures in memory, interleaved with the original program instructions. With this technique, an element separate from the system processor (a watchdog processor proper) carries out the error detection operations. Although the signatures are mixed with the program instructions, and unlike previous proposals, the main system processor is involved neither in retrieving these signatures from memory nor in the corresponding calculations, which reduces the performance loss. A novel technique is also proposed that enables the watchdog processor to verify the structural integrity of the monitored program by checking the jump addresses used. This jump address processing technique largely solves the problem of verifying a jump to a new program area when there are multiple valid jump destinations. This problem had no adequate solution until now, and although the proposal made here cannot handle every possible jump scenario, it brings a large number of them into the set of verifiable jumps. The theoretical ISIS proposal and its error detection mechanisms are complemented by a complete ISIS-based system (processor, watchdog processor, cache memory, etc.) incorporating the detection mechanisms proposed here. This system, called HORUS, is developed in the synthesizable subset of VHDL, so it is possible not only to simulate the behavior of the system upon the occurrence of a fault and analyze its subsequent evolution, but also to program a programmable logic device such as an FPGA for inclusion in a real system. To program the HORUS system, a modified version of the gcc compiler has been developed that includes the generation of signatures for the watchdog processor as an integral part of the process of creating the executable program (compilation, assembly, and linking) from source code written in C. Finally, another contribution of this thesis is FIASCO (Fault Injection Aid Software COmponents), a set of Tcl/Tk scripts that inject a fault during the simulation of HORUS in order to study its behavior and its ability to detect the resulting errors. With FIASCO it is possible to run hundreds or thousands of simulations in a distributed environment, reducing the time required to collect the data from large-scale injection campaigns. Results show that a system using the techniques proposed here is able to detect errors during the execution of a program with a minimal loss of performance, and that the memory consumption penalty of using a watchdog processor is similar to that of previous proposals.
Results show that a system using the techniques proposed here is able to detect errors during the execution of a program with a minimum loss of performance, and that the penalty in memory consumption when using a watchdog processor is similar to previous proposals. / [ES] La incorporación de mecanismos de detección de errores es un elemento fundamental en el diseño de sistemas tolerantes a fallos en los que, en muchos casos, la detección de un error (ya sea transitorio o permanente) es el punto de partida que desencadena toda una serie de acciones o activación de elementos que persiguen alguno de estos objetivos: la continuación de las operaciones del sistema a pesar del error, la recuperación del mismo, la parada de sus operaciones llevando al sistema a un estado seguro, etc. Objetivos, en definitiva, que pretenden la mejora de las características de fiabilidad, seguridad y disponibilidad, entre otros, del sistema en cuestión. Uno de estos elementos de detección de errores es un procesador de guardia; su trabajo consiste en monitorizar al procesador del sistema y comprobar que no se producen errores durante la ejecución del programa. El principal inconveniente de las propuestas existentes a este respecto y que impiden una mayor difusión de su uso es la pérdida de prestaciones y el aumento de consumo de memoria que sufre el sistema monitorizado. En este trabajo se propone una nueva técnica de empotrado de firmas (ISIS -Interleaved Signature Instruction Stream) intercaladas dentro del espacio de la memoria del programa. Con ella un elemento separado del procesador del sistema realiza las operaciones encaminadas a detectar los errores. A pesar de que las firmas se encuentran mezcladas con las instrucciones del programa que está ejecutando, y a diferencia de las propuestas previas, el procesador principal del sistema no se involucra ni en la recuperación de las firmas ni en las operaciones de cálculo correspondientes, lo que reduce la pérdida de prestaciones. También se propone una novedosa técnica para que el procesador de guardia pueda verificar la integridad estructural del programa que monitoriza comprobando las direcciones de salto empleadas. Esta técnica de procesado de las direcciones de salto viene a resolver en gran medida el problema de la comprobación de un salto a una nueva zona del programa cuando existen múltiples posibles destinos válidos. Este problema no tenía una solución adecuada hasta el momento, y aunque la propuesta que aquí se hace no consigue resolver todos los posibles escenarios de salto sí permite incorporar un buen números de ellos al conjunto de saltos verificables. ISIS y sus mecanismos de detección de errores se complementan con la aportación de un sistema completo (procesador, procesador de guardia, memoria caché, etc.) basado en ISIS denominado HORUS. Está desarrollado en lenguaje VHDL sintetizable, de manera que es posible tanto simular el comportamiento del sistema ante la aparición de un fallo y analizar su evolución a partir de éste como programar un dispositivo lógico programable tipo FPGA para su inclusión en un sistema real. Para programar el sistema HORUS se ha desarrollado una versión modificada del compilador gcc que incluye la generación de las firmas de referencia para el procesador de guardia como parte del proceso de creación del programa ejecutable a partir de código fuente escrito en lenguaje C. 
Finalmente, otro trabajo desarrollado en esta tesis es el desarrollo de FIASCO (Fault Injection Aid Software COmponents), un conjunto de scripts en lenguaje Tcl/Tk que permiten la inyección de un fallo durante la simulación de HORUS con el objetivo de estudiar su comportamiento y su capacidad para detectar los errores subsiguientes. Con FIASCO es posible lanzar cientos o miles de simulaciones en un entorno distribuido para reducir el tiempo necesario para obtener los datos de campañas de inyección a gran escala. Los resultados demuestran que un sistema que utilice las técnicas que aquí se proponen es capaz de detectar errores durante la ejecución del programa con una mínima pérdida de prestaciones, y que la penalización en el consumo de memoria al usar un procesador de guardia es similar a la de las propu / [CA] La incorporació de mecanismes de detecció d'errors és un element fonamental en el disseny de sistemes tolerants a fallades. En aquests sistemes la detecció d'un error, tant transitori com permanent, sovint significa l'inici d'una sèrie d'accions o activació d'elements per assolir algun del objectius següents: mantenir les operacions del sistema malgrat l'error, la recuperació del sistema, aturar les operacions situant el sistema en un estat segur, etc. Aquests objectius pretenen, fonamentalment, millorar les característiques de fiabilitat, seguretat i disponibilitat del sistema. El processador de guarda és un dels elements emprats per a la detecció d'errors. El seu treball consisteix en monitoritzar el processador del sistema i comprovar que no es produeixen error durant l'execució de les instruccions. Els principals inconvenients de l'ús del processadors de guarda és la pèrdua de prestacions i l'increment de les necessitats de memòria del sistema que monitoritza, per la qual cossa la seva utilització no està molt generalitzada. En aquest treball es proposa una nova tècnica de encastat de signatures (ISIS - Interleaved Signature Instruction Stream) intercalant-les en l'espai de memòria del programa. D'aquesta manera és possible que un element extern al processador realitze les operacions dirigides a detectar els errors, i al mateix temps permet que el processador execute el programa original sense tenir que processar les signatures, encara que aquestes es troben barrejades amb les instruccions del programa que s'està executant. També es proposa en aquest treball una nova tècnica que permet al processador de guarda verificar la integritat estructural del programa en execució. Aquesta verificació permet resoldre el problema de com comprovar que, al executar el processador un salt a una nova zona del programa, el salt es realitza a una de les possibles destinacions que són vàlides. Fins el moment no hi havia una solució adequada per a aquest problema i encara que la tècnica presentada no resol tots el cassos possibles, sí afegeix un bon nombre de salts al conjunt de salts verificables. Les tècniques presentades es reforcen amb l'aportació d'un sistema complet (processador, processador de guarda, memòria cache, etc.) basat en ISIS i que incorpora els mecanismes de detecció que es proposen en aquest treball. A aquest sistema se li ha donat el nom de HORUS, i està desenvolupat en llenguatge VHDL sintetitzable, la qual cosa permet no tan sols simular el seu comportament davant la aparició d'un error i analitzar la seva evolució, sinó també programar-lo en un dispositiu FPGA per incloure'l en un sistema real. 
Per poder programar el sistema HORUS s'ha desenvolupat una versió modificada del compilador gcc. Aquesta versió del compilador inclou la generació de les signatures de referència per al processador de guarda com part del procés de creació del programa executable (compilació, assemblat i enllaçat) des del codi font en llenguatge C. Finalment en aquesta tesis s'ha desenvolupat un altre treball anomenat FIASCO (Fault Injection Aid Software COmponents), un conjunt d'scripts en llenguatge Tcl/Tk que permeten injectar fallades durant la simulació del funcionament d'HORUS per estudiar la seua capacitat de detectar els errors i el seu comportament posterior. Amb FIASCO és possible llançar centenars o milers de simulacions en entorns distribuïts per reduir el temps necessari per obtenir les dades d'una campanya d'injecció de fallades de grans proporcions. Els resultats obtinguts demostren que un sistema que utilitza les tècniques descrites és capaç de detectar errors durant l'execució del programa amb una pèrdua mínima de prestacions, i amb un requeriments de memòria similars als de les propostes anteriors. / Rodríguez Ballester, F. (2016). Detección concurrente de errores en el flujo de ejecución de un procesador [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/63254 / Compendio
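In ISIS the signatures live in instruction memory and are checked by hardware running alongside the CPU; a software analogy can still convey the check itself. The Python sketch below pairs each basic block's instruction words with a compiler-embedded reference signature and has the "watchdog" recompute and compare it; the CRC and data layout are illustrative stand-ins for the thesis's actual signature scheme.

```python
import zlib

def sign(words):
    """Reference signature over a basic block's instruction words (a
    stand-in for the compiler-embedded ISIS signature)."""
    sig = 0
    for w in words:
        sig = zlib.crc32(w.to_bytes(4, "little"), sig)
    return sig

def watchdog_run(blocks):
    """Recompute each block's signature from the fetched instruction
    stream and compare it with the embedded reference, as the watchdog
    processor does concurrently with execution."""
    for words, ref_sig in blocks:
        if sign(words) != ref_sig:
            raise RuntimeError("watchdog: signature mismatch")

block = [0x12345678, 0x9ABCDEF0]     # fake 32-bit instruction words
program = [(block, sign(block))]     # signatures embedded at build time
watchdog_run(program)                # passes silently
block[0] ^= 1 << 7                   # a fault corrupts an instruction
watchdog_run(program)                # raises RuntimeError
```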
110

A HIGH-SPEED, RUGGEDIZED, MINIATURE INSTRUMENTATION RECORDER UTILIZING COMMERCIAL TECHNOLOGY

Ricker, William, Kolb, John Jr October 1992 (has links)
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California / Due to the vast amount of data that must be collected for design and performance analysis of operational and developmental systems, a real requirement has evolved for a high-speed, large-capacity data collection/record system in a small flight-worthy, ruggedized package. This need is recognized by several user communities and driven by factors that include the evolution of small operational vehicles (airborne, land, and UAVs), the desire of weapons manufacturers/integrators to be independent of the vehicle during vehicle integration, and a general need for a reliable, portable field/airborne data collection system for intelligence gathering, operational performance verification, and on-board data processing. In the air defence community, the need for a ruggedized record system was highlighted after Desert Storm, in which the operational performance of the Patriot missile was questioned and data collection had not been performed to support the performance claims. The Aydin Vector Division, in conjunction with the prime contractor, has developed a solution to this problem that utilizes a commercially available helical-scan 8mm data storage unit. This solution provides a highly reliable record system, ruggedized for airborne and field environments, at a low price in comparison with the more traditional approaches currently offered. This paper describes the design and implementation of this small, ruggedized, flight-worthy data collection system, dubbed the ATD-800. It also discusses the performance and limitations of implementing such a system, and provides several applications and solutions for different operational environments. The paper concludes with several product enhancements that may benefit the flight test, operational, and intelligence communities in the future.
