
Utvecklandet av ett mer användbart system: En designanalys av ledningssystemet C2STRIC / The Development of a More User-Friendly System: A Design Analysis of the Command-and-Control System C2STRIC

Schenning, Joacim; Rydén, Tova. January 2023.
In a modern world torn apart by war, the basic human need to feel safe is under threat. Defending and protecting this need depends on technical defense systems that can resolve complex situations. As the rate of technological innovation accelerates, companies like Saab must maintain their competitive advantage; one way to do so is to offer modern, high-quality systems with intuitive interfaces that increase usability. This master's thesis evaluates Saab Surveillance's command-and-control system C2STRIC through an analysis of its user interface. It further aims to identify usability problems and to design prototypes that solve them, thereby increasing the system's usability. The system in question is safety-critical, meaning that its malfunction may lead to severe consequences or death. This characteristic permeates the whole thesis, yielding findings unique to C2STRIC that may not generalize. Issues were identified and compiled through close collaboration and interviews with the users. For secrecy reasons, all interviews were conducted in person without recordings, all at Saab's premises except one, which took place at Stridslednings- och luftbevakningsskolan at Uppsala garrison. Most interviews were scheduled, while some were spontaneous. From the compiled problems, high-fidelity prototypes were developed in Adobe XD through an iterative user-centered design process. The process provided continuous feedback, which helped deliver prototypes that satisfy the users' needs. The primary problems concerned disturbances of the user's situational awareness, i.e., issues preventing users from performing their tasks as circumstances demand. These disturbances were caused by ineffective navigation in the interface and poorly optimized object visualization.
Prototypes introducing transparency, docking systems, search functions, radial menus, and a new main menu solved these problems and increased the usability of C2STRIC, and by extension they help defend and protect the basic human need to feel safe.

Waiting for Locks: How Long Does It Usually Take?

Baier, Christel; Daum, Marcus; Engel, Benjamin; Härtig, Hermann; Klein, Joachim; Klüppelholz, Sascha; Märcker, Steffen; Tews, Hendrik; Völp, Marcus. January 2012.
Reliability of low-level operating-system (OS) code is an indispensable requirement. This includes functional properties from the safety-liveness spectrum, but also quantitative properties stating, e.g., that the average waiting time on locks is sufficiently small or that the energy requirement of a certain system call is below a given threshold with a high probability. This paper reports on our experiences made in a running project where the goal is to apply probabilistic model checking techniques and to align the results of the model checker with measurements to predict quantitative properties of low-level OS code.
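The quantitative property named here, the average waiting time on locks, can be illustrated with a small simulation (a hypothetical sketch for intuition only; the paper itself uses probabilistic model checking aligned with measurements, not simulation, and the parameter values below are invented):

```python
import random

def mean_lock_wait(n_requests=10_000, arrival_rate=0.8, service_rate=1.0, seed=42):
    """Estimate the mean waiting time on a contended lock by simulating
    an M/M/1-style queue: requests arrive with exponential inter-arrival
    times and hold the lock for an exponentially distributed duration."""
    rng = random.Random(seed)
    clock = 0.0      # time of the current lock request's arrival
    free_at = 0.0    # time at which the lock next becomes free
    total_wait = 0.0
    for _ in range(n_requests):
        clock += rng.expovariate(arrival_rate)   # next request arrives
        start = max(clock, free_at)              # waits while lock is held
        total_wait += start - clock
        free_at = start + rng.expovariate(service_rate)  # critical section
    return total_wait / n_requests

avg = mean_lock_wait()
```

As a sanity check, M/M/1 queueing theory predicts a mean wait of rho / (mu - lambda) = 0.8 / 0.2 = 4.0 time units for these rates, and the simulated estimate converges toward that value; a probabilistic model checker computes such expectations exactly from the model rather than estimating them.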

Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack

Imran, Naveed. 01 January 2013.
Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep-submicron era, resilient adaptable processing systems are sought that maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here an autonomous fault-isolation scheme is employed which neither requires test vectors nor suspends computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault-isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, a Motion Estimation (ME) engine, a Finite Impulse Response (FIR) filter, a Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits.
A significant reduction in power consumption is achieved, ranging from 83% for low motion-activity scenes to 12.5% for high motion-activity scenes, in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within 4.02 dB to 6.67 dB of the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
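The PSNR figures reported above follow the standard peak signal-to-noise ratio definition; a minimal reference computation (illustrative only, not the dissertation's hardware implementation, and the sample pixel values are invented):

```python
import math

def psnr(reference, degraded, max_value=255):
    """Peak signal-to-noise ratio in dB between two equal-length pixel
    sequences; higher values mean the degraded output is closer to the
    fault-free reference."""
    assert len(reference) == len(degraded)
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical signals: no distortion
    return 10 * math.log10(max_value ** 2 / mse)

# A frame whose every pixel differs by 1 has MSE = 1, so
# PSNR = 10 * log10(255^2) ~ 48.13 dB.
ref = [10, 20, 30, 40]
deg = [11, 21, 31, 41]
```

Under this metric, the reported degradation of 4.02 dB to 6.67 dB from a roughly 32 dB baseline corresponds to a visibly but not catastrophically noisier output frame.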

Formal verification of a synchronous data-flow compiler: from Signal to C

Ngô, Van Chan. 01 July 2014.
Synchronous languages such as Signal, Lustre, and Esterel are dedicated to designing safety-critical systems. Their compilers are large and complicated programs that may be incorrect in some contexts, silently producing bad compiled code from correct source programs. Such compiled code can invalidate safety properties that were guaranteed on the source programs by formal methods. Adopting the translation validation approach, this thesis aims at formally proving the correctness of the highly optimizing, industrial Signal compiler. The correctness proof represents both the source program and the compiled code in a common semantic framework, then formalizes a relation between them expressing that the semantics of the source program is preserved in the compiled code.
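The core idea of translation validation, relating the observable behaviour of a source program to that of its compiled code, can be sketched as a toy executable check (a hypothetical illustration only: the thesis establishes the relation by formal proof over all inputs, not by testing, and the transition functions below are invented):

```python
def validate_translation(source_step, compiled_step, inputs, init_state=0):
    """Toy translation validation: replay the same input stream through
    the source-level and compiled-level transition functions and check
    that every observable output matches. Each step function maps
    (state, input) -> (new_state, output)."""
    s_state = c_state = init_state
    for x in inputs:
        s_state, s_out = source_step(s_state, x)
        c_state, c_out = compiled_step(c_state, x)
        if s_out != c_out:
            return False  # compiled code diverges from source semantics
    return True

# Source program: a running sum; "compiled" version: syntactically
# reordered but semantically equivalent.
source = lambda st, x: (st + x, st + x)
compiled = lambda st, x: (x + st, x + st)
ok = validate_translation(source, compiled, [1, 2, 3])
```

A formal proof generalizes this finite check into a simulation relation between the semantics of the two programs, so that preservation holds for every input stream, not just tested ones.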

Formal verification of a synchronous data-flow compiler: from Signal to C / Vérification formelle d'un compilateur synchrone : de Signal vers C

Ngô, Van Chan. 01 July 2014.
Synchronous languages such as Signal, Lustre, and Esterel are dedicated to designing safety-critical systems. Their compilers are large and complicated programs that may be incorrect in some contexts, silently producing bad compiled code from correct source programs. Such compiled code can invalidate safety properties that were guaranteed on the source programs by formal methods. Adopting the translation validation approach, this thesis aims at formally proving the correctness of the highly optimizing, industrial Signal compiler. The correctness proof represents both the source program and the compiled code in a common semantic framework, then formalizes a relation between them expressing that the semantics of the source program is preserved in the compiled code.

A strategic theoretical framework to safeguard business value for information systems

Grobler, Chris Daniel. January 2017.
The phenomenon of business value dissipation in mature organisations, as an unintended by-product of the adoption and use of information systems, has been a highly debated topic in the corporate boardroom, awakening the interest of practitioners and academics alike. Much of the discourse tends to focus on the inability of organisations to unlock and realise the benefits intended from large information systems investments. While the business case for investing in large technology programmes has been thoroughly investigated, the human agent who causes value erosion through interaction with information systems (IS) has not received the studied attention it deserves. This study examines the use of technology in organisations by considering the dichotomy inherent in IS: introduced to create new or sustain existing business value, it may subsequently and inadvertently dissipate value. The study investigates the root people-induced causes of this unintentional dissipation of value and presents an empirically validated model suggesting that human agents not only create value for organisations through their use of IS but also, deliberately or inadvertently, dissipate it. These root causes are delineated within a Theoretical Technology Value Framework, constructed from a review of the extant literature, which sets out the overall unintentional value-destroying causes and effects of IS on organisations. The Theoretical Technology Value Framework is then applied as a basis for developing a set of questions to support both qualitative and quantitative investigations, from which an Archetypical Technology Value Model was derived.
Finally, the Archetypical Technology Value Model is presented as a benchmark and basis to identify, investigate, mitigate, and minimise or eliminate the unintentional value-destroying effects of IS on Information Technology driven organisations. The study concludes with implications for both theory and practice and suggestions on how value erosion through the activities of the human agent may be identified, modelled, and mitigated. Ultimately, recommendations are offered towards the crafting of more effective IS. / School of Computing / Ph. D. (Information Systems)

Model-Based Exploration of Parallelism in Context of Automotive Multi-Processor Systems

Höttger, Robert Martin 15 July 2021 (has links)
This dissertation, entitled 'Model-Based Exploration of Parallelism in the Context of Automotive Multi-Core Systems', deals with the analytical investigation of different temporal relationships for automotive multi-processor systems subject to critical, embedded, real-time, distributed, and heterogeneous domain requirements. Vehicle innovation increasingly demands high-performance platforms, e.g., for highly assisted or autonomous driving, such that established software development processes must be examined, revised, and advanced. The goal is not to develop application software itself, but to improve the model-based development process, subject to numerous constraints and requirements. Model-based software development is an established process that allows systems to be analyzed and simulated in an abstracted, standardized, modular, isolated, or integrated manner. The verification of real-time behavior, taking into account various constraints and modern architectures that include graphics and heterogeneous processors as well as dedicated hardware accelerators, is one of many challenges in the real-time and automotive community. The distribution of software across hardware entities and the identification of software that can be executed in parallel are crucial in the development process. Since these processes usually optimize one or more properties, they belong to the category of problems believed to be solvable in polynomial time only by non-deterministic methods, and thus (meta)heuristics are used to solve them. Such (meta)heuristics require sophisticated implementation and configuration, because the properties to be optimized are usually subject to many different analyses.
With the results of this dissertation, various development processes can be adjusted to modern architectures by using new and extended processes that enable future and computationally intensive vehicle applications on the one hand and improve existing processes in terms of efficiency and effectiveness on the other hand. These processes include runnable partitioning, task mapping, data allocation, and timing verification, which are addressed with the help of constraint programming, genetic algorithms, and heuristics.
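The task-mapping step named above is commonly attacked with greedy heuristics; a minimal sketch of one generic baseline (longest-processing-time-first, chosen here for illustration, not the dissertation's specific constraint-programming or genetic-algorithm approach, and the task costs are invented):

```python
import heapq

def map_tasks(task_costs, n_cores):
    """Greedy LPT (longest processing time first) mapping: sort tasks by
    descending cost and assign each to the currently least-loaded core.
    Returns the task->core mapping and the resulting makespan. Real
    automotive flows add affinity, pairing, and timing constraints."""
    loads = [(0.0, core) for core in range(n_cores)]
    heapq.heapify(loads)  # min-heap keyed by accumulated core load
    mapping = {}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, core = heapq.heappop(loads)   # least-loaded core
        mapping[task] = core
        heapq.heappush(loads, (load + cost, core))
    makespan = max(load for load, _ in loads)
    return mapping, makespan

mapping, makespan = map_tasks({"a": 4.0, "b": 3.0, "c": 2.0, "d": 1.0}, n_cores=2)
```

For the four tasks above on two cores, LPT balances the load to a makespan of 5.0; exact methods such as constraint programming can then be used to certify or improve such heuristic solutions under the full set of timing constraints.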

Evaluating the expressiveness of specification languages: for stochastic safety-critical systems

Jamil, Fahad Rami. January 2024.
This thesis investigates the expressiveness of specification languages for stochastic safety-critical systems, addressing the need to describe system behaviour formally. Through a case study and specification-language enhancements, the research explores the impact of different frameworks on a set of specifications. The results highlight the importance of continuous development of specification languages to capture the complex behaviours of systems with probabilistic properties. The findings emphasise the need to extend the chosen specification languages more formally, to ensure that the languages can capture the complexity of the systems they describe. The research contributes valuable insights into improving the expressiveness of specification languages for ensuring system safety and operational reliability.
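The probabilistic properties such specifications express typically have the shape "the probability of a bad event is below a bound"; a hypothetical sketch of checking one such property by Monte Carlo estimation (illustrative only: the thesis concerns formal specification languages, not simulation, and the failure model below is invented):

```python
import random

def estimate_probability(event, n_trials=20_000, seed=7):
    """Monte Carlo estimate of an event probability, e.g. for checking a
    specification like 'P(failure within mission time) <= 0.05'.
    `event` takes an RNG and returns True when the bad event occurs."""
    rng = random.Random(seed)
    hits = sum(event(rng) for _ in range(n_trials))
    return hits / n_trials

# Hypothetical model: a component fails in each of 3 steps with
# probability 0.01; true failure probability is 1 - 0.99**3 ~ 0.0297.
fails_within_3 = lambda rng: any(rng.random() < 0.01 for _ in range(3))
p = estimate_probability(fails_within_3)
spec_satisfied = p <= 0.05  # the probabilistic specification bound
```

Probabilistic model checkers compute such probabilities exactly from the model, which is precisely why expressive specification languages matter: the bound and the event must both be statable in the language before any tool can verify them.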
