151

Design of an interactive authoring tool for creating branched video : Design av ett interaktivt författarverktyg för att skapa grenade videor

Schmidt, Simon, Tyrén, Nils January 2019 (has links)
With the release of ”Bandersnatch” in 2018, an interactive movie where the viewer makes choices that affect the outcome of the story, we know that successful interactive movies are possible and appreciated. Although this technology already exists, the possibilities are seemingly limitless. Perhaps in the future, movies could take certain paths based on a predetermined profile of a viewer, or by scanning facial expressions during the film to determine which path best suits the viewer. Interactive films and videos allow the viewer to interact with the storyline of the video. This technique is interesting from both the user and developer perspective and introduces new challenges. Having an overview of the different possible branches of the video is helpful and needed in the development of the media player and the branched video. When different possible paths of the video emerge, it can be difficult to keep track of all the storylines. In this thesis, we make significant improvements to an existing authoring tool for a branched video player. The authoring tool is to be used alongside a media player in order to facilitate the development of a non-linear branched video. We also explore which features of the authoring tool offer the most value to the user.
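As a hedged illustration of the kind of structure such an authoring tool has to keep track of (not the thesis's actual data model; every name below is invented), a branched video can be modelled as a directed graph of segments whose edges are viewer choices, from which all storylines can be enumerated:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """One linear video clip plus the choices offered at its end."""
    segment_id: str
    source_file: str
    # Maps the on-screen choice label to the next segment's id.
    choices: dict[str, str] = field(default_factory=dict)

@dataclass
class BranchedVideo:
    """A whole branched story: segments indexed by id plus an entry point."""
    start: str
    segments: dict[str, Segment] = field(default_factory=dict)

    def add_segment(self, segment: Segment) -> None:
        self.segments[segment.segment_id] = segment

    def paths_from(self, segment_id: str, seen: tuple[str, ...] = ()) -> list[list[str]]:
        """Enumerate every storyline reachable from a segment (cycles are cut off)."""
        if segment_id in seen:
            return [list(seen) + [segment_id]]
        seg = self.segments[segment_id]
        if not seg.choices:
            return [list(seen) + [segment_id]]
        paths = []
        for nxt in seg.choices.values():
            paths.extend(self.paths_from(nxt, seen + (segment_id,)))
        return paths

# A toy two-choice story: intro -> (accept | refuse) -> ending.
story = BranchedVideo(start="intro")
story.add_segment(Segment("intro", "intro.mp4", {"Accept": "accept", "Refuse": "refuse"}))
story.add_segment(Segment("accept", "accept.mp4", {"Continue": "ending"}))
story.add_segment(Segment("refuse", "refuse.mp4", {"Continue": "ending"}))
story.add_segment(Segment("ending", "ending.mp4"))
print(story.paths_from(story.start))  # overview of every possible storyline
```

Enumerating the paths in this way is exactly the kind of overview the abstract says authors need once the number of branches grows.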
152

Secure log-management for an Apache Kafka-based data-streaming service / Säker logghantering i en Apache Kafka baserad data-streaming tjänst

Kull, Hjalmar, Hujic, Mirza January 2023 (has links)
This thesis aims to investigate the prospect of using Apache Kafka to manage data streams based on secrecy/classification level and to separate these data streams in order to meet the requirements set by the secrecy/classification levels. Basalt AB has the responsibility of managing classified data for private and state actors, including the Swedish Armed Forces and other organizations. There is interest in a data-streaming solution that can securely stream large amounts of data while coordinating different data classifications and managing user access. This thesis work examines the viability of logically and physically separating producer data streams into categories based on the classification level of the data in an Apache Kafka cluster. Additionally, the thesis examines the viability of managing access control through the use of Access Control Lists. To protect against embedded attackers, the thesis examines the viability of using the Shamir Secret Sharing (SSS) algorithm to segment messages and, on top of that, using multi-factor authentication to ensure that messages cannot be read by a lone individual. The work seeks to contribute to the existing body of knowledge by improving security and ensuring the integrity of data through detailed, granular user management of event logs in an Apache Kafka cluster. This is of interest to organizations that require protection from both external and internal attackers. Our results indicate that Apache Kafka is an appropriate tool for streaming secret data; we used a secret sharing algorithm to segment data and Simple Authentication and Security Layer (SASL) to build a multi-factor authentication system.
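As a rough illustration of the secret-sharing piece only (a minimal sketch, not the thesis's implementation and nothing production-grade; the prime and all names are illustrative choices), a (threshold, n) Shamir split over a prime field can be written as:

```python
import random

# A public prime defining the field; a real deployment would pick a prime
# larger than any possible secret value.
PRIME = 2**31 - 1

def _eval_poly(coeffs, x, prime=PRIME):
    """Evaluate a polynomial (coeffs[0] is the secret) at x, mod prime."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % prime
    return acc

def split_secret(secret, n_shares, threshold, prime=PRIME):
    """Split an integer secret into n shares, any `threshold` of which recover it."""
    coeffs = [secret] + [random.randrange(1, prime) for _ in range(threshold - 1)]
    return [(x, _eval_poly(coeffs, x, prime)) for x in range(1, n_shares + 1)]

def recover_secret(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 recovers the constant term, i.e. the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % prime
                den = (den * (xi - xj)) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

shares = split_secret(424242, n_shares=5, threshold=3)
assert recover_secret(shares[:3]) == 424242  # any 3 of the 5 shares suffice
```

In a Kafka setting one could, for example, produce each share to a differently access-controlled topic, so that no single consumer principal can reconstruct a message alone; that mapping is an assumption of this sketch, not something stated in the abstract.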
153

Development of a rental platform for university students with focus on design to be perceived as trustworthy / Utveckling av en uthyrningsplattform för universitetsstudenter med fokus på design med avsikt att skapa tillförlitlighet

Meyer, Lisa, Björklund, Anna, Davill Glas, Dante, Fridell, Axel, Myhrberg, Emil, Hammarbäck, Fredrik, Strallhofer, Jakob, Book, Johannes, Johansson, Maximilian January 2022 (has links)
Studies show that the trustworthiness of a web application is affected by how it is designed, in particular by which font is used, which colour scheme is used, and whether the layout is expected or unexpected. To test this claim, a web application was developed according to principles about how design elements affect the trustworthiness of a web application. The web application was developed iteratively, and design choices as well as implemented functionality were supported by related research. Eight different prototypes of the web application, with different combinations of a blue or red colour scheme, the fonts Arial or Comic Sans, and an expected or unexpected layout, were developed. Two user tests were conducted in order to assess how the specific design elements affected the trustworthiness of the web application. The results show that the choice of colour and font for a web application affects how trustworthy it is perceived to be by the user. The combination of a blue colour scheme, the Arial font and the expected layout was perceived as the most trustworthy of the examined combinations. Colour and font have a significant impact on perceived trustworthiness, where a blue colour scheme is to be preferred over a red one, and the Arial font over Comic Sans. Regarding layout, no conclusions could be drawn from the results as to whether an expected layout is preferred over an unexpected one.
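For concreteness only (this is not code from the thesis), the eight prototypes correspond to crossing the three design factors in a 2 x 2 x 2 full factorial:

```python
from itertools import product

colours = ["blue", "red"]
fonts = ["Arial", "Comic Sans"]
layouts = ["expected", "unexpected"]

# The full factorial crossing yields the eight prototypes tested in the study.
prototypes = [
    {"colour": c, "font": f, "layout": l}
    for c, f, l in product(colours, fonts, layouts)
]
assert len(prototypes) == 8
```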
154

Improving System Reliability for Cyber-Physical Systems

Wu, Leon L. January 2015 (has links)
Cyber-physical systems (CPS) are systems featuring a tight combination of, and coordination between, the system’s computational and physical elements. Cyber-physical systems range from critical infrastructure, such as the power grid and transportation systems, to health and biomedical devices. System reliability, i.e., the ability of a system to perform its intended function under a given set of environmental and operational conditions for a given period of time, is a fundamental requirement of cyber-physical systems. An unreliable system often leads to disruption of service, financial cost and even loss of human life. An important and prevalent type of cyber-physical system meets the following criteria: processing large amounts of data; employing software as a system component; running online continuously; and having an operator in the loop because of human judgment and accountability requirements for safety-critical systems. This thesis aims to improve system reliability for this type of cyber-physical system. To that end, I present a system evaluation approach entitled automated online evaluation (AOE), a data-centric runtime monitoring and reliability evaluation approach that works in parallel with the cyber-physical system to conduct automated evaluation continuously along the workflow of the system, using computational intelligence and self-tuning techniques, and to provide operator-in-the-loop feedback on reliability improvement. For example, abnormal input and output data at or between the multiple stages of the system can be detected and flagged through data quality analysis, and alerts can be sent to the operator-in-the-loop. The operator can then take actions and make changes to the system based on the alerts in order to achieve minimal system downtime and increased system reliability. One technique used by the approach is data quality analysis using computational intelligence, which evaluates data quality in an automated and efficient way in order to make sure the running system performs reliably as expected. Another technique is self-tuning, which automatically self-manages and self-configures the evaluation system so that it adapts itself based on changes in the system and feedback from the operator. To implement the proposed approach, I further present a system architecture called the autonomic reliability improvement system (ARIS). This thesis investigates three hypotheses. First, I claim that automated online evaluation empowered by data quality analysis using computational intelligence can effectively improve system reliability for cyber-physical systems in the domain of interest indicated above. In order to prove this hypothesis, a prototype system needs to be developed and deployed in various cyber-physical systems, and certain reliability metrics are required to measure the system reliability improvement quantitatively. Second, I claim that self-tuning can effectively self-manage and self-configure the evaluation system, based on changes in the system and feedback from the operator-in-the-loop, to improve system reliability. Third, I claim that the approach is efficient: it should not have a large impact on overall system performance and should introduce only minimal extra overhead to the cyber-physical system. Performance metrics should be used to measure the efficiency and added overhead quantitatively.
Additionally, in order to conduct efficient and cost-effective automated online evaluation for data-intensive CPS, which require large volumes of data and devote much of their processing time to I/O and data manipulation, this thesis presents COBRA, a cloud-based reliability assurance framework. COBRA provides automated multi-stage runtime reliability evaluation along the CPS workflow using data relocation services, a cloud data store, data quality analysis and process scheduling with self-tuning to achieve scalability, elasticity and efficiency. Finally, in order to provide a generic way to compare and benchmark system reliability for CPS and to extend the approach described above, this thesis presents FARE, a reliability benchmark framework that employs a CPS reliability model and a set of methods and metrics for evaluation environment selection, failure analysis, and reliability estimation. The main contributions of this thesis include validation of the above hypotheses and empirical studies of the ARIS automated online evaluation system, the COBRA cloud-based reliability assurance framework for data-intensive CPS, and the FARE framework for benchmarking the reliability of cyber-physical systems. This work has advanced the state of the art in CPS reliability research, expanded the body of knowledge in this field, and provided useful studies for further research.
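To make the data-quality-analysis idea concrete, here is a loose sketch (a simplification for illustration, not the AOE/ARIS implementation) of a per-stage monitor that flags readings deviating sharply from recent history and surfaces them as operator alerts; the window size and threshold are arbitrary illustrative choices:

```python
from collections import deque
from statistics import mean, stdev

class StageMonitor:
    """Flags readings that deviate sharply from a sliding window of recent values."""

    def __init__(self, window: int = 100, threshold: float = 4.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def check(self, value: float) -> bool:
        """Return True (so the caller can alert the operator) if value is abnormal."""
        abnormal = False
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                abnormal = True
        self.history.append(value)
        return abnormal

monitor = StageMonitor()
for reading in [10.1, 9.9, 10.0, 10.2, 9.8] * 4 + [55.0]:
    if monitor.check(reading):
        print(f"ALERT: abnormal stage output {reading}")  # operator-in-the-loop feedback
```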
155

Automated Testing of Interactive Systems

Cartwright, Stephen C. 05 1900 (has links)
Computer systems which interact with human users to collect, update or provide information are growing more complex. Additionally, users are demanding more thorough testing of all computer systems. Because of the complexity and thoroughness required, automation of interactive systems testing is desirable, especially for functional testing. Many currently available testing tools, like program proving, are impractical for testing large systems. The solution presented here is the development of an automated test system which simulates human users. This system incorporates a high-level programming language, ATLIS. ATLIS programs are compiled and interpretively executed. Programs are selected for execution by operator command, and failures are reported to the operator's console. An audit trail of all activity is provided. This solution provides improved efficiency and effectiveness over conventional testing methods.
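ATLIS itself is a purpose-built high-level language that is compiled and interpretively executed; as a loose, hypothetical sketch of the underlying idea only (a simulated user replaying scripted actions against the system under test, reporting failures to the operator and keeping an audit trail), one might write:

```python
import logging

# Every simulated interaction is appended to an audit trail.
logging.basicConfig(filename="audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def run_script(name, steps, system):
    """Replay scripted user actions against `system`, reporting failures."""
    failures = 0
    for action, payload, expected in steps:
        response = system(action, payload)          # simulate the human user's input
        logging.info("%s: %s(%r) -> %r", name, action, payload, response)
        if response != expected:
            failures += 1
            print(f"FAILURE in {name}: {action}({payload!r}) "
                  f"returned {response!r}, expected {expected!r}")  # operator console
    return failures

# A toy interactive system under test: echoes queries, stores updates.
_store = {}
def toy_system(action, payload):
    if action == "update":
        key, value = payload
        _store[key] = value
        return "OK"
    if action == "query":
        return _store.get(payload, "NOT FOUND")
    return "UNKNOWN ACTION"

script = [
    ("update", ("user42", "active"), "OK"),
    ("query", "user42", "active"),
    ("query", "missing", "NOT FOUND"),
]
assert run_script("smoke-test", script, toy_system) == 0
```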
156

Embedded System Security: A Software-based Approach

Cui, Ang January 2015 (has links)
We present a body of work aimed at understanding and improving the security posture of embedded devices. We present results from several large-scale studies that measured the quantity and distribution of exploitable vulnerabilities within embedded devices in the world. We propose two host-based software defense techniques, Symbiote and Autotomic Binary Structure Randomization, that can be practically deployed to a wide spectrum of embedded devices in use today. These defenses are designed to overcome major challenges of securing legacy embedded devices. To be specific, our proposed algorithms are software-based solutions that operate at the firmware binary level. They do not require source-code, are agnostic to the operating-system environment of the devices they protect, and can work on all major ISAs like MIPS, ARM, PowerPC and X86. More importantly, our proposed defenses are capable of augmenting the functionality of embedded devices with a plethora of host-based defenses like dynamic firmware integrity attestation, binary structure randomization of code and data, and anomaly-based malcode detection. Furthermore, we demonstrate the safety and efficacy of the proposed defenses by applying them to a wide range of real-time embedded devices like enterprise networking equipment, telecommunication appliances and other commercial devices like network-based printers and IP phones. Lastly, we present a survey of promising directions for future research in the area of embedded security.
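As a hedged illustration of one of the defenses named above, dynamic firmware integrity attestation, the core idea is to compare digests of firmware regions against a known-good baseline. The sketch below is a drastic simplification (Symbiote injects such checks into the firmware binary itself, which this sketch does not attempt), and every name in it is hypothetical:

```python
import hashlib

def region_digest(firmware: bytes, start: int, length: int) -> str:
    """SHA-256 digest over one firmware region."""
    return hashlib.sha256(firmware[start:start + length]).hexdigest()

def attest(firmware: bytes, baseline: dict[tuple[int, int], str]) -> list[tuple[int, int]]:
    """Return the regions whose current digest no longer matches the baseline."""
    return [region for region, digest in baseline.items()
            if region_digest(firmware, *region) != digest]

# Build a baseline over a toy firmware image, then simulate tampering.
firmware = bytearray(b"\x00" * 4096)
regions = [(0, 1024), (1024, 1024), (2048, 2048)]
baseline = {r: region_digest(bytes(firmware), *r) for r in regions}

firmware[1500] = 0xFF  # simulated in-memory code modification
modified = attest(bytes(firmware), baseline)
assert modified == [(1024, 1024)]  # only the tampered region is flagged
```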
157

An investigation into the value of embedded software

Lynch, Valerie Barbara January 2014 (has links)
No description available.
158

On the reapportionment of cognitive responsibilities in information systems (user interface).

Fjeldstad, Oystein Devik. January 1987 (has links)
As the number of information system users increases, we are witnessing a related increase in the complexity and the diversity of their applications. The increasing functional complexity amplifies the degree of functional and technical understanding required of the user to make productive use of the application tools. Emerging technologies, increased and varied user interests and radical changes in the nature of applications give rise to the opportunity and necessity to re-examine the proper apportionment of cognitive responsibilities in human-system interaction. We present a framework for the examination of the allocation of cognitive responsibilities in information systems. These cognitive tasks involve skills associated with the models and tools that are provided by information systems and the domain knowledge and problem knowledge that are associated with the user. The term cognitor is introduced to refer to a cognitive capacity for assuming such responsibilities. These capacities are resident in the human user and they are now feasible in information system architectures. Illustrations are given of how this framework can be used in understanding and assessing the apportionment of responsibilities. Implications of shifting and redistributing cognitive tasks from the system-user environment to the system environment are discussed. Metrics are provided to assess the degree of change under alternative architectures. An architecture for the design of alternative responsibility allocations, named Reapportionment of Cognitive Activities (RCA), is presented. The architecture describes knowledge and responsibilities associated with facilitating dynamic allocation of cognitive responsibilities. Knowledge bases are used to support and describe alternative apportionments. RCA illustrates how knowledge representations, search techniques and dialogue management can be combined to accommodate multiple cooperating cognitors, each assuming unique roles, in an effort to share the responsibilities associated with the use of an information system. A design process for responsibility allocation is outlined. Examples of alternative responsibility allocations feasible within this architecture are provided. Cases implementing the architecture are described. We advocate treating the allocation of cognitive responsibilities as a design variable and illustrate through the architecture and the cases the elements necessary in reapportioning these responsibilities in information systems dialogues.
159

Assessing the usability of user interfaces: Guidance and online help features.

Smith, Timothy William. January 1988 (has links)
The purpose of this research was to provide evidence to support specific features of a software user interface implementation. A 3 x 2 x 2 full factorial, between-subjects design was employed in a laboratory experiment, systematically varying the existence or non-existence of a user interface and the medium of help documentation (either online or written), while blocking for varying levels of user experience. Subjects completed a set of tasks using a computer, so the experimenters could collect and evaluate various performance and attitudinal measures. Several attitudinal measures were developed and validated as part of this research. Consistent with previous findings, this research found that a user's previous level of experience in using a computer had a significant impact on their performance measures. Specifically, increased levels of user experience were associated with reduced time to complete the tasks, fewer characters typed, fewer references to help documentation, and fewer requests for human assistance. In addition, increased levels of user experience were generally associated with higher levels of attitudinal measures (general attitude toward computers and satisfaction with their experiment performance). The existence of a user interface had a positive impact on task performance across all levels of user experience. Although experienced users were not more satisfied with the user interface than without it, their performance was better. This contrasts with at least some previous findings that suggest experienced users are more efficient without a menu-driven user interface. The use of online documentation, as opposed to written, had a significant negative impact on task performance. Specifically, users required more time, made more references to the help documentation, and required more human assistance. However, these users generally reported attitudinal measures (satisfaction) that were as high with online as with written documentation. There was a strong interaction between the user interface and online documentation for the task performance measures. This research concludes that a set of tasks can be performed in significantly less time when online documentation is facilitated by the presence of a user interface. Written documentation users seemed to perform equivalently with or without the user interface. With online documentation the user interface became crucial to task performance. Research implications are presented for practitioners, designers and researchers.
160

Some new results on the stabilization and state estimation in large-scale systems by decentralized and multilevel schemes.

Elbanna, Refaat Mohammed. January 1988 (has links)
The main objectives of this dissertation are the following. The first objective is concerned with the stabilization of large-scale systems by decentralized control. The fundamental idea behind this type of control is the stabilization of the isolated subsystems of a large-scale system in such a way that the global stability requirement is also satisfied. For this purpose, a new stability criterion is introduced to identify a class of interconnected systems that can be stabilized by local state feedback. In addition to this, two specific classes of interconnections are presented for which the overall system stability can be ensured by a decentralized approach. A new constructive procedure for the design of decentralized controllers for the identified classes of large-scale systems is discussed. The principal advantages of this design procedure are that it requires a minimal amount of computation and that it is a systematic procedure, eliminating the trial-and-error arguments of earlier methods. The second objective of the dissertation is to investigate the problem of the stabilization of a class of large-scale systems which are composed of identical subsystems and identical interconnections. For this class of systems, certain significant theorems concerning the qualitative properties are introduced. Following the guidelines set forth by these theorems, a few different schemes for the decentralized and multilevel control of the aforementioned class of large-scale interconnected systems are presented. The third objective concerns the development of a few different schemes for the design of an asymptotic state estimator for large-scale systems described as interconnections of several low-order subsystems. The most attractive feature of the present schemes is that the majority of the necessary computations are performed at the subsystem level only, thereby leading to a simple and practicable estimator design. Finally, all the above results are illustrated by numerical examples. Further, a comparative study is conducted to show the advantages of the methods and results in this dissertation over some results available in the literature.
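For readers outside control theory, the generic setting behind this kind of decentralized stabilization (a standard textbook formulation, not the dissertation's specific criterion) is an interconnection of N linear subsystems, each stabilized by local state feedback:

```latex
\dot{x}_i = A_i x_i + B_i u_i + \sum_{j \neq i} A_{ij} x_j, \qquad
u_i = -K_i x_i, \qquad i = 1, \dots, N
```

Each gain K_i is designed from subsystem-level information only, and the stability criterion must guarantee that the interconnection terms A_{ij} x_j cannot destabilize the stabilized isolated subsystems governed by A_i - B_i K_i.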
