91

A study to develop and evaluate a taxonomic model of behavioral techniques for representing user interface designs /

Chase, Joseph Dwight. January 1994 (has links)
Thesis (Ph. D.)--Virginia Polytechnic Institute and State University, 1994. / Vita. Abstract. Includes bibliographical references (leaves 88-93). Also available via the Internet.
92

The theory and implementation of a secure system

Robb, David S. S. January 1992 (has links)
Computer viruses pose a very real threat in this technological age. As our dependence on computers increases, so does the incidence of computer virus infection. As with their biological counterparts, complete eradication is virtually impossible: every computer virus that has been released into the public domain still exists, and new viruses are discovered every day, producing a massive escalation in virus incidence. Computer viruses covertly enter a system and systematically take control, corrupt and destroy. New viruses appear each day that circumvent current means of detection, entering the most secure of systems. Anti-virus software writers find themselves fighting a battle they cannot win: for every hole that is plugged, another leak appears. This thesis presents both a method and an apparatus for an Anti-Virus System that addresses this serious problem by preventing the corruption or destruction of data within a computer system by a computer virus or other hostile program. The Anti-Virus System explained in this thesis guarantees system integrity and virus containment for any given system. Unlike other anti-virus techniques, security can be guaranteed, since at no point can a virus circumvent or corrupt the action of the Anti-Virus System presented. It requires no hardware modification of the computer or the hard disk, nor software modification of the computer's operating system. While remaining largely transparent to the user, the System guarantees total protection against the spread of current and future viruses.
93

Why is security still an issue? : A study comparing developers’ software security awareness to existing vulnerabilities in software applications / Varför är säkerhetshål i mjukvara fortfarande ett problem? : En jämförande studie mellan utvecklares medvetenhet kring mjukvarusäkerhet och existerande sårbarheter i deras mjukvara

Backman, Lars January 2018 (has links)
The need for secure web applications grows ever stronger as more sensitive personal data makes its way onto the Internet. During the last decade, hackers have stolen enormous amounts of data from high-profile companies and social institutions. In this paper, we ask why security breaches still occur: why do programmers write vulnerable code? To answer this question, we conducted a case study at a small software development company. Through penetration tests, surveys and interviews, we identified several weaknesses in the company's product and way of working that could lead to security breaches in its application. We also conducted a security awareness assessment and found multiple contributing factors to these weaknesses: insufficient knowledge, misplaced trust, and inadequate testing policies are some of the reasons why the vulnerabilities appeared in the studied application.
94

Evaluation software for radomes / Utvärderingsprogram för radomer

Eklund, Olov January 2019 (has links)
During the 1930s, radar (Radio Detection and Ranging) was developed in many countries simultaneously and independently of one another. Radar was developed to detect hostile objects, such as aircraft. One problem was that the radar antenna is exposed to weather, wind and other environmental stresses. The solution was to place a protective cover over the radar. This cover is called a radome, a contraction of the English "radar dome". SAAB Applied Composites AB, ACAB (formerly GKN Applied Composites AB), develops and manufactures, among other products, radomes in the form of aircraft nose cones. The radome protects the radar antenna. To examine a radome's radar properties, measurements are made on the radar with and without the radome, which generates a large amount of measurement data. The purpose of this thesis is to produce a computer program that collects and processes this measurement data and presents it graphically. The resulting program, OErep, serves as a framework for further development of a test program for evaluating radome measurement data. C++ was used as the programming language, and the work was carried out partly at Applied Composites AB and partly at Linköping University. Scope: only the functions Transmission Efficiency, Sidelobe Level and Main Lobe Beam Width are treated in this thesis; considerably more functions exist that are not covered. Purpose: radome measurements generate large amounts of data that are currently processed semi-manually, and the aim of this thesis is to develop a program that simplifies the compilation and processing of the collected data.
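The abstract names three evaluation functions (Transmission Efficiency, Sidelobe Level, Main Lobe Beam Width) without defining them. As a rough illustration of what two of these computations involve, here is a minimal Python sketch operating on a measured antenna pattern; OErep itself is written in C++ and its exact definitions may differ, so the toy pattern, the null-finding heuristic and all parameter names below are assumptions.

```python
import numpy as np

def sidelobe_level_db(pattern_db):
    """Peak sidelobe level relative to the main-lobe peak (dB).

    pattern_db: measured antenna pattern in dB over angle samples.
    The main lobe is delimited by the first nulls on either side of
    the peak; everything outside it counts as sidelobe region.
    """
    peak = int(np.argmax(pattern_db))
    left = peak                                  # walk to the first null
    while left > 0 and pattern_db[left - 1] < pattern_db[left]:
        left -= 1
    right = peak
    while right < len(pattern_db) - 1 and pattern_db[right + 1] < pattern_db[right]:
        right += 1
    outside = np.concatenate([pattern_db[:left], pattern_db[right + 1:]])
    return float(outside.max() - pattern_db[peak])

def transmission_efficiency(with_radome_db, without_radome_db):
    """Peak-gain loss introduced by the radome, as a linear power ratio."""
    delta_db = np.max(with_radome_db) - np.max(without_radome_db)
    return 10 ** (delta_db / 10)

angles = np.linspace(-90, 90, 361)
pattern = 20 * np.log10(np.abs(np.sinc(angles / 15)) + 1e-6)  # toy pattern
print(f"sidelobe level: {sidelobe_level_db(pattern):.1f} dB")
print(f"transmission efficiency: {transmission_efficiency(pattern - 0.5, pattern):.3f}")
```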
95

Network redesign at Brinken : Mapping and improvement proposals / Redesign av nätverk på Brinken : Kartläggning och förbättringsförslag

Marklund, Jonathan, Fredriksson, Erik January 2018 (has links)
The Brinken network has grown over time through the involvement of several separate actors in its development and maintenance. Because multiple contractors have made changes to Brinken's wiring closets according to their own preferences, the network has reached its present state: instead of documenting and structuring the closets uniformly, unique solutions to the same problem have been applied in the various cabinets. In 18 of 20 closets there is, moreover, obsolete Token Ring equipment, which raises the risk of higher failure rates and is costly compared with modern equipment. Maintaining good security has been neglected to the point that four cabinets at Brinken lack any security at all. Security is a vital component that should by all means be kept intact to prevent unauthorized access and sabotage.

At present, large parts of the network lack redundancy in the form of alternative paths. In the current design there are cabinets whose failure disconnects not only the offices attached directly to them but also offices attached to other cabinets, so an unjustifiably large number of offices are affected even though the cabinet they are connected to is working as intended.

The work carried out comprised thorough documentation of the entire Brinken network. In our analysis, a range of problems of varying risk levels were detected and ranked by their critical impact on the network, and remedial proposals were produced for every problem on the basis of the material gathered in the pre-study. Graphs in the thesis show the distribution of the problems' risk levels and how they will develop if no action is taken: one critical problem was discovered in four closets, and serious problems were found on 65 occasions.
96

On the use of base choice strategy for testing industrial control software

Eklund, Simon January 2018 (has links)
Testing is one of the most important parts of software development; it is used to ensure that the software meets a certain level of quality. In many situations it is a time-consuming, manually performed and error-prone task. In recent years a wide range of techniques for automated test generation have been explored with the goal of making testing more efficient in both cost and time. Many of these techniques use combinatorial methods to explore different combinations of test inputs. Base Choice (BC) is a combinatorial method that has been shown to be efficient and effective at detecting faults. However, how BC compares to manual testing performed by industrial engineers experienced in software testing has not been well studied. This thesis presents the results of a case study comparing BC testing with manual testing. We investigate the quality of manual tests and of tests created using the BC strategy in terms of decision coverage, fault-detection capability and cost efficiency (measured as the number of test cases). We used recently developed industrial programs written in the IEC 61131-3 FBD language, a popular programming language for embedded software running on programmable logic controllers. To generate BC tests we used the Combinatorial Testing Tool (CTT) developed at Mälardalen University. The results show that manual tests performed significantly better than BC-generated tests in terms of both decision coverage and fault detection. On average, manually written tests achieved 97.38% decision coverage, while BC test suites achieved only 83.10%. In fault-detection capability, manual test suites found on average 88.90% of injected faults, compared with 69.53% for BC-generated suites. We also found that manual test suites are slightly smaller, in number of tests, than BC suites. The effectiveness of BC is heavily affected by the base values chosen by the tester; more precise base choices might have yielded different coverage and fault-detection results.
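For readers unfamiliar with the Base Choice strategy compared above: BC builds one "base" test from a representative value per parameter, then adds one test per remaining value of each parameter, varying a single parameter at a time. A minimal sketch follows; the parameter names and values are invented for illustration and do not come from the studied IEC 61131-3 programs or from CTT.

```python
def base_choice_suite(params, base):
    """Generate a Base Choice test suite.

    params: dict mapping parameter name -> list of possible values.
    base:   dict mapping parameter name -> its chosen base value.
    Returns a list of test cases (dicts): one all-base test, then one
    test per non-base value, varying a single parameter at a time.
    """
    suite = [dict(base)]                 # the base test case
    for name, values in params.items():
        for value in values:
            if value == base[name]:
                continue                 # already covered by the base test
            test = dict(base)
            test[name] = value           # vary exactly one parameter
            suite.append(test)
    return suite

# Invented example: three inputs of a hypothetical control block.
params = {
    "mode":  ["auto", "manual", "off"],
    "limit": [0, 50, 100],
    "alarm": [True, False],
}
base = {"mode": "auto", "limit": 50, "alarm": True}

for case in base_choice_suite(params, base):
    print(case)                          # 1 + (2 + 2 + 1) = 6 test cases
```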
97

Designing a Scheduler for Cloud-Based FPGAs

Jonsson, Simon January 2018 (has links)
The primary focus of this thesis has been to design a network packet scheduler for the 5G (fifth-generation) network at Ericsson in Linköping, Sweden. A network packet scheduler determines the sequence in which packets are transmitted and queues them accordingly; depending on the system's requirements, different schedulers work in different ways. The scheduler designed in this thesis has a timing wheel as its core: packets are placed in the wheel according to their final transmission time and output accordingly. The algorithm is implemented on an FPGA (Field-Programmable Gate Array) located in a cloud environment. The hosting platform, Amazon EC2 F1, can be rented with a Linux instance that comes with everything necessary to develop a synthesized file for the FPGA. Part of the thesis discusses the design of the algorithm and how it was customized for a hardware implementation, and part describes using the instance environment for development.
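The abstract's core data structure, the timing wheel, can be illustrated in a few lines. The sketch below is a generic software model under assumed semantics (fixed tick granularity, delays within one wheel rotation); the thesis design is a hardware implementation for the EC2 F1 FPGA and will differ in detail.

```python
class TimingWheel:
    """A minimal timing-wheel packet scheduler sketch.

    Slots represent discrete time ticks; each packet is placed in the
    slot matching its scheduled transmission time (modulo wheel size).
    """

    def __init__(self, num_slots):
        self.num_slots = num_slots
        self.slots = [[] for _ in range(num_slots)]
        self.now = 0                      # index of the current tick

    def schedule(self, packet, delay):
        """Enqueue a packet to be sent `delay` ticks from now."""
        if not 0 <= delay < self.num_slots:
            raise ValueError("delay exceeds the wheel's horizon")
        slot = (self.now + delay) % self.num_slots
        self.slots[slot].append(packet)

    def tick(self):
        """Advance one tick and return packets due for transmission."""
        due = self.slots[self.now]
        self.slots[self.now] = []
        self.now = (self.now + 1) % self.num_slots
        return due

wheel = TimingWheel(num_slots=8)
wheel.schedule("pkt-A", delay=0)
wheel.schedule("pkt-B", delay=3)
for t in range(4):
    print(t, wheel.tick())               # pkt-A at tick 0, pkt-B at tick 3
```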
98

Design and implementation of a multi-agent opportunistic grid computing platform

Muranganwa, Raymond January 2016 (has links)
Opportunistic Grid Computing joins idle computing resources in enterprises into a converged high-performance commodity infrastructure. The research described in this dissertation investigates the viability of public-resource computing in offering seamless access to shared compute and storage resources. It proposes and conceptualizes the Multi-Agent Opportunistic Grid (MAOG) solution within an Information and Communication Technologies for Development (ICT4D) initiative to address limitations prevalent in traditional distributed-system implementations. Proof-of-concept software components based on JADE (Java Agent Development Framework) validated Multi-Agent Systems (MAS) as an important tool for provisioning Opportunistic Grid Computing platforms. Exploration of agent technologies within the research context identified two key components that improve access to extended computing capabilities. The first is a Mobile Agent (MA) compute component in which a group of agents interact to pool shared processor cycles; it integrates dynamic resource identification and allocation strategies by incorporating the Contract Net Protocol (CNP) and rule-based reasoning concepts. The second is a MAS-based storage component realized through disk mirroring and Google File System-style chunking with atomic-append storage techniques. The research thus provides a candidate Opportunistic Grid Computing platform design and implementation using MAS. Experiments validated the design and implementation of the compute and storage services: support for processing user applications, resource identification and allocation, and rule-based reasoning validated the MA compute component, and the MAS-based file system implementing chunking optimizations performed best in the evaluations. The findings also confirmed the functional adequacy of the implementation and show the suitability of MAS for provisioning robust, autonomous and intelligent platforms. The ICT4D context of this research provides a way to optimize and increase the utilization of computing resources that usually sit idle in such settings.
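The Contract Net Protocol mentioned above is a simple announce-bid-award interaction. A minimal sketch of one allocation round follows; the bidding metric (idle CPU fraction) and all names are invented for illustration, and the dissertation's actual implementation uses JADE in Java rather than Python.

```python
import random

random.seed(1)                             # reproducible toy run

class Contractor:
    """A worker agent that bids its currently spare processor share."""
    def __init__(self, name):
        self.name = name
        self.idle_fraction = random.random()   # simulated idle CPU share

    def bid(self, task):
        # Decline tasks larger than our spare capacity.
        if task["load"] > self.idle_fraction:
            return None
        return {"agent": self, "offer": self.idle_fraction}

class Manager:
    """Announces a task, collects bids, awards to the best offer (CNP)."""
    def allocate(self, task, contractors):
        bids = [b for c in contractors if (b := c.bid(task)) is not None]
        if not bids:
            return None                    # no capable contractor this round
        best = max(bids, key=lambda b: b["offer"])
        return best["agent"]

contractors = [Contractor(f"node-{i}") for i in range(5)]
winner = Manager().allocate({"name": "render-job", "load": 0.3}, contractors)
print("awarded to:", winner.name if winner else "nobody")
```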
99

On verification and controller synthesis for probabilistic systems at runtime

Ujma, Mateusz January 2015 (has links)
Probabilistic model checking is a technique employed for verifying the correctness of computer systems that exhibit probabilistic behaviour. A related technique is controller synthesis, which generates controllers that guarantee the correct behaviour of the system. Not all controllers can be generated offline, as the relevant information may only be available when the system is running, for example, the reliability of services may vary over time. In this thesis, we propose a framework based on controller synthesis for stochastic games at runtime. We model systems using stochastic two-player games parameterised with data obtained from monitoring of the running system. One player represents the controllable actions of the system, while the other player represents the hostile uncontrollable environment. The goal is to synthesize, for a given property specification, a controller for the first player that wins against all possible actions of the environment player. Initially, controller synthesis is invoked for the parameterised model and the resulting controller is applied to the running system. The process is repeated at runtime when changes in the monitored parameters are detected, whereby a new controller is generated and applied. To ensure the practicality of the framework, we focus on its three important aspects: performance, robustness, and scalability. We propose an incremental model construction technique to improve performance of runtime synthesis. In many cases, changes in monitored parameters are small and models built for consecutive parameter values are similar. We exploit this and incrementally build a model for the updated parameters reusing the previous model, effectively saving time. To address robustness, we develop a technique called permissive controller synthesis. Permissive controllers generalise the classical controllers by allowing the system to choose from a set of actions instead of just one. By using a permissive controller, a computer system can quickly adapt to a situation where an action becomes temporarily unavailable while still satisfying the property of interest. We tackle the scalability of controller synthesis with a learning-based approach. We develop a technique based on real-time dynamic programming which, by generating random trajectories through a model, synthesises an approximately optimal controller. We guide the generation using heuristics and can guarantee that, even in the cases where we only explore a small part of the model, we still obtain a correct controller. We develop a full implementation of these techniques and evaluate it on a large set of case studies from the PRISM benchmark suite, demonstrating significant performance gains in most cases. We also illustrate the working of the framework on a new case study of an open-source stock monitoring application.
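The controller-synthesis setting described above can be made concrete with a small max-min value-iteration sketch over a turn-based stochastic two-player game: the controller (player 1) maximizes the probability of reaching a target, while the environment (player 2) picks the worst distribution for each controller action. The game encoding and all names below are invented for illustration; production tools such as PRISM implement this far more efficiently.

```python
# game[state][action] = list of environment choices, each a probability
# distribution over successor states: {successor: probability}.

def synthesize(game, target, iterations=100):
    """Approximate max-min reachability values and extract a controller."""
    values = {s: 1.0 if s in target else 0.0 for s in game}
    for _ in range(iterations):
        for s in game:
            if s in target or not game[s]:
                continue
            # Player 1 maximizes over actions; player 2 minimizes over
            # the distributions available for the chosen action.
            values[s] = max(
                min(sum(p * values[t] for t, p in dist.items())
                    for dist in dists)
                for dists in game[s].values()
            )
    controller = {
        s: max(game[s],
               key=lambda a: min(sum(p * values[t] for t, p in dist.items())
                                 for dist in game[s][a]))
        for s in game if s not in target and game[s]
    }
    return values, controller

# Tiny example: "risky" wins big unless the environment forces failure,
# so the synthesized controller prefers "safe".
game = {
    "s0": {"safe":  [{"goal": 0.5, "fail": 0.5}],
           "risky": [{"goal": 0.9, "fail": 0.1}, {"fail": 1.0}]},
    "goal": {}, "fail": {},
}
values, controller = synthesize(game, target={"goal"})
print(values["s0"], controller["s0"])      # 0.5 safe
```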
100

A combination method of fingerprint positioning and propagation-model-based localization for large-scale 3D indoor space

Liu, Jun January 2018 (has links)
To improve positioning accuracy in large-scale indoor spaces, this thesis proposes a weighted Gaussian and mean hybrid filter (G-M filter) for received signal strength indicator (RSSI) measurements, developed by taking practical RSSI measurements and analyzing their characteristics. Various path-loss models were then used to estimate the separation between the transmitting antenna and the receiver (T-R separation) from the G-M mean of the RSSI; on this basis, a dynamic-parameter path-loss model is proposed that improves the accuracy of the estimated T-R separation and describes indoor positions more accurately. Finally, an improved fingerprint-positioning method is proposed as the basic method and combined with a tetrahedral trilateration scheme to reduce positioning error in large-scale 3D indoor spaces, achieving an average localization error of 1.5 meters.
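The abstract does not give the G-M filter's exact definition, but the idea of combining Gaussian outlier rejection with a mean, then inverting a path-loss model to obtain the T-R separation, can be sketched as follows. The blend weight, reference RSSI at 1 m and path-loss exponent are all assumed values for illustration and are not taken from the thesis.

```python
import math
import statistics

def gm_mean(rssi_samples, weight=0.6):
    """Hedged sketch of a Gaussian/mean hybrid filter: keep samples
    within one standard deviation of the mean (the 'Gaussian' part),
    then blend the filtered mean with the raw mean. The weighting is
    illustrative; the thesis's exact G-M definition may differ."""
    mu = statistics.mean(rssi_samples)
    sigma = statistics.pstdev(rssi_samples)
    kept = [r for r in rssi_samples if abs(r - mu) <= sigma] or rssi_samples
    return weight * statistics.mean(kept) + (1 - weight) * mu

def tr_separation(rssi, rssi_d0=-40.0, d0=1.0, n=2.5):
    """Invert the log-distance path-loss model to estimate T-R
    separation: RSSI(d) = RSSI(d0) - 10*n*log10(d/d0)."""
    return d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))

samples = [-63.2, -61.8, -75.0, -62.5, -64.1, -62.9]   # dBm, one outlier
rssi = gm_mean(samples)
print(f"filtered RSSI: {rssi:.1f} dBm -> distance ~ {tr_separation(rssi):.1f} m")
```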
