21

Embracing security in all phases of the software development life cycle: A Delphi study

Deschene, Marie 10 November 2016 (has links)
<p> Software is omnipresent from refrigerators to financial institutions. In addition to software that defines cyber system functionality, there is an increasing amount of digitized data on cyber systems. This increasing amount of easily available data has prompted a rise in attacks on cyber systems by globally organized attackers. The solution (which has been proposed by multiple authors) is to plan security into software products throughout all software development phases. This approach constitutes a change in the software development life cycle (SDLC) process. Acceptance and approval from all software development stakeholders is needed to make this type of cultural paradigm shift. A Delphi study into what would encourage software development stakeholders to accept the need for security during software development was performed. Results of the three-round Delphi study revealed education (formal and informal) would increase software development stakeholder understanding of the risks of insecure software and educate stakeholders on how to plan and write more secure software. The Delphi study also revealed that mitigation of time and resource constraints on software projects is needed to encourage software teams to embrace the need and efforts necessary to include security in all phases of the SDLC. </p>
22

Fitting an information security architecture to an enterprise architecture

19 May 2009 (has links)
M.Phil. (Computer Science) / Despite the efforts at international and national level, security continues to pose challenging problems. Firstly, attacks on information systems are increasingly motivated by profit rather than by the desire to create disruption for its own sake. Data are illegally mined, increasingly without the user’s knowledge, while the number of variants (and the rate of evolution) of malicious software (malware) is increasing rapidly. Spam is a good example of this evolution. It is becoming a vehicle for viruses and fraudulent and criminal activities, such as spyware, phishing and other forms of malware. Its widespread distribution increasingly relies on botnets, i.e. compromised servers and PCs used as relays without the knowledge of their owners. The increasing deployment of mobile devices (including 3G mobile phones, portable videogames, etc.) and mobile-based network services will pose new challenges, as IP-based services develop rapidly. These could eventually prove to be a more common route for attacks than personal computers since the latter already deploy a significant level of security. Indeed, all new forms of communication platforms and information systems inevitably provide new windows of opportunity for malicious attacks. In order to successfully tackle the problems described above, a strategic approach to information security is required, rather than the implementation of ad hoc solutions and controls. The strategic approach requires the development of an Information Security Architecture. To be effective, an Information Security Architecture that is developed must be aligned with the organisation’s Enterprise Architecture and must be able to incorporate security into each domain of the Enterprise Architecture. This mini dissertation evaluates two current Information Security Architecture models and frameworks to find an Information Security Architecture that aligns with Eskom’s Enterprise Architecture.
23

Progression and Edge Intelligence Framework for IoT Systems

Huang, Zhenqiu 26 October 2016 (has links)
<p> This thesis studies the issues of building and managing future Internet of Things (IoT) systems. IoT systems consist of distributed components with services for sensing, processing, and controlling through devices deployed in our living environment as part of the global cyber-physical ecosystem. </p><p> Systems with perpetually running IoT devices may use a lot of energy. One challenge is implementing good management policies for energy saving. In addition, a large number of devices may be deployed in wide geographical areas through low bandwidth wireless communication networks. This brings the challenge of configuring a large number of duplicated applications with low latency in a scalable manner. Finally, intelligent IoT applications, such as occupancy prediction and activity recognition, depend on analyzing user and event patterns from historical data. In order to achieve real-time interaction between humans and things, reliable yet real-time analytic support should be included to leverage the interplay and complementary roles of edge and cloud computing. </p><p> In this dissertation, I address the above issues from the service oriented point of view. Service oriented architecture (SOA) provides the integration and management flexibility using the abstraction of services deployed on devices. We have designed the WuKong IoT middleware to facilitate connectivity, deployment, and run-time management of IoT applications. </p><p> For energy efficient mapping, this thesis presents an energy saving methodology for co-locating several services on the same physical device in order to reduce the computing and communication energy. In a multi-hop network, the service co-location problem is formulated as a quadratic programming problem. I propose a reduction method that reduces it to the integer programming problem. In a single hop network, the service co-location problem can be modeled as the Maximum Weighted Independent Set (MWIS) problem. 
I design an algorithm to transform a service flow into a co-location graph. Then, known heuristic algorithms to find the maximum independent set, which is the basis for making service co-location decisions, are applied to the co-location graph. </p><p> For low latency scalable deployment, I propose a region-based hierarchical management structure. A congestion zone that covers multiple regions is identified. The problem of deploying a large number of copies of a flow-based program (FBP) in a congestion zone is modeled as a network traffic congestion problem. Then, the problem of mapping in a congestion zone is modeled as an Integer Quadratic Constrained Programming (IQCP) problem, which is proved to be an NP-hard problem. Given that, an approximation algorithm based on LP relaxation and an efficient service relocating heuristic algorithm are designed for reducing the computation complexity. For each congestion zone, the algorithm will perform globally optimized mapping for multiple regions, and then request multiple deployment delegators for reprogramming individual devices. </p><p> Finally, with the growing adoption of IoT applications, dedicated and single-purpose devices are giving way to smart, adaptive devices with rich capabilities using a platform or API, collecting and analyzing data, and making their own decisions. To facilitate building intelligent applications in IoT, I have implemented the edge framework for supporting reliable streaming analytics on edge devices. In addition, a progression framework is built to achieve the self-management capability of applications in IoT. A progressive architecture and a programming paradigm for bridging the service oriented application with the power of big data on the cloud are designed in the framework. In this thesis, I present the detailed design of the progression framework, which incorporates the above features for building scalable management of IoT systems through a flexible middleware.</p>
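The single-hop co-location step above reduces to Maximum Weighted Independent Set and is solved with known heuristics. A minimal sketch of one standard greedy MWIS heuristic follows; the service names, weights, and conflict-graph encoding are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical sketch: greedy MWIS heuristic for service co-location.
# Nodes are services; an edge means two services conflict and cannot be
# co-located. We repeatedly pick the node with the best weight-to-degree
# ratio and discard its neighbors.

def greedy_mwis(weights, edges):
    """weights: {node: weight}; edges: iterable of frozenset node pairs."""
    adj = {v: set() for v in weights}
    for e in edges:
        a, b = tuple(e)
        adj[a].add(b)
        adj[b].add(a)
    remaining = set(weights)
    chosen = set()
    while remaining:
        # Favor heavy nodes with few remaining conflicts.
        v = max(remaining, key=lambda u: weights[u] / (1 + len(adj[u] & remaining)))
        chosen.add(v)
        remaining -= adj[v] | {v}
    return chosen

# Services A and B conflict, as do B and C; A and C can be co-located.
picked = greedy_mwis({"A": 3, "B": 5, "C": 4},
                     {frozenset(("A", "B")), frozenset(("B", "C"))})
```

Here the heuristic selects A and C (total weight 7) over the single heavy node B (weight 5), which is also the optimal independent set for this small graph.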
24

A measure of software complexity

Borchert, Lloyd David January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
25

A VDI interface for a microprocessor graphics system

Stevens, Paul L January 2010 (has links)
Typescript (photocopy). / Digitized by Kansas Correctional Industries
26

Layered graphical models for tracking partially-occluded moving objects in video

Ablavsky, Vitaly January 2011 (has links)
Thesis (Ph.D.)--Boston University / PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you. / Tracking multiple targets using fixed cameras with non-overlapping views is a challenging problem. One of the challenges is predicting and tracking through occlusions caused by other targets or by fixed objects in the scene. Considerable effort has been devoted toward developing appearance models that are robust to partial occlusions, tracking algorithms that cope with short-term loss of observations, and algorithms that learn static occlusion maps. In this thesis we consider scenarios where it is impossible to learn a static occlusion map. This is often the case when the scene consists of both people and large objects whose position is not permanently fixed. These objects may enter, leave or relocate within the scene during a short time span. We call such objects "relocatable objects" or "relocatable occluders." We develop a representation for scenes containing relocatable objects that can cause partial occlusions of people in a camera's field of view. In many practical applications, relocatable objects tend to appear often; therefore, models for them can be learned offline and stored in a database. We formulate an occluder-centric representation, called a graphical model layer, where a person's motion in the ground plane is defined as a first-order Markov process on activity zones, while image evidence is aggregated in 2D observation regions that are depth-ordered with respect to the occlusion mask of the relocatable object. 
We represent real-world scenes as a composition of depth-ordered, interacting graphical model layers, and account for image evidence in a way that handles mutual overlap of the observation regions and their occlusions by the relocatable objects. These layers interact: proximate ground plane zones of different model instances are linked to allow a person to move between the layers, and image evidence is shared between the observation regions of these models. We demonstrate our formulation in tracking low-resolution, partially-occluded pedestrians in the vicinity of parked vehicles. In these scenarios some tracking formulations that rely on part-based person detectors may fail completely. Our pedestrian tracker fares well and compares favorably with state-of-the-art pedestrian detectors (lowering false positives by twenty-nine percent and false negatives by forty-two percent) and with a deformable-contour-based tracker. / 2031-01-01
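The motion model above is a first-order Markov process on activity zones: the next zone depends only on the current one. A tiny sketch of such a process follows; the zone names and transition probabilities are invented for illustration and are not from the dissertation.

```python
# Illustrative first-order Markov walk over activity zones. Each row of
# TRANSITIONS gives the probability of the next zone given the current one.
import random

TRANSITIONS = {
    "near_car":   {"near_car": 0.7, "behind_car": 0.2, "open": 0.1},
    "behind_car": {"near_car": 0.3, "behind_car": 0.6, "open": 0.1},
    "open":       {"near_car": 0.2, "behind_car": 0.1, "open": 0.7},
}

def step(zone, rng):
    """Sample the next activity zone given only the current one."""
    zones, probs = zip(*TRANSITIONS[zone].items())
    return rng.choices(zones, weights=probs, k=1)[0]

rng = random.Random(0)  # fixed seed for reproducibility
path = ["open"]
for _ in range(5):
    path.append(step(path[-1], rng))
```

Because the process is first-order, a tracker only needs the current zone (not the full history) to predict where the person is likely to be when they pass behind a relocatable occluder.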
27

Accelerating Similarly Structured Data

Wu, Lisa K. January 2014 (has links)
The failure of Dennard scaling [Bohr, 2007] and the rapid growth of data produced and consumed daily [NetApp, 2012] have made mitigating the dark silicon phenomenon [Esmaeilzadeh et al., 2011] and providing fast, energy-efficient computation over large volumes and a wide variety of data the most important challenges for modern computer architecture. This thesis introduces the concept that grouping data structures that are previously defined in software and processing them with an accelerator can significantly improve application performance and energy efficiency. To measure the potential performance benefits of this hypothesis, this research starts out by examining the cache impacts of accelerating commonly used data structures and their applicability to popular benchmarks. We found that accelerating similarly structured data can provide substantial benefits; however, most popular benchmark suites do not contain shared acceleration targets and therefore cannot obtain significant performance or energy improvements via a handful of accelerators. To further examine this hypothesis in an environment where common data structures are widely used, we target the database application domain, using tables and columns as the similarly structured data, accelerating the processing of such data, and evaluating the performance and energy efficiency. Given that data partitioning is widely used in database applications to improve cache locality, we architect and design a streaming data partitioning accelerator to assess the feasibility of big data acceleration. The results show that we are able to achieve an order of magnitude improvement in partitioning performance and energy. To improve upon the present ad-hoc communications between accelerators and general-purpose processors [Vo et al., 2013], we also architect and evaluate a streaming framework that can be used by the data partitioner and other streaming accelerators alike. 
The streaming framework can provide at least 5 GB/s per stream per thread using software control, and is able to elegantly handle interrupts and context switches using a simple save/restore. As a final evaluation of this hypothesis, we architect a class of domain-specific database processors, or Database Processing Units (DPUs), to further improve the performance and energy efficiency of database applications. As a case study, we design and implement one DPU, called Q100, to execute industry standard analytic database queries. Despite Q100's sensitivity to communication bandwidth on-chip and off-chip, we find that the low-power configuration of Q100 is able to provide three orders of magnitude in energy efficiency over a state-of-the-art software Database Management System (DBMS), while the high-performance configuration is able to outperform the same DBMS by 70X. Based on these experiments, we conclude that grouping similarly structured data and processing it with accelerators vastly improves application performance and energy efficiency for a given application domain. This is primarily due to the fact that creating specialized encapsulated instruction and data accesses and datapaths allows us to mitigate unnecessary data movement, take advantage of data and pipeline parallelism, and consequently provide substantial energy savings while obtaining significant performance gains.
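The partitioning step the thesis accelerates in hardware can be summarized in software as hash partitioning: rows are scattered into buckets by a hash of their key column so that each bucket fits in cache. A minimal sketch follows; the row layout, hash choice, and partition count are assumptions for illustration, not the Q100 design.

```python
# Software sketch of hash partitioning, the operation the streaming
# data partitioning accelerator performs in hardware. Rows are tuples;
# the key column selects the destination bucket.

def hash_partition(rows, key_index, num_partitions):
    """Scatter rows into num_partitions buckets by hashing the key column."""
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        bucket = hash(row[key_index]) % num_partitions
        partitions[bucket].append(row)
    return partitions

table = [(1, "a"), (2, "b"), (3, "c"), (4, "d")]
parts = hash_partition(table, key_index=0, num_partitions=2)
```

Every row lands in exactly one bucket, and rows with equal keys always land in the same bucket, which is what makes partitioned joins and aggregations cache-friendly.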
28

Multitasking on Wireless Sensor Networks

Szczodrak, Marcin K. January 2015 (has links)
A Wireless Sensor Network (WSN) is a loose interconnection among distributed embedded devices called motes. Motes have constrained sensing, computing, and communicating resources and operate for a long period of time on a small energy supply. Although envisioned as a platform for facilitating and inspiring a new spectrum of applications, after a decade of research the WSN is limited to collecting data and sporadically updating system parameters. Programming other applications, including those that have real-time constraints, or designing WSNs operating with multiple applications require enhanced system architectures, new abstractions, and design methodologies. This dissertation introduces a system design methodology for multitasking on WSNs. It allows programmers to create an abstraction of a single, integrated system running with multiple tasks. Every task has a dedicated protocol stack. Thus, different tasks can have different computation logics and operate with different communication protocols. This facilitates the execution of heterogeneous applications on the same WSN and allows programmers to implement a variety of system services. The services that have been implemented provide energy-monitoring, tasks scheduling, and communication between the tasks. The experimental section evaluates implementations of the WSN software designed with the presented methodology. A new set of tools for testbed deployments is introduced and used to test examples of WSNs running with applications interacting with the physical world. Using remote testbeds with over 100 motes, the results show the feasibility of the proposed methodology in constructing a robust and scalable WSN system abstraction, which can improve the run-time performance of applications, such as data collection and point-to-point streaming.
29

Locating an Autonomous Car Using the Kalman Filter to Reduce Noise

Hema Balaji, Nagarathna 06 March 2019 (has links)
<p> With the growing use of autonomous vehicles and similar systems, it is critical for those systems to be reliable, accurate, and error free. Sensor data are of vital importance for ensuring the fidelity of navigation and the decision-making ability of autonomous systems. Several existing models have achieved accuracy in the sensor data, but they are all application specific and have limited applicability for future systems. </p><p> This paper proposes a method for reducing errors in sensor data through sensor fusion based on the Kalman filter. The proposed model is intended to be versatile and to adapt to the needs of any robotic vehicle with only minor modifications. The model is a basic framework for normalizing the speed of autonomous robots. Moreover, it is capable of ensuring smooth operation of individual autonomous robots and facilitates co-operative applications. The model provides a framework that is more reliable, accurate, and error free than other existing models, thereby enabling its implementation in similar robotic applications. This model can be expanded for use in other applications with only minimal changes; it therefore promises to revolutionize the way that human beings use, interact with, and benefit from autonomous devices in day-to-day activities.</p>
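The noise-reduction idea above rests on the standard Kalman predict/update cycle. A minimal one-dimensional sketch follows; the variances, initial state, and measurement values are illustrative assumptions, not parameters from the thesis.

```python
# Minimal 1D Kalman filter sketch for smoothing noisy scalar readings.
# The state is assumed (for illustration) to be roughly constant, so the
# predict step only inflates uncertainty by the process variance.

def kalman_1d(measurements, process_var=1e-4, measurement_var=0.25,
              init_estimate=0.0, init_error=1.0):
    """Return filtered estimates for a sequence of scalar measurements."""
    x, p = init_estimate, init_error
    estimates = []
    for z in measurements:
        # Predict: uncertainty grows between measurements.
        p += process_var
        # Update: blend prediction and measurement by the Kalman gain.
        k = p / (p + measurement_var)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

noisy = [1.2, 0.9, 1.1, 1.05, 0.95, 1.0]
smoothed = kalman_1d(noisy, init_estimate=1.0)
```

Each estimate is a convex blend of the previous estimate and the new reading, so the filtered sequence stays within the range of the data while damping measurement jitter; fusing multiple sensors extends the same update with one gain per sensor.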
30

MCCS : the design and implementation of a multi-computer communications system

Fox, Sheldon Lee January 2010 (has links)
Digitized by Kansas Correctional Industries
