151

Resilience of an embedded architecture using hardware redundancy

Castano, Victor January 2014 (has links)
In the last decade, general-purpose computing systems have lost their market dominance to embedded systems, with billions of units manufactured every year. Embedded systems appear in contexts where continuous operation is of utmost importance and where failure can have profound consequences. Radiation now poses a serious threat to the reliable operation of safety-critical systems. Fault-avoidance techniques, such as radiation hardening, have been commonly used in space applications. However, radiation-hardened components are expensive, lag behind commercial components in performance, and do not eliminate faults entirely. Without fault-tolerant mechanisms, many of these faults become errors at the application or system level, which in turn can result in catastrophic failures. In this work we study the concepts of fault tolerance and dependability and extend them to provide our own definition of resilience. We analyse the physics of radiation-induced faults, the damage mechanisms of particles and the process that leads to computing failures. We provide extensive taxonomies of (1) existing fault-tolerance techniques and (2) the effects of radiation on state-of-the-art electronics, analysing and comparing their characteristics. We propose a detailed fault model and classify the different types of fault at various levels. We introduce a fault-tolerance algorithm and define the system states and actions necessary to implement it. We introduce novel hardware and system-software techniques that offer a more efficient combination of reliability, performance and power consumption than existing approaches. We propose a new system element, the syndrome, which forms the core of a resilient architecture whose software and hardware can adapt to reliable and unreliable environments. We implement a software simulator and disassembler and introduce a testing framework that works in combination with ERA’s assembler and commercial hardware simulators.
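As a concrete illustration of the hardware-redundancy principle underpinning this work, the sketch below shows classic triple modular redundancy (TMR) majority voting. It is a generic, assumed example for intuition only; the thesis's syndrome-based resilient architecture is not reproduced here.

```python
# Minimal sketch of triple modular redundancy (TMR) voting -- a classic
# hardware-redundancy technique; illustrative only, not the syndrome-based
# architecture proposed in the thesis.

def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote over three redundant copies of a word.

    A bit is set in the output if it is set in at least two of the inputs,
    so any single corrupted copy (e.g. from a radiation-induced upset)
    is masked.
    """
    return (a & b) | (a & c) | (b & c)

if __name__ == "__main__":
    golden = 0b1011_0110
    upset = golden ^ 0b0000_1000   # single-event upset flips one bit in one copy
    assert tmr_vote(golden, golden, upset) == golden
    print("single-bit fault masked:", bin(tmr_vote(golden, golden, upset)))
```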
152

Hyper-connectivity : intricacies of national and international cyber securities

Dawson, Maurice January 2017 (has links)
This thesis examined three core themes: the role of education in cyber security, the role of technology in cyber security, and the role of policy in cyber security, the areas in which the associated papers are published. The associated works appear in refereed journals, peer-reviewed book chapters, and conference proceedings, including the following outlets: 1. Security Solutions for Hyperconnectivity and the Internet of Things; 2. Developing Next-Generation Countermeasures for Homeland Security Threat Prevention; 3. New Threats and Countermeasures in Digital Crime and Cyber Terrorism; 4. International Journal of Business Continuity and Risk Management; 5. Handbook of Research on 3-D Virtual Environments and Hypermedia for Ubiquitous Learning; 6. Information Security in Diverse Computing Environments; 7. Technology, Innovation, and Enterprise Transformation; 8. Journal of Information Systems Technology and Planning; 9. Encyclopedia of Information Science and Technology. The shortcomings and gaps in cyber security research addressed here concern the hyperconnectivity of people and technology, including the policies that set the standards for security-hardened systems. Prior research on cyber and homeland security examined the three core themes separately rather than jointly. This study examined the research gaps within cyber security as they relate to the core themes, in an effort to develop stronger policies, education programs, and hardened technologies for cyber security use. This work illustrates how cyber security can be broken into these three core areas and how they can be used together to address issues such as developing training environments for teaching real cyber security events. It further shows the correlations between technologies and policies for system Certification & Accreditation (C&A). Finally, it offers insights on how cyber security can be used to maintain national and international security. The overall results of the study provide guidance on how to create a ubiquitous learning (U-Learning) environment for teaching cyber security concepts, how to craft policies that affect secure computing, and what effects these have on national and international security. Taken together, the research advances the role of cyber security in education, technology, and policy.
153

Deriving and applying facet views of the Dewey Decimal Classification Scheme to enhance subject searching in library OPACs

Tinker, Amanda Jayne January 2005 (has links)
Classification is a fundamental tool in the organisation of any library collection for effective information retrieval. Several classifications exist, yet the pioneering Dewey Decimal Classification (DDC) remains the most widely used scheme and the international de facto standard. Although once used for the dual purpose of physical organisation and subject retrieval in the printed library catalogue, library classification is now relegated to the single role of shelf location. Numerous studies have highlighted the problem of subject access in library online public access catalogues (OPACs). The library OPAC has changed relatively little since its inception: it is designed to find what is already known, not to support discovery and exploration. This research aims to enhance OPAC subject searching by deriving facets of the DDC and populating them with a library collection for display at a View-based searching OPAC interface. A novel method is devised that enables the automatic deconstruction of complex DDC notations into their component facets. Identifying facets based upon embedded notational components reveals alternative, multidimensional subject arrangements of a library collection and resolves the problem of disciplinary scatter. The extent to which the derived facets enhance users' subject-searching perceptions and activities at the OPAC interface is evaluated in a small-scale usability study. The results demonstrate the successful derivation of four fundamental facets (Reference Type, Person Type, Time and Geographic Place). Such facet derivation and deconstruction of Dewey notations is recognised as a complex process, owing to the lack of a uniform notation, notational re-use and the need for distinct facet indicators to delineate facet boundaries. The results of the preliminary usability study indicate that users are receptive to facet-based searching and that the View-based searching system performs as well as a current form fill-in interface and, in some cases, provides additional benefits. It is concluded that further exploration of facet-based searching is clearly warranted, and suggestions for future research are made.
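To make the idea of notational deconstruction concrete, the toy sketch below splits a Dewey notation string into facet-like components using invented lookup tables. The tables, digit positions and facet boundaries are purely hypothetical simplifications; the thesis's actual derivation method and the real DDC tables are far more involved.

```python
# Toy illustration of splitting a Dewey Decimal notation into facet components.
# The tables and digit boundaries below are invented for illustration only.

TOY_GEOGRAPHIC = {"41": "British Isles", "44": "France", "73": "United States"}
TOY_PERIOD = {"081": "1837-1899", "082": "1900-1999"}

def toy_facets(notation: str) -> dict:
    """Very rough sketch: read a main class, then look for geographic and
    period components in the digits that follow."""
    digits = notation.replace(".", "")
    facets = {"main_class": digits[0] + "00"}
    # hypothetical: digits 2-3 treated as a geographic (place) component
    place = TOY_GEOGRAPHIC.get(digits[1:3])
    if place:
        facets["geographic_place"] = place
    # hypothetical: a trailing 08x run treated as a period component
    period = TOY_PERIOD.get(digits[3:6])
    if period:
        facets["time"] = period
    return facets

print(toy_facets("941.081"))
# e.g. {'main_class': '900', 'geographic_place': 'British Isles', 'time': '1837-1899'}
```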
154

The identification of differentiating success factors for students in computer science and computer information systems programs of study

Carabetta, James R 01 January 1991 (has links)
Although both are computer-based, computer science and computer information systems programs of study are markedly different. It is therefore not unreasonable to speculate that success-factor differences may exist between them, and to seek an objective means of making such a determination based on a student's traits. The purpose of this study was therefore two-fold: to determine whether differences do in fact exist between successful computer science majors and successful computer information systems majors, and, if so, to determine a classification rule for such assignment. Based on an aggregate of demographic, pre-college academic, and learning-style factors, the groups were found to differ significantly on the following variables (listed in decreasing order of significance, for those with p < .05): sex, abstract conceptualization and concrete-abstract continuum measures, SAT Mathematics, interest ranking for science, active experimentation measure, interest ranking for foreign language, and concrete experience measure. Computer science majors were found to include significantly more males than females, and to have significantly higher abstract conceptualization, concrete-abstract continuum, SAT Mathematics, and science-interest measures than computer information systems majors, while computer information systems majors were found to have significantly higher active experimentation, foreign-language interest and concrete experience measures. A classification rule based on a subset of these factors was derived and found to classify correctly at a 76.6% rate. These results have potential as a research-based component of an advising function for students interested in pursuing a computer science or computer information systems program of study.
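A rough sketch of how such a classification rule could be derived with modern tooling is given below, using linear discriminant analysis on synthetic data. The feature names echo the abstract, but the data, procedure and resulting rate are illustrative assumptions, not the study's own analysis.

```python
# Sketch of deriving a major-classification rule via linear discriminant
# analysis; all data here are synthetic stand-ins for the study's factors.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical predictors: SAT-Math, abstract-conceptualization score,
# science-interest rank, active-experimentation score.
cs  = np.column_stack([rng.normal(640, 60, n), rng.normal(0.6, 0.2, n),
                       rng.normal(2.0, 1.0, n), rng.normal(0.4, 0.2, n)])
cis = np.column_stack([rng.normal(580, 60, n), rng.normal(0.4, 0.2, n),
                       rng.normal(3.5, 1.0, n), rng.normal(0.6, 0.2, n)])
X = np.vstack([cs, cis])
y = np.array([0] * n + [1] * n)   # 0 = computer science, 1 = computer information systems

clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated classification rate: {acc:.1%}")
```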
155

Domain Name Service Trust Delegation in Cloud Computing: Exploitation, Risks, and Defense

Laprade, Craig 01 January 2021 (has links)
The Domain Name Service (DNS) infrastructure is a global distributed database that links human-readable domain names with the Internet Protocol (IP) addresses of the resources that power the internet. With the explosion of cloud computing over the past decade, a growing proportion of organizations' computing services has moved from on-premise solutions to cloud providers. These services range from complete DNS management to individual services such as e-mail or a payroll application. Each of these outsourced services requires a trust delegation: the owning organization must advertise to the world, often through DNS records, that another organization can act authoritatively on its behalf. What happens when these trust delegations are misused? In this work, I explore the methods that can be used to exploit DNS trust delegation and then examine the top 1% of the most popular domains in the world for the presence of these exploitable vulnerabilities. Finally, I conclude with methods of defense against such attacks and publish a novel tool to detect these vulnerabilities.
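One concrete instance of misused trust delegation is a "dangling" CNAME, where the delegated target no longer resolves and may be claimable by an attacker. The sketch below, using the dnspython library, shows a minimal check of this kind; it is an assumed illustration, not the detection tool published with this work.

```python
# Minimal sketch of one trust-delegation check: a CNAME pointing at a target
# that no longer resolves ("dangling" delegation) can often be claimed by an
# attacker. Requires dnspython (pip install dnspython).

import dns.resolver

def dangling_cname(name: str) -> bool:
    """Return True if `name` has a CNAME whose target no longer resolves."""
    try:
        cname = dns.resolver.resolve(name, "CNAME")[0].target.to_text()
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False                       # no CNAME delegation to check
    try:
        dns.resolver.resolve(cname, "A")   # does the delegated target still exist?
        return False
    except dns.resolver.NoAnswer:
        return False                       # target exists but has no A record
    except dns.resolver.NXDOMAIN:
        return True                        # target gone: potentially claimable

if __name__ == "__main__":
    for host in ["www.example.com"]:       # hypothetical hostnames to audit
        print(host, "dangling" if dangling_cname(host) else "ok")
```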
156

Towards Polymorphic Systems Engineering

Mathieson, John T.J. 01 January 2021 (has links)
Systems engineering is widely regarded as a full life cycle discipline and provides methodologies and processes to support the design, development, verification, sustainment, and disposal of systems. While this cradle-to-grave concept is well documented throughout the literature, there has been recent and ever-increasing emphasis on evolving and digitally transforming systems engineering methodologies, practices, and tools into a model-based discipline, not only to advance system development but, perhaps more importantly, to extend agility and adaptability through the later stages of system life cycles, that is, through system operations and sustainment. This research adapts principles from the software engineering DevOps concept (a collaborative merger of system development and system operations) into a Systems Engineering DevOps Lemniscate life cycle model. This progression beyond traditional life cycle models lays a foundation for the continuum of model-based systems engineering artifacts during the life of a system and promotes the coexistence and symbiosis of variants throughout. It does so by merging model-based systems engineering processes, tools, and products into a surrogate and common modeling environment in which the operations and sustainment of a system are tied closely to the curation of a descriptive system model. This model-based approach using descriptive system models, traditionally leveraged for system development, is expanded to include the operational support elements necessary to operate and sustain the system (e.g. executable procedures, command scripts, and maintenance manuals modeled as part of the core system). This evolution of traditional systems engineering practice, focused on digitally transforming and enhancing system operations and sustainment, capitalizes on the ability of model-based systems engineering to embrace change, improving agility in the later life cycle stages, and emphasizes the existence of polymorphic systems engineering: performing a variety of systems engineering roles in simultaneously occurring life cycle stages to increase system agility. A model-based framework for applying the Systems Engineering DevOps life cycle model is introduced as a new Systems Modeling Language profile. A use case leveraging this “Model-Based System Operations” framework demonstrates how merging operational support elements into a spacecraft system model improves the adaptability of support elements in response to faults, failures, and evolving environments during system operations, exemplifying a DevOps approach to cyber-physical system sustainment.
157

SledgeEDF: Deadline-Driven Serverless for the Edge

McBride, Sean Patrick 01 January 2021 (has links)
Serverless computing has gained mass popularity by offering lower cost, improved elasticity, and improved ease of use. Driven by the need for efficient low-latency computation on resource-constrained infrastructure, it is also becoming a common execution model for edge computing. However, hyperscale cloud mitigations against the serverless cold-start problem do not cleanly scale down to tiny 10-100 kW edge sites, causing edge deployments of existing VM- and container-based serverless runtimes to suffer poor tail latency. This is particularly acute considering that future edge computing workloads are expected to have latency requirements ranging from microseconds to seconds. SledgeEDF is the first runtime to apply the traditional real-time systems techniques of admissions control and deadline-driven scheduling to the serverless execution model. It extends previous research on aWsm, an ahead-of-time (AOT) WebAssembly compiler, and Sledge, a single-process WebAssembly-based serverless runtime designed for the edge, yielding a runtime that targets efficient execution of mixed-criticality edge workloads. Evaluations demonstrate that SledgeEDF prevents backpressure due to excessive client requests and eliminates head-of-line blocking, allowing latency-sensitive high-criticality requests to preempt executing tasks and complete within 10% of their optimal execution time. Taken together, SledgeEDF's admissions controller and deadline-driven scheduler enable it to provide limited guarantees around latency deadlines defined by client service-level objectives.
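A minimal sketch of the two real-time techniques named above, utilization-based admissions control and earliest-deadline-first dispatch, is shown below. SledgeEDF itself is a WebAssembly-based runtime written in C; this Python fragment only illustrates the underlying idea, and the capacity model and task parameters are assumptions.

```python
# Generic sketch of an admissions test plus earliest-deadline-first ordering;
# illustrative only, not SledgeEDF's actual scheduler.

import heapq

class EdfQueue:
    def __init__(self, capacity: float = 1.0):
        self.capacity = capacity   # total processor utilization we may promise
        self.admitted = 0.0
        self.ready = []            # min-heap ordered by absolute deadline

    def admit(self, name: str, exec_time: float, deadline: float, period: float) -> bool:
        """Utilization-based admissions control: reject work we cannot finish on time."""
        utilization = exec_time / period
        if self.admitted + utilization > self.capacity:
            return False                           # would overload the runtime
        self.admitted += utilization
        heapq.heappush(self.ready, (deadline, name))
        return True

    def next_task(self):
        """Dispatch the request with the earliest absolute deadline."""
        return heapq.heappop(self.ready) if self.ready else None

q = EdfQueue()
print(q.admit("image-resize", exec_time=2, deadline=10, period=5))   # True: admitted
print(q.admit("batch-report", exec_time=9, deadline=50, period=10))  # False: exceeds capacity
print(q.next_task())                                                  # earliest deadline dispatched first
```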
158

The design and implementation of an intelligent interface for information retrieval

Thompson, Roger Howard 01 January 1989 (has links)
Commercial information (text) retrieval systems have been available since the early 1960s. While they allow individuals to find useful documents among the millions contained in online databases, a number of problems prevent users from being more effective. The primary problems are an inadequate means of specifying information needs, a single way of responding to all users and their information needs, and an inadequate user interface. This thesis describes the design and implementation of I³R, an intelligent interface for information retrieval whose purpose is to overcome the limitations of current information retrieval systems by providing multiple ways of assisting the user to specify an information need precisely and to search for information. The system organization is based on a blackboard architecture and consists of a number of "experts" that work cooperatively to assist the user. The operation of the experts is coordinated by a control expert that makes its decisions based on a plan derived from the analysis of dialogues between human search intermediaries and end users, and on a user model. The experts provide multiple formal search strategies, the use and collection of domain knowledge, and browsing assistance. The operation of the system is demonstrated by four scenarios.
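The blackboard organisation described here can be sketched as follows: experts read and post partial results on a shared blackboard while a control component decides which expert acts next. The expert names and behaviours in this sketch are hypothetical stand-ins, not I³R's actual modules.

```python
# Skeleton of the blackboard pattern: a shared working memory, independent
# experts, and a control loop that lets any expert able to make progress act.

class Blackboard(dict):
    """Shared working memory visible to every expert."""

class Expert:
    def can_contribute(self, bb: Blackboard) -> bool: ...
    def contribute(self, bb: Blackboard) -> None: ...

class QueryFormulator(Expert):
    def can_contribute(self, bb): return "need" in bb and "query" not in bb
    def contribute(self, bb): bb["query"] = bb["need"].lower().split()

class SearchExpert(Expert):
    def can_contribute(self, bb): return "query" in bb and "results" not in bb
    def contribute(self, bb): bb["results"] = [f"doc matching {t}" for t in bb["query"]]

def control_loop(bb: Blackboard, experts: list) -> None:
    """A very simple control expert: run any expert that can make progress."""
    progress = True
    while progress:
        progress = False
        for expert in experts:
            if expert.can_contribute(bb):
                expert.contribute(bb)
                progress = True

bb = Blackboard(need="Retrieval models for text search")
control_loop(bb, [QueryFormulator(), SearchExpert()])
print(bb["results"])
```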
159

Learning hash codes for multimedia retrieval

Chen, Junjie 28 August 2019 (has links)
The explosive growth of multimedia data in online media repositories and social networks has led to high demand for fast and accurate large-scale multimedia retrieval services. Hashing, owing to its effectiveness in coding high-dimensional data into a low-dimensional binary space, has been considered effective for retrieval applications. Despite recent progress, how to learn optimal hashing models that make the best trade-off between retrieval efficiency and accuracy remains an open research issue. This thesis develops hashing models that are effective for image and video retrieval. An unsupervised hashing model called APHash is first proposed to learn hash codes for images by exploiting the distribution of the data. To reduce the underlying computational complexity, a methodology that makes use of an asymmetric similarity matrix is explored and found effective. In addition, the deep learning approach to learning hash codes for images is studied. In particular, a novel deep model called DeepQuan is proposed, which incorporates product quantization methods into an unsupervised deep model. Rather than adopting only a quadratic loss as the optimization objective, as most related deep models do, DeepQuan optimizes the data representations and their quantization codebooks to explore the clustering structure of the underlying data manifold; introducing a weighted triplet loss into the learning objective is found to be effective. Furthermore, the case where some labeled data are available for learning is also considered. To alleviate the high training cost (which is especially crucial for a large-scale database), another hashing model named Similarity Preserving Deep Asymmetric Quantization (SPDAQ) is proposed for both image and video retrieval, in which the compact binary codes and quantization codebooks for all items in the database can be learned explicitly and efficiently. All the proposed hashing methods have been rigorously evaluated on benchmark datasets and found to outperform related state-of-the-art methods.
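The retrieval mechanics common to these models can be sketched as follows: items are encoded into short binary codes and ranked by Hamming distance to the query's code. In the sketch below a random projection stands in for the learned hash functions (APHash, DeepQuan, SPDAQ); the dimensions and data are assumed for illustration.

```python
# Generic sketch of hashing-based retrieval: binary encoding + Hamming ranking.
# A random projection is a stand-in for a learned hashing model.

import numpy as np

rng = np.random.default_rng(1)
dim, bits = 512, 32
projection = rng.normal(size=(dim, bits))        # stand-in for a learned model

def encode(features: np.ndarray) -> np.ndarray:
    """Binarize features into `bits`-bit codes (one row per item)."""
    return (features @ projection > 0).astype(np.uint8)

def hamming_rank(query_code: np.ndarray, db_codes: np.ndarray) -> np.ndarray:
    """Return database indices ordered by Hamming distance to the query."""
    distances = (db_codes != query_code).sum(axis=1)
    return np.argsort(distances)

database = rng.normal(size=(10_000, dim))         # hypothetical image features
db_codes = encode(database)
query = database[42] + 0.05 * rng.normal(size=dim)  # a slightly perturbed copy of item 42
print(hamming_rank(encode(query[None, :])[0], db_codes)[:5])  # item 42 should rank near the top
```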
160

Understanding cycling behaviour through visual analysis of a large-scale observational dataset

Beecham, R. January 2014 (has links)
The emergence of third-generation, technology-based public bikeshare schemes offers new opportunities for researching cycling behaviour. In this study, data from one such scheme, the London Cycle Hire Scheme (LCHS), are analysed. Algorithms are developed for summarising and labelling cyclists’ usage behaviours, and tailored visual analysis applications are designed for exploring their spatiotemporal context. Many of the research findings support the existing literature, particularly around gendered cycling behaviour. As well as making more discretionary journeys, women appear to preferentially select parts of London associated with greater levels of safety, and this is true even after controlling for geodemographic differences and levels of LCHS cycling experience. One hypothesis is that these differences represent diverging attitudes and perceptions. A technique for identifying cyclists’ workplaces is developed, suggesting that these differences might also be explained by where cyclists need to travel for work and other facilities. An additional explanation, offered later, relates to the nature of cyclists’ estimated routes. The size and precision of the LCHS dataset allow under-explored aspects of behaviour to be investigated. Group cycling events, instances where two or more cyclists make journeys together in space and time, are labelled and analysed on a large scale. For certain types of cyclist, group cycling appears to encourage more extensive spatiotemporal cycling behaviour, and there is some evidence to suggest that group cycling may help initiate scheme usage. The domain-specific findings, emerging research questions and behavioural classifications are this study’s principal and unique contribution. A second contribution relates to the analysis approach. This is a data-driven study that takes a large dataset measuring use of a relatively new cycle facility and uses it to engage with research questions that are typically answered with very different datasets. There is some uncertainty around how discriminating and generalisable LCHS cycle behaviours may be and which variables, either directly measured or derived, might delineate those behaviours. Visual analysis techniques are shown to be effective in this more speculative research context: numerous behaviours can be very quickly explored and understood. These techniques also enable a set of colleagues with relatively limited analysis experience, but substantial domain knowledge, to participate in the analysis, and a general argument is made for their use in other, interdisciplinary analysis contexts.
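A simplified sketch of the group-cycling labelling idea is shown below: two hires are paired when they share start and end stations and begin within a short time window. The threshold and the toy journey records are illustrative assumptions; the thesis's actual labelling algorithm is richer.

```python
# Simplified sketch of labelling group cycling events from hire records:
# pair journeys with identical start/end stations departing within a window.

import pandas as pd

journeys = pd.DataFrame({
    "member":   ["A", "B", "C", "D"],
    "start":    ["Hyde Park Corner"] * 3 + ["Soho Square"],
    "end":      ["Waterloo"] * 3 + ["Holborn"],
    "departed": pd.to_datetime(["2014-06-01 08:00", "2014-06-01 08:01",
                                "2014-06-01 09:30", "2014-06-01 08:00"]),
})

WINDOW = pd.Timedelta(minutes=3)   # assumed proximity threshold

pairs = journeys.merge(journeys, on=["start", "end"], suffixes=("_1", "_2"))
pairs = pairs[(pairs.member_1 < pairs.member_2) &
              ((pairs.departed_1 - pairs.departed_2).abs() <= WINDOW)]
print(pairs[["member_1", "member_2", "start", "end"]])
# -> A and B are labelled as a group cycling event; C and D are not.
```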
