1

Benchmarks for Embedded Multi-processors

Gong, Shaojie, Deng, Zhongping January 2007 (has links)
In recent years, computer performance has increased dramatically. Benchmarks are the standard tools for measuring that performance. Benchmarks exist in many areas and target different applications. For instance, on an ordinary PC, benchmarks can be used to test the performance of the whole system, including the CPU, graphics card, memory system, etc. Open-source benchmark programs also exist for multiprocessor systems. In our project, we gathered information about some open benchmark programs and investigated their applicability for evaluating embedded multiprocessor systems intended for radar signal processing. During our investigation, parallel cluster systems and embedded multiprocessor systems were studied. Two benchmark programs, HPL and the NAS Parallel Benchmarks, were identified as particularly relevant for this application field. The benchmark testing was done on a parallel cluster system whose architecture is similar to that of the embedded multiprocessor systems used for radar signal processing.
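As a rough illustration of what an HPL-style run measures, the sketch below times a dense linear solve and reports a GFLOP/s figure for a single node; the problem size and the use of NumPy are illustrative assumptions, not part of the benchmark setup described in the thesis.

```python
import time
import numpy as np

def hpl_style_gflops(n=2000, seed=0):
    """Time a dense Ax = b solve and report an HPL-style GFLOP/s figure.

    HPL counts roughly 2/3*n^3 + 2*n^2 floating-point operations for an
    n x n LU-based solve; we reuse that count here (single node only).
    """
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(a, b)          # LU factorisation + triangular solves
    elapsed = time.perf_counter() - start

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    residual = np.linalg.norm(a @ x - b) / np.linalg.norm(b)  # sanity check
    return flops / elapsed / 1e9, residual

if __name__ == "__main__":
    gflops, res = hpl_style_gflops()
    print(f"~{gflops:.2f} GFLOP/s, relative residual {res:.1e}")
```

The real HPL benchmark distributes the factorisation over many nodes with MPI; this single-node timing only conveys the metric being reported.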
2

Efficient Source Selection For SPARQL Endpoint Query Federation

Saleem, Muhammad 28 October 2016 (has links) (PDF)
The Web of Data has grown enormously over the last years. Currently, it comprises a large compendium of linked and distributed datasets from multiple domains. Due to the decentralised architecture of the Web of Data, several of these datasets contain complementary data. Running complex queries on this compendium thus often requires accessing data from different data sources within one query. The abundance of datasets and the need to run complex queries have thus motivated a considerable body of work on SPARQL query federation systems, the dedicated means to access data distributed over the Web of Data. This thesis addresses two key areas of federated SPARQL query processing: (1) efficient source selection, and (2) comprehensive SPARQL benchmarks to test and rank federated SPARQL engines as well as triple stores.

Efficient source selection: Efficient source selection is one of the most important optimization steps in federated SPARQL query processing. An overestimation of query-relevant data sources increases the network traffic, results in irrelevant intermediate results, and can significantly affect the overall query processing time. Previous works have focused on generating optimized query execution plans for fast result retrieval. However, devising source selection approaches beyond triple-pattern-wise source selection has not received much attention. Similarly, only little attention has been paid to the effect of duplicated data on federated querying. This thesis presents HiBISCuS and TBSS, novel hypergraph-based source selection approaches, and DAW, a duplicate-aware source selection approach to federated querying over the Web of Data. Each of these approaches can be combined directly with existing SPARQL query federation engines to achieve the same recall while querying fewer data sources. We combined the three source selection approaches (HiBISCuS, DAW, and TBSS) with query rewriting to form a complete SPARQL query federation engine named Quetsal. Furthermore, we present TopFed, a Cancer Genome Atlas (TCGA) tailored federated query processing engine that exploits the data distribution to perform intelligent source selection while querying over large TCGA SPARQL endpoints. Finally, we address the issue of rights management and privacy while accessing sensitive resources. To this end, we present SAFE, a global source selection approach that enables decentralised, policy-aware access to sensitive clinical information represented as distributed RDF Data Cubes.

Comprehensive SPARQL benchmarks: Benchmarking is indispensable when aiming to assess technologies with respect to their suitability for given tasks. While several benchmarks and benchmark generation frameworks have been developed to evaluate federated SPARQL engines and triple stores, they mostly provide a one-size-fits-all solution to the benchmarking problem. This approach is, however, unsuitable for evaluating the performance of a triple store for a given application with particular requirements. The fitness of current SPARQL query federation approaches for real applications is difficult to evaluate with current benchmarks, as these are either synthetic or too small in size and complexity. Furthermore, state-of-the-art federated SPARQL benchmarks mostly focus on a single performance criterion, i.e., the overall query runtime, and thus cannot provide a fine-grained evaluation of the systems. We address these drawbacks by presenting FEASIBLE, an automatic approach for the generation of benchmarks out of the query history of applications, i.e., query logs, and LargeRDFBench, a billion-triple benchmark for SPARQL query federation which encompasses real data as well as real queries pertaining to real biomedical use cases.

Our evaluation results show that HiBISCuS, TBSS, TopFed, DAW, and SAFE can all significantly reduce the total number of sources selected and thus improve the overall query performance. In particular, TBSS is the first source selection approach to remain under 5% overestimation of relevant sources overall. Quetsal reduces the number of sources selected (without losing recall), the source selection time, and the overall query runtime compared to state-of-the-art federation engines. The LargeRDFBench evaluation results suggest that the performance of current SPARQL query federation systems on simple queries does not reflect their performance on more complex queries. Moreover, current federation systems seem unable to deal with many of the challenges that await them in the age of Big Data. Finally, FEASIBLE's evaluation results show that it generates better sample queries than the state of the art. In addition, the better query selection and the larger set of query types used lead to triple store rankings which partly differ from the rankings generated by previous works.
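To make the notion of triple-pattern-wise source selection concrete, here is a minimal sketch that probes each endpoint with one SPARQL ASK query per triple pattern over the standard SPARQL HTTP protocol. The endpoint URLs and the triple pattern are hypothetical placeholders, and this plain ASK probing is the baseline the thesis improves on, not HiBISCuS, TBSS, or DAW themselves.

```python
import requests

def relevant_sources(triple_pattern, endpoints, timeout=10):
    """Return the endpoints whose data can match a single triple pattern.

    Baseline triple-pattern-wise source selection: send one SPARQL ASK
    query per (pattern, endpoint) pair and keep endpoints answering true.
    """
    query = f"ASK WHERE {{ {triple_pattern} }}"
    selected = []
    for url in endpoints:
        resp = requests.get(
            url,
            params={"query": query},
            headers={"Accept": "application/sparql-results+json"},
            timeout=timeout,
        )
        resp.raise_for_status()
        if resp.json().get("boolean", False):
            selected.append(url)
    return selected

if __name__ == "__main__":
    # Hypothetical endpoints and pattern, for illustration only.
    endpoints = ["http://example.org/sparql-a", "http://example.org/sparql-b"]
    pattern = "?drug <http://www.w3.org/2000/01/rdf-schema#label> ?name ."
    print(relevant_sources(pattern, endpoints))
```

Approaches such as HiBISCuS and TBSS prune this per-pattern endpoint list further by reasoning over joins across patterns, which is where the reduction in selected sources reported above comes from.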
3

Modelling and managing temporal data and its application to Scottish dental information systems

Lu, Jiang January 1997 (has links)
No description available.
4

Audit materiality and risk : benchmarks and the impact on the audit process / J.J. Swart

Swart, Jacobus Johannes January 2013 (has links)
The objective of this study is to address the gap that exists in the literature regarding quantifiable guidelines, benchmarks and consistency of application. During the research, acceptable benchmarks for the calculation or quantification of the elements linked to materiality and audit risk were found. The benchmarks are in compliance with the practices and the requirements of the ISAs and regulations. Models and benchmarks based on the literature were used as a basis and modified for application in the auditing environment. The combination of literature, responses from public practitioners and experience based on best practices resulted in the development of a modified risk-based assessment model. The conclusion from the empirical study is that there are no defined rules or bases for calculating materiality and audit risk. The inconsistencies in responses indicate that audit firms and developers of key concepts interpret and apply the above-mentioned terms differently in practice. The interpretations of the relevant ISAs appear to be conceptually correct, as no major non-compliances were identified. Various instances indicated that there is a lack of guidance with regard to the quantification or qualification of benchmarks. The implementation of the Sarbanes-Oxley Act (2002) was an event that led to the consideration of more conservative benchmarks. The most consistent benchmark that has stood the test of time is Discussion Paper 6 (1984). The 30 years since the development of these benchmarks indicate that little attention has been given to one of the most complex issues in auditing. Companies within different industries are not generic, and exceptions will occur where the auditor needs to apply professional judgment to accommodate the deviations. Further research is required to assist audit professionals and students in the development of consistent benchmarks to increase the reputation of the profession. The conclusion drawn from this study is that audit materiality and audit risk have a significant impact on the audit process, as even the audit report is influenced by proper audit planning and guidelines to support the auditor in audits. / MCom (Accountancy), North-West University, Vaal Triangle Campus, 2013
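The abstract does not reproduce the benchmarks themselves, so as an illustration of how rule-of-thumb materiality benchmarks are typically applied, the sketch below computes planning materiality from commonly cited percentage bases. The percentages are generic textbook rules of thumb, not figures taken from this study or from Discussion Paper 6.

```python
def planning_materiality(profit_before_tax, revenue, total_assets):
    """Illustrative planning-materiality calculation.

    The percentages below are generic rules of thumb often quoted in
    auditing textbooks; an auditor would choose a benchmark (and a point
    within its range) based on professional judgment and the nature of
    the entity, then derive performance materiality as a haircut of it.
    """
    candidates = {
        "5% of profit before tax": 0.05 * profit_before_tax,
        "1% of revenue": 0.01 * revenue,
        "1% of total assets": 0.01 * total_assets,
    }
    # A conservative choice: take the smallest candidate amount.
    basis, materiality = min(candidates.items(), key=lambda kv: kv[1])
    performance_materiality = 0.75 * materiality  # common 50-75% haircut
    return basis, materiality, performance_materiality

if __name__ == "__main__":
    basis, m, pm = planning_materiality(
        profit_before_tax=2_000_000, revenue=45_000_000, total_assets=30_000_000
    )
    print(f"basis: {basis}, materiality: {m:,.0f}, performance: {pm:,.0f}")
```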
5

Performance benchmarking: Creating measurable energy and monetary savings in the real estate industry

January 2013 (has links)
0 / SPK / specialcollections@tulane.edu
6

The new guideline for goodwill impairment

Swanson, Nancy Jewel 15 December 2007 (has links)
Goodwill, for financial accounting purposes, is an intangible asset on the balance sheet that represents the excess of the amount paid for an acquired entity over the net fair value of the assets acquired. The Financial Accounting Standards Board has recently issued a new mandate. This new guideline eliminates annual amortization of goodwill and requires annual valuation for potential goodwill impairment and consequent writedown. Determining the amount of impairment requires management estimation, thus allowing managerial discretion in developing the impairment amounts. Managerial discretion may then be used to manage earnings. Earnings management occurs when managers exercise their professional judgment in financial reporting to manipulate earnings. Prior literature documents that managers have strong motivations to manage earnings. Managers sometimes respond to these motivations by managing earnings to exceed key earnings thresholds. The new goodwill guideline might be used as an earnings management tool. Thus, this dissertation examines whether earnings management results from the judgmental latitude allowed in estimating goodwill when earnings would otherwise just miss key earnings benchmarks. Specifically, this study tests goodwill impairment writedowns in a cross-sectional distributional analysis for the year 2002, the first year following the effective date of the new goodwill standards. The sample is taken from the financial information of publicly traded companies tracked in the Compustat and CRSP databases. To identify firms that are likely to have managed earnings to exceed key benchmarks, earnings per share, both before and after goodwill impairment writedowns, is compared with two thresholds established in prior research. The first is positive earnings per share; the second is the prior year's earnings per share. Results from applying both tobit and logistic regression models suggest that managers are exploiting their discretion in recognizing goodwill impairments to manage earnings. Thus, this project contributes to the earnings management literature in that it highlights the exploitation of increased judgmental latitude for earnings management purposes.
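As a concrete illustration of the benchmark test described above, the sketch below compares earnings per share before and after a goodwill impairment writedown against the two thresholds named in the abstract (positive EPS and the prior year's EPS) and flags firm-years where the impairment decision determines whether the benchmark is met. The field names and the classification rule are illustrative assumptions, not the dissertation's actual regression design.

```python
def benchmark_flags(eps_pre_impairment, impairment_per_share, prior_year_eps):
    """Flag observations where goodwill impairment decides a benchmark.

    eps_pre_impairment: EPS before any goodwill impairment writedown.
    impairment_per_share: recognised goodwill impairment, per share (>= 0).
    prior_year_eps: last year's reported EPS.
    Returns, for each benchmark, True if pre-impairment EPS meets it while
    post-impairment EPS would not -- the zone where discretion matters most.
    """
    eps_post = eps_pre_impairment - impairment_per_share
    benchmarks = {"positive_eps": 0.0, "prior_year_eps": prior_year_eps}
    return {
        name: (eps_pre_impairment >= threshold) and (eps_post < threshold)
        for name, threshold in benchmarks.items()
    }

if __name__ == "__main__":
    # Hypothetical firm-year: beats the prior-year benchmark only if the
    # writedown is kept small enough.
    print(benchmark_flags(eps_pre_impairment=1.05,
                          impairment_per_share=0.20,
                          prior_year_eps=1.00))
```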
7

Top-down Approach To Securing Intermittent Embedded Systems

Santhana Krishnan, Archanaa 29 September 2021 (has links)
Conventional computing techniques are based on the assumption of a near-constant source of input power. While this assumption is reasonable for high-end devices such as servers and mobile phones, it does not always hold in embedded devices. An increasing number of Internet of Things (IoT) devices are powered by intermittent power supplies which harvest energy from ambient resources, such as vibrations. While the energy harvesters provide energy autonomy, they introduce uncertainty in input power. Intermittent computing techniques were proposed as a coping mechanism to ensure forward progress even with frequent power loss. They utilize non-volatile memory to store a snapshot of the system state as a checkpoint. Conventional security mechanisms do not always hold in intermittent computing. This research takes a top-down approach to designing secure intermittent systems. To that end, we identify security threats, design a secure intermittent system, optimize its performance, and evaluate our design using embedded benchmarks. First, we identify vulnerabilities that arise from checkpoints and demonstrate potential attacks that exploit them. Then, we identify the minimum security requirements for protecting intermittent computing and propose a generic protocol to satisfy them. We then propose different security levels to configure checkpoint security based on application needs, realizing configurable intermittent security that optimizes our generic secure intermittent computing protocol and reduces the overhead of introducing security to intermittent computing. Finally, we study the role of the application in intermittent computing and the various factors that affect the forward progress of applications in secure intermittent systems. This research highlights that power loss is a threat vector even in embedded devices and establishes a foundation for security in intermittent computing. / Doctor of Philosophy / Embedded systems are present in every aspect of life. They are found in watches, mobile phones, tablets, servers, health aids, home security, and other everyday useful technology. To meet the demand for powering a rising number of embedded devices, energy harvesters emerged as an autonomous way to power low-power devices. With energy autonomy came energy scarcity, which introduced intermittent computing, where embedded systems operate intermittently because of the lack of constant input power. Intermittent systems store snapshots of their progress as checkpoints in non-volatile memory and restore the checkpoints to resume progress. On the whole, intermittent systems are an emerging area of research that is being deployed in critical locations such as bridge health monitoring. This research is focused on securing intermittent systems comprehensively. We perform a top-down analysis to identify threats, mitigate them, optimize the mitigation techniques, and evaluate the implementation to arrive at secure intermittent systems. We identify security vulnerabilities that arise from checkpoints to demonstrate the weaknesses in intermittent systems. To mitigate the identified vulnerabilities, we propose secure intermittent solutions that protect intermittent systems using a generic protocol. Based on the implementation of the generic protocol and its performance, we propose several optimizations, driven by the needs of the application, for securing intermittent systems. And finally, we benchmark the security properties using the two-way relation between security and the application in intermittent systems. With this research, we create a foundation for designing secure intermittent systems.
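To illustrate one natural mitigation for the checkpoint vulnerabilities described above, here is a minimal sketch that authenticates each checkpoint with an HMAC and a monotonic counter before it is written to non-volatile storage, and verifies both on restore. It is a conceptual Python sketch under assumed primitives (a device key already provisioned, bytes standing in for NVM contents); it is not the protocol proposed in the dissertation, which targets constrained embedded hardware.

```python
import hmac
import hashlib
import json
import struct

DEVICE_KEY = b"\x00" * 32  # placeholder; a real device key lives in secure storage

def seal_checkpoint(state: bytes, counter: int) -> bytes:
    """Bind a snapshot to a monotonic counter and MAC it before writing to NVM."""
    header = struct.pack(">Q", counter)
    tag = hmac.new(DEVICE_KEY, header + state, hashlib.sha256).digest()
    return header + tag + state

def open_checkpoint(blob: bytes, expected_counter: int) -> bytes:
    """Verify integrity, authenticity and freshness before restoring state."""
    header, tag, state = blob[:8], blob[8:40], blob[40:]
    (counter,) = struct.unpack(">Q", header)
    if counter < expected_counter:
        raise ValueError("stale checkpoint (possible rollback/replay)")
    expected = hmac.new(DEVICE_KEY, header + state, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("checkpoint failed authentication")
    return state

if __name__ == "__main__":
    snapshot = json.dumps({"pc": 0x1F40, "acc": 42}).encode()  # toy system state
    sealed = seal_checkpoint(snapshot, counter=7)
    print(open_checkpoint(sealed, expected_counter=7))
```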
8

Application Benchmarks for SCMP: Single Chip Message-Passing Computer

Shah, Jignesh 27 July 2004 (has links)
As transistor feature sizes continue to shrink, it will become feasible, and for a number of reasons more efficient, to include multiple processors on a single chip. The SCMP system being developed at Virginia Tech includes up to 64 processors on a chip, connected in a 2-D mesh. On-chip memory is included with each processor, and the architecture includes support for communication and the execution of parallel threads. As with any new computer architecture, benchmark kernels and applications are needed to guide the design and development, as well as to quantify the system performance. This thesis presents several benchmarks that have been developed for or ported to SCMP. Discussion of the benchmark algorithms and their implementations is included, as well as an analysis of the system performance. The thesis also includes discussion of the programming environment available for developing parallel applications for SCMP. / Master of Science
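As an example of the kind of kernel used to characterise a message-passing architecture, the sketch below is a classic ping-pong latency/bandwidth micro-benchmark. It is written against MPI (via mpi4py) purely as a stand-in, since SCMP's own programming interface is not shown here; the message sizes and repetition counts are arbitrary choices.

```python
# Run with: mpiexec -n 2 python pingpong.py
from mpi4py import MPI
import numpy as np

def ping_pong(comm, nbytes, reps=100):
    """Measure the average round-trip time for nbytes messages between ranks 0 and 1."""
    rank = comm.Get_rank()
    buf = np.zeros(nbytes, dtype=np.uint8)
    comm.Barrier()
    start = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send([buf, MPI.BYTE], dest=1, tag=0)
            comm.Recv([buf, MPI.BYTE], source=1, tag=0)
        elif rank == 1:
            comm.Recv([buf, MPI.BYTE], source=0, tag=0)
            comm.Send([buf, MPI.BYTE], dest=0, tag=0)
    return (MPI.Wtime() - start) / reps  # seconds per round trip

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    for size in (8, 1024, 65536):
        rtt = ping_pong(comm, size)
        if comm.Get_rank() == 0:
            # One-way bandwidth estimate: bytes per half round trip.
            print(f"{size:6d} B  rtt {rtt*1e6:8.1f} us  "
                  f"bw {size / (rtt / 2) / 1e6:8.2f} MB/s")
```

On a mesh-connected multiprocessor such as SCMP, the same kernel would typically be swept across neighbour distances to expose how latency scales with hop count.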
