  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Replication of Concurrent Applications in a Shared Memory Multikernel

Wen, Yuzhong 19 July 2016 (has links)
State Machine Replication (SMR) has become the de facto methodology for building replication-based fault-tolerant systems. Current SMR systems usually involve multiple machines, each acting as a replica of the others. However, using multiple machines adds infrastructure cost, in both hardware and power consumption. For tolerating non-critical CPU and memory failures that do not crash the entire machine, extra machines are unnecessary; intra-machine replication is a good fit for this scenario. However, current intra-machine replication approaches do not provide strong isolation among the replicas, which allows faults to propagate from one replica to another. To provide an intra-machine replication technique with strong isolation, in this thesis we present an SMR system on a multi-kernel OS. We implemented a replication system that is capable of replicating concurrent applications on different kernel instances of a multi-kernel OS. Modern concurrent applications can be deployed on our system with minimal code modification. Additionally, our system provides two replication modes between which the user can switch freely according to the application type. In an evaluation of multiple real-world applications, we show that these applications can be deployed on our system with 0 to 60 lines of changes to the source code. From the performance perspective, our system introduces only 0.23% to 63.39% overhead compared to non-replicated execution. / Master of Science
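The core idea behind SMR, as summarized in this abstract, can be sketched in a few lines: replicas stay consistent as long as each one is a deterministic state machine fed the same operations in the same total order. The sketch below is illustrative only (class and function names are invented here), not the thesis's actual system.

```python
# Minimal sketch of the State Machine Replication principle:
# identical deterministic replicas applying the same ordered op log
# converge to identical states.

class Replica:
    """A replica is a deterministic state machine fed an ordered op log."""

    def __init__(self):
        self.state = {}

    def apply(self, op):
        # Each op is (kind, key, value); all replicas see the same sequence.
        kind, key, value = op
        if kind == "set":
            self.state[key] = value
        elif kind == "add":
            self.state[key] = self.state.get(key, 0) + value

def replicate(log, n_replicas=2):
    """Feed the same ordered log to every replica; their states converge."""
    replicas = [Replica() for _ in range(n_replicas)]
    for op in log:
        for r in replicas:
            r.apply(op)
    return replicas

if __name__ == "__main__":
    log = [("set", "x", 1), ("add", "x", 4), ("set", "y", 7)]
    r1, r2 = replicate(log)
    assert r1.state == r2.state == {"x": 5, "y": 7}
```

The thesis's contribution is doing this across kernel instances of one machine rather than across machines, so the "replicas" here stand in for processes on separate kernel instances.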
12

A Compiler Framework to Support and Exploit Heterogeneous Overlapping-ISA Multiprocessor Platforms

Jelesnianski, Christopher Stanisław 15 December 2015 (has links)
As the demand for ever more powerful machines continues, new architectures are sought as the next route past the brick wall that currently stagnates the performance growth of modern multi-core CPUs. Due to physical limitations, scaling single-core performance any further is no longer possible, which gave rise to modern multi-cores. However, the same wall now limits the scaling of general-purpose multi-cores. Heterogeneous-core CPUs have the potential to continue scaling by reducing power consumption through the exploitation of specialized, simple cores within the same chip. Heterogeneous-core CPUs join fundamentally different processors, each with its own particular strengths, e.g., fast execution time or improved power efficiency, enabling the construction of versatile computing systems. For heterogeneous platforms to permeate the computer market, the next hurdle to overcome is providing a familiar programming model and environment so that developers do not have to focus on platform details. Nevertheless, heterogeneous platforms integrate processors with diverse characteristics and potentially different Instruction Set Architectures (ISAs), which exacerbates software complexity. A brave few have begun to tread down the heterogeneous-ISA path, hoping to prove that this avenue will yield the next generation of supercomputers. However, many unforeseen obstacles have yet to be discovered. With this new challenge comes a clear need for efficient, developer-friendly, adaptable system software to support the effort of making heterogeneous-ISA platforms the gold standard for future high-performance and general-purpose computing. To foster rapid development of this technology, it is imperative to put the proper tools, such as application and architecture profiling engines, into the hands of developers in order to realize the best heterogeneous-ISA platform possible with available technology.
In addition, such tools should be as "timeless" as possible, exposing fundamental concepts that industry can benefit from and adopt in future designs. We demonstrate the feasibility of a compiler framework and runtime for an existing heterogeneous-ISA operating system (Popcorn Linux) that automatically schedules compute blocks within an application on a given heterogeneous-ISA high-performance platform (in our case, a platform built with an Intel Xeon and a Xeon Phi). With the introduced Profiler, Partitioner, and Runtime support, we show that we can automatically exploit the heterogeneity in an overlapping-ISA platform, running faster than native execution and other parallel programming models. Empirically evaluating our compiler framework, we show that application execution on Popcorn Linux can be up to 52% faster than the most performant native execution on the Xeon or Xeon Phi. Using our compiler framework relieves the developer of manual scheduling and porting of applications, requiring only a single profiling run per application. / Master of Science
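The kind of profile-guided placement decision the abstract describes can be sketched as follows. This is a hypothetical illustration of the general technique, not Popcorn Linux's actual Partitioner: the function, the per-block timings, and the fixed migration cost are all invented for the example.

```python
# Hypothetical sketch of profile-guided partitioning on an
# overlapping-ISA platform: place each compute block on the processor
# where its profiled runtime plus any migration cost is lowest.

MIGRATION_COST_MS = 2.0  # assumed fixed cost to move execution between ISAs

def partition(profile):
    """profile: list of (block, xeon_ms, phi_ms). Returns [(block, target)]."""
    schedule = []
    current = "xeon"  # assume execution starts on the Xeon
    for block, xeon_ms, phi_ms in profile:
        cost_xeon = xeon_ms + (MIGRATION_COST_MS if current != "xeon" else 0.0)
        cost_phi = phi_ms + (MIGRATION_COST_MS if current != "phi" else 0.0)
        current = "xeon" if cost_xeon <= cost_phi else "phi"
        schedule.append((block, current))
    return schedule

if __name__ == "__main__":
    # Invented profile: the middle kernel is much faster on the Phi.
    prof = [("init", 5.0, 9.0), ("kernel", 40.0, 12.0), ("reduce", 3.0, 8.0)]
    print(partition(prof))
```

A greedy pass like this captures the key tradeoff in the abstract: migration between ISAs is only worthwhile when the profiled speedup of a block outweighs the cost of moving execution there and back.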
13

Disaggregated Zoned Namespace for Multi-tenancy Scenarios

Ramakrishnapuram Selvanathan, Subhalakshmi 22 May 2024 (has links)
The traditional block-based interface used in flash-based Solid State Drives (SSDs) limits performance and endurance due to write amplification and garbage collection overheads. In response to these challenges, NVMe Zoned Namespace (ZNS) devices introduce a novel storage interface organized into zones, optimizing garbage collection and reducing write amplification. This research explores and profiles ZNS device characteristics, aiming to improve user comprehension and utilization. Additionally, the study investigates the integration of ZNS devices into disaggregated storage frameworks to improve resource utilization, proposing server-side management features that simplify client operations and minimize overhead. By offering insights for the future development and optimization of ZNS-based storage solutions, this work contributes to advancing storage technology and addressing the shortcomings of traditional block-based interfaces. Through extensive experimentation and analysis, this study sheds light on optimal configurations and deployment strategies for ZNS-based storage solutions. / Master of Science / Traditional storage drives, like those found in computers and data centers, face challenges that limit their performance and durability. These challenges stem from the way data is stored and managed within these drives, resulting in inefficiencies known as write amplification and garbage collection overheads. To address these issues, a new type of storage device called NVMe Zoned Namespaces (ZNS) has been developed. ZNS devices organize data in a smarter way, grouping it into specific areas called zones. This organization helps to reduce inefficiencies and improve performance. This research explores the characteristics of ZNS devices and how they can be used more effectively.
By better understanding and using these devices, we can improve the way data is stored and accessed, leading to faster and more reliable storage solutions. Additionally, this research looks at how ZNS devices can be integrated into larger storage systems to make better use of available resources. Ultimately, this work contributes to advancing storage technology and overcoming the limitations of traditional storage interfaces. We aim to uncover the best ways to deploy and optimize ZNS-based storage solutions for a variety of applications.
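The zone interface the abstract contrasts with block storage can be sketched in a few lines. This is a simplified model for illustration only (not a driver and not the thesis's implementation): each zone is append-only behind a write pointer, and reclamation is an explicit host-issued zone reset rather than device-side garbage collection.

```python
# Minimal sketch of the ZNS zone model: sequential writes advance a
# per-zone write pointer; the host, not the device, decides when to
# reset a zone, which is what avoids device-side garbage collection.

class Zone:
    def __init__(self, capacity):
        self.capacity = capacity
        self.write_pointer = 0
        self.blocks = []

    def append(self, block):
        """Sequential-only write; returns the block's address in the zone."""
        if self.write_pointer >= self.capacity:
            raise IOError("zone full; host must reset it or pick another zone")
        self.blocks.append(block)
        self.write_pointer += 1
        return self.write_pointer - 1

    def reset(self):
        # Host-managed reclamation: the whole zone is invalidated at once.
        self.blocks.clear()
        self.write_pointer = 0

if __name__ == "__main__":
    z = Zone(capacity=2)
    assert z.append("a") == 0
    assert z.append("b") == 1
    try:
        z.append("c")       # zone is full
    except IOError:
        z.reset()           # host explicitly reclaims the zone
    assert z.write_pointer == 0
```

The server-side management features proposed in the thesis would sit above a model like this, hiding zone selection and reset decisions from the client.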
14

TRADEOFFS TO CONSIDER WHEN SELECTING AN AIRBORNE DATA ACQUISITION SYSTEM

Troth, Bill 10 1900 (has links)
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California / Selecting an airborne data acquisition system involves compromises. No single data acquisition system can simultaneously be the lowest cost, the smallest, the easiest to use, and the most accurate. The only way to come to a reasonable decision is to plan the project carefully, taking into account what measurements will be required, what physical environments are involved, what personnel and resources will be needed and, of course, how much money is available in the budget. Getting the right mix of equipment, resources, and people to do the job within the schedule and the budget involves a number of tradeoffs. A good plan and a thorough knowledge of available resources and equipment will allow you to make the necessary decisions. Hopefully, this paper will offer some suggestions that aid in preparing your plan and give some insight into available system alternatives.
15

A wide spectrum type system for transformation theory

Ladkau, Matthias January 2009 (has links)
One of the most difficult tasks a programmer can be confronted with is the migration of a legacy system. Usually, these systems are unstructured, poorly documented and contain complex program logic. The reason for this, in most cases, is an emphasis on raw performance rather than on clean and structured code, as well as a long period of applying quick fixes and enhancements rather than performing a proper software reengineering process, including a full redesign during major enhancements. Nowadays, the old programming paradigms are becoming an increasingly serious problem. It has been identified that 90% of the costs of a typical software system arise in the maintenance phase. Many companies are simply too afraid of changing their software infrastructure and prefer to continue with principles like "never touch a running system". These companies experience growing pressure to migrate their legacy systems onto newer platforms, because the maintenance of such systems is expensive and dangerous: the risk of losing vital parts of the source code or its documentation increases drastically over time. The FermaT transformation system has shown the ability to automatically or semi-automatically restructure and abstract legacy code within a special intermediate language called WSL (Wide Spectrum Language). Unfortunately, the current transformation process only supports the migration of assembler, as WSL lacks the ability to handle data types properly. The data structures in assembler are currently translated directly into C data types, which involves many assumption-laden "hard-coded" conversions. The absence of an adequate type system for WSL has caused several flaws in the whole transformation process and limits its abilities significantly. The main aim of the presented research is to tackle these problems by investigating and formulating how a type system can contribute to a safe and reliable migration of legacy systems.
The described research includes the definition of the key type-related problems in the FermaT migration process and how to solve them with a suitable type system approach. Since software migration often includes a change of programming language, the type system for WSL has to be able to support various type system approaches, including the representation of all relevant details, to avoid assumptions. This is especially difficult as most programming languages are designed for a special purpose, which means that their possible programming constructs and data types differ significantly. This ranges from languages with simple type systems, whose programs are prone to unintended side-effects, to languages with strict type systems, which are constrained in their flexibility. It is important to include as many type-related details as necessary to avoid making assumptions during language-to-language translation. The result of the investigation is a novel multi-layered type system specifically designed to satisfy the needs of WSL for a sophisticated solution without imposing too many limitations on its abilities. The type system has an adjustable expressiveness, able to represent a wide spectrum of typing approaches, ranging from weak typing, which allows direct memory access and down casting, via very strict typing with a high diversity of data types, to object-oriented typing, which supports encapsulation and data hiding. Looking at the majority of commercially relevant statically typed programming languages, two fundamental properties of type strictness and safety can be identified. A type system can be either weakly or strongly typed and may or may not allow unsafe features such as direct memory access. Each layer of the Wide Spectrum Type System has a different combination of these properties. The approach also includes special Type System Transformations which can be used to move a given WSL program among these layers.
Other emphasised key features are explicit typing and scalability. The whole approach is based on a sound mathematical foundation which assures correctness and integrates seamlessly into the present mathematical definition of WSL. The type system is formally introduced to WSL by constructing an attribute grammar for the language. Type checking and type inference are used to annotate the Abstract Syntax Tree of a given WSL program with type derivations, which can be used to reveal and indicate possible typing errors, or to infer types if the program did not feature explicit type declarations in the first place. Notable in this approach is also the fact that object orientation is introduced to a procedural programming language without the introduction of new semantics. It is shown that object orientation can be introduced just by adjusting type checking rules and adding some syntactical notations. The approach was implemented and tested on two case studies. The thesis describes and discusses both cases in detail and shows how a migration which ignores type systems could accidentally introduce errors due to assumptions during translation. Both case studies use all important aspects of the approach, including type transformations and object identification. The thesis finalises by summarising the whole work, identifying limitations, presenting future perspectives and drawing conclusions.
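The "adjustable expressiveness" idea described above — the same program checked under layers of different strictness — can be illustrated with a toy checker. This is a hypothetical sketch of the general technique, not WSL's attribute grammar; the expression encoding and layer names are invented for the example.

```python
# Toy illustration of layered type checking: the same expression tree
# is checked under different strictness layers. A weak layer coerces
# mixed operands; a strict layer rejects them with a type error.

def check(expr, layer="strict"):
    """expr: ('int', n) | ('str', s) | ('add', left, right). Returns a type name."""
    tag = expr[0]
    if tag == "int":
        return "int"
    if tag == "str":
        return "str"
    if tag == "add":
        lt = check(expr[1], layer)
        rt = check(expr[2], layer)
        if lt == rt:
            return lt
        if layer == "weak":
            return "str"  # weak layer coerces, e.g. int + str -> str
        raise TypeError(f"strict layer: cannot add {lt} and {rt}")
    raise ValueError(f"unknown node {tag!r}")

if __name__ == "__main__":
    mixed = ("add", ("int", 1), ("str", "x"))
    assert check(mixed, layer="weak") == "str"   # accepted with coercion
    try:
        check(mixed, layer="strict")             # rejected
    except TypeError:
        pass
```

In the thesis's terms, moving a program from the weak layer to the strict layer would require a type system transformation that makes such coercions explicit before the stricter rules apply.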
16

Green intermodal freight transportation: bi-objective modelling and analysis

Demir, Emrah, Hrusovsky, Martin, Jammernegg, Werner, Van Woensel, Tom January 2019 (has links) (PDF)
Efficient planning of freight transportation requires a comprehensive look at a wide range of factors in the operation and management of any transportation mode to achieve safe, fast, and environmentally suitable movement of goods. In this regard, a combination of transportation modes offers flexible and environmentally friendly alternatives for transporting high volumes of goods over long distances. In order to reflect the advantages of each transportation mode, the challenge is to develop models and algorithms in Transport Management System software packages. This paper discusses the principles of green logistics required in designing such models and algorithms that truly represent multiple modes and their characteristics. Thus, this research provides a unique practical contribution to the green logistics literature by advancing our understanding of multi-objective planning in intermodal freight transportation. An analysis based on a case study of hinterland intermodal transportation in Europe is therefore intended to contribute to the literature on the potential benefits of combining economic and environmental criteria in transportation planning. An insight derived from the experiments conducted shows that there is no need to greatly compromise on transportation costs in order to achieve a significant reduction in carbon-related emissions.
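The bi-objective tradeoff the abstract describes — cost versus emissions — is typically analyzed via the set of non-dominated (Pareto-optimal) alternatives. The sketch below illustrates that idea on invented numbers; the route names and values are not from the paper's case study.

```python
# Illustrative bi-objective sketch: given candidate intermodal routes
# with (cost, CO2) values, keep only the Pareto-optimal ones, i.e.
# routes not beaten on both objectives by any other route.

def pareto_front(routes):
    """routes: dict name -> (cost, co2). Returns the non-dominated subset."""
    front = {}
    for name, (c, e) in routes.items():
        dominated = any(
            c2 <= c and e2 <= e and (c2 < c or e2 < e)
            for n2, (c2, e2) in routes.items() if n2 != name
        )
        if not dominated:
            front[name] = (c, e)
    return front

if __name__ == "__main__":
    routes = {
        "truck_only":  (100.0, 90.0),
        "rail_truck":  (110.0, 40.0),
        "barge_truck": (105.0, 55.0),
        "bad_mix":     (120.0, 95.0),  # dominated by truck_only
    }
    print(sorted(pareto_front(routes)))
```

On such a front, a small cost increase (e.g. the rail or barge combinations above) can buy a large emissions reduction, which mirrors the paper's finding that large CO2 savings need not greatly compromise transportation costs.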
17

none

Jiao, Hou-Yan 06 June 2007 (has links)
Assessment of Civil Servants' Satisfaction with the Electronic Official Document Exchange System: The Case of Kaohsiung City Government. Abstract: This study aims to understand how Kaohsiung city government civil servants assess the performance of the Electronic Official Document Exchange System, probing the key factors that affect the adoption of electronic official documents. It further validates the impact of the Electronic Official Document Exchange System on administrative efficiency. The study used quantitative methods for the survey and qualitative research for deeper understanding. The research targets were 254 organizations and schools of the Kaohsiung City Government, comparing the variables of e-policy, information system software and hardware, and e-efficiency. 254 questionnaires were sent out and all were returned; 230 were valid and 24 invalid, for a valid response rate of 90.55%. Valid responses were collated and analyzed using the statistical software packages SPSS and AMOS, testing the relationships between the variables. Several findings can be reported as follows: 1. The key factors affecting the performance of the Electronic Official Document Exchange System, namely e-policy, information system software and hardware, and e-efficiency, are all assessed above the medium level. The assessment was conducted through questionnaires completed by Kaohsiung city government civil servants, who are satisfied with the performance of the Electronic Official Document Exchange System. 2. The key factors affecting the adoption of the electronic official document system are as follows: (1) E-policy aspect: two factors, namely the attention of decision-makers and the completion of work plans.
(2) Information system software and hardware aspect: four factors, namely the operating stability, accuracy, security, and ease of use of information system facilities. (3) E-efficiency aspect: only the administrative efficiency factor. 3. The impact of adopting the electronic document system on administrative efficiency is as follows: (1) Due to the different characteristics of the organizations and schools of the Kaohsiung City Government, there are significant differences among the variables of e-policy, information system software and hardware, and e-efficiency when checking the efficiency of pushing forward electronic official documents. (2) E-policy has a positive impact on information system software and hardware. (3) E-policy has a positive impact on e-efficiency. (4) Information system software and hardware have a positive impact on e-efficiency. Keywords: Electronic Official Document Exchange System, e-policy, e-efficiency, information system software and hardware, administrative efficiency, organization characteristics
18

Indoor Mobile Positioning system (MPS) classification in different wireless technology domain

Ghandchi, Bahram, Saleh, Taha January 2018 (has links)
The main purpose of this thesis is to find and compare the network characteristics of MPS (Mobile Positioning System) across different wireless technology domains. For decades, MNOs (Mobile Network Operators) have added new services based on subscribers' geographical areas and needs. Here we describe wireless networks, go through the different types of technologies, compare how they collect different types of data for their location-based services, and examine whether 2G (second generation) mobile networks can achieve the same accuracy as 3G (third generation) and higher. Finally, we present a proposal for new-generation technology.
19

Informační systémy v éře cloud computingu / Information Systems in a Cloud Computing era

Klimt, Roman January 2011 (has links)
This thesis focuses on cloud computing and its impact on enterprise resource planning information systems. The historical background of cloud computing is discussed in the first part of the thesis. After that, an analysis of the possible services and types of cloud computing is provided. This first part also analyzes cloud computing providers that offer one of the possible cloud computing services, Infrastructure as a Service (IaaS). The thesis evaluates IaaS providers based on a suggested methodology; according to this methodology, each provider is ranked, and the provider with the highest score is used as the infrastructure outsourcing provider for a new model of information system suggested later in the thesis. The next part of the thesis provides a detailed view of the cloud computing service closely related to software distribution, Software as a Service (SaaS). It focuses on the key features of this approach and discusses the differences between the ASP and on-premise approaches. At the end of this part there is a detailed discussion of SaaS benefits and risks for customers as well as software vendors. Having discussed the necessary theoretical background, a local software company providing mainly an on-premise enterprise resource planning information system is described. After the threats and opportunities of this company are analyzed, it is suggested that it offer a new way of distributing one of its information systems to customers: in the form of SaaS. Based on this decision, one of the information systems offered to customers is chosen and reengineered to allow distribution over the Internet as SaaS. While changing the architecture of the information system, the main focus is on reducing initial costs. The features that need to be changed are analyzed from three perspectives: technological, process, and economic.
In each of these perspectives, the problems that need to be kept in mind are discussed. Each problem is analyzed and the solution options are considered, and at the end of each problem area a recommendation is given to the company in order to reach the set goals. The final part compares the on-premise system and the SaaS system. The comparison is illustrated with a customer example, including calculations for each solution and a discussion of the potential attractiveness of the newly designed information system for potential customers.
20

Internet Infrastructures for Large Scale Emulation with Efficient HW/SW Co-design

Gula, Aiden K 20 October 2021 (has links)
Connected systems are becoming more ingrained in our daily lives with the advent of cloud computing, the Internet of Things (IoT), and artificial intelligence. As technology progresses, we expect the number of networked systems to rise along with their complexity. As these systems become more complex, it becomes paramount to understand their interactions and nuances. In particular, Mobile Ad hoc Networks (MANETs) and swarm communication systems exhibit added complexity due to a multitude of environmental and physical conditions. Testing these types of systems is challenging and incurs high engineering and deployment costs. In this work, we propose a scalable MANET emulation framework using virtualized internet infrastructures that generalizes an assortment of application spaces with diverse attributes. We then quantify the architecture using various evaluation techniques to determine both feasibility and scalability. Finally, we develop a hardware offload engine for virtualized network systems that builds upon recent work in the field.
