21

Eine Einführung in SELinux / An Introduction to SELinux

Winkler, Marcus. January 2007 (has links)
Chemnitz, Technische Universität, Studienarbeit (student research project), 2006.
22

Systém pro sběr dat ze sítě PLC firmy Micropel / Data acquisition system for Micropel's PLC network

Klusáček, Jan January 2012 (has links)
This paper deals with the design and implementation of hardware and software for a data acquisition device. The device is based on the FoxBoard G20 single-board computer, mounted on a custom board that provides an RS485 interface and a short-term UPS. The device runs Linux with a device driver that enables communication with Micropel PLCs over the RS485 bus. Data read from the PLCs are saved to a database and afterwards presented to the user via a web interface.
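The poll-and-store loop described above can be pictured roughly as in the sketch below. This is a minimal illustration under stated assumptions, not the thesis implementation: the serial port path, polling interval, database file, and the simple ASCII request/response exchange are all placeholders, since the actual Micropel protocol is handled by the custom kernel driver.

```python
# Minimal sketch of an RS485 poll-and-store loop (assumed protocol, not the Micropel driver).
import sqlite3
import time

import serial  # pyserial

DB_PATH = "plc_data.db"        # hypothetical database file
PORT = "/dev/ttyS1"            # hypothetical RS485 port exposed by the driver
POLL_INTERVAL_S = 10           # assumed polling period

def main():
    db = sqlite3.connect(DB_PATH)
    db.execute("CREATE TABLE IF NOT EXISTS samples (ts REAL, plc INTEGER, value TEXT)")
    bus = serial.Serial(PORT, baudrate=9600, timeout=1)

    while True:
        for plc_addr in (1, 2, 3):                      # assumed PLC addresses on the bus
            bus.write(f"READ {plc_addr}\n".encode())    # placeholder request, not the real protocol
            reply = bus.readline().decode(errors="replace").strip()
            if reply:
                db.execute("INSERT INTO samples VALUES (?, ?, ?)",
                           (time.time(), plc_addr, reply))
        db.commit()                                     # stored rows are later served by the web UI
        time.sleep(POLL_INTERVAL_S)

if __name__ == "__main__":
    main()
```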
23

Testování softwarového nástroje / Testing of a Software Tool

Petlan, Michael January 2017 (has links)
The PERF tool has been part of the Linux kernel since version 2.6. It is an event-based profiler and observability tool: it can count various event occurrences in the system, from hardware performance monitoring unit (PMU) events of the CPU at one end to software tracepoints at the other, and it can trace both kernel and userspace functions. The tool is very useful to kernel and application software developers, as well as hardware designers. This thesis describes the PERF tool from the point of view of a PERF test engineer and, additionally, designs methods for verifying its correct functionality.
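As a quick illustration of the event-counting functionality described above, the sketch below drives perf stat from Python to count a couple of PMU events for a child command. It is only a usage sketch for orientation, not part of the thesis's test suite; the chosen events and the CSV field layout assumed in the parsing are assumptions.

```python
# Rough sketch: count CPU events for a command with `perf stat` (CSV layout via -x is assumed).
import subprocess
import sys

def count_events(cmd, events=("cycles", "instructions")):
    # perf stat writes its statistics to stderr; -x, requests machine-readable CSV-style output.
    proc = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", ",".join(events), "--"] + cmd,
        capture_output=True, text=True,
    )
    counts = {}
    for line in proc.stderr.splitlines():
        fields = line.split(",")
        if len(fields) >= 3 and fields[0].strip().isdigit():
            counts[fields[2]] = int(fields[0])   # assumed fields: value, unit, event name, ...
    return counts

if __name__ == "__main__":
    print(count_events(sys.argv[1:] or ["sleep", "1"]))
```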
24

Polohovatelný stojan pro přehledovou kameru / Positionable Stand for a Surveillance Camera

Materna, Zdeněk January 2011 (has links)
The thesis deals with the design of Ethernet-based control for a positionable surveillance camera stand. It describes the selected hardware solution, based on a development kit with an ARM microprocessor and a connected board with additional electronics. The thesis also discusses the software solution, which uses a Linux operating system built with Buildroot and modified for real-time control.
25

Security By Design

Tanner, M. James 10 August 2009 (has links)
Securing a computer from unwanted intrusion requires astute planning and effort to effectively minimize the security invasions computers are plagued with today. While all of the efforts to secure a computer are needed, the underlying issue of what is being secured has been overlooked. The operating system is at the core of the security issue. Many applications and devices have been put into place to add layers of protection to an already weak operating system. Security did not use to be such a prominent issue because computers were not connected 24/7; they used dialup and did not experience the effects of connecting to many other computers. Today computers connect to high-speed Internet and seem useless without access to email, chat, the web, and videos. This interconnectedness has allowed the security of many computers to be compromised because they have not been programmatically secured. The core of computer security might best be addressed through security layers protecting the operating system. For this research, those who work in the computer field were asked to complete a survey. The survey was used to gather information such as the security layers and enhancements implemented on Linux computers and their surrounding networks. This research is a stepping stone for further research into what can be done to further improve upon security and its current implementations.
26

Analysis of Resource Isolation and Resource Management in Network Virtualization

Lindholm, Rickard January 2016 (has links)
Context. Virtualized networks are considered a major advancement in today's technology, offering plenty of functional benefits compared to today's dedicated networking elements. Virtualization allows network designers to separate networks and adapt resources to the actual loads, in other words, load balancing. Virtual networks would also enable minimal downtime for deployments of updates and similar tasks: a simple migration, followed by updating the linking once the virtual machine with the new software has been properly tested and prepared. Once this technology is proven to be efficient, or evaluated and adapted to address its existing flaws, virtualized networks will claim the tasks of today's dedicated networking elements. But there are still unknown behaviors and effects of the technology, for example how the scheduler or hypervisor handles the virtual separation, since the virtual machines share the same physical transmission resources.

Objectives. The experiments in this thesis aim to reveal the effects of virtualization and how it performs under stress, which in turn increases the knowledge about the efficiency of network virtualization. The experiments are conducted by creating scripts, using already written programs and systems, adding different loads, and measuring the effects; this is documented so that other students and researchers can benefit from the research done in this thesis.

Methods. Five different methodologies are used: experimental validation, statistical comparative analysis, resource sharing, control theory, and literature review. Two systems are compared to previous research by evaluating and analyzing the statistical results. As mentioned earlier, the investigation focuses on how the scheduler executes resource sharing under stress. The first experiment, the control test, runs without any interference: a 5 Mbit/s UDP stream passes through the system under test and is timestamped at measurement points on both the ingress and the egress. The second experiment adds an interfering load of a 5 Mbit/s UDP stream on the same system under test. Since it is a complex system, a fair amount of literature reviewing was done, mostly to gain an understanding and an overview of the different parts of the system and to avoid some obstacles.

Results. The statistical comparative analysis of the experiments produced two graphs and two tables containing the coefficient of variation (CoV) of the two experiments. The control test produced a graph with a fairly even distribution over the time intervals, with a CoV difference on the order of 10^-3 that increases somewhat over the larger time intervals. The second experiment, with two virtual machines and an interfering packet stream, is more concentrated in the 0.0025 s and 0.005 s intervals, with a larger difference than the control test, on the order of 10^-2, showing some signs of a bottleneck in the system.

Conclusions. Since performing the experiments and the statistical handling of the data took longer than expected, the system was not re-deployed using Open vSwitch instead of Linux Bridge; hence there are no further experiments to compare the performance with. However, the related work referred to concluded that the difference between Open vSwitch and Linux Bridge is small when compared without introducing any load. This is also confirmed on the Open vSwitch website, which states that Open vSwitch uses the same base as Linux Bridge. Linux Bridge performs according to expectations: it is a simple yet powerful tool, and the results confirm previous research claiming that there are bottlenecks in the system. The pre-set validity requirement for this experiment was a CoV difference greater than 10^-5; the measured difference was on the order of 10^-2, which supports the theory that there are bottlenecks in the system. In the future it would be interesting to examine the effects of different hypervisors, virtualization techniques, packet generators, and so on. One company that has taken countermeasures is Intel, whose DPDK confronts these efficiency problems by tailoring the scheduler towards the specific tasks. The downside of Intel's DPDK is that it limits the user to Intel processors and removes one of the most important benefits of virtualization, independence, although Intel has tried to keep it as independent as possible by maintaining DPDK as open source.
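The coefficient of variation used in the analysis is simply the standard deviation of the observed delays divided by their mean. A small sketch of the comparison, with made-up sample values standing in for the measured ingress/egress timestamp differences, could look like this:

```python
# Sketch: coefficient of variation (CoV) of per-packet delays, as used to compare the two runs.
# The sample lists are placeholders; the thesis uses ingress/egress timestamp differences.
import statistics

def cov(delays):
    """CoV = standard deviation / mean of the observed delays."""
    return statistics.stdev(delays) / statistics.mean(delays)

control_run = [0.0024, 0.0026, 0.0025, 0.0025, 0.0024]   # hypothetical delays in seconds
loaded_run = [0.0025, 0.0050, 0.0026, 0.0049, 0.0027]    # hypothetical delays under interfering load

difference = abs(cov(loaded_run) - cov(control_run))
print(f"control CoV={cov(control_run):.4f}, loaded CoV={cov(loaded_run):.4f}, diff={difference:.4f}")
# A difference on the order of 10^-2, as reported above, hints at a scheduling bottleneck.
```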
27

A case study of cross-branch porting in Linux Kernel

Hua, Jinru 23 July 2014 (has links)
To meet different requirements of different stakeholders, branches are widely used to maintain multiple product variants simultaneously. For example, the Linux Kernel has a main development branch, known as the mainline; 35 branches that maintain older product versions, called stable branches; and hundreds of branches for experimental features. To maintain multiple branch-based product variants in parallel, developers often port new features or bug fixes from one branch to another. In particular, the process of propagating bug fixes or feature additions to an older version is commonly called backporting. Prior to our study, backporting practices in large-scale projects had not been systematically studied, and this lack of empirical knowledge makes it difficult to improve the current backporting process in industry. We hypothesized that cross-branch porting practice is frequent, repetitive, and error-prone, and that it requires significant effort for developers to select the patches that need to be backported and then apply them to the target implementation. We carried out two complementary studies to examine this hypothesis. To investigate the extent and effort of porting practice, this thesis first conducted a quantitative study of backporting activities in the Linux Kernel over a total of 8 years of version history, using the data of the main branch and the 35 stable branches. Our study showed that backporting happened at a rate of 149 changes per month, and it took 51 days on average to propagate patches. 40% of changes in the stable branches were ported from the mainline, and 64% of ported patches propagated to more than one branch. Of all backporting changes from the mainline to stable branches, 97.5% were applied without any manual modification. To understand how Linux Kernel developers keep up to date with development activities across different branches, we carried out an online survey with engineers who may have ported code from the mainline to stable branches, based on our prior analysis of the Linux Kernel version history. We received 14 complete responses. The participants have 12.6 years of Linux development experience on average and are either maintainers or experts of the Linux Kernel. The survey showed that most backporting work was done by maintainers who knew the program quite well. Those experienced maintainers could easily identify the edits that need to be ported and propagate them with all relevant changes to ensure consistency across multiple branches. Inexperienced developers were seldom given an opportunity to backport features or bug fixes to stable branches. In summary, based on the version-history study and the online survey, we concluded that cross-branch porting is frequent, periodic, and repetitive. It requires manual effort to selectively identify the changes that need to be ported, to analyze the dependencies of the selected changes, and to apply all required changes to ensure consistency. To eliminate omission mistakes, most backporting work was done only by experienced maintainers who could identify all relevant changes along with the change that needed to be backported; inexperienced developers were thus excluded from cross-branch porting activities from the mainline to stable branches in the Linux Kernel.
Our results call for an automated approach to identify the patches that need to be ported, to collect context information that helps developers become aware of relevant changes, and to notify the pertinent developers who may be responsible for the corresponding porting events.
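As a sketch of the kind of automation this conclusion calls for, the snippet below lists mainline commits carrying the kernel's conventional "Cc: stable@vger.kernel.org" tag, which is how patches are flagged as candidates for the stable branches. It assumes a local Linux Kernel clone at a hypothetical path and an example revision range; it is only an illustration, not the tooling proposed in the thesis.

```python
# Sketch: list mainline commits flagged as stable-branch candidates in a local kernel clone.
import subprocess

def stable_candidates(repo_path, rev_range="v6.5..v6.6"):   # hypothetical release range
    # Commits tagged "Cc: stable@vger.kernel.org" are marked by their authors for backporting.
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--oneline",
         "--grep=Cc: stable@vger.kernel.org", rev_range],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

if __name__ == "__main__":
    for line in stable_candidates("/path/to/linux"):        # hypothetical clone location
        print(line)
```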
28

Performance Comparison of Cassandra in LXC and Bare metal : Container Virtualization case study

Thiruvallur Vangeepuram, Reventh January 2016 (has links)
Big data is a developing term that describes any large amount of structured and unstructured data that has the potential to be mined for information. Storing such large amounts of data requires cloud storage systems, which are developed to keep the data accessible and available to users over a network. Storing big data also requires new platforms; some of the popular big data platforms are MongoDB, Cassandra, and Hadoop. This thesis uses the Cassandra database system because it is a distributed and open-source database. Cassandra's architecture is a masterless ring design that is easy to set up and maintain. Apache Cassandra is a highly scalable distributed database designed to handle big data management with linear scalability and seamless multiple data-center deployment. It is a NoSQL database system that allows schema-free tables, so a data item can have a variable set of columns, unlike in relational databases. Cassandra provides high scalability with no single point of failure. For the past few years container-based virtualization has been evolving rapidly; this work focuses on container-based virtualization in the form of LXC. Linux Containers (LXC) is an operating-system-level virtualization method for running multiple isolated Linux systems on a single control host. It does not resemble a virtual machine, but provides a virtual environment that has its own CPU, memory, network, and other resources, together with a resource control mechanism. In this thesis the performance of the Apache Cassandra database is analyzed on bare metal and in Linux Containers (LXC), which is the goal of this thesis work. A three-node Cassandra cluster has been created both on bare metal and in Linux Containers, with one node assumed as the seed, and the cassandra-stress utility has been used to load the Cassandra cluster. Port forwarding is the technique used to enable communication with Cassandra inside LXC. The performance metrics that determine the performance of the Cassandra cluster database are selected accordingly, and the network configuration parameters are tuned to the behavior of Cassandra; after these changes Cassandra runs with the required configuration and the cluster performance is analyzed. This is done with different write, read, and mixed load operations and compared with Cassandra cluster performance on bare metal. The results of the thesis present measurements of performance metrics such as CPU utilization, disk throughput, and latency while running the Cassandra cluster on both bare metal and in Linux Containers, together with a quantitative and statistical comparison. The physical resources utilized by the Cassandra database on native bare metal and in Linux Containers (LXC) are similar. According to the results, CPU utilization is higher for the Cassandra database in Linux Containers. Disk throughput is also higher in Linux Containers, except in the case of the 66% write-load operation. Bare metal has lower latency compared to Linux Containers in all scenarios.
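The load-generation step described above could be driven along these lines. This is a hedged sketch rather than the thesis's exact procedure: the node addresses, operation count, and thread count are assumptions, and only the commonly used basic cassandra-stress options are shown.

```python
# Sketch: drive cassandra-stress write/read runs against a three-node cluster (assumed addresses).
import subprocess

NODES = "10.0.0.1,10.0.0.2,10.0.0.3"   # hypothetical node IPs (bare metal, or LXC via port forwarding)

def run_stress(operation, count=1_000_000, threads=50):
    # Basic cassandra-stress invocation; its output includes op rate and latency percentiles.
    cmd = ["cassandra-stress", operation, f"n={count}",
           "-rate", f"threads={threads}", "-node", NODES]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

if __name__ == "__main__":
    for op in ("write", "read"):        # mixed loads would use the tool's `mixed` command
        print(run_stress(op)[-2000:])   # print only the summary section at the end of the output
```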
29

Transparent large-page support for Itanium Linux

Wienand, Ian Raymond, Computer Science & Engineering, Faculty of Engineering, UNSW January 2008 (has links)
The abstraction provided by virtual memory is central to the operation of modern operating systems. Making the most efficient use of the available translation hardware is critical to achieving high performance. The multiple page-size support provided by almost all architectures promises considerable benefits but poses a number of implementation challenges. This thesis presents a minimally-invasive approach to transparent multiple page-size support for Itanium Linux. In particular, it examines the interaction between supporting large pages and Itanium's two inbuilt hardware page-table walkers: one a virtual linear page-table with limited support for storing different page-size translations, the other a more flexible but higher-overhead hash-table-based translation cache. Compared to a single-page-size kernel, a range of benchmarks, generally those with large working sets that stress the TLB, show performance improvements when multiple page sizes are available. However, other benchmarks are negatively impacted. Analysis shows that the increased TLB coverage resulting from the use of large pages frequently does not reduce TLB miss rates sufficiently to make up for the increased cost of TLB reloads. These results, which are specific to the Itanium architecture, suggest that large-page support for Itanium Linux is best enabled selectively, with insight into application behaviour.
30

Handling overruns and underruns in pre-run-time scheduling in hard real-time systems /

Zhang, Lili. January 2003 (has links)
Thesis (M.Sc.)--York University, 2003. Graduate Programme in Computer Science. / Typescript. Includes bibliographical references (leaves 115-117). Also available on the Internet. MODE OF ACCESS via web browser by entering the following URL: http://gateway.proquest.com/openurl?url%5Fver=Z39.88-2004&res%5Fdat=xri:pqdiss&rft%5Fval%5Ffmt=info:ofi/fmt:kev:mtx:dissertation&rft%5Fdat=xri:pqdiss:MQ99408
