  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Improving performance on base stations by improving spatial locality in caches / Förbättra prestanda på basstationer genom att öka rumslokaliteten i cachen

Carlsson, Jonas January 2016 (has links)
Real-time systems such as base stations must meet strict time constraints to operate smoothly. This means that mechanisms like caches, which introduce stochastic timing variation, normally cannot be added. Ericsson, however, wants to add caches both for the potential performance gains and for the automatic loading of functions. As it stands, Ericsson can only use direct-mapped caches, and the chance of cache misses on the base stations is large. We have investigated whether this randomness can be reduced by placing code carefully in the common memory, basing the new placement on logs from earlier runs. Two heuristic approaches were evaluated: the first developed by Pettis & Hansen and the second by Gloy & Smith. We also discuss a third alternative by Hashemi, Kaeli & Calder (HKC), which was not tested. However, the results show no practical improvement from using code placement strategies.
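The log-guided placement heuristics mentioned above work roughly by grouping functions that call each other often. Below is a minimal sketch in the spirit of Pettis & Hansen's "closest is best" chain merging; the function names and call counts are invented for illustration, and the real heuristic also considers chain reversals and cache-line alignment.

```python
# Toy sketch of Pettis & Hansen style procedure layout: greedily merge the
# two chains joined by the hottest call edge so that frequently interacting
# functions end up adjacent in memory. (Illustrative only.)

def pettis_hansen_layout(edges):
    """edges: dict mapping (caller, callee) -> observed call count."""
    # Each function starts in its own chain.
    chains = {}
    for a, b in edges:
        chains.setdefault(a, [a])
        chains.setdefault(b, [b])

    # Process call edges from hottest to coldest.
    for (a, b), _w in sorted(edges.items(), key=lambda kv: -kv[1]):
        ca, cb = chains[a], chains[b]
        if ca is cb:
            continue  # already placed in the same chain
        merged = ca + cb  # simple concatenation; PH also tries reversals
        for f in merged:
            chains[f] = merged

    # Collect distinct chains in order of first appearance.
    layout, seen = [], set()
    for f in chains:
        if id(chains[f]) not in seen:
            seen.add(id(chains[f]))
            layout.extend(chains[f])
    return layout

edges = {("main", "decode"): 900, ("decode", "dsp"): 700, ("main", "log"): 10}
print(pettis_hansen_layout(edges))
```

The intent is that the hot `decode -> dsp` pair lands contiguously, improving spatial locality in a direct-mapped cache.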
82

Performance impacts when moving from a VM-based solution to a container-based solution

Muchow, Nicklas; Amir Jalali, Danial January 2022 (has links)
Container-based solutions are increasing in popularity, and thus more companies gravitate towards them. However, as systems grow larger and more complex, there is a general need to introduce container orchestration to manage the increase in containers. While adopting these technologies, Ericsson has noticed some increase in CPU usage when switching from a VM-based solution to a container-based solution with Kubernetes. This paper therefore focuses on identifying the factors that may impact CPU usage in this kind of scenario. A literature review was performed to identify potential factors, and an experiment was conducted to determine their impact on CPU usage. The results show that factors such as the number of Pods in a request chain, the message size between Pods, and where Pods are located in a Kubernetes cluster may impact the CPU usage of a container-based system using Kubernetes. The number of Pods in the request chain and the message size between Pods had the largest impact, leading to the conclusion that network I/O is the prime factor to examine when ensuring that a container-based solution performs as well as possible.
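The conclusion about chain length and message size can be illustrated with a toy cost model: each hop in the request chain pays a fixed per-request overhead plus a per-byte serialization and network-stack cost. The constants below are invented for illustration and are not measurements from the paper.

```python
# Toy model (invented constants) of why pod-chain length and message size
# drive CPU in a containerized request chain: cost grows linearly with the
# number of hops and with the bytes (de)serialized at each hop.

def chain_cpu_cost(num_pods, msg_bytes, per_hop_us=50.0, per_byte_us=0.002):
    """Approximate CPU microseconds spent per request traversing the chain."""
    hops = num_pods - 1  # messages passed along the chain
    return hops * (per_hop_us + msg_bytes * per_byte_us)

print(chain_cpu_cost(2, 1_000))    # short chain, small message
print(chain_cpu_cost(6, 100_000))  # long chain, large message: far costlier
```

Even this crude model shows the multiplicative effect the experiment observed: doubling both chain length and message size more than doubles the per-request CPU cost.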
83

Adaptive Hierarchical Scheduling Framework for Real-Time Systems

Khalilzad, Nima January 2013 (has links)
Modern computer systems are often designed to play a multipurpose role and are therefore capable of running a number of software tasks (software programs) simultaneously. These tasks must share the processor such that all of them run and finish their computations as expected. Moreover, a number of software tasks have timing requirements: they should not only access the processing unit, but do so in a timely manner. Thus, the processor must be shared in a timely fashion among different software programs (applications). This time-sharing is often realized by assigning a fixed, predefined processor time-portion to each application. However, there exists a group of applications where i) the processor demand changes over a wide range during run-time, and/or ii) occasional timing violations can be tolerated. For systems containing such applications, fixed processor time-portions are inefficient: if we allocate the processor based on each application's maximum resource demand, the processor's computing capacity is wasted during the intervals where the applications require less than that maximum. To this end, this thesis proposes adaptive processor time-portion assignments. In our adaptive scheme, at each point in time, we monitor the actual demand of the applications and provide a sufficient processor time-portion to each one. In doing so, we are able to integrate more applications on a shared, resource-constrained system while still providing the applications with timing guarantees.
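The adaptive idea can be sketched as follows: each period, set an application's budget from its recently observed demand instead of a fixed worst-case reservation. The window size, safety margin and numbers below are illustrative assumptions, not the controller from the thesis.

```python
# Minimal sketch (not the thesis's controller) of adaptive processor-share
# assignment: the budget tracks a sliding window of observed demand, with a
# small safety margin, capped at the total available capacity.

def adapt_budget(demand_history, margin=1.2, max_budget=100.0):
    """Return a budget slightly above the recent average observed demand."""
    recent = demand_history[-5:]  # sliding window of observed demand samples
    avg = sum(recent) / len(recent)
    return min(avg * margin, max_budget)

demands = [10, 12, 11, 40, 42]  # the application's demand jumps mid-run
print(adapt_budget(demands))    # budget rises to follow the new demand
```

Compared with reserving the worst-case demand permanently, the freed capacity during low-demand intervals can be given to other applications, which is what allows more applications to be integrated on one system.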
84

Analysis and Synthesis of Boolean Networks

Liu, Ming January 2015 (has links)
In this thesis, we present techniques and algorithms for the analysis and synthesis of synchronous Boolean and multiple-valued networks, a discrete-space, discrete-time model of gene regulatory networks. Their cycles of states, called attractors, are believed to give a good indication of the possible functional modes of the system, which motivates research on algorithms for finding attractors. Existing decision diagram-based approaches have limited capacity due to the excessive memory requirements of decision diagrams. Simulation-based approaches can be applied to large networks; however, their results are incomplete. In the first part of this thesis, we present an algorithm that uses a SAT-based bounded model checking approach to find all attractors in a multiple-valued network. The efficiency of the presented algorithm is evaluated by analysing 30 network models of real biological processes as well as 35,000 randomly generated 4-valued networks. The results show that our algorithm has the potential to handle models an order of magnitude larger than currently possible. One of the characteristic features of genetic regulatory networks is their inherent robustness, that is, their ability to retain functionality in spite of the introduction of random faults. In the second part of this thesis, we focus on the robustness of a special kind of Boolean networks called Balanced Boolean Networks (BBNs). We formalize the notion of robustness and introduce a method to construct BBNs for 2-singleton-attractor Boolean networks. The experimental results show that BBNs are capable of tolerating single stuck-at faults. Our method improves the robustness of random Boolean networks by at least 13% on average and, in some special cases, up to 61%. In the third part of this thesis, we focus on a special type of synchronous Boolean networks, namely Feedback Shift Registers (FSRs).
FSR-based filter generators are used as a basic building block in many cryptographic systems, e.g. stream ciphers. Filter generators are popular because their well-defined mathematical description enables a detailed formal security analysis. We show how to modify a filter generator into a nonlinear FSR which is faster, but slightly larger, than the original filter generator. For example, the propagation delay can be reduced 1.54 times at the expense of 1.27% extra area. The presented method may be important for applications that require very high data rates, e.g. 5G mobile communication technology. In the fourth part of this thesis, we present a new method for detecting and correcting transient faults in FSRs based on duplication and parity checking. Periodic fault detection of functional circuits is very important for cryptographic systems because a random hardware fault can compromise their security. The presented method is more reliable than Triple Modular Redundancy (TMR) for large FSRs, while the area overhead of the two approaches is comparable. The presented approach may be important for cryptographic systems using large FSRs.
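To illustrate what an attractor is, here is a brute-force search over a tiny, invented 3-node synchronous Boolean network. The thesis uses SAT-based bounded model checking precisely because this kind of exhaustive state-space walk does not scale; the update rules below are arbitrary examples.

```python
# Exhaustively simulate a 3-node synchronous Boolean network (invented
# update rules) from every initial state until states repeat; the repeating
# cycle reached from each start is an attractor.
from itertools import product

def step(state):
    a, b, c = state
    return (b, a and c, not a)  # all nodes update simultaneously

def find_attractors():
    attractors = set()
    for start in product([False, True], repeat=3):
        seen, s = {}, start
        while s not in seen:
            seen[s] = len(seen)  # record visit order
            s = step(s)
        cycle_start = seen[s]    # first repeated state closes the cycle
        trajectory = [t for t, _ in sorted(seen.items(), key=lambda kv: kv[1])]
        cycle = trajectory[cycle_start:]
        # Canonicalize so rotations of the same cycle compare equal.
        k = cycle.index(min(cycle))
        attractors.add(tuple(cycle[k:] + cycle[:k]))
    return attractors

for att in sorted(find_attractors()):
    print(att)
```

This toy network has one fixed-point attractor and one 2-state cycle; a SAT-based approach finds the same objects without enumerating all 2^n states.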
85

Wi-Fi analytics tool for outdoor areas

Persson, Robin January 2022 (has links)
Planning wireless networks over large outdoor areas, and ensuring their coverage remains complete as the environment changes, is hard without an analytics tool. Current applications on the market are aimed at indoor use in small areas and assume that the user carries a phone or laptop while performing an active logging session. Allowing GPS units or in-vehicle computers to passively log network and GPS diagnostic data in the background, to be sent for analysis at another location, would give users a higher quantity of data over a longer time while requiring less active work from the industry. During this thesis, a web application was developed that can display a range of network- and GPS-related metrics on top of a world map. Together with an open import format that enables any type of GPS unit or network card to produce data, this resulted in a flexible application that users can employ to improve their networks.
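The abstract does not specify the open import format. As a purely hypothetical illustration of what such a device-agnostic format could look like, a CSV of timestamped GPS fixes with a signal reading would let any unit log passively and ship the file for later analysis; the field names and values below are invented.

```python
# Hypothetical line-based import format (NOT the thesis's actual format):
# one sample per line with a timestamp, a GPS fix, and a signal metric.
import csv
import io

SAMPLE_LOG = """timestamp,lat,lon,rssi_dbm
1650000000,58.3950,15.5700,-52
1650000005,58.3951,15.5703,-67
"""

def parse_samples(text):
    """Parse the CSV log into a list of typed sample records."""
    return [
        {"t": int(r["timestamp"]), "lat": float(r["lat"]),
         "lon": float(r["lon"]), "rssi": int(r["rssi_dbm"])}
        for r in csv.DictReader(io.StringIO(text))
    ]

samples = parse_samples(SAMPLE_LOG)
print(len(samples), samples[0]["rssi"])
```

The appeal of such a format is that any GPS unit or network card that can write text lines can produce it, matching the paper's goal of passive background logging.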
86

System Architecture for Positioned Data Collection : An investigation and implementation of a system architecture for positioned data collection with focus on indoor environments and Android smartphones.

Royo, Adrian January 2021 (has links)
With the location-based service market estimated to increase drastically in value, to over 77 billion dollars in 2021, novel approaches to amass and combine data are being explored. One such approach is the collection of positioned data (PD), which consists of data gathered from radio signals associated with ground truth positions (GTP). This type of PD can benefit applications such as spatial network analysis or serve as supportive data for positioning algorithms. In this thesis, we investigate how such PD can be collected, managed and stored in an effective manner regardless of environment. As a means to investigate this, we have proposed a positioned data collection (PDC) system architecture. The proposed PDC system architecture has been designed based on documentation related to six different PDC-related systems, the ADD method, the ATAM method, the three-tier architecture pattern and a proposed PDC system definition. Parts of the proposed architecture were chosen for implementation and testing, namely those designed to collect PD within indoor environments, as these are more scientifically interesting than outdoor environments. The results gathered from the tests showed that the implemented PDC system parts worked as intended, successfully associating radio signal data values with both local and geographical GTP. Ways of altering the association between radio signal data and GTP were also explored and tested, with the most prominent alteration approach being spatial filtration. Both the proposed architecture and the results gathered from testing the implemented parts were assessed by stakeholders. The thesis work was generally well received by the stakeholders, meeting little criticism and providing valuable insights.
87

Efficient and Adaptive Content Delivery of Linear and Interactive Branched Videos

Krishnamoorthi, Vengatanathan January 2016 (has links)
Video streaming over the Internet has gained tremendous popularity over recent years and currently constitutes the majority of Internet traffic. The on-demand delivery of high quality video streaming has been enabled by a combination of consistent improvements in residential download speeds, HTTP-based Adaptive Streaming (HAS), extensive content caching, and the use of Content Distribution Networks (CDNs). However, as large-scale on-demand streaming is gaining popularity, several important questions and challenges remain unanswered, including determining how the infrastructure can best be leveraged to provide users with the best possible playback experience. In addition, it is important to develop new techniques and protocols that facilitate the next generation of streaming applications. Innovative services such as interactive branched streaming are gaining popularity and are expected to be the next big thing in on-demand entertainment. The major contributions of this thesis are in the area of efficient content delivery of video streams using HAS. To address the two challenges above, the work utilizes a combination of different methods and tools, ranging from real-world measurements, characterization of system performance, proof-of-concept implementations, protocol optimization, and evaluation under realistic environments. First, through careful experiments, we evaluate the performance impact and interaction of HAS clients with proxy caches. Having studied the typical interactions between HAS clients and caches, we then design and evaluate content-aware policies to be used by the proxy caches, which parse the client requests and prefetch the chunks that are most likely to be requested next. In addition, we also design cooperative policies in which clients and proxies share information about the playback session. 
Our evaluations reveal that, in general, the bottleneck location and network conditions play central roles in which policy choices are most advantageous, and that the location of the bottlenecks significantly impacts the relative performance differences between policy classes. We also show that careful design and policy selection are important when trying to enhance HAS performance using proxy assistance. Second, this thesis proposes, models, designs, and evaluates novel streaming applications such as interactive branched videos, in which users can influence the content that is being shown to them. We design and evaluate careful prefetching policies that provide seamless playback even when users defer their path choices to the last possible moment. We derive optimized prefetching policies using an optimization framework, design and implement effective buffer management techniques for seamless playback at branch points, and use parallel TCP connections to achieve efficient buffer workahead. Through performance evaluations, we show that our policies can effectively prefetch data of carefully adapted qualities along multiple alternative paths so as to ensure seamless playback, offering users a pleasant viewing experience without playback interruptions. (Note: the series title Linköping Studies in Science and Technology Licentiate Thesis is incorrect; the correct series title is Linköping Studies in Science and Technology Thesis.)
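The branch-point prefetching idea can be sketched as a budget split across the upcoming alternatives: lower the quality as needed so that whichever path the user picks already has data buffered. The budget, branch counts and quality ladder below are invented numbers, not the thesis's optimization framework.

```python
# Toy sketch of prefetching before a branch point in an interactive video:
# divide the prefetch budget evenly across all alternative branches and pick
# the highest quality on the (invented) ladder that fits per branch.

def plan_prefetch(budget_kb, branches, qualities=(4000, 2000, 1000)):
    """Return a per-branch quality (kB needed per branch segment)."""
    per_branch = budget_kb / branches
    for q in qualities:              # try qualities from highest to lowest
        if q <= per_branch:
            return {f"branch_{i}": q for i in range(branches)}
    # Budget too small even for the lowest quality: prefetch it anyway.
    return {f"branch_{i}": qualities[-1] for i in range(branches)}

print(plan_prefetch(6000, 2))  # two alternatives: mid quality fits both
print(plan_prefetch(6000, 4))  # four alternatives: drop to low quality
```

This captures the trade-off the thesis optimizes: more alternatives buffered means lower quality per alternative, but no stall when the user defers the choice to the last moment.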
88

A Control-based Approach for Self-adaptive Software Systems with Formal Guarantees

Shevtsov, Stepan January 2017 (has links)
No description available.
89

Supporting Enactment of Aspect Oriented Business Process Models : an approach to separate cross-cutting concerns in action

Jalali, Amin January 2013 (has links)
Coping with complexity in Information Systems and Software Engineering is an important issue in both research and industry. One strategy to deal with this complexity is the separation of concerns, which can reduce complexity, improve re-usability, and simplify evolution. Separation of concerns can be addressed through the Aspect Oriented paradigm. Although this paradigm has been well researched in the field of programming, it is still at a preliminary stage in the area of Business Process Management. While some efforts have been made to propose aspect orientation for business process modeling, it has not yet been investigated how these models should be implemented, configured, run, and adjusted. This gap has restrained the enactment of aspect oriented business process models in practice. Therefore, this research enables the enactment of such models to support the separation of cross-cutting concerns across the entire business process management life-cycle. It starts by defining the operational semantics for the Aspect Oriented extension of the Business Process Model and Notation. The semantics specifies how such models can be implemented and configured, and can be used as a blueprint to support the enactment of aspect oriented business process models. The semantics is implemented in the form of artifacts, which are then used in a banking case study to investigate the current modeling technique. This investigation revealed new requirements that should be considered in aspect oriented modeling approaches. Thus, the current modeling notation has been extended to cover the new requirements. The extended notation has been formalized and investigated by re-modeling the processes in the case study. The results from this investigation show the need to refine the separation rules to support the encapsulation of aspects based on different business process perspectives.
Therefore, a new refinement is proposed, formalized, and implemented. The implementation is then used as a prototype to evaluate the result through a case study.
90

Factors affecting the use of data mining in Mozambique : Towards a framework to facilitate the use of data mining

Sotomane, Constantino January 2014 (has links)
Advances in technology have enabled organizations to collect a variety of data at high speed and provided the capacity to store them. As a result, the amount of data available is increasing daily at a rapid rate. The data stored in organizations hold important information to improve decision making and gain competitive advantage. To extract useful information from these huge amounts of data, special techniques such as data mining are required. Data mining is a technique capable of extracting useful knowledge from vast amounts of data. The successful application of data mining in organizations depends on several factors that may vary in relation to the environment. In Mozambique, these factors have never been studied. Studying the factors affecting the use of data mining is important to determine which aspects require special attention for data mining to be applied successfully. This thesis presents a study of the level of awareness and use of data mining in Mozambique and the factors affecting its use. It is a step towards the development of a framework to facilitate the application of data mining in Mozambique. The study is exploratory and uses multiple case studies in two institutions in Maputo city, the capital of Mozambique, one in the area of agriculture and the other in the field of electricity, and of Maputo city more broadly. The study involved a combination of observations, focus group discussions and enquiries directed at managers and practitioners on aspects of information technology (IT) and data analysis. The results of the study reveal that the level of awareness and use of data mining in Mozambique is still very weak; only a limited number of IT professionals are aware of the concept or its uses.
The main factors affecting the use of data mining in Mozambique are: the quality, availability and integration of, and access to, data; skill in data mining; functional integration; alignment of IT and business; interdisciplinary learning; existence of champions; commitment of top management; existence of change management; privacy; cost; and the availability of technology. Three applications were developed in two real settings, which showed that there are problems to be solved with data mining. The two examples in the area of electricity demonstrate how data mining is used to develop models to forecast electricity consumption and how they can enhance the estimation of electricity to be sold on the international market. The application in the area of agriculture extracts associations between the characteristics of small farmers and the yield of maize from a socioeconomic database with hundreds of attributes. The applications provide practical examples of how data mining can help discover patterns that lead to more accurate models and find interesting associations between variables in a dataset. The factors identified in this thesis can be used to determine the feasibility of data mining projects and to ensure their success.
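The kind of association the agriculture application extracts can be illustrated with the classic support and confidence measures from association rule mining, here over a tiny, invented set of farmer records (the real study uses a socioeconomic database with hundreds of attributes).

```python
# Toy illustration (invented data) of association extraction: support and
# confidence of a rule "antecedent -> consequent" over farmer records.

def rule_stats(records, antecedent, consequent):
    """Support = P(A and C); confidence = P(C | A)."""
    n = len(records)
    both = sum(1 for r in records if antecedent(r) and consequent(r))
    ante = sum(1 for r in records if antecedent(r))
    support = both / n
    confidence = both / ante if ante else 0.0
    return support, confidence

farmers = [
    {"irrigated": True,  "high_yield": True},
    {"irrigated": True,  "high_yield": True},
    {"irrigated": True,  "high_yield": False},
    {"irrigated": False, "high_yield": False},
]
s, c = rule_stats(farmers, lambda r: r["irrigated"], lambda r: r["high_yield"])
print(s, c)  # how often the rule holds overall, and given irrigation
```

Rules with high support and confidence are the "interesting associations between variables" the abstract refers to; real miners enumerate many candidate antecedents rather than testing one by hand.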
