761
Quality of Service Analysis for CloudMAC. Vestin, Jonathan (January 2014)
Software Defined Networking (SDN) is a relatively recent technology that has seen a rapid rise in popularity. OpenFlow is a very popular SDN technology, but it lacks support for Quality of Service (QoS) options. Open vSwitch, a common software switch that supports OpenFlow, is used in CloudMAC, another recent technology in wireless networking. Testing of CloudMAC has shown that it suffers from a low connection success rate. This thesis investigates the cause of the connection failures and develops a solution using traffic control options in Open vSwitch. Furthermore, additional traffic control options were added to Open vSwitch in order to evaluate the effectiveness of different traffic control algorithms. In a standard network the connection success rate was 99% without cross-traffic but dropped to 43% when cross-traffic was present; with traffic control in place, the success rate returned to 99%.
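The abstract does not reproduce the actual Open vSwitch configuration used in the thesis. As a purely illustrative sketch of the kind of traffic control algorithm involved, the Python snippet below implements a token-bucket rate limiter that decides whether a frame from a flow we want to protect (assumed here, for illustration only, to be CloudMAC's tunnelled traffic) may be sent immediately; all names and parameter values are assumptions, not the thesis's implementation.

```python
import time

class TokenBucket:
    """Simple token-bucket shaper: 'rate' tokens (bytes) per second, burst up to 'capacity' bytes."""
    def __init__(self, rate_bytes_per_s, capacity_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = capacity_bytes
        self.tokens = capacity_bytes
        self.last = time.monotonic()

    def allow(self, frame_len):
        now = time.monotonic()
        # Refill tokens according to elapsed time, up to the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if frame_len <= self.tokens:
            self.tokens -= frame_len
            return True   # frame conforms: send immediately
        return False      # frame exceeds the budget: queue or drop it

# Hypothetical usage: give the protected traffic its own generous bucket so that
# cross-traffic cannot starve it.
mgmt_bucket = TokenBucket(rate_bytes_per_s=125_000, capacity_bytes=10_000)
print(mgmt_bucket.allow(500))  # True while the bucket holds enough tokens
```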
762
Methods for Detecting Unsolvable Planning Instances using Variable Projection. Ståhlberg, Simon (January 2017)
In this thesis we study automated planning, a branch of artificial intelligence that deals with the construction of plans. A plan is typically an action sequence that achieves some specific goal. In particular, we study unsolvable planning instances, i.e. instances for which no plan exists. Historically, this topic has been neglected by the planning community, and until recently the International Planning Competition only evaluated planners on solvable planning instances. For many applications we may know, e.g. by design, that there is a solution, but this cannot be a general assumption. One example is penetration testing in computer security, where a system is considered safe if there is no plan for intrusion. Other examples are resource-bounded planning instances that have insufficient resources to achieve the goal. The main theme of this thesis is the use of variable projection to prove unsolvability of planning instances. We implement and evaluate two planners: the first checks variable projections with the goal of finding an unsolvable projection, and the second builds a pattern collection to provide dead-end detection. In addition to comparing the planners to existing planners, we also utilise a large computer cluster to statistically assess whether they can be optimised further. On the benchmarks of planning instances that we used, it turns out that further improvement is likely to come from supplementary techniques rather than optimisation. We pursued this and enhanced variable projections with mutexes, which yielded a very competitive planner. We also inspect whether unsolvable variable projections tend to be composed of variables that play different roles, i.e. variables that are not 'similar'. We devise a variable similarity measure that rates how similar two variables are on a scale, and analyse it statistically. The measure can differentiate between unsolvable and solvable planning instances quite well, and is integrated into our planners. We also define a binary version of the measure: two variables are isomorphic if they behave exactly the same in some optimal solution (i.e. they are extremely similar). With the help of isomorphic variables we identify a computationally tractable class of planning instances that meet certain restrictions. There are several special cases of this class that are of practical interest, and this result encompasses them.
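The abstract does not spell out how a projection is checked, but the underlying idea can be sketched as follows: project the planning task onto a subset of its state variables and explore the much smaller projected state space; if the projected goal is unreachable, the original instance is unsolvable, because projection only adds behaviour. The sketch below assumes a simple SAS+-style encoding (operators as precondition/effect dictionaries); the data format and the toy task are assumptions for illustration, not the planners' actual input language.

```python
from collections import deque

def project(assignment, pattern):
    """Restrict a dict {variable: value} to the variables in 'pattern'."""
    return {v: val for v, val in assignment.items() if v in pattern}

def projected_goal_reachable(operators, init, goal, pattern):
    """
    operators: list of (precondition, effect) pairs, each a dict {variable: value}
    init: full initial state; goal: partial goal assignment; pattern: variable subset.
    If this returns False, the original task has no plan either, because the
    projection can only add behaviour (preconditions on dropped variables vanish).
    """
    start = frozenset(project(init, pattern).items())
    goal_p = project(goal, pattern)
    seen, queue = {start}, deque([start])
    while queue:
        state = dict(queue.popleft())
        if all(state.get(v) == val for v, val in goal_p.items()):
            return True   # goal reachable in the projection: no conclusion
        for pre, eff in operators:
            if all(state.get(v) == val for v, val in project(pre, pattern).items()):
                succ = dict(state)
                succ.update(project(eff, pattern))
                key = frozenset(succ.items())
                if key not in seen:
                    seen.add(key)
                    queue.append(key)
    return False          # projected goal unreachable => instance unsolvable

# Hypothetical toy task: the goal requires holding a key, but no operator ever grants it.
ops = [({"door": "locked"}, {"robot": "at_door"})]
init = {"robot": "start", "door": "locked", "key": "not_held"}
goal = {"key": "held"}
print(projected_goal_reachable(ops, init, goal, pattern={"key"}))  # False -> unsolvable
```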
763
A Constraint-Based Approach for Hybrid Reasoning in Robotics. Mansouri, Masoumeh (January 2016)
Despite great advances in both research areas, the quest of AI and Robotics researchers to realize fully AI-driven integrated robotic systems has not yet succeeded. This thesis claims that one of the major hindrances is the lack of attention to what we call “the hybrid reasoning problem”: the problem of jointly reasoning about heterogeneous and inter-dependent aspects of the world, expressed in different forms and at different levels of abstraction. In this thesis, we propose an approach to hybrid reasoning (or integrated reasoning) for robot applications. Our approach constitutes a systematic way of achieving a domain-specific integration of reasoning capabilities. Its underpinning is to jointly reason about the sub-problems of an overall hybrid problem in the combined search space of mutual decisions. Each sub-problem represents one viewpoint, or type of requirement, that is meaningful in the particular application. We propose a Constraint Satisfaction Problem (CSP) formulation of the hybrid reasoning problem. This CSP, called the meta-CSP, captures the dependencies between sub-problems and constitutes a high-level representation of the (hybrid) requirements that define a particular application. We formalize the meta-CSP in a way that is independent of the viewpoints that are relevant in the application, as is the algorithm used for solving the meta-CSP. In order to verify the applicability of the meta-CSP approach in real-world robot applications, we instantiate it in several different domains, namely a waiter robot, an automated industrial fleet management application, and a drill pattern planning problem in open-pit mining. These realizations highlight the important features of the approach, namely modularity, generality, online reasoning and solution adjustment, and the ability to account for domain-specific metric and symbolic knowledge.
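The abstract describes the meta-CSP only at a high level. A minimal sketch of the underlying idea, jointly searching over the decisions of several "viewpoints" where each viewpoint can veto a partial assignment, might look like the following; the viewpoints, variables, and consistency checks are invented for illustration and are not the constraint-based architecture used in the thesis.

```python
def solve_meta_csp(meta_variables, viewpoints, assignment=None):
    """
    meta_variables: {name: list of candidate decisions}
    viewpoints:     list of functions f(partial_assignment) -> bool, each one
                    checking consistency from one point of view (e.g. temporal,
                    spatial, resource-related).
    Returns a complete assignment accepted by every viewpoint, or None.
    """
    assignment = assignment or {}
    if len(assignment) == len(meta_variables):
        return assignment
    # Pick the next unassigned meta-variable.
    var = next(v for v in meta_variables if v not in assignment)
    for decision in meta_variables[var]:
        candidate = {**assignment, var: decision}
        # Every viewpoint must accept the joint partial assignment.
        if all(check(candidate) for check in viewpoints):
            result = solve_meta_csp(meta_variables, viewpoints, candidate)
            if result is not None:
                return result
    return None  # backtrack

# Hypothetical toy instance: a waiter robot choosing a table and a time slot.
meta_vars = {"table": ["t1", "t2"], "slot": [1, 2, 3]}
temporal_ok = lambda a: a.get("slot") != 1            # slot 1 already taken
spatial_ok  = lambda a: not (a.get("table") == "t1" and a.get("slot") == 2)
print(solve_meta_csp(meta_vars, [temporal_ok, spatial_ok]))
```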
764
Algorithms for aggregate information extraction from sequences. Bengtsson, Fredrik (January 2007)
In this thesis, we propose efficient algorithms for aggregate information extraction from sequences and multidimensional arrays. The proposed algorithms are applicable in several important areas, including large databases and DNA sequence segmentation. We first study the problem of efficiently computing, for a given range, the range-sum in a multidimensional array, as well as computing the k maximum values, called the top-k values. We design two efficient data structures for these problems. For the range-sum problem, our structure supports fast updates while preserving a low complexity for range-sum queries. The proposed top-k structure answers queries in time linear in the sum of the sizes of the two-dimensional query region. We also study the k maximum sum subsequences problem, in which the k subsegments of consecutive elements with the largest sums are to be found, and develop several efficient algorithms for it. The segments can potentially overlap, which allows for a large number of candidate segments. Moreover, we design an optimal algorithm for ranking the k maximum sum subsequences; this solution does not require the value of k to be known a priori. Furthermore, an optimal linear-time algorithm is developed for the maximum cover problem of finding k subsequences of consecutive elements of maximum total element sum.
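As a concrete illustration of the k maximum sum subsequences problem, the brute-force sketch below enumerates every subsegment via prefix sums and keeps the k largest with a heap. It runs in O(n^2 log k) time, so it is only a reference implementation for checking results, not one of the optimal algorithms developed in the thesis.

```python
import heapq
from itertools import accumulate

def k_max_sum_subsequences(seq, k):
    """Return the k largest sums of contiguous subsegments of 'seq',
    each as (sum, start_index, end_index) with inclusive bounds."""
    prefix = [0] + list(accumulate(seq))  # prefix[j] = sum of seq[:j]
    heap = []  # min-heap holding the k best candidates seen so far
    for i in range(len(seq)):
        for j in range(i, len(seq)):
            candidate = (prefix[j + 1] - prefix[i], i, j)
            if len(heap) < k:
                heapq.heappush(heap, candidate)
            elif candidate > heap[0]:
                heapq.heapreplace(heap, candidate)
    return sorted(heap, reverse=True)

print(k_max_sum_subsequences([2, -3, 4, -1, 2, -5, 3], k=3))
# The best segment is seq[2..4] = [4, -1, 2] with sum 5; note that the
# reported segments may overlap, as the abstract points out.
```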
765
On large-scale neural simulations and applications in neuroinformatics. Benjaminsson, Simon (January 2013)
This thesis consists of three parts related to the in silico study of the brain: technologies for large-scale neural simulations, neural algorithms and models, and applications in large-scale data analysis in neuroinformatics. All parts rely on the use of supercomputers. A large-scale neural simulator is developed in which techniques are explored for the simulation, analysis and visualization of neural systems at a high level of biological abstraction. The performance of the simulator is investigated on some of the largest supercomputers available. Neural algorithms and models at a high level of biological abstraction are presented and simulated. Firstly, an algorithm for structural plasticity is suggested which can set up the connectivity and response properties of neural units from the statistics of the incoming sensory data. This can be used to construct biologically inspired hierarchical sensory pathways. Secondly, a model of the mammalian olfactory system is presented in which we suggest a mechanism for mixture segmentation based on adaptation in the olfactory cortex. Thirdly, a hierarchical model is presented which uses top-down activity to shape sensory representations and which can encode temporal history in the spatial representations of populations. Brain-inspired algorithms and methods are then applied to two neuroinformatics applications involving large-scale data analysis. In the first application, we present a way to extract resting-state networks from functional magnetic resonance imaging (fMRI) resting-state data in which the final extraction step is computationally inexpensive, allowing for rapid exploration of the statistics in large datasets and their visualization at different spatial scales. In the second application, a method is presented to estimate the radioactivity level in arterial plasma from blood vessels segmented in positron emission tomography (PET) images. The method outperforms previously reported methods to a degree where it can partly remove the need for invasive arterial cannulation and continuous sampling of arterial blood during PET imaging. In conclusion, this thesis provides insights into technologies for simulating large-scale neural models on supercomputers, into their use for studying mechanisms behind the formation of neural representations and functions in hierarchical sensory pathways using models at a high level of biological abstraction, and into the use of large-scale, fine-grained data analysis in neuroinformatics applications.
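The structural-plasticity idea, units acquiring connectivity and response properties from the statistics of incoming data, can be illustrated with a very small self-organization scheme. The competitive-learning rule below is a generic stand-in chosen for brevity, not the algorithm developed in the thesis; the data and parameters are invented for the example.

```python
import numpy as np

def competitive_learning(inputs, n_units, lr=0.05, seed=0):
    """Let 'n_units' units self-organize so that each responds to one cluster of
    the input statistics (a toy stand-in for structural plasticity)."""
    rng = np.random.default_rng(seed)
    # Initialize each unit's receptive field to a randomly chosen input sample.
    weights = inputs[rng.choice(len(inputs), n_units, replace=False)].astype(float)
    for x in inputs:
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
        weights[winner] += lr * (x - weights[winner])             # move it toward the input
    return weights

# Hypothetical sensory stream drawn from two clusters; after training, each unit
# ends up responding to one of them.
rng = np.random.default_rng(1)
stream = np.vstack([rng.normal(0.0, 0.1, (200, 2)), rng.normal(1.0, 0.1, (200, 2))])
rng.shuffle(stream)
print(competitive_learning(stream, n_units=2).round(2))
```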
766
System-Level Analysis and Design under Uncertainty. Ukhov, Ivan (January 2017)
One major problem for the designer of electronic systems is the presence of uncertainty, which is due to phenomena such as process and workload variation. Very often, uncertainty is inherent and inevitable. If ignored, it can lead to degradation of the quality of service in the best case and to severe faults or burnt silicon in the worst case. Thus, it is crucial to analyze uncertainty and to mitigate its damaging consequences by designing electronic systems in such a way that uncertainty is effectively and efficiently taken into account. We begin by considering techniques for deterministic system-level analysis and design of certain aspects of electronic systems. These techniques do not take uncertainty into account, but they serve as a solid foundation for those that do. Our attention revolves primarily around power and temperature, as they are of central importance for attaining robustness and energy efficiency. We develop a novel approach to dynamic steady-state temperature analysis of electronic systems and apply it in the context of reliability optimization. We then proceed to develop techniques that address uncertainty. The first technique is designed to quantify the variability in process parameters, which is induced by process variation, across silicon wafers based on indirect and potentially incomplete and noisy measurements. The second technique is designed to study diverse system-level characteristics with respect to the variability originating from process variation. In particular, it allows for analyzing transient temperature profiles as well as dynamic steady-state temperature profiles of electronic systems. This is illustrated by considering a problem of design-space exploration with probabilistic constraints related to reliability. The third technique that we develop is designed to efficiently tackle the case of sources of uncertainty that are less regular than process variation, such as workload variation. This technique is exemplified by analyzing the effect that workload units with uncertain processing times have on the timing-, power-, and temperature-related characteristics of the system under consideration. We also address the issue of runtime management of electronic systems that are subject to uncertainty. In this context, we perform an early investigation into the utility of advanced prediction techniques for the purpose of fine-grained long-range forecasting of resource usage in large computer systems. All the proposed techniques are assessed by extensive experimental evaluations, which demonstrate the superior performance of our approaches to analysis and design of electronic systems compared to existing techniques.
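As a very small illustration of what taking uncertainty into account can mean at the system level, the sketch below propagates variability in a process parameter through a toy leakage-power and steady-state-temperature model by Monte Carlo sampling and reports a quantile that a designer could check against a temperature constraint. The model, the numbers, and the plain Monte Carlo approach are illustrative assumptions only; the thesis develops far more efficient probabilistic techniques.

```python
import numpy as np

def mc_temperature_quantile(n_samples=100_000, quantile=0.99, seed=0):
    """Propagate process-parameter uncertainty to a chip-temperature estimate
    using a deliberately simplistic power/thermal model (illustration only)."""
    rng = np.random.default_rng(seed)
    # Process variation: effective channel length varies a few percent around nominal.
    l_eff = rng.normal(loc=1.0, scale=0.03, size=n_samples)   # normalized length
    p_dynamic = 10.0                                          # watts, assumed fixed
    p_leakage = 2.0 * np.exp(-5.0 * (l_eff - 1.0))            # shorter channel -> more leakage
    # Steady state: T = T_ambient + R_thermal * P_total (lumped thermal resistance).
    temperature = 45.0 + 1.8 * (p_dynamic + p_leakage)
    return np.quantile(temperature, quantile)

print(f"99th-percentile steady-state temperature: {mc_temperature_quantile():.1f} C")
```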
767
The Euclidean traveling salesman problem with neighborhoods and a connecting fence. Jonsson, Håkan (January 2000)
An important class of problems in robotics deals with the planning of paths. In this thesis, we study this class of problems from an algorithmic point of view, considering cases where we have complete knowledge of the environment and each solution must ensure that a point-sized robot, capable of moving continuously and turning arbitrarily, accomplishes the following: (1) it visits a given set of objects attached to an impenetrable simple polygon in the plane, and (2) it travels along a path of minimum length over all possible paths that visit the objects without crossing the polygon. In its general form, this is the (Euclidean) Traveling Salesman Problem with Neighborhoods and a Connecting Fence. We make several contributions. One is an algorithm that computes a shortest watchman path in a rectilinear polygon in time polynomial in the size of the polygon. Each point in the polygon is visible from some point along the computed path, which is a shortest visiting path for a set of convex polygons, each of which is bounded by a chord in the interior of the polygon. For the special case of computing a shortest watchman route, where the end points of the resulting path must coincide, we give a polynomial-time algorithm for general simple polygons. We also give substantially faster and more practical algorithms for computing provably short approximations, that is, watchman paths/routes whose lengths are guaranteed to be at most a constant times the length of a shortest watchman path/route. To achieve one of these approximations, we develop a linear-time algorithm for computing a constant-factor approximation in the case where the convex polygons are impenetrable. For this problem, which is called the Zookeeper's Problem, we show how an exact solution can be computed in linear time when the number of convex polygons is constant. We also present an application of our results to the computation of both exact and approximate solutions to the problem of computing a shortest visiting route for a set of lines in the plane.
768
Time- and size-efficient supercompilation. Jonsson, Peter A. (January 2011)
Intermediate structures such as lists and higher-order functions are very common in most styles of functional programming. While they allow the programmer to write clear and concise programs, the creation and destruction of these structures impose a run-time overhead which is not negligible. Supercompilation algorithms are a family of program transformations that remove these intermediate structures in an automated fashion, thereby improving program performance. While there has been plenty of work on supercompilation algorithms that remove intermediate structures for languages with call-by-name semantics, no investigations have been performed for call-by-value languages. It has been suggested that existing call-by-name algorithms could be applied to call-by-value programs, possibly introducing termination in the program. This hides looping bugs from the programmer, and changes the behaviour of a program depending on whether it is optimized or not. We present positive supercompilation algorithms for higher-order call-by-value and call-by-name languages that preserve the termination properties of the programs they optimize. We prove the call-by-value algorithm correct and compare it to existing call-by-name transformations. Our results show that deforestation-like transformations are both possible and useful for call-by-value languages, with speedups of up to an order of magnitude for certain benchmarks. We also suggest speculatively supercompiling expressions and discarding the result if it turns out bad. To test this approach we implemented the call-by-name algorithm in GHC and performed measurements on the standard nofib benchmark suite. We manage to supercompile large parts of the imaginary and spectral parts of nofib in a matter of seconds while keeping the binary size increase below 5%. Our algorithms are particularly important in the context of embedded systems where resources are scarce. By both removing intermediate structures and performing program specialization, the footprint of programs can shrink considerably without any manual intervention by the programmer.
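To make "removing intermediate structures" concrete, the before/after pair below shows, in Python rather than in the call-by-value functional languages studied in the thesis, the effect a deforestation-like transformation aims for: the first version materializes an intermediate list, the second computes the same result in a single pass with no intermediate structure. This only illustrates the goal of the transformation, not the supercompilation algorithm itself.

```python
def sum_of_squares_naive(xs):
    # Builds an intermediate list of squares before summing it.
    squares = [x * x for x in xs]
    return sum(squares)

def sum_of_squares_fused(xs):
    # What a deforestation-like transformation aims to produce automatically:
    # the same result, computed in one pass with no intermediate list.
    total = 0
    for x in xs:
        total += x * x
    return total

assert sum_of_squares_naive(range(10)) == sum_of_squares_fused(range(10)) == 285
```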
769
Data Movement on Emerging Large-Scale Parallel Systems. Peng, Ivy Bo (January 2017)
Large-scale HPC systems are an important driver for solving computational problems in scientific communities. Next-generation HPC systems will not only grow in scale but also in heterogeneity. This increased system complexity entails more challenges to data movement in HPC applications. Data movement on emerging HPC systems requires asynchronous fine-grained communication and efficient data placement in the main memory. This thesis proposes innovative programming models and algorithms to prepare HPC applications for the next computing era: (1) a data streaming model that supports emerging data-intensive applications on supercomputers, (2) a decoupling model that improves parallelism and mitigates the impact of imbalance in applications, (3) a new framework and methodology for predicting the impact of large-scale heterogeneous memory systems on HPC applications, and (4) a data placement algorithm that uses a set of rules and a decision tree to determine the data-to-memory mapping in heterogeneous main memory. The proposed approaches are evaluated on multiple supercomputers with different processors and interconnect networks. The evaluation uses a diverse set of applications that represent conventional scientific applications and emerging data-analytic workloads on HPC systems. The experimental results on the petascale testbed show that the approaches obtain increasing performance improvements as system scale increases, a trend that supports them as a valuable contribution towards future HPC systems.
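The abstract describes the placement algorithm only as "a set of rules and a decision tree". One way such a policy can look, purely as an illustration with made-up thresholds and memory tiers rather than the thesis's actual rules, is sketched below: each data object is mapped to a memory kind in a heterogeneous main memory based on its size and observed access behaviour.

```python
from dataclasses import dataclass

@dataclass
class DataObject:
    name: str
    size_mb: float
    accesses_per_s: float   # observed or predicted access frequency
    bandwidth_bound: bool   # does performance scale with memory bandwidth?

def place(obj, hbm_free_mb):
    """Toy decision tree mapping a data object to a memory kind
    (thresholds and tiers are illustrative assumptions)."""
    if obj.accesses_per_s < 10:                      # cold data
        return "nvm"                                 # capacity tier
    if obj.bandwidth_bound and obj.size_mb <= hbm_free_mb:
        return "hbm"                                 # high-bandwidth memory
    return "dram"                                    # default tier

objs = [
    DataObject("checkpoint buffer", 4096, 2, False),
    DataObject("stencil grid", 512, 5000, True),
    DataObject("lookup table", 64, 800, False),
]
free_hbm = 1024.0
for o in objs:
    print(f"{o.name:18s} -> {place(o, free_hbm)}")
```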
770
Garbage collection for reactive real-time systems. Kero, Martin (January 2010)
Predictable use of resources, such as processor time and memory, is a desirable property for virtually any computer system. In real-time computing, static predictability is of particular concern. A real-time system typically maintains an ongoing interaction with its environment, concurrently executing tasks at predefined intervals or in response to sporadic external events. Its purpose is often safety-critical, where failure to meet deadlines may have severe consequences, even loss of life. The substantial body of research in real-time scheduling theory has demonstrated that a priori guarantees on schedulability are achievable. What is needed are some carefully chosen restrictions on how tasks may interact and on what timing behavior external events may exhibit. Safe use of shared resources in real-time systems is enabled by protocols that preserve mutually exclusive access to critical sections, free of deadlocks and priority inversions. The ability to achieve a priori schedulability guarantees can be preserved by taking the imposed restrictions into account. Nonetheless, allowing real-time tasks to share heap-allocated data has far more complex consequences than just being a schedulability issue concerning shared resources. In order to guarantee failure-free operation, the system must be free of memory-related errors, such as memory leaks and dangling pointers. It is well known that garbage collection can solve these problems. However, incorporating a garbage collector in a real-time system imposes a set of other requirements related to schedulability. These include provably safe upper bounds on the required execution time and memory of the garbage collector in the presence of concurrently executing tasks, as well as a preserved ability to achieve static schedulability guarantees for the real-time tasks. The body of research work towards establishing a priori schedulability guarantees for garbage-collected real-time systems has been growing for the past decade. However, most of the existing work lacks a clear identification of the parameters required for establishing a provably safe upper bound on garbage collection execution time, and of their impact on that bound. This lack has often led to overly simplistic models of the cost of garbage collection; and, probably even more alarming, the tractability of finding safe bounds on the involved parameters has not been established. Furthermore, most existing approaches require specialized scheduling policies and schedulability tests, as well as intrusive restrictions on the task model. In this dissertation, we propose the following theses: (1) the key to successful real-time garbage collection is to preserve as much as possible of previous advances in real-time scheduling theory by imposing minimal restrictions on the task model, scheduler, and schedulability test; and (2) the keys to enabling a priori schedulability guarantees are clear identification and tractable analysis of the sources of execution time for the garbage collector. Proofs of our theses are based on the run-time behavior of the real-time programming language Timber and an incremental copying garbage collector running as the lowest-priority (idle) process. We identify all parameters needed to establish a safe and tight upper bound on the execution time of our collector; the major ones (not surprisingly for a copying collector) are related to the amount of live heap space. We present a novel technique for establishing safe upper bounds on the live heap space parameters for real-time systems. We rely on existing techniques for finding safe upper bounds on the parameters related to the heap allocation of each task. Finally, we present the garbage collection demand analysis, decoupled from the schedulability analysis used for the real-time tasks, which determines whether our collector is schedulable in the system.
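The abstract stresses that the dominant parameters in a safe execution-time bound for a copying collector relate to the amount of live heap space. A minimal cost-model sketch of such a bound, with invented constants and without the incremental and scheduling aspects handled in the dissertation, could look like this:

```python
def gc_wcet_bound(max_live_words, max_root_pointers,
                  cost_per_copied_word, cost_per_root, fixed_overhead):
    """Safe (pessimistic) bound on the execution time of one copying-GC cycle:
    every live word may have to be copied and scanned, every root traversed.
    The constants and the structure are illustrative, not the dissertation's analysis."""
    return (fixed_overhead
            + cost_per_root * max_root_pointers
            + cost_per_copied_word * max_live_words)

# Hypothetical numbers: 50k live words, 200 roots, costs in microseconds.
bound_us = gc_wcet_bound(max_live_words=50_000, max_root_pointers=200,
                         cost_per_copied_word=0.02, cost_per_root=0.5,
                         fixed_overhead=150.0)
print(f"worst-case GC cycle time: {bound_us:.0f} us")
```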