
Methods for Detecting Unsolvable Planning Instances using Variable Projection

Ståhlberg, Simon January 2017
In this thesis we study automated planning, a branch of artificial intelligence, which deals with the construction of plans. A plan is typically an action sequence that achieves some specific goal. In particular, we study unsolvable planning instances, i.e. instances for which no plan exists. Historically, this topic has been neglected by the planning community, and until recently the International Planning Competition only evaluated planners on solvable planning instances. For many applications we can know, e.g. by design, that a solution exists, but this cannot be a general assumption. One example is penetration testing in computer security, where a system is considered safe if there is no plan for intrusion. Other examples are resource-bound planning instances that have insufficient resources to achieve the goal. The main theme of this thesis is to use variable projection to prove unsolvability of planning instances. We implement and evaluate two planners: the first checks variable projections with the goal of finding an unsolvable projection, and the second builds a pattern collection to provide dead-end detection. In addition to comparing the planners to existing planners, we also utilise a large computer cluster to statistically assess whether they can be optimised further. On the benchmarks of planning instances that we used, it turns out that further improvement is likely to come from supplementary techniques rather than optimisation. We pursued this and enhanced variable projections with mutexes, which yielded a very competitive planner. We also inspect whether unsolvable variable projections tend to be composed of variables that play different roles, i.e. variables that are not 'similar'. We devise a variable similarity measure to rate how similar two variables are on a scale, and statistically analyse it. The measure differentiates between unsolvable and solvable planning instances quite well, and is integrated into our planners.
We also define a binary version of the measure, namely, that two variables are isomorphic if they behave exactly the same in some optimal solution (extremely similar). With the help of isomorphic variables we identify a computationally tractable class of planning instances that meet certain restrictions. There are several special cases of this class that are of practical interest, and this result encompasses them.
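The soundness argument behind projection-based unsolvability detection can be conveyed with a toy sketch (not the thesis's planner, and using a hypothetical task encoding): projecting a planning task onto a subset of its variables yields an abstraction in which every original plan still induces a plan, so an unsolvable projection proves the original instance unsolvable.

```python
from collections import deque

def project(task, keep):
    """Project a planning task onto the variable subset `keep`.

    A task is (domains, init, goal, actions); states and partial
    assignments are dicts from variable name to value. Any plan for the
    original task maps to a plan in the projection, so an unsolvable
    projection proves the original instance unsolvable.
    """
    domains, init, goal, actions = task
    restrict = lambda a: {v: val for v, val in a.items() if v in keep}
    pactions = [(restrict(pre), restrict(eff)) for pre, eff in actions]
    return ({v: d for v, d in domains.items() if v in keep},
            restrict(init), restrict(goal), pactions)

def solvable(task):
    """Breadth-first reachability check of the goal in a (small) task."""
    domains, init, goal, actions = task
    seen = set()
    frontier = deque([tuple(sorted(init.items()))])
    while frontier:
        key = frontier.popleft()
        if key in seen:
            continue
        seen.add(key)
        state = dict(key)
        if all(state.get(v) == val for v, val in goal.items()):
            return True
        for pre, eff in actions:
            if all(state.get(v) == val for v, val in pre.items()):
                succ = dict(state)
                succ.update(eff)
                frontier.append(tuple(sorted(succ.items())))
    return False
```

For a task where no action can ever open the door, the projection onto the single variable `door` is already unsolvable, which certifies the whole instance as unsolvable without searching the full state space.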

On large-scale neural simulations and applications in neuroinformatics

Benjaminsson, Simon January 2013
This thesis consists of three parts related to the in silico study of the brain: technologies for large-scale neural simulations; neural algorithms and models; and applications in large-scale data analysis in neuroinformatics. All parts rely on the use of supercomputers. A large-scale neural simulator is developed in which techniques are explored for the simulation, analysis and visualization of neural systems at a high level of biological abstraction. The performance of the simulator is investigated on some of the largest supercomputers available. Neural algorithms and models at a high level of biological abstraction are presented and simulated. Firstly, an algorithm for structural plasticity is suggested which can set up the connectivity and response properties of neural units from the statistics of the incoming sensory data. This can be used to construct biologically inspired hierarchical sensory pathways. Secondly, a model of the mammalian olfactory system is presented in which we suggest a mechanism for mixture segmentation based on adaptation in the olfactory cortex. Thirdly, a hierarchical model is presented which uses top-down activity to shape sensory representations and which can encode temporal history in the spatial representations of populations. Brain-inspired algorithms and methods are applied to two neuroinformatics applications involving large-scale data analysis. In the first application, we present a way to extract resting-state networks from functional magnetic resonance imaging (fMRI) resting-state data in which the final extraction step is computationally inexpensive, allowing for rapid exploration of the statistics in large datasets and their visualization on different spatial scales. In the second application, a method is presented to estimate the radioactivity level in arterial plasma from segmented blood vessels in positron emission tomography (PET) images.
The method outperforms previously reported methods to a degree where it can partly remove the need for invasive arterial cannulation and continuous sampling of arterial blood during PET imaging. In conclusion, this thesis provides insights into technologies for the simulation of large-scale neural models on supercomputers, their use to study mechanisms for the formation of neural representations and functions in hierarchical sensory pathways using models on a high biological abstraction level and the use of large-scale, fine-grained data analysis in neuroinformatics applications. / <p>QC 20130515</p>

A Constraint-Based Approach for Hybrid Reasoning in Robotics

Mansouri, Masoumeh January 2016
The quest of AI and robotics researchers to realize fully AI-driven integrated robotic systems has not yet come to fruition, in spite of great attainments in both research areas. This thesis claims that one of the major hindrances to such realizations is the lack of attention to what we call “the hybrid reasoning problem”. This is the problem of jointly reasoning about heterogeneous and inter-dependent aspects of the world, expressed in different forms and at different levels of abstraction. In this thesis, we propose an approach to hybrid reasoning (or integrated reasoning) for robot applications. Our approach constitutes a systematic way of achieving a domain-specific integration of reasoning capabilities. Its underpinning is to jointly reason about the sub-problems of an overall hybrid problem in the combined search space of mutual decisions. Each sub-problem represents one viewpoint, or type of requirement, that is meaningful in the particular application. We propose a Constraint Satisfaction Problem (CSP) formulation of the hybrid reasoning problem. This CSP, called the meta-CSP, captures the dependencies between sub-problems. It constitutes a high-level representation of the (hybrid) requirements that define a particular application. We formalize the meta-CSP in a way that is independent of the viewpoints that are relevant in the application, as is the algorithm used for solving the meta-CSP. In order to verify the applicability of the meta-CSP approach in real-world robot applications, we instantiate it in several different domains, namely, a waiter robot, an automated industrial fleet-management application, and a drill-pattern planning problem in open-pit mining. These realizations highlight the important features of the approach, namely, modularity, generality, online reasoning and solution adjustment, and the ability to account for domain-specific metric and symbolic knowledge.
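The idea of reasoning about sub-problems in a combined search space can be illustrated with a toy sketch (an assumption-laden illustration, not the thesis's formalization): constraints from different viewpoints, here a temporal ordering and a resource assignment, are checked against one shared partial assignment, which is what couples the sub-problems.

```python
def solve_meta_csp(variables, domains, constraints, assignment=None):
    """Plain backtracking over the combined search space of all
    sub-problems. `constraints` is a list of (scope, predicate) pairs;
    predicates from different viewpoints are evaluated against the same
    partial assignment, so decisions in one sub-problem prune the other.
    """
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # Check every constraint whose whole scope is assigned.
        if all(pred(assignment) for scope, pred in constraints
               if all(s in assignment for s in scope)):
            result = solve_meta_csp(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None
```

A minimal instance: two tasks each get a time slot (temporal viewpoint) and a resource (resource viewpoint), with a coupling constraint that tasks sharing a resource must not share a slot.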

Algorithms for aggregate information extraction from sequences

Bengtsson, Fredrik January 2007
In this thesis, we propose efficient algorithms for aggregate information extraction from sequences and multidimensional arrays. The algorithms proposed are applicable in several important areas, including large databases and DNA sequence segmentation. We first study the problem of efficiently computing, for a given range, the range-sum in a multidimensional array, as well as computing the k maximum values, called the top-k values. We design two efficient data structures for these problems. For the range-sum problem, our structure supports fast updates while preserving a low complexity for range-sum queries. The proposed top-k structure provides fast queries, in time proportional to the sum of the sizes of the two-dimensional query region. We also study the k maximum sum subsequences problem and develop several efficient algorithms for it. In this problem, the k subsegments of consecutive elements with the largest sums are to be found. The segments can potentially overlap, which allows for a large number of possible candidate segments. Moreover, we design an optimal algorithm for ranking the k maximum sum subsequences. Our solution does not require the value of k to be known a priori. Furthermore, an optimal linear-time algorithm is developed for the maximum cover problem of finding k subsequences of consecutive elements of maximum total element sum. / Approved; 2007; 20070528 (ysko)
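A standard structure that embodies the same trade-off described for the range-sum problem, fast point updates with logarithmic-time queries, is the Fenwick (binary indexed) tree. The thesis targets multidimensional arrays, so this 1-D sketch only illustrates the update/query balance, not the thesis's actual data structure.

```python
class FenwickTree:
    """Binary indexed tree: point update and prefix/range sum in
    O(log n) per operation over n elements."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-indexed internal array

    def update(self, i, delta):
        """Add `delta` to element i (0-indexed)."""
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i  # climb to the next responsible node

    def prefix_sum(self, i):
        """Sum of elements 0..i inclusive."""
        i += 1
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i  # strip the lowest set bit
        return s

    def range_sum(self, lo, hi):
        """Sum of elements lo..hi inclusive."""
        return self.prefix_sum(hi) - (self.prefix_sum(lo - 1) if lo > 0 else 0)
```

Both operations touch O(log n) nodes, so updates stay fast without sacrificing query complexity, which is the balance the abstract emphasizes.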

The Euclidean traveling salesman problem with neighborhoods and a connecting fence

Jonsson, Håkan January 2000
An important class of problems in robotics deals with the planning of paths. In this thesis, we study this class of problems from an algorithmic point of view, considering cases where we have complete knowledge of the environment and each solution must ensure that a point-sized robot, capable of moving continuously and turning arbitrarily, accomplishes the following: (1) visits a given set of objects attached to an impenetrable simple polygon in the plane, and (2) travels along a path of minimum length over all possible paths that visit the objects without crossing the polygon. In its general form, this is the (Euclidean) Traveling Salesman Problem with Neighborhoods and a Connecting Fence. We make several contributions. One is an algorithm that computes a shortest watchman path in a rectilinear polygon in time polynomial in the size of the polygon. Each point in the polygon is visible from some point along the computed path, which is a shortest visiting path for a set of convex polygons, each bounded by a chord in the interior of the polygon. For the special case of computing a shortest watchman route, where the end points of the resulting path must coincide, we give a polynomial-time algorithm for general simple polygons. We also give substantially faster and more practical algorithms for computing provably short approximations, that is, watchman paths/routes whose lengths are guaranteed to be at most a constant times the length of a shortest watchman path/route. To achieve one of these approximations, we develop a linear-time algorithm for computing a constant-factor approximation in the case where the convex polygons are impenetrable. For this problem, which is called the Zookeeper's Problem, we show how an exact solution can be computed in linear time when the number of convex polygons is constant.
We also present an application of our results to the computation of both exact and approximate solutions to the problem of computing a shortest visiting route for a set of lines in the plane. / Approved; 2000; 20061116 (haneit)

Time- and size-efficient supercompilation

Jonsson, Peter A. January 2011
Intermediate structures such as lists and higher-order functions are very common in most styles of functional programming. While allowing the programmer to write clear and concise programs, the creation and destruction of these structures impose a run-time overhead which is not negligible. Supercompilation algorithms are a family of program transformations that remove these intermediate structures in an automated fashion, thereby improving program performance. While there has been plenty of work on supercompilation algorithms that remove intermediate structures for languages with call-by-name semantics, no investigations have been performed for call-by-value languages. It has been suggested that existing call-by-name algorithms could be applied to call-by-value programs, possibly introducing termination in the program. This hides looping bugs from the programmer, and changes the behaviour of a program depending on whether it is optimized or not. We present positive supercompilation algorithms for higher-order call-by-value and call-by-name languages that preserve the termination properties of the programs they optimize. We prove the call-by-value algorithm correct and compare it to existing call-by-name transformations. Our results show that deforestation-like transformations are both possible and useful for call-by-value languages, with speedups of up to an order of magnitude for certain benchmarks. We also suggest speculatively supercompiling expressions and discarding the result if it turns out badly. To test this approach we implemented the call-by-name algorithm in GHC and performed measurements on the standard nofib benchmark suite. We manage to supercompile large parts of the imaginary and spectral parts of nofib in a matter of seconds while keeping the binary size increase below 5%. Our algorithms are particularly important in the context of embedded systems where resources are scarce.
By both removing intermediate structures and performing program specialization, the footprint of programs can shrink considerably without any manual intervention by the programmer. / Approved; 2011; 20110330 (pj); DEFENCE Opponent: Professor Walid Taha, School of Information Science, Computer and Electrical Engineering, Halmstad University. Chair: Professor Per Lindgren, Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology. Time: Wednesday 27 April 2011, 13.00. Place: E243, Luleå University of Technology / ESIS
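The kind of intermediate-structure removal that supercompilation automates can be shown by hand. The thesis works on functional languages, so this Python sketch only illustrates the before/after shape of a deforestation-style transformation, not the algorithm itself.

```python
def sum_of_squares_naive(xs):
    # Allocates two intermediate lists: the map result and the
    # filter result. This is the clear-but-wasteful style.
    squares = [x * x for x in xs]
    evens = [s for s in squares if s % 2 == 0]
    return sum(evens)

def sum_of_squares_fused(xs):
    # The fused loop a deforestation-style transformation aims for:
    # the same computation with no intermediate structure allocated.
    total = 0
    for x in xs:
        s = x * x
        if s % 2 == 0:
            total += s
    return total
```

The point of supercompilation is that the programmer writes the first version and the transformation derives the second automatically, while (as the thesis requires for call-by-value) preserving termination behaviour.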

Data Movement on Emerging Large-Scale Parallel Systems

Peng, Ivy Bo January 2017
Large-scale HPC systems are an important driver for solving computational problems in scientific communities. Next-generation HPC systems will not only grow in scale but also in heterogeneity. This increased system complexity entails more challenges to data movement in HPC applications. Data movement on emerging HPC systems requires asynchronous fine-grained communication and efficient data placement in the main memory. This thesis proposes innovative programming models and algorithms to prepare HPC applications for the next computing era: (1) a data streaming model that supports emerging data-intensive applications on supercomputers, (2) a decoupling model that improves parallelism and mitigates the impact of imbalance in applications, (3) a new framework and methodology for predicting the impact of large-scale heterogeneous memory systems on HPC applications, and (4) a data placement algorithm that uses a set of rules and a decision tree to determine the data-to-memory mapping in heterogeneous main memory. The proposed approaches in this thesis are evaluated on multiple supercomputers with different processors and interconnect networks. The evaluation uses a diverse set of applications that represent conventional scientific applications and emerging data-analytic workloads on HPC systems. The experimental results on the petascale testbed show that the approaches obtain increasing performance improvements as system scale increases, and this trend supports the approaches as a valuable contribution towards future HPC systems. / <p>QC 20171128</p>
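Contribution (4) describes a rule- and decision-tree-driven data-to-memory mapping. The following is a hypothetical sketch of such a rule chain for a two-tier main memory (a fast, small bandwidth tier such as HBM and a large capacity tier such as DRAM); the attributes and thresholds are illustrative assumptions, not the thesis's actual rules.

```python
def place(size_mb, bandwidth_bound, random_access):
    """Toy placement rule chain for heterogeneous main memory.

    size_mb         -- working-set size of the data object in MB
    bandwidth_bound -- True if accesses are streaming and bandwidth-hungry
    random_access   -- True if the access pattern is latency-sensitive

    All three attributes and both thresholds are assumptions made for
    illustration only.
    """
    if size_mb > 16 * 1024:
        return 'DRAM'   # too large for the fast tier: use the capacity tier
    if bandwidth_bound and not random_access:
        return 'HBM'    # streaming data benefits from the bandwidth tier
    return 'DRAM'       # default: latency-sensitive or neutral data
```

Each `if` corresponds to one internal node of a (very small) decision tree; a real placement algorithm would derive the rules and thresholds from profiling data rather than hard-code them.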

Garbage collection for reactive real-time systems

Kero, Martin January 2010
Predictable use of resources, such as processor time and memory, is a desirable property for virtually any computer system. In real-time computing, static predictability is of particular concern. A real-time system typically maintains an ongoing interaction with its environment, concurrently executing tasks at predefined intervals or as responses to sporadic external events. Its purpose is often safety-critical, where failure to meet deadlines may have severe consequences, even loss of life. The substantial body of research in real-time scheduling theory has demonstrated that a priori guarantees on schedulability are achievable. What is needed are some carefully chosen restrictions on how tasks may interact and what timing behavior external events may exhibit. Safe use of shared resources in real-time systems is enabled by protocols for preserving mutually exclusive access to critical sections, free of deadlocks and priority inversions. The ability to achieve a priori schedulability guarantees can be preserved by taking the imposed restrictions into account. Nonetheless, allowing real-time tasks to share heap-allocated data has far more complex consequences than just being a schedulability issue concerning shared resources. In order to guarantee failure-free operation, the system must be free of memory-related errors, such as memory leaks and dangling pointers. It is well known that garbage collection can solve these problems. However, incorporating a garbage collector in a real-time system imposes a set of other requirements related to schedulability. This includes provably safe upper bounds on the required execution time and memory of the garbage collector in the presence of concurrently executing tasks, as well as a preserved ability to achieve static schedulability guarantees for the real-time tasks. The body of research work towards establishing a priori schedulability guarantees for garbage-collected real-time systems has been growing for the past decade.
However, most of the existing work lacks a clear identification of the parameters required for establishing a provably safe upper bound on garbage collection execution time, and of their impact on that bound. This lack has often led to overly simplistic models of the cost of garbage collection; probably even more alarming, the tractability of finding safe bounds on the involved parameters has not been established. Furthermore, most existing approaches require specialized scheduling policies and schedulability tests, as well as intrusive restrictions on the task model. In this dissertation, we propose the following theses: (1) the key to successful real-time garbage collection is to preserve as much as possible of previous advances in real-time scheduling theory by imposing minimal restrictions on the task model, scheduler, and schedulability test; and (2) the keys to enabling a priori schedulability guarantees are a clear identification and tractable analysis of the sources of execution time for the garbage collector. Proofs of our theses are based on the run-time behaviour of the real-time programming language Timber and an incremental copying garbage collector running as the lowest-priority (idle) process. We identify all parameters needed for establishing a safe and tight upper bound on the execution time of our collector, where the major ones (not surprisingly for a copying collector) are related to the amount of live heap space. We present a novel technique for establishing safe upper bounds on live heap space parameters for real-time systems. We rely on existing techniques for finding safe upper bounds on parameters related to the heap allocation of each task. Finally, we present the garbage collection demand analysis, decoupled from the schedulability analysis used for the real-time tasks, which determines whether our collector is schedulable in the system.
/ <p>Approved; 2010; 20100916 (keero); DEFENCE Subject area: Datalogi/Computer Science Opponent: Associate Professor/Senior Lecturer Roger Henriksson, Lund University Chair: Chaired Professor Per Lindgren, Luleå University of Technology Time: Tuesday 26 October 2010, 09.30 Place: D770, Luleå University of Technology</p>
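The flavour of a garbage collection demand analysis can be conveyed with a deliberately simplified feasibility check. This is an assumption-laden sketch, not the dissertation's analysis: a copying collector running only in idle time must finish a full cycle before mutator allocation exhausts the free semispace.

```python
def gc_schedulable(live_bound, alloc_rate, copy_rate, idle_fraction, heap_size):
    """Toy feasibility check for an idle-priority copying collector.

    live_bound    -- safe upper bound on live heap (bytes)
    alloc_rate    -- safe upper bound on mutator allocation (bytes/s)
    copy_rate     -- safe lower bound on collector copying speed (bytes/s)
    idle_fraction -- fraction of CPU time left idle by the real-time tasks
    heap_size     -- total heap size of the two semispaces (bytes)

    All parameters are assumed to be a priori bounds; the actual
    analysis in the dissertation identifies and bounds far more sources
    of collector execution time than this.
    """
    free_space = heap_size / 2 - live_bound   # slack in the to-space
    if free_space <= 0:
        return False
    # Wall-clock length of one collection cycle: copying at most
    # live_bound bytes, using only the idle fraction of the CPU.
    cycle_time = live_bound / (copy_rate * idle_fraction)
    # Schedulable if the mutators cannot exhaust the slack mid-cycle.
    return alloc_rate * cycle_time <= free_space
```

The sketch mirrors the abstract's observation that the dominant parameters for a copying collector are bounds on live heap space and on per-task allocation.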

Routing and quality of service in wireless and disruption tolerant networks

Lindgren, Anders January 2006
Wireless networks have become a common means of communication, and their popularity continues to rise as they enable communication in locations and settings where it was previously unfeasible. While promising many advantages, these networks also pose new challenges. The limited radio coverage, unreliable nature of the wireless channel, and mobility of network nodes can lead to frequent disruption of communication links, dynamic network topology, variable bandwidth availability, and high channel error rates. These challenges call for novel solutions that allow a growing number of wireless, mobile users to run applications and access network services in ways similar to those in wired networks. This thesis makes contributions to three research areas related to wireless and disruption tolerant networks: (1) routing and forwarding to enable disruption tolerant communication in intermittently connected networks, (2) analysis of properties of human mobility and their effect on network protocols in disruption tolerant networks, and (3) quality of service mechanisms for wireless and mobile networks. In intermittently connected networks, there may rarely or never exist a fully connected path between a source and destination. This invalidates the basic assumption of end-to-end communication prevalent in the Internet and renders traditional routing protocols impractical. We propose PRoPHET, a novel routing protocol for intermittently connected networks. PRoPHET takes advantage of the mobility of nodes, and the predictability of that mobility, for routing. The protocol and various forwarding strategies and queueing policies are studied in detail. The benefits of PRoPHET are evident when comparing its performance with contemporary work. Communication in intermittently connected and disruption tolerant networks is often highly dependent on the mobility of the nodes in the network.
Thus, it is important to have a good understanding of the basic properties of user mobility in order to design network protocols that can operate under those conditions. Using real-life traces, we characterize human mobility patterns and their impact on forwarding algorithms in mobile networks with and without infrastructure. Finally, the thesis presents our work on two different aspects of quality of service in wireless and mobile networks. We evaluate four mechanisms for providing service differentiation in a wireless LAN and give recommendations on their use in different scenarios. We propose a novel admission control scheme for mobile ad hoc networks, which is better able to cope with high mobility in the network than previous solutions. / <p>Approved; 2006; 20061205 (haneit)</p>
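PRoPHET's published delivery-predictability metric consists of three updates: a direct update when two nodes meet, aging when time passes without contact, and a transitive update through a common neighbour. The sketch below uses the commonly cited default parameter values; a real implementation would also manage per-destination tables and forwarding decisions.

```python
# Commonly cited default parameters for PRoPHET.
P_INIT, BETA, GAMMA = 0.75, 0.25, 0.98

def on_encounter(p_ab):
    """Direct update when node a encounters node b:
    P(a,b) = P(a,b)_old + (1 - P(a,b)_old) * P_INIT"""
    return p_ab + (1 - p_ab) * P_INIT

def age(p, k):
    """Age a predictability value over k time units without contact:
    P = P_old * GAMMA^k"""
    return p * GAMMA ** k

def transitive(p_ac, p_ab, p_bc):
    """Transitive update: a may reach c through b:
    P(a,c) = P(a,c)_old + (1 - P(a,c)_old) * P(a,b) * P(b,c) * BETA"""
    return p_ac + (1 - p_ac) * p_ab * p_bc * BETA
```

A node forwards a message to an encountered peer when the peer's delivery predictability for the destination exceeds its own, which is how the protocol exploits the predictability of mobility.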

TCP/IP technology for modern network environments

Landström, Sara January 2008
To facilitate the merging of wireless access technologies and the traditional Internet, the core protocols for data communication should be robust and have low overhead. In this thesis, we propose refinements to the Transmission Control Protocol (TCP) that improve its cost efficiency over wireless links. TCP is unable to distinguish congestion losses from error-induced losses, or from reordered or delayed segments. A reordering-robust TCP would make it possible to simplify network elements that now perform explicit actions to prevent reordering, and open up for the deployment of new technologies that naturally cause reordering. We propose TCP-Aix, a set of TCP modifications that improves the robustness of TCP to reordering and delay spikes. TCP-Aix decouples loss recovery and congestion control actions. We also present an algorithm, called the winthresh algorithm, for computing a duplicate acknowledgment threshold based on TCP buffer space and the current send window size. The results show that TCP-Aix with the winthresh algorithm is able to maintain almost constant performance even in scenarios that frequently display long reordering durations. It is also fair towards competing standards-compliant TCP flows. In wireless networks, where the links, out of efficiency constraints, are more error-prone than wired links, the error and reordering sensitivity of TCP has motivated link-layer protocols that perform retransmissions and enforce in-order delivery. We investigate the potential gains of using a reordering-robust TCP, like TCP-Aix, with a wireless link layer that allows out-of-order delivery, compared to using in-order delivery with a standards-compliant TCP. We found that the smoothness of TCP is strongly affected by the link-layer configuration. In-order delivery leads to burstier traffic and larger network-layer buffering needs than out-of-order delivery with TCP-Aix.
Interference and power consumption, foremost in wireless networks, make it important to reduce the communication overhead. The TCP receiver acknowledges each segment, or every second segment. We study how to reduce the acknowledgment frequency while preserving throughput performance, also in wireline networks where frequent acknowledgments generally are not problematic. To preserve throughput, the sender should use byte counting and be involved in determining the acknowledgment frequency. The results show that acknowledging four segments per send window is sufficient to maintain throughput performance also in wireline scenarios. This indicates that a lower acknowledgment frequency than that provided by the delayed acknowledgment algorithm is possible today for general usage. A key service for the successful merging of traditional Internet technology and wireless cellular networks is Voice over IP (VoIP). Channels to be shared by both VoIP and TCP-based traffic are being considered for wireless cellular systems. It is challenging to provide VoIP over a shared wireless cellular channel, because VoIP is a low-bitrate service with high demands on channel availability to bound the delay. The scheduling algorithm, which controls access to the channel, is central to achieving efficiency as well as satisfying user demands. We study how a scheduler for a mix of VoIP and interactive (TCP-based) traffic should be designed for High Speed Downlink Packet Access (HSDPA). In particular, we find that slowly increasing the priority of a VoIP session relative to TCP-based services is important for taking advantage of the varying network conditions. / <p>Approved; 2008; 20080520 (ysko)</p>
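The abstract says the winthresh algorithm derives the duplicate-ACK threshold from TCP buffer space and the current send window. The formula below is purely a hypothetical sketch of that idea, not the thesis's actual algorithm: scale the threshold with the number of segments in flight (tolerating more reordering when the window is large) while never exceeding what the buffer can hold out of order, and keep the standard floor of three duplicate ACKs.

```python
def dup_ack_threshold(send_window, buffer_space, mss):
    """Hypothetical duplicate-ACK threshold (an assumption, not the
    winthresh algorithm itself).

    send_window  -- current send window in bytes
    buffer_space -- buffer space available for out-of-order data, bytes
    mss          -- maximum segment size in bytes
    """
    in_flight = send_window // mss     # segments currently in flight
    buffer_cap = buffer_space // mss   # segments the buffer can hold
    # Leave room for at least one duplicate ACK below each cap, and
    # never drop below the standard threshold of 3.
    return max(3, min(in_flight - 1, buffer_cap - 1))
```

The design intent this illustrates, deferring fast retransmit when more segments are in flight, is what lets a reordering-robust sender avoid spurious retransmissions in long-reordering scenarios.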
