201

Measuring Programming Assignment Effort

Toll, Daniel January 2016
Students often say that programming assignments are hard and that they spend a lot of time solving them. Is this true? Are we giving them assignments that are too hard, and how much time do they spend, and on what? These are the questions we want insight into. We constructed a tool that records programming sessions at a finer granularity than existing solutions; it has recorded 2643 programming sessions from students. Using that data we found that students spend only 15% of their time writing code, and that on average 40% of their programming effort is spent reading and navigating. We also estimate the time spent outside the tool at almost 20%. The increased detail of the recordings can be used to measure the effect of source code comments: we found that both helpful and redundant comments increase reading time but do not reduce the students' writing effort. Finally, we used the tool to examine the effects of an improved programming assignment and found that the total effort was not reduced.
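The kind of per-activity breakdown reported above can be approximated from any fine-grained session recording by bucketing the time between consecutive events according to the event that ends each gap. A minimal sketch, assuming a hypothetical event log of (timestamp, kind) pairs and an illustrative idle cutoff — not the thesis tool's actual recording format:

```python
from collections import defaultdict

# Hypothetical event kinds; the thesis tool records finer-grained events.
# Each event is (timestamp_in_seconds, kind), kind in {"edit", "navigate"}.
def effort_breakdown(events, idle_cutoff=300):
    """Attribute the gap before each event to that event's activity.

    Gaps longer than `idle_cutoff` seconds are counted as time spent
    outside the tool, mirroring the ~20% estimate in the abstract.
    """
    totals = defaultdict(float)
    for (t0, _), (t1, kind) in zip(events, events[1:]):
        gap = t1 - t0
        if gap > idle_cutoff:
            totals["outside_tool"] += gap
        elif kind == "edit":
            totals["writing"] += gap
        else:  # cursor moves, scrolling, file switches, selections
            totals["reading_navigating"] += gap
    grand = sum(totals.values()) or 1.0
    return {k: v / grand for k, v in totals.items()}

# Example: about three minutes of reading, then a short burst of editing.
log = [(0, "navigate"), (170, "navigate"), (180, "edit"), (200, "edit")]
print(effort_breakdown(log))  # reading dominates, as in the thesis data
```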
202

Efficient aggregate queries on data cubes

Bengtsson, Fredrik January 2004
As computers develop rapidly and become more available to the modern information society, the possibility and ability to handle large data sets in database applications increase. The demand for efficient algorithmic solutions to process huge amounts of information grows as the data sets become larger. In this thesis, we study the efficient implementation of aggregate operations on the data cube, a modern and flexible model for data warehouses. In particular, the problem of computing the k largest sum subsequences of a given sequence is investigated, and an efficient algorithm for the problem is developed. Our algorithm is optimal for large values of the user-specified parameter k. Moreover, a fast in-place algorithm with a good trade-off between update and query time is presented for the multidimensional orthogonal range sum problem: computing the sum of the data over an orthogonal range of a multidimensional data cube. Furthermore, a fast algorithmic solution is proposed for maintaining a data structure that computes the k largest values in a requested orthogonal range of the data cube. / Approved; 2004; 20070131 (ysko)
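The orthogonal range sum operation mentioned above is easy to state concretely. A minimal two-dimensional sketch using precomputed prefix sums — a static baseline with O(1) queries but expensive updates, not the thesis's in-place dynamic structure:

```python
# 2-D orthogonal range sum via inclusion-exclusion on prefix sums.
# A single cell update forces an O(rows*cols) rebuild of P, which is
# exactly the update/query trade-off the thesis's structure improves on.
def prefix_sums(cube):
    rows, cols = len(cube), len(cube[0])
    P = [[0] * (cols + 1) for _ in range(rows + 1)]
    for i in range(rows):
        for j in range(cols):
            P[i+1][j+1] = cube[i][j] + P[i][j+1] + P[i+1][j] - P[i][j]
    return P

def range_sum(P, r1, c1, r2, c2):
    """Sum of cube[r1..r2][c1..c2], inclusive, by inclusion-exclusion."""
    return P[r2+1][c2+1] - P[r1][c2+1] - P[r2+1][c1] + P[r1][c1]

cube = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
P = prefix_sums(cube)
print(range_sum(P, 0, 1, 1, 2))  # 2 + 3 + 5 + 6 = 16
```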
203

Linné: an object oriented language with parallelism

Eriksson, Mikael January 1990
Approved; 1990; 20080410 (ysko)
204

Browsing techniques for mitigating display-size constraints

Hedman, Anna January 2007
One of the many challenges of interface design concerns how to present and retrieve information on a display; the smaller the display, the bigger the challenge. A typical task that requires a lot of screen space is image browsing. Several studies have been conducted in the past, but with conflicting results. This thesis includes a literature survey on browsing techniques, two user studies comparing image-browsing techniques on an electronic bulletin board, an analysis of visual factors affecting the usability of image browsing, and a user study on mobile phone interfaces. The image-browser comparisons involved three types of browsers (iconic, bifocal, and zoom-and-pan). The first experiment was conducted on a regular 19" desktop display; for the second user study, the browsers were modified and run on a 50" plasma display with a 3D input device. Results from both studies showed that the iconic interface was the most efficient. Observations made during the user experiments led to an analysis of layout and presentation factors that affect the usability of image browsers, in particular task completion time. The main purpose was to identify unwanted confounding factors in image-browser tests, but the analysis can also be used for setting up future experiments and explaining their results. The mobile phone study compares the interfaces of two different mobile phone brands. Displays and user interfaces for mobile phones have developed considerably since this study was conducted, but design for small displays remains a highly relevant topic. / Approved; 2007; 20071201 (ysko)
205

Data structures for bandwidth reservations and quality of service on the Internet

Nilsson, Andreas January 2004
This thesis deals firstly with ways to solve the problem of limited resource reservations over time, and secondly with handling conforming traffic in routers. Although at first glance these topics may seem unrelated, both concern quality of service (QoS) on the Internet and are cases of algorithm engineering applied in the field of computer networking. In order to provide QoS for users of mainly real-time applications on the Internet, the need has arisen to reserve bandwidth across the Internet. The idea of QoS is to provide the same quality of service on the Internet as in the ordinary circuit-switched telephone network, where an opened connection is never disturbed by other connections, no matter how many users are on the phone network at the same time. The Internet, on the other hand, is a packet-switched network: a network in which small packets with address tags are transported from source to destination across several connected subnetworks. On the Internet there are no guarantees regarding quality of service; packets may be dropped, delayed, or reordered depending on the current load, which is undesirable for users of real-time applications. One solution for achieving QoS on the Internet is to reserve a sufficient amount of bandwidth for the period in which the resource is used --- for instance, the duration of a phone call. Olov Schelén et al. used the differentiated services approach to design a new architecture for providing QoS, called bandwidth brokers. In this architecture, virtual leased lines are provided using differentiated services to perform admission control through a system of bandwidth brokers. The bandwidth brokers work on a per-hop basis, and each broker maintains a database of the reservations made on its hop. It must be quick to (1) insert new reservations into the database, (2) remove existing reservations, and (3) query the amount of reserved bandwidth during a given interval, so that another reservation is admitted only if it does not reserve more bandwidth than the link capacity can carry. We call this the Bandwidth Reservation Problem (BRP); our solution is more general than its use with bandwidth brokers. In the thesis I present two different solutions to the BRP: a static data structure using constant space and O(log n) worst-case time for all operations, and a dynamic solution using Theta(n) space and Theta(log n) time for all operations, where n is the number of leaves in the tree. Note that the running times of the operations do not depend on the number of reservations, the number of connections, or the amount of reserved bandwidth. I also present an application of the dynamic solution in the field of chemistry, where it is used to analyze large spectral data sets. Finally, I present a paper introducing a new set of forwarding behaviors, that is, a set of packet scheduling principles that can be used on the Internet to better achieve QoS. / Approved; 2004; 20070128 (ysko)
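To make the BRP interface concrete, here is a sketch using a standard segment tree over discretized time slots, with lazy range updates and range-max queries — an illustration of the three operations, not the thesis's actual data structure (whose running times are independent of the discretization):

```python
class ReservationTree:
    """Segment tree over time slots: lazy range-add reservations and
    range-max admission queries, both O(log n) in the number of slots."""
    def __init__(self, slots, capacity):
        self.n = slots
        self.capacity = capacity
        self.maxv = [0] * (4 * slots)  # max reserved bandwidth per subtree
        self.lazy = [0] * (4 * slots)  # pending addition for a whole subtree

    def _push(self, node):
        if self.lazy[node]:
            for c in (2 * node, 2 * node + 1):
                self.maxv[c] += self.lazy[node]
                self.lazy[c] += self.lazy[node]
            self.lazy[node] = 0

    def _add(self, node, lo, hi, a, b, delta):
        if b < lo or hi < a:
            return
        if a <= lo and hi <= b:
            self.maxv[node] += delta
            self.lazy[node] += delta
            return
        self._push(node)
        mid = (lo + hi) // 2
        self._add(2 * node, lo, mid, a, b, delta)
        self._add(2 * node + 1, mid + 1, hi, a, b, delta)
        self.maxv[node] = max(self.maxv[2 * node], self.maxv[2 * node + 1])

    def _max(self, node, lo, hi, a, b):
        if b < lo or hi < a:
            return 0
        if a <= lo and hi <= b:
            return self.maxv[node]
        self._push(node)
        mid = (lo + hi) // 2
        return max(self._max(2 * node, lo, mid, a, b),
                   self._max(2 * node + 1, mid + 1, hi, a, b))

    def reserve(self, a, b, bandwidth):
        """Admit the reservation over slots [a, b] iff it fits under capacity."""
        if self._max(1, 0, self.n - 1, a, b) + bandwidth > self.capacity:
            return False
        self._add(1, 0, self.n - 1, a, b, bandwidth)
        return True

tree = ReservationTree(slots=1024, capacity=100)
print(tree.reserve(10, 20, 60))  # True
print(tree.reserve(15, 30, 50))  # False: slots 15..20 would reach 110
print(tree.reserve(21, 30, 50))  # True: disjoint from the first reservation
```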
206

Creating and maintaining topologies in wireless networks

Johansson, Tomas January 2006
Wireless ad-hoc networks differ in many aspects from traditional infrastructured networks. Among other things, individual nodes cannot be expected to know the topology of the entire network. Also, since nodes are typically battery-powered and will eventually run out of energy, it is imperative for algorithms to minimize the energy cost while distributing it as fairly as possible over all nodes in the network. This thesis covers different types of distributed algorithms for wireless networks, all of which in some way create or maintain a topology in order to facilitate communication in the network. The thesis comprises three scientific papers. The first paper concerns clustering: dividing the set of nodes in a network into subsets based on the network's connectivity graph. We propose a new clustering algorithm built on the novel idea of maintaining an existing clustering structure rather than creating a new structure from scratch, in order to minimize both the changes in the structure and the communication overhead. The second paper covers interference reduction through topology control. We discuss previous work on how to measure interference, and present new metrics that aim to measure the average interference of the network rather than just the worst path. We also propose a new topology control algorithm and compare its performance to previous topology control algorithms, using our interference models as well as previous ones. In the third paper, we present a power-aware routing algorithm for Bluetooth networks. Unlike similar previous work, our algorithm does not require the nodes to have any knowledge of the network except for their neighbors. By collecting path information in the routing messages, individual nodes can still make routing decisions that avoid nodes close to being depleted of energy. We also present a simplified version of the algorithm for general wireless ad-hoc networks. / Approved; 2006; 20061115 (ysko)
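The max-min criterion behind power-aware routing can be sketched in a centralized form: among all paths, prefer the one whose weakest intermediate node has the most residual energy. The names `neighbors` and `energy` below are illustrative; the thesis algorithm is distributed and Bluetooth-specific, whereas this is a Dijkstra-style analogue under global knowledge:

```python
import heapq

def max_min_energy_path(neighbors, energy, src, dst):
    # best[v] = largest achievable "bottleneck" energy on a path src -> v,
    # counting intermediate nodes only (endpoints must route regardless).
    best = {src: float("inf")}
    heap = [(-best[src], src, [src])]
    while heap:
        neg_bottleneck, node, path = heapq.heappop(heap)
        bottleneck = -neg_bottleneck
        if node == dst:
            return path, bottleneck
        for nxt in neighbors[node]:
            cand = bottleneck if nxt == dst else min(bottleneck, energy[nxt])
            if cand > best.get(nxt, -1):
                best[nxt] = cand
                heapq.heappush(heap, (-cand, nxt, path + [nxt]))
    return None, 0

neighbors = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
energy = {"a": 90, "b": 5, "c": 60, "d": 80}  # node b is nearly depleted
print(max_min_energy_path(neighbors, energy, "a", "d"))
# (['a', 'c', 'd'], 60): the route avoids the nearly-depleted node b
```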
207

Positive supercompilation for a higher-order call-by-value language

Jonsson, Peter A. January 2008
Intermediate structures such as lists and higher-order functions are very common in most styles of functional programming. While allowing the programmer to write clear and concise programs, the creation and destruction of these structures impose a run-time overhead which is not negligible. Deforestation algorithms are a family of program transformations that remove these intermediate structures automatically, thereby improving program performance. While there has been plenty of work on deforestation-like transformations for languages with call-by-name semantics, no such investigations had been performed for call-by-value languages. It has been suggested that existing call-by-name algorithms could be applied to call-by-value programs, but doing so can introduce termination into programs that previously diverged. This hides looping bugs from the programmer and changes the behaviour of a program depending on whether or not it is optimized. We present a transformation, positive supercompilation, for a higher-order call-by-value language that preserves the termination properties of the programs it is applied to. We prove the algorithm correct and compare it to existing call-by-name transformations. Our results show that deforestation-like transformations are both possible and useful for call-by-value languages, with speedups of up to an order of magnitude for certain benchmarks. Our algorithm is particularly important in the context of embedded systems, where resources are scarce. By both removing intermediate structures and performing program specialization, the footprint of programs can shrink considerably without any manual intervention by the programmer. / Approved; 2008; 20080520 (ysko)
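What deforestation buys can be shown in miniature. The sketch below is rendered in Python for illustration (the thesis works on a higher-order functional language, not Python): the naive version materializes two intermediate lists, while the fused version — the kind of loop a deforestation transformation derives automatically — allocates none:

```python
def sum_of_squares_naive(xs):
    squared = [x * x for x in xs]              # intermediate list 1
    positives = [y for y in squared if y > 0]  # intermediate list 2
    return sum(positives)

def sum_of_squares_fused(xs):
    # Produce, filter, and consume each element without building lists.
    total = 0
    for x in xs:
        y = x * x
        if y > 0:
            total += y
    return total

xs = list(range(-3, 4))
assert sum_of_squares_naive(xs) == sum_of_squares_fused(xs) == 28
```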
208

Guarding art galleries with one guard

Jonsson, Håkan January 1995
Approved; 1995; 20080330 (ysko)
209

Some aspects of algorithmic engineering

Sundström, Mikael January 1997
Approved; 1997; 20070418 (ysko)
210

Garbage collecting reactive real-time systems

Kero, Martin January 2007
As real-time systems become more complex, the need for more sophisticated runtime kernel features arises. One such feature, which substantially lessens the burden on the programmer, is automatic memory management, or garbage collection. However, incorporating garbage collection in a real-time kernel is not an easy task. One needs to guarantee not only that sufficient memory will be reclaimed in order to avoid out-of-memory errors, but also that the timing properties of the system's real-time tasks are unaffected. The first step towards such a garbage collector is to define the algorithm in a manageable way. It has to be made incremental in such a way that the induced pause times are small and bounded (preferably constant). The algorithm should not only be correct but also provably useful: in order to guarantee that sufficient memory is reclaimed each time the garbage collector is invoked, one needs to define some measure of usefulness. Furthermore, the garbage collector must be guaranteed to be schedulable in the system; even though the collector is correct and proved useful, it still has to be able to do its work within the system. In this thesis, we present a model of an incremental copying garbage collector based on process terms in a labeled transition system. Each kind of garbage collector step is captured as an internal transition, and each kind of external heap access (read, write, and allocate) is captured as a labeled transition. We prove the correctness and usefulness of the algorithm. We also deploy the garbage collector in a real-time system, to wit, the runtime kernel of Timber. Timber is a strongly typed, object-oriented, purely reactive real-time programming language based on reactive objects. We show how properties of the language can be used to accomplish very efficient and predictable garbage collection. / Approved; 2007; 20071121 (ysko)
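The flavor of collector being modeled — an incremental semispace (Cheney-style) copier — can be sketched over a toy heap, with the work per step bounded so real-time tasks could run between steps. This is an illustration of the general technique only; the thesis defines the algorithm as transitions in a labeled transition system, and a real kernel would also need read/write barriers on heap accesses during collection:

```python
class ToyHeap:
    def __init__(self, roots, objects):
        # objects: dict id -> list of child ids; roots: list of ids
        self.objects = objects
        self.roots = roots

    def collect(self, step_budget=2):
        forward = {}   # from-space id -> to-space index (forwarding pointer)
        to_space = []  # copied objects; children patched as they are scanned
        scan = 0

        def copy(obj_id):
            if obj_id not in forward:
                forward[obj_id] = len(to_space)
                to_space.append(list(self.objects[obj_id]))
            return forward[obj_id]

        self.roots = [copy(r) for r in self.roots]
        # Incremental phase: scan at most `step_budget` objects per step,
        # so each induced pause is bounded.
        while scan < len(to_space):
            for _ in range(step_budget):
                if scan == len(to_space):
                    break
                to_space[scan] = [copy(c) for c in to_space[scan]]
                scan += 1
            # <- the mutator (real-time tasks) would be scheduled here
        self.objects = dict(enumerate(to_space))

heap = ToyHeap(roots=[0], objects={0: [2], 1: [0], 2: [2], 3: []})
heap.collect()
print(heap.objects)  # {0: [1], 1: [1]}: objects 1 and 3 were unreachable
```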
