  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Adaptive middleware for the delivery of context information in pervasive computing

Huebscher, Markus C. January 2006 (has links)
No description available.
2

Applications of microprocessors to error control systems

Ambler, A. P. January 1977 (has links)
No description available.
3

The impact of Petri nets on system-of-systems engineering

Sinclair, Kirsten Mhairi January 2009 (has links)
The successful engineering of a large-scale system-of-systems project towards deterministic behaviour depends on integrating autonomous components using international communications standards in accordance with dynamic requirements. To date, their engineering has been unsuccessful: no combination of top-down and bottom-up engineering perspectives is adopted, and the information exchange protocols and interfaces between components are not precisely specified. Various approaches such as modelling and architecture frameworks make positive contributions to system-of-systems specification, but their successful implementation remains a problem. One of the most popular modelling notations available for specifying systems, UML, is intuitive and graphical but also ambiguous and imprecise. While it supplies a range of diagrams to represent a system under development, UML lacks simulation and exhaustive verification capability. This shortfall in UML has received little attention in the context of system-of-systems, and there are two major research issues: 1. Where the dynamic, behavioural diagrams of UML can and cannot be used to model and analyse system-of-systems. 2. Determining how Petri nets can be used to improve the specification and analysis of the dynamic model of a system-of-systems specified using UML. This thesis presents the strengths and weaknesses of Petri nets in relation to the specification of system-of-systems and shows how Petri net models can be used instead of conventional UML Activity Diagrams. The model of the system-of-systems can then be analysed and verified using Petri net theory. The Petri net formalism of behaviour is demonstrated using two case studies from the military domain. The first case study uses Petri nets to specify and analyse a close air support mission; it concludes by indicating the strengths, weaknesses and shortfalls of the proposed formalism in system-of-systems specification. The second case study considers the specification of a military exchange network parameters problem, and the results are compared with the strengths and weaknesses identified in the first case study. Finally, the results of the research are formulated as a Petri net enhancement to UML (mapping existing activity diagram elements to Petri net elements) to meet the needs of system-of-systems specification, verification and validation.
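The token-game semantics that make Petri nets analysable can be illustrated with a minimal sketch. This is only an illustrative reading of the activity-diagram-to-Petri-net mapping mentioned above: the class, the place and transition names and the two-step workflow are hypothetical, not taken from the thesis.

```python
# Minimal Petri net sketch: places hold tokens, and a transition fires when all
# of its input places are marked. Here activity-diagram actions map to
# transitions and control-flow edges map to places (an illustrative reading).

class PetriNet:
    def __init__(self):
        self.marking = {}        # place name -> token count
        self.transitions = {}    # transition name -> (input places, output places)

    def add_place(self, name, tokens=0):
        self.marking[name] = tokens

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

# Hypothetical two-action workflow: start -> assess target -> engage target -> done.
net = PetriNet()
for place, tokens in [("start", 1), ("assessed", 0), ("done", 0)]:
    net.add_place(place, tokens)
net.add_transition("assess_target", ["start"], ["assessed"])
net.add_transition("engage_target", ["assessed"], ["done"])

for t in ["assess_target", "engage_target"]:
    net.fire(t)
print(net.marking)   # {'start': 0, 'assessed': 0, 'done': 1}
```

Because the state space of such a net is explicit, reachability and deadlock questions can be checked mechanically, which is the analysis capability the abstract contrasts with plain UML Activity Diagrams.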
4

Online algorithms for temperature aware job scheduling problems

Birks, Martin David January 2012 (has links)
Temperature is an important consideration when designing microprocessors. When exposed to high temperatures, component reliability can be reduced, and some components fail completely above certain temperatures. We consider the design and analysis of online algorithms, in particular algorithms that use knowledge of the amount of heat a job will generate. We consider algorithms with two main objectives. The first is maximising job throughput. We show upper and lower bounds for the case where jobs are unit length, both when jobs are weighted and unweighted. Many of these bounds are matching for all cooling factors in both the single-machine and multiple-machine cases. We extend this to the single-machine case where jobs are longer than unit length. When all jobs are of equal length we show matching bounds for the case without preemption. We also show that both models of preemption enable at most a slight reduction in the competitive ratio of algorithms. We then consider jobs of variable length, analysing both the model with unweighted jobs and the model with weights proportional to job length. We show bounds that match within constant factors in the non-preemptive model and in both preemptive models. The second objective is minimising flow time. For the total flow time of a schedule, we show NP-hardness and inapproximability results for the offline case, and give an approximation algorithm for the case where all release times are equal. For the online case we give some negative results when maximum job heats are bounded, as well as results for a resource augmentation model, including a 1-competitive algorithm when the extra power available to the online algorithm is high enough. Finally, we consider the objective of minimising the maximum flow time of any job in a schedule.
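A minimal sketch of the kind of model typically used in this line of work, assuming unit-length jobs, a temperature that evolves as T ← (T + h)/R for a cooling factor R, and a hard temperature threshold. The greedy policy and the numbers below are illustrative assumptions, not the thesis' algorithms or bounds.

```python
# Sketch of a unit-length, temperature-bounded scheduling model (an assumed
# formalisation, not the thesis' exact definitions): in each time step the
# scheduler may run one pending job of heat h, and the temperature evolves as
# T <- (T + h) / R, where R > 1 is the cooling factor. Running a job is only
# allowed if the resulting temperature stays within the threshold (here 1.0).

def greedy_hottest_first(jobs_by_step, cooling_factor=2.0, threshold=1.0):
    """Run, at every step, the hottest pending job that keeps T <= threshold."""
    temperature = 0.0
    pending = []
    completed = 0
    for step_jobs in jobs_by_step:
        pending.extend(step_jobs)            # jobs released this step (heat values)
        pending.sort(reverse=True)           # try the hottest feasible job first
        for i, heat in enumerate(pending):
            if (temperature + heat) / cooling_factor <= threshold:
                temperature = (temperature + heat) / cooling_factor
                completed += 1
                del pending[i]
                break
        else:
            temperature /= cooling_factor    # idle step: the machine only cools
    return completed

# Example: job heats released over four steps; each job is unit length.
print(greedy_hottest_first([[1.8, 0.4], [1.9], [], [0.2]]))   # 3 jobs completed
```

Competitive analysis then compares the throughput of such an online policy against an optimal offline schedule that knows all releases and heats in advance.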
5

The domestication of home ubiquitous computing

Ely, Philip January 2012 (has links)
This thesis' primary concern is that of human interaction with entertainment, information and communication technology in the home. Its aim is to explore the situated realities of living with so-called ubiquitous computing technology through the study of an equivalent form of technology: entertainment, information and communication technologies. The thesis explores what entertainment, information and communication technologies are found in the home, how they get there and how they are incorporated into everyday life. The thesis takes an historical and theoretical look at the emergence of the ubiquitous computing paradigm and the growing interest in designing entertainment, information and communication technologies for the home. Through an in-depth qualitative study of five households in the UK conducted during a period of significant life-change, the thesis explores the ad-hoc nature of contemporary home ubiquitous computing environments. Using the conceptual framework of domestication theory as its starting point, the study analyses the moral, economic, social, material and practical dimensions to owning, using and maintaining an ad-hoc entertainment, information and communication environment, using specific empirical examples drawn from ethnographic data. Such an account of technology in the home provides for a necessary and contemporary view of living with ubicomp in the 21st century in the UK, a perspective that reveals just how involving (practically, financially and emotionally) living with technologies can actually be. As consumer interest in computing devices for gaming, communicating and information-gathering grows, ubiquitous computing visions articulated in research labs have been slow to understand the generative nature of home technology environments. The thesis provides not only empirical insights that have implications for the design of new ubiquitous computing devices and infrastructures for the home but also argues that a sociological study of the everyday realities of living with technology can provide the field of ubiquitous computing research with the heuristic tools by which it can understand the everyday 'messiness' of technology appropriation, incorporation and use.
6

Medium access control protocols for wireless body area networks

Timmons, Nicholas Francis January 2012 (has links)
The medical and economic potential of wireless Body Area Networks (BANs) is gradually being realised, especially in the area of medical remote monitoring and telemedicine. Medical BANs will employ both implantable and body-worn devices to support a diverse range of applications, with throughputs ranging from several bits per hour up to 10 Mbps. The main consideration of this thesis was the long-term power consumption of BAN devices, as these devices have to perform all associated functions such as networking, processing and RF communications, usually powered only by a small battery. Implantable devices are expected to have a lifetime of up to 10 years. The challenge was to accommodate this diverse range of applications within a single wireless network based on a suitably flexible and power-efficient medium access control (MAC) protocol. Analyses established that, in ultra-low data rate wireless sensor networks (WSNs), waking up just to listen to a beacon every superframe can be a major waste of energy. Based on these findings, a novel medical medium access control (MedMAC) protocol was developed, capable of providing energy-efficient and adaptable channel access in body area networks. The MedMAC protocol achieves significant energy efficiency through a novel synchronisation algorithm that allows a device to sleep through beacons while maintaining synchronisation. Energy efficiency simulations show that the MedMAC protocol outperforms the IEEE 802.15.4 protocol. Results from a comparative analysis of MedMAC and the emerging draft IEEE 802.15.6 wireless standard for BANs show that MedMAC has superior efficiency, with energy savings of between 25% and 87% for the presented scenarios. Overall, this work demonstrates a new mechanism for achieving significant energy savings for a significant sector of BAN devices that operate at ultra-low data rates.
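A back-of-the-envelope sketch of why sleeping through beacons saves energy. The current draws, wake-up duration and beacon interval below are hypothetical placeholders, not measurements or parameters from the thesis or from the MedMAC specification.

```python
# Rough energy comparison (illustrative numbers only) between a device that
# wakes for every superframe beacon and one that sleeps through beacons while
# staying synchronised, waking only for its own data slots.

def daily_energy_mj(wakeups_per_day, awake_ms_per_wakeup,
                    active_mw=60.0, sleep_mw=0.003):
    """Energy in millijoules over 24 h, given active and sleep power in mW."""
    day_ms = 24 * 60 * 60 * 1000
    awake_ms = wakeups_per_day * awake_ms_per_wakeup
    asleep_ms = day_ms - awake_ms
    return (active_mw * awake_ms + sleep_mw * asleep_ms) / 1000.0

beacon_interval_ms = 1000                                   # assumed 1 s superframe
beacons_per_day = 24 * 60 * 60 * 1000 // beacon_interval_ms
data_slots_per_day = 24                                     # ultra-low rate: hourly report

listen_every_beacon = daily_energy_mj(beacons_per_day + data_slots_per_day, 3.0)
sleep_through_beacons = daily_energy_mj(data_slots_per_day, 3.0)

saving = 1 - sleep_through_beacons / listen_every_beacon
print(f"{listen_every_beacon:.0f} mJ vs {sleep_through_beacons:.0f} mJ "
      f"({saving:.0%} saved)")
```

The lower the data rate, the more the beacon-listening overhead dominates, which is why the abstract singles out ultra-low data rate devices as the sector where the savings are largest.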
7

The ergonomics of wearable computers : implications for musculoskeletal loading

Knight, James Francis January 2002 (has links)
No description available.
8

Code memory compression technologies for embedded ARM/Thumb processors

Xu, Xianhong January 2007 (has links)
No description available.
9

Human interaction with digital ink : legibility measurement and structural analysis

Butler, Timothy S. January 2003 (has links)
Literature suggests that it is possible to design and implement pen-based computer interfaces that resemble the use of pen and paper. These interfaces appear to allow users freedom in expressing ideas and seem to be familiar and easy to use. Different ideas have been put forward concerning this type of interface; however, despite the commonality of aims and problems faced, there does not appear to be a common approach to their design and implementation. This thesis aims to progress the development of pen-based computer interfaces that resemble the use of pen and paper. To do this, a conceptual model is proposed for interfaces that enable interaction with "digital ink". This conceptual model is used to organise and analyse the broad range of literature related to pen-based interfaces, and to identify topics that are not sufficiently addressed by published research. Two issues highlighted by the model, digital ink legibility and digital ink structuring, are then investigated. In the first investigation, methods are devised to objectively and subjectively measure the legibility of handwritten script. These methods are then piloted in experiments that vary the horizontal rendering resolution of handwritten script displayed on a computer screen. Script legibility is shown to decrease once the rendering resolution drops below a threshold value. In the second investigation, the clustering of digital ink strokes into words is addressed. A method of rating the accuracy of clustering algorithms is proposed: the percentage of words spoiled. For a clustering algorithm that uses the geometric features of both the ink strokes and the gaps between them, the clustering error rate is found to vary among different writers. The work contributes a conceptual interface model, methods of measuring digital ink legibility, and techniques for investigating stroke clustering features to the field of digital ink interaction research.
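A minimal sketch of gap-based stroke clustering together with a "percentage of words spoiled" style score. The bounding-box representation, the gap threshold and the scoring rule here are assumptions for illustration, not the thesis' actual algorithm or metric definition.

```python
# Illustrative clustering of ink strokes into words using one geometric
# feature: the horizontal gap between consecutive strokes. Strokes are reduced
# to their horizontal bounding box (x_min, x_max); all details are assumptions
# for demonstration purposes.

def cluster_strokes(stroke_boxes, gap_threshold=12.0):
    """Group strokes into words: start a new word when the horizontal gap to
    the previous stroke exceeds gap_threshold (units are arbitrary)."""
    words, current = [], []
    prev_x_max = None
    for x_min, x_max in sorted(stroke_boxes):
        if prev_x_max is not None and x_min - prev_x_max > gap_threshold:
            words.append(current)
            current = []
        current.append((x_min, x_max))
        prev_x_max = x_max if prev_x_max is None else max(prev_x_max, x_max)
    if current:
        words.append(current)
    return words

def percent_words_spoiled(predicted, ground_truth):
    """Count a word as 'spoiled' unless exactly the same strokes form it in both."""
    truth = {tuple(w) for w in ground_truth}
    spoiled = sum(1 for w in predicted if tuple(w) not in truth)
    return 100.0 * spoiled / max(len(ground_truth), 1)

# Three strokes forming one word, a wide gap, then two strokes forming another.
strokes = [(0, 5), (6, 10), (11, 14), (40, 46), (48, 52)]
truth = [[(0, 5), (6, 10), (11, 14)], [(40, 46), (48, 52)]]
pred = cluster_strokes(strokes)
print(pred)
print(percent_words_spoiled(pred, truth), "% spoiled")
```

A fixed threshold like this is exactly the kind of parameter whose best value varies between writers, which is consistent with the writer-dependent error rates the abstract reports.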
10

An improved instruction-level power and energy model for RISC microprocessors

Wang, Wei January 2017 (has links)
Recently, the power and energy consumed by a chip has become a primary design constraint for embedded systems and is largely affected by software. Because aims vary with the application domain, the best program is sometimes the most power- or energy-efficient one rather than the fastest. However, there is a gap between software and hardware that makes it hard to predict which code consumes the least power without measurement. Therefore, it is vital to discover which factors affect a program's power and energy consumption. In this thesis we present an instruction-level model to estimate the power and energy consumed by a program. Firstly, instead of studying the different instructions individually, we cluster instructions into three groups: ALU, load and store. The power is affected by the percentage of each group in the program. Secondly, the power is affected by the instructions per cycle (IPC) of the program, since IPC reflects how fast the processor runs. This method has three advantages. The first is conciseness: it does not treat overhead energy as an independent factor, nor does it consider the operand Hamming distance of two consecutive instructions. The second is accuracy: the errors of our method across different benchmarks, with different processors on development boards, are all less than 10%. The last and most important advantage is that the method can be applied to different processors, such as the OpenRISC processor, ARM11, ARM Cortex-A8, and a dual-core ARM Cortex-A9 processor. We have demonstrated that the previous instruction-level power/energy model cannot be extended to superscalar processors and multi-core processors.
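One natural reading of such a model (an assumption on our part, not the thesis' published equation) is a linear fit of power against the instruction-group fractions and IPC. The sketch below fits and applies such a model to hypothetical profiling data.

```python
# A hedged sketch of an instruction-level power model in the spirit of the
# abstract: power is regressed on the fractions of ALU, load and store
# instructions and on IPC. The linear form, coefficients and sample data are
# assumptions for illustration, not the thesis' calibrated model or measurements.

import numpy as np

# Each row: [alu_fraction, load_fraction, store_fraction, ipc].
# Fractions need not sum to 1, since other instruction classes (branches, etc.)
# are outside the three modelled groups.
profiles = np.array([
    [0.55, 0.20, 0.10, 1.8],
    [0.45, 0.30, 0.12, 1.2],
    [0.50, 0.22, 0.15, 1.5],
    [0.62, 0.12, 0.08, 2.0],
    [0.48, 0.28, 0.14, 1.3],
])
measured_power = np.array([0.92, 0.78, 0.85, 0.99, 0.80])   # hypothetical, in watts

# Least-squares fit of P = c0 + c_alu*f_alu + c_load*f_load + c_store*f_store + c_ipc*IPC
X = np.hstack([np.ones((profiles.shape[0], 1)), profiles])
coeffs, *_ = np.linalg.lstsq(X, measured_power, rcond=None)

def predict_power(alu, load, store, ipc):
    """Predicted power (W) for a program with the given instruction mix and IPC."""
    return float(coeffs @ np.array([1.0, alu, load, store, ipc]))

# Energy then follows from predicted power and runtime: E = P * t.
p = predict_power(0.52, 0.24, 0.12, 1.6)
print(f"predicted power: {p:.2f} W, energy for a 2.5 s run: {p * 2.5:.2f} J")
```

In practice the coefficients would be calibrated per processor from measured board power, which is what allows one fitted form to be reused across the different cores the abstract lists.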
