241. A Generic Approach to Schedulability Analysis of Real-Time Systems. Fersman, Elena (2003)
This thesis presents a framework for the design, analysis, and implementation of embedded systems. We adopt a model of timed automata extended with asynchronous processes, i.e. tasks triggered by events. A task is an executable program characterized by its worst-case execution time and deadline, and possibly other scheduling parameters such as priority. The main idea is to associate each location of an automaton with a task (or a set of tasks). A transition leading to a location denotes an event triggering the tasks, and the clock constraint on the transition specifies the possible arrival times of the event. This yields a model for real-time systems expressive enough to describe concurrency and synchronization, and tasks with (or without) combinations of timing, precedence, and resource constraints, which may be periodic or sporadic, and preemptive and/or non-preemptive. We believe that the model may serve as a bridge between scheduling theory and automata-theoretic approaches to system modelling and analysis.

Our main result is that the schedulability checking problem for this model is decidable. To our knowledge, this is the first general decidability result on dense-time models for real-time scheduling that does not assume preemptions occur only at integer time points. The proof is based on a decidable class of updatable automata: timed automata with subtraction, in which clocks may be updated by subtraction within a bounded zone.

As the second contribution, we show that for the fixed-priority scheduling strategy, the schedulability checking problem can be solved by reachability analysis on standard timed automata using only two extra clocks in addition to the clocks used in the original model to describe task arrival times. The analysis can be done in a manner similar to response-time analysis in classic rate-monotonic scheduling. We believe that this is the optimal solution to the problem.

The third contribution is an extension of the above results to deal with precedence and resource constraints. We present an operational semantics for the model, and show that the related schedulability analysis problem can be solved efficiently using the same techniques. Finally, to demonstrate the applicability of the framework, we have modelled, analysed, and synthesised the control software for a production cell. The presented results have been implemented in the TIMES tool for automated schedulability analysis and code synthesis.
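The second contribution connects the automata-based analysis back to classical response-time analysis. As a minimal sketch of that classical baseline (not the decidability construction itself, and assuming strictly periodic tasks rather than automaton-triggered releases), the worst-case response time of a task under fixed-priority preemptive scheduling can be computed by fixed-point iteration:

import math

def response_times(tasks):
    # Classic fixed-priority response-time analysis.
    # tasks: list of (wcet, period, deadline), sorted by descending
    # priority (index 0 = highest). Returns the worst-case response time
    # of each task, or None for a task that can miss its deadline.
    results = []
    for i, (c_i, _t_i, d_i) in enumerate(tasks):
        r = c_i
        while True:
            # Interference from all higher-priority tasks released in [0, r).
            interference = sum(math.ceil(r / t_j) * c_j
                               for c_j, t_j, _d_j in tasks[:i])
            r_next = c_i + interference
            if r_next > d_i:          # deadline miss: unschedulable
                results.append(None)
                break
            if r_next == r:           # fixed point reached
                results.append(r)
                break
            r = r_next
    return results

# Example: three periodic tasks (wcet, period, deadline) in rate-monotonic order.
print(response_times([(1, 4, 4), (2, 6, 6), (3, 12, 12)]))   # -> [1, 3, 10]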
242. Efficient and Flexible Characterization of Data Locality through Native Execution Sampling. Berg, Erik (2005)
Data locality is central to modern computer designs. The widening gap between processor speed and memory latency has introduced the need for a deep hierarchy of caches. Thus, the performance of an application depends to a large extent on the amount of data locality the caches can exploit. Some data locality comes naturally from the way most programs are written and the way their data is allocated in memory. Compilers further try to create data locality through loop transformations and optimized data layout. Different ways of writing a program and/or laying out its data may improve an application's locality even more. However, it is far from obvious how such a locality optimization can be achieved, especially since the optimizing compiler may have left the optimization job half done. Thus, efficient tools are needed to guide software developers on their quest for data locality.

The main contribution of this dissertation is a novel sample-based method for analyzing the data locality of an application. Very sparse data is collected during a single execution of the studied application. The sparse sampling adds minimal overhead to the execution time, which enables complex applications running realistic data sets to be studied. The architecture-independent information collected during the execution is fed to a mathematical cache model for predicting the cache miss ratio. The sparsely collected data can be used to characterize the application's data locality with respect to almost any possible cache hierarchy, such as complicated multiprocessor memory systems with multilevel cache hierarchies. Any combination of cache size, cache line size, and degree of sharing can be modeled. Each new modeled design point takes only a fraction of a second to evaluate, even though the application from which the sampled data was collected may have executed for hours. This makes the tool useful not just for software developers, but also for hardware developers who need to evaluate a huge memory-system design space.

We also discuss different ways of presenting data-locality information to a programmer in an intuitive and easily interpreted way. Some of the locality metrics we introduce utilize the flexibility of our algorithm and its ability to vary different cache parameters for one run. The dissertation also presents several prototype implementations of tools for profiling the memory system.
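The key property, that one sparse profile feeds many modeled cache configurations, can be illustrated with a simplified stack-distance model (an assumption for illustration; the dissertation's actual model is probabilistic and driven by sparsely sampled reuse distances from native execution):

import random

def sampled_stack_distances(trace, rate=0.05, seed=1):
    # Sample LRU stack distances from a trace of cache-line addresses.
    # The stack distance of an access is the number of distinct lines
    # touched since the previous access to the same line (None = cold).
    # Only a random ~rate fraction of accesses is recorded; a production
    # profiler would avoid full tracing entirely.
    rng = random.Random(seed)
    stack = []                    # most-recently-used line at index 0
    samples = []
    for line in trace:
        if line in stack:
            depth = stack.index(line)
            stack.remove(line)
        else:
            depth = None          # first touch
        stack.insert(0, line)
        if rng.random() < rate:
            samples.append(depth)
    return samples

def predicted_miss_ratio(samples, cache_lines):
    # Fully associative LRU model: a sampled access misses if its stack
    # distance is at least the cache capacity (or it was a cold reference).
    misses = sum(1 for d in samples if d is None or d >= cache_lines)
    return misses / len(samples) if samples else 0.0

# The same samples characterize many design points cheaply:
trace = [i % 64 for i in range(10000)]        # toy looping workload
samples = sampled_stack_distances(trace, rate=0.1)
for size in (16, 32, 64, 128):
    print(size, round(predicted_miss_ratio(samples, size), 3))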
243. Memory System Design for Chip-Multiprocessors. Karlsson, Martin (2005)
The continued decrease in transistor size and the increasing delay of wires relative to transistor switching speeds have led to the development of chip multiprocessors (CMPs). The introduction of CMPs presents new challenges and trade-offs to computer architects. In particular, architects must now balance the allocation of chip resources to each processor against the number of processors on a chip. This thesis deals with some of the implications this new kind of processor has for the memory system and proposes several new designs based on the resource constraints of CMPs. In addition, it includes contributions on simulation techniques and workload characterization, which are used to guide the design of new processors and systems.

The memory system is the key to performance in contemporary computer systems. This thesis targets multiple aspects of memory system performance. To conserve bandwidth, and thereby packaging costs, a fine-grained data fetching strategy is presented that exploits characteristics of runahead execution. Two cache organizations are proposed: the RASCAL cache, which targets capacity misses through selective caching, and the Elbow cache, which targets conflict misses by extending a skewed cache with a relocation algorithm. Finally, to reduce complexity and cost when designing multi-chip systems, a new trap-based system architecture is described.

When designing a new processor or memory system, simulations are used to compare design alternatives. It is therefore very important to simulate workloads that accurately reflect the future use of the system. This thesis includes the first architectural characterization studies of Java-based middleware, a workload that is an important design consideration for the next generation of processors and servers.
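To make the skewed-cache idea concrete, here is a toy two-bank skewed cache with an Elbow-style relocation step (a sketch under simplifying assumptions; the hash functions are invented and the actual Elbow cache's relocation and victim-selection policies are more elaborate):

class SkewedCache:
    # Toy 2-way skewed cache: two banks with different indexing hashes,
    # so addresses that conflict in one bank usually map to different
    # sets in the other.
    def __init__(self, sets):
        self.sets = sets
        self.banks = [{}, {}]          # bank -> {set index: line address}

    def _index(self, bank, addr):
        if bank == 0:
            return addr % self.sets
        return ((addr >> 3) ^ addr) % self.sets   # assumed skewing hash

    def access(self, addr):
        # Returns True on hit. On a miss with both candidate slots full,
        # try to relocate ("elbow") an occupant into its alternative slot
        # instead of evicting it outright.
        idx = [self._index(b, addr) for b in (0, 1)]
        for b in (0, 1):                           # hit in either bank?
            if self.banks[b].get(idx[b]) == addr:
                return True
        for b in (0, 1):                           # free candidate slot?
            if idx[b] not in self.banks[b]:
                self.banks[b][idx[b]] = addr
                return False
        victim = self.banks[0][idx[0]]
        alt = self._index(1, victim)
        if alt not in self.banks[1]:
            self.banks[1][alt] = victim            # relocation, no eviction
        self.banks[0][idx[0]] = addr               # (otherwise victim is evicted)
        return False

cache = SkewedCache(sets=8)
for a in (0, 8, 16, 0):          # 0, 8, 16 all conflict in bank 0
    print(a, cache.access(a))    # the skewing spreads them across bank 1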
244. Lean Thinking Applied to System Architecting. Gustavsson, Håkan (2011)
Software-intensive systems are an increasing part of new products, which makes their business impact significant. This is especially true for the automotive industry, where a very large share of new innovation is realized through software. The architecture of a software-intensive system will enable value creation when working properly or, in the worst case, prevent it. Lean thinking is about focusing on increasing customer value and on the people who add value. This thesis investigates how system architecting is performed in industry and how it can be improved by the use of Lean thinking.

The architecting process does not create immediate value for the end customer, but rather creates the architecture on which value, in terms of product features and functionality, can be developed. A Lean tool used to improve the value creation of a process is Value Stream Mapping (VSM). We present a method based on VSM that is adapted to enable analysis of the architecting process in order to identify improvements. A study of architecting at two companies shows what effect differences such as a strong line organization or a strong project organization have on the architecting process. It also shows what consequences technical choices and business strategy have for the architecting process. To improve the understanding of how architecting is performed, a study was conducted in which architects at six internationally well-known companies were interviewed. The study presents the practices found to be most successful, and the contexts of the different companies as well as their architecting practices are compared and analyzed.

The early design decisions made when developing software-intensive systems are crucial to the outcome of development projects. To improve the decision-making process, a method was developed based on Real Options. The method improves the customer focus of critical design decisions by taking the value of flexibility into account (illustrated in the sketch below). This thesis provides a toolbox of knowledge on how Lean thinking can be applied to system architecting and also presents how architecting is performed in industry today.
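As a hedged illustration of the Real Options intuition (generic one-period numbers, not the thesis's method or data): an architecture that defers a commitment until uncertainty resolves can be worth more than committing immediately, because bad outcomes are truncated at zero:

def option_value(v_up, v_down, p_up, cost_now, defer_cost=0.0):
    # One-period real-options comparison (illustrative only).
    # Committing now: expected payoff minus cost, taken regardless of outcome.
    # Deferring (the flexible architecture): decide after uncertainty
    # resolves, so losing outcomes are simply not pursued.
    commit = p_up * v_up + (1 - p_up) * v_down - cost_now
    defer = (p_up * max(v_up - cost_now, 0.0)
             + (1 - p_up) * max(v_down - cost_now, 0.0)) - defer_cost
    return commit, defer

# A feature worth 100 if the market demands it, 10 otherwise, costing 40;
# keeping the decision open costs 5 in extra architectural work:
print(option_value(v_up=100.0, v_down=10.0, p_up=0.5, cost_now=40.0,
                   defer_cost=5.0))    # -> (15.0, 25.0): flexibility wins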
245. A Personalised Case-Based Stress Diagnosis System Using Physiological Sensor Signals. Begum, Shahina (2011)
Stress is an increasing problem in today's world. It is recognised that increased exposure to stress may cause serious health problems if undiagnosed and untreated. In stress medicine, clinicians measure blood pressure, electrocardiogram (ECG), finger temperature, and respiration rate during a number of exercises to diagnose stress-related disorders. However, in practice it is difficult and tedious for a clinician to understand, interpret, and analyze complex, lengthy sequential sensor signals, and there are few experts able to diagnose and predict stress-related problems. Therefore, a system that can help clinicians in diagnosing stress is important.

This research work has investigated artificial intelligence techniques for developing an intelligent, integrated sensor system to establish diagnosis and treatment plans in the psychophysiological domain. The research uses physiological parameters, i.e., finger temperature (FT) and heart rate variability (HRV), for quantifying stress levels. Large individual variations in physiological parameters are one reason why case-based reasoning is applied as a core technique to facilitate experience reuse by retrieving previous similar cases. Feature extraction methods to represent important features of the original signals for case indexing are investigated. Furthermore, fuzzy techniques are employed and incorporated into the case-based reasoning system to handle the vagueness and uncertainty inherent in clinicians' reasoning.

The evaluation of the approach is based on close collaboration with experts and on measurements of FT and HRV from ECG data. The approach has been evaluated with clinicians and trial measurements on subjects (24+46 persons). An expert has ranked and estimated the similarity for all the subjects during classification. The results show that the system reaches a level of performance close to that of an expert in both cases. The proposed system could be used as an expert aid for a less experienced clinician, or as a second opinion for an experienced clinician, to support decision-making tasks in stress diagnosis.
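A minimal sketch of the retrieval step in such a system (hypothetical feature names, weights, and tolerances; the thesis's indexing, fuzzy machinery, and case structure are richer): each feature comparison uses a triangular fuzzy similarity, so small measurement differences still count as near-matches:

def fuzzy_local_sim(q, c, tol):
    # Triangular fuzzy similarity: 1.0 when q == c, falling linearly to
    # 0.0 once the difference reaches tol. The fuzziness absorbs sensor
    # noise and individual variation in the physiological readings.
    return max(0.0, 1.0 - abs(q - c) / tol)

def similarity(query, case, weights, tolerances):
    # Weighted global similarity over the indexed features of a case.
    total = sum(weights.values())
    s = sum(w * fuzzy_local_sim(query[f], case[f], tolerances[f])
            for f, w in weights.items())
    return s / total

def retrieve(query, library, weights, tolerances, k=3):
    # The CBR "retrieve" step: rank stored cases by similarity to the query.
    return sorted(library,
                  key=lambda c: similarity(query, c["features"],
                                           weights, tolerances),
                  reverse=True)[:k]

# Hypothetical normalized features: finger-temperature slope and an HRV index.
library = [
    {"features": {"ft_slope": 0.2, "hrv": 0.8}, "diagnosis": "relaxed"},
    {"features": {"ft_slope": 0.9, "hrv": 0.2}, "diagnosis": "stressed"},
    {"features": {"ft_slope": 0.6, "hrv": 0.4}, "diagnosis": "mildly stressed"},
]
query = {"ft_slope": 0.8, "hrv": 0.3}
weights = {"ft_slope": 0.6, "hrv": 0.4}
tolerances = {"ft_slope": 0.5, "hrv": 0.5}
for case in retrieve(query, library, weights, tolerances, k=2):
    print(case["diagnosis"])          # -> stressed, mildly stressed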
246. Flexible Scheduling for Media Processing in Resource Constrained Real-Time Systems. Isovic, Damir (2004)
The MPEG-2 standard for video coding is predominant in consumer electronics for DVD players, digital satellite receivers, and TVs today. MPEG-2 processing puts high demands on audio/video quality, which is achieved by continuous and synchronized playout without interruptions. At the same time, there are restrictions imposed by the storage media, e.g., the limited size of a DVD disc; the communication media, e.g., the limited bandwidth of the Internet; the display devices, e.g., the processing power, memory, and battery life of pocket PCs or video mobile phones; and finally the users, i.e., the human ability to perceive motion. If the available resources are not sufficient to process a full-size MPEG-2 video, then video stream adaptation must take place. However, this should be done carefully, since in high-quality devices, drops in perceived video quality are not tolerated by consumers.

We propose real-time methods for resource reservation of MPEG-2 video stream processing and introduce flexible scheduling mechanisms for video decoding. Our method is a mixed offline and online approach to scheduling of periodic, aperiodic, and sporadic tasks, based on slot shifting. We use the offline part of slot shifting to eliminate all types of complex task constraints before the runtime of the system. Then, we propose an online guarantee algorithm for dealing with dynamically arriving tasks. Aperiodic and sporadic tasks are incorporated into the offline schedule by making use of the unused resources and leeways in the schedule. Sporadic tasks are guaranteed offline for the worst-case arrival patterns and scheduled online, where an online algorithm keeps track of arrivals of instances of sporadic tasks to reduce pessimism about future sporadic arrivals and to improve response times and acceptance of firm aperiodic tasks. At runtime, our mechanism ensures feasible execution of tasks with complex constraints in the presence of additional tasks or overloads.

We use the scheduling and resource reservation mechanisms above to flexibly process MPEG-2 video streams. First, we present results from a study of realistic MPEG-2 video streams to analyze the validity of common assumptions for software decoding and identify a number of misconceptions. Then, we identify constraints imposed by frame buffer handling and discuss their implications for the decoding architecture and timing. Furthermore, we propose realistic timing constraints demanded by high-quality MPEG-2 software video decoding. Based on these, we present an MPEG-2 video frame selection algorithm focused on the video quality perceived by the users, which fully utilizes limited resources. Given that not all frames in a stream can be processed, it selects those that will provide the best picture quality while matching the available resources, starting only decoding that is guaranteed to complete. As a final result, we provide a real-time method for flexible scheduling of media processing in resource-constrained systems. Results from a study based on realistic MPEG-2 video underline the effectiveness of our approach.
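The frame-selection problem can be sketched as follows (a greedy illustration under assumed per-frame decode costs, not the thesis's algorithm): I frames are the most valuable, because P frames decode from the preceding reference and B frames need references on both sides, so selection must respect those dependencies while staying within the decode budget that can be guaranteed:

def prev_ref(frames, i):
    # Index of the reference frame (I or P) immediately preceding frame i.
    for j in range(i - 1, -1, -1):
        if frames[j][0] in ("I", "P"):
            return j
    return None

def next_ref(frames, i):
    # Index of the reference frame immediately following frame i.
    for j in range(i + 1, len(frames)):
        if frames[j][0] in ("I", "P"):
            return j
    return None

def select_frames(frames, budget):
    # frames: (kind, decode cost) in display order; budget: total decode
    # time that can be guaranteed. Frames are admitted in priority order
    # I > P > B, and only when every reference they decode from is already
    # admitted -- a skipped reference makes its dependents unusable.
    selected = set()
    spent = 0.0
    for prio in ("I", "P", "B"):
        for i, (kind, cost) in enumerate(frames):
            if kind != prio or spent + cost > budget:
                continue
            if kind == "P" and prev_ref(frames, i) not in selected:
                continue
            if kind == "B" and (prev_ref(frames, i) not in selected
                                or next_ref(frames, i) not in selected):
                continue
            selected.add(i)
            spent += cost
    return sorted(selected)

# A typical group of pictures, I B B P B B P B B, with only 8 time units:
gop = [("I", 3), ("B", 1), ("B", 1), ("P", 2),
       ("B", 1), ("B", 1), ("P", 2), ("B", 1), ("B", 1)]
print(select_frames(gop, budget=8))    # -> [0, 1, 3, 6]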
247. Predicting Quality Attributes in Component-based Software Systems. Larsson, Magnus (2004)
No description available.
248. Fault Diagnosis of Industrial Machines using Sensor Signals and Case-Based Reasoning. Olsson, Erik (2009)
Industrial machines sometimes fail to operate as intended. Such failures can be more or less severe depending on the kind of machine and the circumstances of the failure. For example, the failure of an industrial robot can cause a hold-up of an entire assembly line, costing the affected company large amounts of money for every minute on hold. Research is rapidly moving forward in the area of artificial intelligence, providing methods for efficient fault diagnosis of industrial machines. The nature of fault diagnosis of industrial machines lends itself naturally to case-based reasoning. Case-based reasoning is a method in the discipline of artificial intelligence based on the idea of assembling experience from problems and their solutions as "cases" for reuse in solving future problems. Cases are stored in a case library, available for retrieval and reuse at any time. By collecting sensor data such as acoustic emission and current measurements from a machine and representing this data as the problem part of a case, and consequently representing the diagnosed fault as the solution to this problem, a complete record of a machine failure and its diagnosed fault can be stored in a case for future use.
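A minimal sketch of this problem-solution pairing (hypothetical features and case contents; a real system would use richer signal processing, such as frequency-domain features): raw sensor recordings are condensed into a feature vector that forms the problem part of a case, and diagnosis reuses the solution part of the nearest stored case:

import math

def extract_features(signal):
    # Condense a raw sensor recording (e.g. acoustic emission) into the
    # problem part of a case: RMS level, peak amplitude, zero-crossing rate.
    n = len(signal)
    rms = math.sqrt(sum(x * x for x in signal) / n)
    peak = max(abs(x) for x in signal)
    zcr = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0) / (n - 1)
    return (rms, peak, zcr)

def diagnose(signal, case_library):
    # Retrieve the stored case nearest in feature space and reuse its
    # solution part as the suggested diagnosis.
    q = extract_features(signal)
    best = min(case_library, key=lambda case: math.dist(q, case["features"]))
    return best["diagnosis"]

# Hypothetical library built from previously diagnosed failures:
library = [
    {"features": (0.2, 0.5, 0.10), "diagnosis": "normal operation"},
    {"features": (0.9, 2.5, 0.45), "diagnosis": "worn bearing"},
]
reading = [0.3 * math.sin(0.8 * t) for t in range(200)]   # a quiet machine
print(diagnose(reading, library))                          # -> normal operation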
249. Data Management in Vehicle Control-Systems. Nyström, Dag (2005)
As the complexity of vehicle control systems increases, the amount of information that these systems are intended to handle also increases. This thesis provides concepts relating to real-time database management systems to be used in such control systems. By integrating a real-time database management system into a vehicle control system, data management at a higher level of abstraction can be achieved. Current database management concepts are not sufficient for use in vehicles, and new concepts are necessary; a case study at Volvo Construction Equipment Components AB in Eskilstuna, Sweden, presented in this thesis, together with a survey of existing database platforms, confirms this. The thesis specifically addresses data access issues by introducing: (i) a data access method, denoted database pointers, which enables data in a real-time database management system to be accessed efficiently. Database pointers, which resemble regular pointer variables, permit individual data elements in the database to be pointed out directly, without risking a violation of database integrity. (ii) Two concurrency-control algorithms, denoted 2V-DBP and 2V-DBP-SNAP, which enable critical (hard real-time) and non-critical (soft real-time) data accesses to co-exist, without blocking the hard real-time data accesses or risking unnecessary aborts of the soft real-time data accesses. The thesis shows that 2V-DBP significantly outperforms a standard real-time concurrency-control algorithm, with respect both to lower response times and to fewer aborts. (iii) Two concepts, denoted substitution and subscription queries, that enable service and diagnostics tools to stimulate and monitor a control system during run-time. The concepts presented in this thesis form a basis on which a data management concept suitable for embedded real-time systems, such as vehicle control systems, can be built. / A modern vehicle today is, in principle, entirely controlled by embedded computers. As the functionality of vehicles increases, the software in these computers becomes more and more complex. Complex software is difficult and costly to construct. To manage this complexity and ease construction, industry is now investing in methods for constructing these systems at a higher level of abstraction. These methods aim to structure the software into its different functional parts, for example by using so-called component-based software development. However, these methods are not effective at handling the increasing amount of information that follows from the increasing functionality of the systems. Examples of information to be handled are data from sensors distributed throughout the vehicle (temperatures, pressures, engine speeds, etc.), control data from the driver (e.g., steering-wheel angle and throttle position), parameter data, and log data used for service diagnostics. This information can be classified as safety-critical, since it is used to control the behaviour of the vehicle. Recently, however, the amount of non-safety-critical information has grown, for example in comfort systems such as multimedia, navigation, and passenger-ergonomics systems. This thesis aims to show how a data management system for embedded systems, such as vehicle systems, can be constructed. By using a real-time database management system to raise data management to a higher level of abstraction, vehicle systems can be allowed to handle large amounts of information in a much simpler way than in current systems.
Such a data management system gives system architects the means to structure and model the information in a logical and comprehensible way. The information can then be read and updated through standardized interfaces adapted to different types of functionality. The thesis specifically treats the problem of how information in the database can be shared, with the aid of a concurrency-control algorithm, by both safety-critical and non-safety-critical system functions in the vehicle. It further discusses how information can be distributed both between different computer systems within the vehicle and to diagnostics and service tools that can be connected to the vehicle.
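The database-pointer idea, resolving the query predicate once at initialization and thereafter reaching the element through a handle that still respects database integrity, can be sketched as follows (an illustrative Python model with invented names; the thesis's mechanism lives inside a real-time DBMS with bounded-time accesses, and the per-cell lock here merely stands in for the 2V-DBP concurrency control):

import threading

class RealTimeDB:
    # Minimal in-memory stand-in for a real-time DBMS; data elements are
    # addressed by (relation, key, attribute).
    def __init__(self):
        self._cells = {}
        self._locks = {}

    def put(self, rel, key, attr, value):
        cell = (rel, key, attr)
        self._cells[cell] = value
        self._locks.setdefault(cell, threading.Lock())

    def bind(self, rel, key, attr):
        # Resolve the predicate once, at system initialization; the returned
        # pointer then reaches the element without any run-time query.
        cell = (rel, key, attr)
        if cell not in self._cells:
            raise KeyError(cell)
        return DatabasePointer(self, cell)

class DatabasePointer:
    # Behaves like a pointer variable, but every dereference goes through
    # the database's concurrency control, so integrity is preserved.
    def __init__(self, db, cell):
        self._db, self._cell = db, cell

    def read(self):
        with self._db._locks[self._cell]:
            return self._db._cells[self._cell]

    def write(self, value):
        with self._db._locks[self._cell]:
            self._db._cells[self._cell] = value

db = RealTimeDB()
db.put("engine", "sensor42", "temperature", 88.0)
temp = db.bind("engine", "sensor42", "temperature")   # once, at startup
temp.write(91.5)                                      # fast element access
print(temp.read())                                    # -> 91.5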
250. Autopositionering för röntgensystem / Auto Positioning for X-ray Systems. Marchal, Jakob; Andreasen, Mathias (2014)
X-ray imaging is a field where one may ask whether the process could be automated to make it simpler for nurses. Automation would increase the number of patients who can be X-rayed, since each examination would be faster. With the help of computer vision and a servo-controlled X-ray camera, parts of this vision can be realized by letting the X-ray camera adjust itself to a patient and even move to a selected body part. Here, the open-source library OpenCV is examined and tested. A prototype of an automatic system is developed with the purpose of testing OpenCV's functionality and answering a number of questions: How can X-ray imaging be automated through the use of an open-source software library focused on image processing? What advantages and disadvantages might the use of a computer-vision library have? Can an automatic solution that could be turned into a commercial product be developed with today's technology?
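A minimal sketch of the core positioning step (the cascade file and image name are placeholders, and the face detector stands in for a body-part detector; the thesis prototype's detectors and control loop are not specified here): detect the target region with OpenCV and convert its displacement from the image centre into an offset a servo controller could act on:

import cv2

def camera_offset(frame, cascade_path):
    # Detect the target region and return its offset (dx, dy) in pixels
    # from the image centre; a controller would translate this offset
    # into servo commands for the camera mount.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(cascade_path)
    hits = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(hits) == 0:
        return None                                    # nothing found: hold position
    x, y, w, h = max(hits, key=lambda r: r[2] * r[3])  # largest detection
    h_img, w_img = gray.shape
    return (x + w / 2 - w_img / 2, y + h / 2 - h_img / 2)

frame = cv2.imread("patient.png")                      # hypothetical input image
cascade = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
print(camera_offset(frame, cascade))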