21

Compiling Concurrent Programs for Manycores

Gebrewahid, Essayas January 2015
The arrival of manycore systems enforces new approaches for developing applications in order to exploit the available hardware resources. Developing applications for manycores requires programmers to partition the application into subtasks, consider the dependences between the subtasks, understand the underlying hardware and select an appropriate programming model. This is complex, time-consuming and prone to error. In this thesis, we identify and implement abstraction layers in compilation tools to decrease the burden on the programmer, increase programming productivity and program portability for manycores, and analyze their impact on performance and efficiency. We present compilation frameworks for two concurrent programming languages, occam-pi and the CAL Actor Language, and demonstrate the applicability of the approach with application case studies targeting three different manycore architectures: STHorm, Epiphany and Ambric. For occam-pi, we have extended the Tock compiler and added a backend for STHorm. We evaluate the approach using a fault tolerance model for a four-stage 1D-DCT algorithm implemented using occam-pi's constructs for dynamic reconfiguration, and the FAST corner detection algorithm, which demonstrates the suitability of occam-pi and the compilation framework for data-intensive applications. We also present a new CAL compilation framework which has a front end, two intermediate representations and three backends: for a uniprocessor, Epiphany, and Ambric. We show the feasibility of our approach by compiling a CAL implementation of the 2D-IDCT for the three backends. We also present an evaluation and optimization of code generation for Epiphany by comparing the code generated from CAL with a hand-written C implementation of the 2D-IDCT.
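The CAL Actor Language compiled by this framework is built on the dataflow-actor model: independent actors connected by token queues, each firing when enough input tokens are available. As a minimal illustrative sketch (not the thesis's compiler; the two actor behaviors are invented), a pipeline of actors can be modeled like this:

```python
from collections import deque

class Actor:
    """Minimal dataflow actor: fires when enough input tokens are queued."""
    def __init__(self, consume, action):
        self.inbox = deque()
        self.consume = consume      # tokens required per firing
        self.action = action        # firing function: list of tokens -> list of tokens

    def can_fire(self):
        return len(self.inbox) >= self.consume

    def fire(self):
        tokens = [self.inbox.popleft() for _ in range(self.consume)]
        return self.action(tokens)

def run(source, a, b):
    """Feed tokens into actor a, pipe its output into b, run to quiescence."""
    a.inbox.extend(source)
    out = []
    while a.can_fire() or b.can_fire():
        if a.can_fire():
            b.inbox.extend(a.fire())
        if b.can_fire():
            out.extend(b.fire())
    return out

# A two-actor pipeline: scale each token, then sum tokens pairwise.
scale = Actor(1, lambda ts: [2 * ts[0]])
summer = Actor(2, lambda ts: [ts[0] + ts[1]])
print(run([1, 2, 3, 4], scale, summer))  # [6, 14]
```

A compiler such as the one described would map each actor and its queues onto cores of a manycore fabric instead of interleaving them in one loop, which is where backend-specific code generation comes in.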
22

Investigating Fog- and Cloud-based Control Loops for Future Smart Factories

Ismahil, Dlovan January 2017
In recent years, internet connectivity has multiplied vastly, and more and more computation and information storage has moved to the cloud. Similar to other types of networks, industrial systems also see an increase in the number of communicating devices. Introducing wireless communication into industrial systems, instead of the currently used wired networks, will allow interconnection of all kinds of stationary and mobile machinery, robots and sensors, and thereby bring multiple benefits. Moreover, recent developments in cloud and fog computing open many new opportunities in control, analysis and maintenance of industrial systems. Wireless systems are easy to install and maintain, and relocating data analysis and control services from local controllers to the cloud can make resource-heavy computations possible and improve collaboration between different parts of a plant, or several plants, as cloud servers can store information and be accessible from all of them. However, even though the introduction of wireless communication and cloud services brings many benefits, new challenges in fulfilling industrial requirements arise: packet delivery rates might be affected by disturbances in wireless channels, data storage on distant servers might introduce timing and security issues, and resource allocation and reservation for controllers supervising multiple processes should be considered to provide real-time services. The main goal of this thesis work is to consider design possibilities for a factory including local and cloud controllers, i.e., to look at how the work of the factory should be organized and where control decisions should be made, and to analyze the pros and cons of making the decisions at local (fog) and cloud servers.
To narrow down the problem, an example factory with two independent wireless networks (each consisting of one sensor, one actuator and one local control node) and a cloud controller controlling both of them is considered. The selected structure allows all the questions of interest to be considered, while a prototype can be built using the equipment available for this thesis work.
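To illustrate why controller placement matters, here is a toy simulation with entirely hypothetical numbers (a first-order integrator plant, a proportional gain of 0.5, and a cloud round trip of three sample periods): the same controller performs much worse when its commands arrive late over the network.

```python
def simulate(delay_steps, k=0.5, steps=30):
    """Integrator plant driven toward setpoint 1.0 by a P controller whose
    command is actuated 'delay_steps' sample periods late."""
    x = 0.0
    pending = [0.0] * delay_steps          # commands in flight on the network
    for _ in range(steps):
        u = k * (1.0 - x)                  # control decision on the current state
        pending.append(u)
        x += pending.pop(0)                # oldest command finally reaches the actuator
    return x

local = simulate(delay_steps=0)            # fog/local loop: negligible delay
cloud = simulate(delay_steps=3)            # cloud loop: three-period round trip
print(abs(1.0 - local), abs(1.0 - cloud))  # delay leaves a much larger tracking error
```

The sketch only captures one of the trade-offs discussed above (latency); the cloud side's advantages, such as plant-wide information and ample computing resources, do not appear in so small a model.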
23

Att styra en konferensenhet med gester / To control a conference unit with gestures

Thelander Lööf, Hans January 2018
The project was conducted in cooperation with a company that manufactures conference units. Conference units usually consist of a user interface aimed at the person who sets up the meeting, limiting the other participants' ability to interact with the device. The purpose of the project is to make it easier for the other participants to interact with the conference unit, which can be done by means of gestures. The aim of the project is to produce a product for evaluation and testing. The product should be able to convert detected gestures into commands to control speaker volume and microphone activation. After a comparison of different methods and techniques, phase-based gesture detection with an infrared proximity sensor was selected.
Two PCBs were made, one holding a microcontroller and one holding a proximity sensor with three infrared LEDs. Software was developed for gesture detection. Tests were done with hand movements to the right, left, up and down at a distance of approx. 30 cm between hand and sensor. The result shows detection of four different gestures. In future development, these can be converted into commands that control speaker volume as well as the microphone on the conference unit.
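Gesture detection of this kind infers swipe direction from the order in which each LED's reflection peaks. A hedged sketch, assuming a hypothetical LED layout (left, right, bottom around the sensor) and invented peak times; the actual firmware's phase-based algorithm is not described in the abstract:

```python
def classify_swipe(peaks):
    """Infer swipe direction from the times at which each IR LED's reflection
    peaked. Assumed (hypothetical) layout: LEDs at left, right and bottom of
    the sensor; the LED nearest the hand's entry side peaks first."""
    lr = peaks["right"] - peaks["left"]                          # >0: left LED peaked first
    ud = (peaks["left"] + peaks["right"]) / 2 - peaks["bottom"]  # >0: bottom LED peaked first
    if abs(lr) >= abs(ud):                                       # strongest axis wins
        return "right" if lr > 0 else "left"
    return "up" if ud > 0 else "down"

# Hand sweeping left-to-right: left LED peaks first, right LED last.
print(classify_swipe({"left": 0.10, "right": 0.25, "bottom": 0.17}))  # right
```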
24

Data communication for near shore applications

Stetenfeldt, Andreas January 2017
The wave energy conversion concept developed at Uppsala University is based on a buoy at sea level that is connected to a linear generator on the sea bed. The movements of the buoy riding the waves are converted into electricity by the reciprocal movements of the translator inside the generator. To compensate for the negative impact of water level variations on power production, which is especially important at sites with a high tidal range, a sea level compensation system to be placed on the buoy was developed. During development, the system used cellphone technology to communicate, which can be power demanding and is dependent on adequate cellphone reception. Since future wave power parks could be located up to 10 km offshore, in rural areas of developing countries, a new approach is needed for communication with the sea level compensation system that does not depend on cellphone reception at sea. In this report, a review of the regulations for radio communication and radio equipment in Sweden, Spain, Nigeria, Ghana and India is presented, together with research into different possibilities for communication. Moreover, a new system for sending commands and receiving telemetry has been developed and tested for basic functionality, range and power efficiency. Due to differences in the countries' regulations and uncertainties about conditions at the future sites of deployment, the programs in the system are designed to be easily adapted to function with different radios, depending on the country of interest and the conditions at the site. Hence, a system layout has been proposed rather than a specific communication solution. The experimental setup has been tested over land with license-free radios, over a range of 10 km in the vicinity of Uppsala. In the test, 100% of the transmitted commands were received and acknowledged within three attempts.
The new control system for the buoys reduced energy consumption by 90% compared to the previous development system.
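The "received and acknowledged within three attempts" behavior can be sketched as a simple retry-until-ACK loop. This is an illustrative model with an invented lossy-link abstraction, not the actual radio software from the report:

```python
import random

def send_command(cmd, link, max_attempts=3):
    """Transmit 'cmd' over an unreliable link; retry until an ACK arrives or
    the attempt budget is spent. Returns the attempt that succeeded, or None."""
    for attempt in range(1, max_attempts + 1):
        if link(cmd):                      # link returns True when an ACK came back
            return attempt
    return None

def lossy_link(loss_prob, rng):
    """Toy radio link: each command (or its ACK) is lost with probability loss_prob."""
    return lambda cmd: rng.random() >= loss_prob

rng = random.Random(1)
results = [send_command("SET_LEVEL", lossy_link(0.3, rng)) for _ in range(1000)]
delivered = sum(1 for r in results if r is not None)
print(delivered / 1000)   # ~0.97: a command fails only if all three attempts are lost
```

With an independent 30% loss rate the failure probability per command is 0.3³ ≈ 2.7%, which is why even a modest retry budget yields high delivery rates.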
25

Behavioral modelling of embedded software using execution traces

Khandeparkar, Satej January 2017
Software updates made by developers often achieve their intended purpose, but these updates may also lead to anomalous behavior previously unknown to the developers. This might be due to their interaction with other parts of the system. If the developers had a tool which could help them visually see these changes as a behavioral model, it would help them know how the changes have affected the behavior of the system, empowering them to fix any side effects or bugs that arise as part of their update. Thus, in order to visualize and compare learned behavioral models, a tool was created which models the behavior from traces generated by scenarios, based on related work in the area of inferring models of software systems. This tool was specifically intended for embedded software. To compare changes based on updates and functional changes of embedded software, behavioral models of scenarios were obtained for different versions of a Real Time Operating System (RTOS) kernel. The visual comparison algorithm proved to be effective in visualizing the differences between behavioral models for a particular scenario across the versions.
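One common way to learn a behavioral model from execution traces, in the spirit of the related work such tools build on, is to record observed event-to-event transitions and then diff the resulting transition relations between versions. A minimal sketch (a 1-history model with invented event names, not the tool's actual algorithm):

```python
def learn_model(traces):
    """Learn a transition relation from execution traces: each trace is a
    sequence of events; states are the events themselves (1-history model)."""
    edges = set()
    for trace in traces:
        prev = "START"
        for event in trace:
            edges.add((prev, event))
            prev = event
    return edges

def diff_models(old, new):
    """Transitions introduced or dropped by an update, for visual comparison."""
    return {"added": new - old, "removed": old - new}

v1 = learn_model([["init", "lock", "work", "unlock"]])
v2 = learn_model([["init", "lock", "work", "work", "unlock"]])
print(diff_models(v1, v2))   # the update introduced a work -> work self-loop
```

Real model-inference algorithms merge states more aggressively (e.g. k-tails), but even this toy diff shows how a version change surfaces as added or removed edges in the model.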
26

Applications of decision diagrams in digital circuit design

Lindgren, Per January 1999
Design methodology of digital circuits is a rapidly changing field. In the last 20 years, the number of transistors on a single chip has increased from thousands to tens of millions. This sets new demands on the design tools involved, their ability to capture specifications at a high level, and finally synthesize them into hardware implementations. The introduction of Decision Diagrams (DDs) has brought new means towards solving many of the problems raised by the increasing complexity of today's designs. In this thesis, we study their use in VLSI CAD and develop a number of novel applications. Incomplete specifications are inherent to the functionality of almost all digital circuits. We present a design methodology providing a common basis between design validation and logic synthesis, namely the semantics of Kleenean strong ternary logic. This is called for since commonly used design methodologies, based e.g. on VHDL, are shown to put design correctness in jeopardy. By an extension of DDs, we can efficiently represent and manipulate incompletely specified functions. The method presented not only guarantees correctness of the final circuit, but also offers potential for expressing and utilizing incompleteness in ways other methodologies are incapable of. The increasing density and speed of today's target technologies also changes the conditions for logic synthesis; e.g., traditional quality measures based on gate delays are becoming less accurate as interconnect delays grow in significance. To address this problem we propose methodologies allowing quality measures of the final circuit to be foreseen and considered throughout the whole synthesis process. In general this is a very hard task. We approach the problem by limiting our synthesis methodologies to those rendering regular layouts (such as computational arrays and lattices).
The regularity allows us to predict properties of the final circuit and, at the same time, ensure that design criteria are met, e.g., path delays and routability of the final circuit. In this thesis, we develop new design methodologies and their algorithms. Our experimental results show that they offer significant improvements over both state-of-the-art two-level and multi-level tools in the area of layout-driven synthesis. Our minimization methods are based on Pseudo Kronecker Decision Diagrams (PKDDs), the most general type of ordered bit-level diagrams for switching functions. In the thesis we elaborate on the properties of PKDDs and Ternary PKDDs (TPKDDs) and develop an efficient minimization method based on local variable exchange for TPKDDs. Furthermore, the problem of PKDD minimization is discussed and a number of different strategies are introduced and evaluated; the potential compactness of PKDDs is confirmed. The thesis spans from validation and verification of high-level specifications all the way down to layout-driven synthesis, combining logic minimization, mapping and routing to the target architecture at hand. We conclude that our work offers new means towards solving many of the crucial problems occurring along the design process of modern digital circuits.
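Kleene's strong ternary logic, the semantic basis mentioned above for handling incomplete specifications, adds an "unknown" value that propagates only when the known inputs do not already determine the output. A small sketch of its connectives (using Python's None for unknown; an illustration of the logic itself, not of the thesis's DD extension):

```python
U = None  # Kleene's "unknown" truth value; 0 and 1 are ordinary false/true

def t_not(a):
    return U if a is U else 1 - a

def t_and(a, b):
    if a == 0 or b == 0:
        return 0            # a known 0 decides the output even against unknown
    if a is U or b is U:
        return U
    return 1

def t_or(a, b):
    return t_not(t_and(t_not(a), t_not(b)))   # derived via De Morgan

# An incompletely specified input need not make the output unknown:
print(t_and(0, U), t_or(1, U))  # 0 1
```

This "dominating known value" behavior is exactly what lets a synthesis flow reason soundly about don't-cares instead of silently resolving them.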
27

Synthesis of Extremely Large Time-Triggered Network Schedules

Pozo, Francisco January 2017
Many embedded systems with real-time requirements demand minimal jitter and low end-to-end communication latency from their communication networks. The time-triggered paradigm, adopted by many real-time protocols, was designed to cope with these demands. A cost-efficient way to implement this paradigm is to synthesize a static schedule that indicates the transmission times of all time-triggered frames such that all requirements are met. Synthesizing this schedule can be seen as a bin-packing problem, known to be NP-complete, with complexity driven by the number of frames. In recent years, requirements on the amount of data being transmitted and the scalability of the network have increased. A proposed solution adapts real-time switched Ethernet to benefit from its high bandwidth. However, this adds more complexity to computing the schedule, since every frame is distributed over multiple links. Tools like Satisfiability Modulo Theories solvers were able to cope with the added complexity and synthesize schedules for networks of industrial size. Despite the success of such tools, applications are appearing that require embedded systems with even more complex networks. In the future, real-time embedded systems, such as large factory automation or smart cities, will need extremely large hybrid networks, combining wired and wireless communication, with schedules that cannot be synthesized with current tools in a reasonable amount of time. With this in mind, the first thesis goal is to identify the performance limits of Satisfiability Modulo Theories solvers in schedule synthesis. Given these limitations, the next step is to define and develop a divide-and-conquer approach for decomposing the entire scheduling problem into smaller, easily solvable subproblems. However, there are constraints that relate frames from different subproblems. These constraints need to be treated differently and taken into account at the start of every subproblem.
The third thesis goal is to develop an approach that is able to synthesize schedules when frame constraints belonging to different subproblems are inter-dependent. The last goal is to define the requirements that the integration of wireless communication in hybrid networks will bring to schedule synthesis, and how to cope with the increased complexity. We demonstrate the viability of our approaches by means of evaluations, showing that our method is capable of synthesizing schedules of hundreds of thousands of frames in less than 5 hours.
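At its core, the scheduling problem assigns each frame a transmission offset on its links so that transmissions on the same link never overlap within the period. A toy greedy earliest-fit sketch (invented frame data; the thesis uses SMT solvers and a divide-and-conquer decomposition rather than this heuristic):

```python
def schedule(frames, period):
    """Greedy earliest-offset assignment of frame transmissions to links.

    'frames' is a list of (frame_id, link, duration). Each frame gets an
    offset so that transmissions on the same link never overlap and finish
    within the period. Returns frame_id -> offset, or None if this greedy
    order fails (an SMT solver would instead search exhaustively)."""
    busy = {}                                   # link -> list of (start, end)
    offsets = {}
    for fid, link, dur in frames:
        t = 0
        for start, end in sorted(busy.get(link, [])):
            if t + dur <= start:
                break                           # frame fits in this idle gap
            t = max(t, end)                     # otherwise slide past the interval
        if t + dur > period:
            return None                         # infeasible with this greedy order
        busy.setdefault(link, []).append((t, t + dur))
        offsets[fid] = t
    return offsets

frames = [("f1", "l0", 3), ("f2", "l0", 2), ("f3", "l1", 4), ("f4", "l0", 2)]
print(schedule(frames, period=10))  # {'f1': 0, 'f2': 3, 'f3': 0, 'f4': 5}
```

The combinatorial blow-up the thesis tackles comes from constraints this sketch ignores: frames spanning multiple links in order, end-to-end latency bounds, and constraints coupling frames across subproblems.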
28

Model based Design of a Sailboat Autopilot

Ruzicka, Theophil January 2017
No description available.
29

Tools to Compile Dataflow Programs for Manycores

Gebrewahid, Essayas January 2017
The arrival of manycore systems enforces new approaches for developing applications in order to exploit the available hardware resources. Developing applications for manycores requires programmers to partition the application into subtasks, consider the dependences between the subtasks, understand the underlying hardware and select an appropriate programming model. This is complex, time-consuming and prone to error. In this thesis, we identify and implement abstraction layers in compilation tools to decrease the burden on the programmer, increase program portability and scalability, and increase the retargetability of the compilation framework. We present compilation frameworks for two concurrent programming languages, occam-pi and the CAL Actor Language, and demonstrate the applicability of the approach with application case studies targeting five different manycore architectures: STHorm, Epiphany, Ambric, EIT, and ePUMA. For occam-pi, we have extended the Tock compiler and added a backend for STHorm. We evaluate the approach using a fault tolerance model for a four-stage 1D-DCT algorithm implemented using occam-pi's constructs for dynamic reconfiguration, and the FAST corner detection algorithm, which demonstrates the suitability of occam-pi and the compilation framework for data-intensive applications. For CAL, we have developed a new compilation framework, namely Cal2Many. The Cal2Many framework has a front end, two intermediate representations and four backends: for a uniprocessor, Epiphany, Ambric, and SIMD-based architectures. We have also identified and implemented CAL actor fusion and fission methodologies for efficient mapping of CAL applications. We have used QRD, FAST corner detection, 2D-IDCT, and MPEG applications to evaluate our compilation process and to analyze the limitations of the hardware.
30

Lock-Based Resource Sharing for Real-Time Multiprocessors

Afshar, Sara January 2017
Embedded systems are widely used in industry and are typically resource constrained, i.e., resources such as processors, I/O devices, shared buffers or shared memory might be limited in the system. Hence, techniques that enable efficient usage of processor bandwidth in such systems are of great importance. Lock-based resource sharing protocols have been proposed as a solution to overcome resource limitations by allowing the available resources in the system to be safely shared. In recent years, due to a dramatic enhancement in the functionality of systems, a shift from single-core to multi-core processors has become inevitable from an industrial perspective to tackle the challenges raised by increased system complexity. However, resource sharing protocols are not yet fully mature for multi-core processors. The two classical multi-core resource sharing protocols, spin-based and suspension-based, although providing mutually exclusive access to resources, can introduce long blocking delays to tasks, which may be unacceptable for many industrial applications. In this thesis we enhance the performance, in terms of timing and memory requirements, of resource sharing protocols for partitioned scheduling, which is the de-facto scheduling standard for industrial real-time multi-core systems such as AUTOSAR. A newer scheduling approach uses a resource-efficient hybrid of partitioned and global scheduling, where partitioned scheduling is used for the majority of tasks in the system. In such an approach, applications with critical task sets use partitioned scheduling to achieve a higher level of predictability, and the bandwidth on each core left unused by partitioning is then used to schedule less critical task sets under global scheduling to achieve higher system utilization.
This scheduling scheme, however, lacks a proper resource sharing protocol, since the existing protocols designed for partitioned and global scheduling cannot be directly applied due to the complex hybrid structure of these scheduling frameworks. In this thesis we propose a resource sharing solution for such a complex structure. Further, we derive the blocking bounds incurred by tasks under the proposed protocols and extend the schedulability analysis, an essential requirement for real-time systems, with the provided blocking bounds.
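Blocking bounds of the kind derived in such analyses often take a simple form: under FIFO non-preemptive spin locks, each resource request can be delayed by at most one (longest) critical section per remote core. A hedged sketch of that classic bound (invented task set; not the thesis's specific protocols or analysis):

```python
def spin_blocking_bound(core_of, csl, task, resource):
    """Upper bound on per-request blocking under FIFO non-preemptive spin
    locks: at most one (longest) critical section per remote core can be
    ahead of the request in the lock's queue.

    core_of maps task -> core; csl maps (task, resource) -> worst-case
    critical-section length for that resource."""
    my_core = core_of[task]
    remote_cores = {c for c in core_of.values() if c != my_core}
    bound = 0
    for core in remote_cores:
        worst = max(
            (length for (t, r), length in csl.items()
             if r == resource and core_of[t] == core),
            default=0,           # no remote task on this core uses the resource
        )
        bound += worst
    return bound

core_of = {"t1": 0, "t2": 1, "t3": 1, "t4": 2}
csl = {("t1", "R"): 2, ("t2", "R"): 5, ("t3", "R"): 3, ("t4", "R"): 4}
print(spin_blocking_bound(core_of, csl, "t1", "R"))  # 9 = 5 (core 1) + 4 (core 2)
```

Suspension-based protocols trade this spinning away for extra preemptions and context switches, which is exactly the timing-versus-overhead trade-off the thesis's analysis has to capture.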
