301 |
Power Savings in MPSoC. Savvidis, Ioannis January 2009 (has links)
High-performance integrated circuits have suffered a steady increase in dissipated power per square millimeter of silicon over the past years. This trend is driven by the miniaturization of CMOS processes, the increasing packing density of transistors and the rising clock frequencies of microchips, pushing heat removal and power distribution to the forefront of the problems confronting the advance of microelectronics. Pulling in the opposite direction is the growing market for mainstream portable devices, which require extremely low power consumption. Together these factors have turned power dissipation into a major design metric. This thesis assembles the knowledge and methodological tools that can offer a preliminary safe path toward less power-hungry SoC and MPSoC designs, contributing to a holistic treatment of power-related effects. It does so by providing the essential theoretical background of CMOS power dissipation, investigating a wide range of power-saving techniques, and classifying them according to the power component each technique suppresses and the abstraction level at which it can be applied, thus facilitating informed decisions about which power-saving techniques to apply to a given design. Moreover, the thesis implements, demonstrates and evaluates generic power analysis and optimization flows based on the ASIC industry's de facto standard Synopsys tools. The tools' actual capabilities are contrasted with the theoretical expectations, and the chief tradeoffs involved, speed versus accuracy and attainable power savings versus abstraction level, are highlighted.
Our extracted power results for a large Ericsson ASIC block show that by addressing power early, enhancing typical synthesis flows with an appropriate set of techniques, significant savings can be achieved for both the dynamic and static power components in the front-end synthesis domain.
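The first-order CMOS power relationships that the thesis's theoretical background rests on can be sketched briefly; all numeric values below are hypothetical illustrations, not figures from the thesis:

```python
# First-order CMOS power model (sketch). Values are hypothetical.

def dynamic_power(alpha, c_load, v_dd, freq):
    """Switching power: P_dyn = alpha * C * Vdd^2 * f."""
    return alpha * c_load * v_dd ** 2 * freq

def static_power(i_leak, v_dd):
    """Leakage power: P_stat = I_leak * Vdd."""
    return i_leak * v_dd

# Halving Vdd cuts dynamic power by 4x (quadratic dependence), which is
# why voltage-scaling techniques figure so prominently in classifications
# of power-saving methods.
p_nominal = dynamic_power(alpha=0.2, c_load=1e-9, v_dd=1.0, freq=1e9)
p_scaled = dynamic_power(alpha=0.2, c_load=1e-9, v_dd=0.5, freq=1e9)
```

The quadratic voltage term is the reason techniques acting at high abstraction levels (supply scaling, clock gating) usually yield larger savings than low-level tweaks.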
|
302 |
Implementing a state snapshot for the EMCA Simulator. Saghazadeh, Sahar January 2014 (has links)
The aim of this master's thesis was to design and develop snapshot functionality for the EMCA hardware simulator system, which is used at Ericsson AB for implementing digital signal processing functions. The literature review covers background knowledge on modeling, simulation, virtual machines and hardware simulators. The simulator system is reviewed and the design of its snapshot functionality is explained. The performance of the snapshot functionality in saving and retrieving the visible state information has been evaluated. The results show that the developed functionality saves and retrieves this information well. Moreover, the effect of the invisible simulator state has been investigated, and some suggestions are presented for further work.
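The save/restore idea behind such a snapshot can be illustrated with a toy model; the simulator class, its fields and its method names below are assumptions for illustration, not the EMCA simulator's actual interface:

```python
import copy

# Toy simulator with only registers and memory as visible state.
class ToySimulator:
    def __init__(self):
        self.registers = {"pc": 0, "acc": 0}
        self.memory = [0] * 16

    def step(self):
        # Trivial "instruction": increment accumulator, advance pc.
        self.registers["acc"] += 1
        self.registers["pc"] += 1

    def snapshot(self):
        """Capture all visible state as a deep copy."""
        return copy.deepcopy({"registers": self.registers,
                              "memory": self.memory})

    def restore(self, snap):
        """Reinstate a previously captured state."""
        self.registers = copy.deepcopy(snap["registers"])
        self.memory = copy.deepcopy(snap["memory"])

sim = ToySimulator()
sim.step(); sim.step()
snap = sim.snapshot()
sim.step()          # diverge past the snapshot point
sim.restore(snap)   # roll back to the saved state
```

The hard part the thesis investigates, invisible state that is not exposed through any interface, is exactly what a deep copy of visible fields cannot capture.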
|
303 |
Design and Implementation of a Requirements Handler over Farkle and Optima. Majidi, Arghavan January 2014 (has links)
This master's thesis is part of an ongoing project called the industrial Framework for Embedded Systems Tools, iFEST. iFEST is an EU-funded project for developing a tool integration framework to facilitate hardware/software co-design and life-cycle aspects in the development of embedded systems. This reduces engineering life-cycle costs and time-to-market for complex embedded system projects. Test Manager is part of the testing framework; it invokes test cases that exercise the functionality of other components of the system, namely "Optima" and "Farkle". When a test is done, Test Manager retrieves the test results from the system and returns them to the tester. The final implementation and integration of Test Manager is not within the scope of this thesis work. However, a pilot version of Test Manager was implemented as a working prototype to get stakeholders' feedback and to validate the initial requirements and design. After iterating on the requirements and establishing criteria for the optimum design, different design alternatives went through an AHP (Analytic Hierarchy Process) decision-making process to arrive at an ideal design model. This process was applied to four different aspects of the design model: the integration model, the choice of programming language, the choice between a web and a desktop user interface, and the choice of database system. For each of these four choices, different options are presented in the literature study. The final design model is the outcome of the AHP analysis.
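The AHP priority calculation used to rank such design alternatives can be sketched as follows; the pairwise-comparison matrix is a made-up example, not the thesis's actual judgments:

```python
# AHP priority vector via the common column-normalization approximation
# of the principal eigenvector. Judgments below are illustrative only.

def ahp_priorities(matrix):
    """Column-normalize the pairwise matrix, then average each row."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(normalized[i]) / n for i in range(n)]

# Pairwise judgments for three hypothetical criteria (Saaty 1-9 scale):
# criterion 0 is 3x as important as 1 and 5x as important as 2.
comparisons = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
weights = ahp_priorities(comparisons)  # priorities, summing to 1.0
```

Each design alternative is then scored against each criterion the same way, and the weighted scores decide the "ideal design model".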
|
304 |
Performance evaluation and analysis of Barrelfish using Simics. Olczak, Mateusz January 2013 (has links)
Personal computing hardware is becoming ever more complex as more cores are added; it is moving from being a multi-core to a many-core system. In the next ten years we can expect to see hundreds of cores on a single chip. It is also very likely that we will see more specialized hardware coexisting with general-purpose processing units. The cache-coherent shared-memory operating systems of today do not scale well on the hardware of tomorrow. As the number of cores grows, so does the complexity of the interconnects, and the hardware cache-coherence protocols that today's shared-memory operating systems rely upon become increasingly expensive, with greater overhead. As a result, it is entirely possible that the operating systems of tomorrow will have to handle non-coherent memory. The expected increase in hardware diversity, together with issues such as cache coherency on hundred-core systems, poses new challenges for operating system designers. Barrelfish is a research operating system whose purpose is to explore the operating-system design of the future. It is a multi-kernel operating system for multi-core systems and uses message passing for communication between kernels. Barrelfish assumes no shared memory, although it does not explicitly forbid it. The purpose of this thesis is to evaluate the performance of Barrelfish using Simics with a modeled approximation of an existing cache structure from a modern processor. A Wool-ported version of the Barcelona OpenMP Tasks Suite was used for workload simulation, and a comparison between Linux and Barrelfish has been made.
|
305 |
Analysis of the Impact of TD-LTE on mobile broadband. Chen, Xi January 2014 (has links)
Mobile broadband services have developed rapidly over the years, becoming critical revenue components of operator business. To accommodate users' growing interest in mobile broadband services and applications, global operators have tried to reorganize their former voice-centered networks, focusing more on reinforcing the performance and capacity of the mobile data network. TD-LTE, one of the emerging contenders among mobile broadband solutions, has gained global attention and momentum. Unlike previous work on technical evaluations or market predictions, this thesis provides a techno-economic analysis of the TD-LTE system, linking its technological characteristics to market opportunities and implementation strategies. To help TD-LTE operators identify the profit potential of the system, the services and applications that TD-LTE could enable are discussed together with an analysis of terminal developments, which are critical to end users' acceptance of TD-LTE. Network deployment strategies are examined, and methods for implementing a sound TD-LTE mobile data network are analyzed with cost efficiency in mind. The analysis finds that the availability of spectrum resources is of greatest importance for the adoption of TD-LTE technology, and that the sustainable growth of TD-LTE business relies on a differentiation strategy for services and applications. The feasibility study shows that TD-LTE enables varied network deployment scenarios, including an integrated network solution with legacy networks and a convergent network solution with LTE FDD. The cost analysis finds that site infrastructure sharing is beneficial for cost reduction during a national-level rollout, especially as traffic volume increases. Reusing the coverage and capacity of the legacy network is most effective in the initial phase, and the TD-LTE deployment pace should follow the data-traffic predictions for the live network.
For compact and indoor scenarios, TD-LTE femtocells could be a cost-effective alternative for mobile broadband access; however, the bottleneck of this solution is the limit on the number of concurrent connections per femtocell.
|
306 |
Study of Mobile Payment Services in India : Distribution of the roles, responsibilities and attitudes amongst actors of the payment systems. Singh Sambhy, Gurpreet January 2014 (has links)
Information technology and payment systems have witnessed the introduction, acceptance and wide-scale deployment of electronic payment systems. The payment ecosystem has now seen the introduction of mobile payment systems and their associated services. The major actors in mobile payment systems are telecom operators, banks, merchants and consumers. They need to pool their resources and develop a coherent ecosystem that helps the individual actors while also benefiting the overall mobile payment ecosystem. Financial institutions and mobile carriers are becoming increasingly interested and have started collaborating to provide mobile payment capabilities, paving the way for the migration of payment systems from card-based to phone-based. In a developing country like India, mobile payment systems have experienced rapid growth, deployment and acceptance in a very short span of time. However, these systems are still far from mature and need to be customized and refined further in order to equal or surpass the deployment and acceptance of electronic payment systems. Mobile payment services primarily target the unbanked population, but also consider the existing population with active bank accounts, especially in developing countries. The thesis describes the rapid growth and development of payment systems in India and the gradual shift from e-payment to m-payment systems. The key mobile payment systems described in the thesis include, but are not limited to, Nokia Money and Airtel Money. The key findings have been supplemented with SWOT analyses, ARA model analyses and Ansoff matrix models of the mobile payment systems in the Indian market.
The business models described in the thesis have been analyzed against a few key factors, and the analysis shows that the biggest challenge in deploying mobile payment systems stems from uncertainties in the environment, which result in a lack of acceptance network, interoperability and accessibility for everyone in society (including both educated and uneducated users).
|
307 |
Solar Energy Control System Design. Yang, Sun January 2013 (has links)
This thesis covers the design, simulation and implementation of a solar energy control system for an on-grid energy storage device. The design covers several control methods, such as energy balance control, operating-mode switching and data exchange. A genetic algorithm was designed to optimize the control system's parameters, and the algorithm's simulation and real-time operating-system implementations showed comparable results. The control system was implemented to connect a power supply to the grid. The power supply simulated a solar panel and was connected to an electrical grid via Energy Hub equipment, and the energy transfer characteristics of the designed control system were tested. The results show that the selected algorithm meets the target performance criteria.
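The role of the genetic algorithm, searching for control parameters that minimize a cost function, can be sketched roughly; the quadratic cost and the target gains below stand in for the thesis's real energy-balance objective and are pure assumptions:

```python
import random

TARGET = [0.8, 0.3]  # hypothetical "ideal" controller gains

def cost(params):
    """Stand-in objective: squared distance from the target gains."""
    return sum((p - t) ** 2 for p, t in zip(params, TARGET))

def evolve(pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 2), rng.uniform(0, 2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]           # selection: keep best half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # averaging crossover
            if rng.random() < 0.3:                       # occasional mutation
                i = rng.randrange(len(child))
                child[i] += rng.gauss(0, 0.1)
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
```

Because the best half always survives, the best cost never worsens between generations, which makes the search easy to compare between a simulated run and a real-time implementation.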
|
308 |
Genium Data Store : Distributed Data Store. Proskurnia, Iuliia January 2013 (has links)
In recent years the need for distributed data storage has driven the design of new systems for large-scale environments. The growth of unbounded streams of data, and the necessity to store and analyze them in real time, reliably, scalably and fast, explain the appearance of such systems in the financial sector, and at the stock exchange Nasdaq OMX in particular. Furthermore, an internally designed, reliable, totally ordered message bus is used at Nasdaq OMX for almost all internal subsystems. Reliable totally ordered multicast has been studied extensively in academia, both theoretically and practically, and has been proven to serve as a fundamental building block for distributed fault-tolerant applications. In this work, we leverage the NOMX low-latency reliable totally ordered message bus, with a capacity of at least 2 million messages per second, to build a high-performance distributed data store. Consistency of data operations is easily achieved because the bus forwards all messages in reliable total order. Moreover, relying on this ordering, active in-memory replication is integrated for fault tolerance and load balancing. A prototype was developed against production-environment requirements to demonstrate feasibility. Experimental results show great scalability and performance: around 400,000 insert operations per second over 6 data nodes, served with 100-microsecond latency. Latency for single-record read operations stays below half a millisecond, while data ranges are retrieved at sub-100 Mbps capacity from one node. Moreover, performance improves with a greater number of data store nodes for both writes and reads. It is concluded that uniformly sequenced, totally ordered input data can be used in real time for large-scale distributed data storage to maintain strong consistency, fault tolerance and high performance.
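Why a reliable totally ordered bus makes active replication simple can be sketched in a few lines: every node applies the same operations in the same global sequence, so replicas converge without any extra coordination. The operation format and names here are illustrative assumptions, not the Genium Data Store's actual protocol:

```python
# State-machine replication over a totally ordered log (sketch).

class ReplicaNode:
    def __init__(self):
        self.store = {}     # key -> value
        self.last_seq = 0

    def apply(self, seq, op, key, value=None):
        # Total order guarantee: no gaps, no reordering.
        assert seq == self.last_seq + 1
        if op == "insert":
            self.store[key] = value
        elif op == "delete":
            self.store.pop(key, None)
        self.last_seq = seq

# The message bus acts as the sequencer, assigning one global
# sequence number per operation.
ordered_log = [
    (1, "insert", "trade:1", 100),
    (2, "insert", "trade:2", 250),
    (3, "delete", "trade:1", None),
]

nodes = [ReplicaNode() for _ in range(3)]
for node in nodes:
    for seq, op, key, value in ordered_log:
        node.apply(seq, op, key, value)
# All replicas end up in identical states.
```

Reads can then be load-balanced across replicas without consistency checks, since every replica that has applied sequence number N holds the same state.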
|
309 |
A Domain-Specific Search Engine With Explicit Document Relations. Li, Zhongmiao January 2013 (has links)
The current web consists of documents that are highly heterogeneous and hard for machines to understand. The Semantic Web is a progressive movement of the World Wide Web aiming to convert the current web of unstructured documents into a web of data. In the Semantic Web, documents are annotated with metadata using standardized ontology languages. Such annotated documents are directly processable by machines, which greatly improves their usability and usefulness. Similar problems occur at Ericsson. Massive numbers of documents are created with well-defined structures. Although these documents contain domain-specific knowledge and can have rich relations, they are currently managed by a traditional search engine, which ignores the rich domain-specific information and presents little of it to users. Motivated by the Semantic Web, we aim to find standard ways to process these documents, extract rich domain-specific information and annotate the documents with formal markup languages. We propose this project to develop a domain-specific search engine that processes different documents and builds explicit relations between them. The research project has three main focuses: examining different domain-specific documents and finding ways to extract their metadata; integrating a text search engine with an ontology server; and exploring novel ways to build relations between documents. We implement this system and demonstrate its functions. As a prototype, the system provides the required features and will be extended in the future.
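One way to combine full-text search with explicit document relations can be sketched as a tiny inverted index plus a relation graph that lets a hit pull in its related documents; the class, field names and sample documents are illustrative assumptions, not the thesis's design:

```python
from collections import defaultdict

class TinyDomainIndex:
    def __init__(self):
        self.postings = defaultdict(set)   # term -> doc ids
        self.relations = defaultdict(set)  # doc id -> related doc ids

    def add_document(self, doc_id, text, related=()):
        for term in text.lower().split():
            self.postings[term].add(doc_id)
        for other in related:
            self.relations[doc_id].add(other)
            self.relations[other].add(doc_id)  # symmetric relation assumed

    def search(self, term, expand=True):
        hits = set(self.postings.get(term.lower(), set()))
        if expand:                             # follow explicit relations
            for doc in list(hits):
                hits |= self.relations[doc]
        return hits

idx = TinyDomainIndex()
idx.add_document("spec-1", "radio interface specification")
idx.add_document("guide-1", "installation guide", related=["spec-1"])
hits = idx.search("specification")
```

With relation expansion enabled, a query for "specification" surfaces the related installation guide even though the guide never mentions the query term, which is exactly the benefit a plain text search engine misses.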
|
310 |
Techniques and applications of early approximate results for big-data analytics. Brundza, Vaidas January 2013 (has links)
The amount of data processed by large-scale data processing frameworks is overwhelming. To improve efficiency, such frameworks employ data parallelization and similar techniques. However, expectations keep growing: near real-time data analysis is desired. MapReduce is one of the most common large-scale data processing models in the area. Owing to the batch-processing nature of this framework, results are returned only after job execution has finished. As data grows, a batch operating environment is not always preferred: a large number of applications can take advantage of early approximate results. This was first addressed by the online aggregation technique, applied to relational databases. Recently it has been adapted to the MapReduce programming model, but with a focus on technical rather than data processing details. In this thesis project we survey the techniques that can enable early estimation of results. We propose several modifications to the MapReduce Online framework and show that our proposed design changes possess the properties required for accurate result estimation. We present an algorithm for data-bias reduction and block-level sampling. We then describe the implementation of our proposed system design and evaluate it with a number of selected applications and datasets. With our system, a user can calculate the average temperature of a 100 GB weather dataset six times faster (compared to complete job execution) with as little as 2% error.
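The online-aggregation idea, reporting a running estimate with a rough confidence interval while the job is still scanning data instead of waiting for the full pass, can be sketched as follows; the synthetic temperature data below stands in for the thesis's 100 GB weather dataset:

```python
import math
import random

def online_mean(stream, report_every=1000):
    """Yield (n, mean, half_width) estimates as records stream in."""
    n = 0
    total = 0.0
    total_sq = 0.0
    for x in stream:
        n += 1
        total += x
        total_sq += x * x
        if n % report_every == 0:
            mean = total / n
            var = max(total_sq / n - mean * mean, 0.0)
            # ~95% confidence half-width, assuming i.i.d. samples.
            half_width = 1.96 * math.sqrt(var / n)
            yield n, mean, half_width

rng = random.Random(0)
data = [rng.gauss(15.0, 5.0) for _ in range(10_000)]  # synthetic temperatures
estimates = list(online_mean(data))
first_n, first_mean, first_hw = estimates[0]
```

A user can stop the job as soon as the interval is tight enough, which is where the six-fold speedup at ~2% error comes from; the i.i.d. assumption is why the thesis needs block-level sampling and bias reduction on real, non-randomly-ordered data.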
|