391 |
Portal för kvinnligt affärsnätverk
Cassel, Ludvig, Karstorp, Viking, January 2014 (has links)
Visionen bakom produkten, som utvecklas i detta arbete, är att på ett effektivt och lättillgängligt sätt tillhandahålla en portal på internet för professionellt nätverkande inriktat specifikt mot kvinnor. Önskan är att samla befintliga professionella nätverk till en plats och erbjuda tjänster för rekrytering. Ansatsen är att bygga en portal med hjälp av öppen källkod som sedan anpassas efter specifika önskemål. Rapporten beskriver hur ett team av webbutvecklare i projektform bygger upp och vidareutvecklar en kombination mellan mötesplats och professionellt nätverk. Vidare beskrivs hur teamet arbetade med metodiken ”incremental build model” för att hantera kraven från beställaren samt hur projektmodellen PDLC användes för att hantera hela produktens livscykel. Genom att hantera kraven i en iterativ och inkrementell metodik kunde dessa kontinuerligt kontrolleras mot visionen för produkten. De tekniska problem som uppstod under utvecklingen av produkten diskuteras utifrån utvecklingsteamets perspektiv med beställarens krav i fokus. Vidare presenteras lärdomar i projektet rörande den tekniska utvecklingen samt erfarenheter av metodiken och modellen som använts. Avslutningsvis dras slutsatser kring hur projektet genomförts, alternativa lösningar samt hur en eventuell vidareutveckling av produkten skulle kunna utföras. Resultatet är en lättanvänd portal för professionella och sociala aktiviteter där den önskade funktionaliteten samlats. En svaghet i produkten är den grafiska formgivningen. En resurs med kunskap inom området hade egentligen behövts. / The vision of the project was the development of a web portal, designed for professional networking and community building directed at women. The idea was to gather small networks under a common platform which would combine the communicative asset of a community site with the functional asset of a recruitment site. This report describes how a team of web developers builds and further develops a combination of a community and a professional network. The utilization of the "incremental build model" for dealing with the requirements is presented, including the development of the product and how the project model “PDLC” was used to manage the entire lifecycle of the product. The requirements could continuously be checked against the vision of the product by managing them in this iterative and incremental methodology. The technical problems that arose during the development of the product are discussed from the perspective of the development team with the client’s requirements and vision in mind. Further, lessons learned in the project concerning the technological development and the experiences of the methodology and model used are presented. Finally, conclusions are drawn about how the project was carried out, alternative solutions, and how further development of the product could be made. The outcome of the project is an easy-to-use portal for professional and social activities where the requested functionality is present. A weakness in the product is the graphic design. A resource with expertise in that area would have been necessary.
|
392 |
Analysis of Automatic Parallelization Methods for Multicore Embedded Systems
Frantzén, Fredrik, January 2014 (has links)
There is a demand for reducing the cost of porting legacy code to different embedded platforms. One such system is the multicore system, which allows higher performance with lower energy consumption and is a popular solution in embedded systems. In this report, I have made an evaluation of a number of open source tools supporting the parallelization effort. The evaluation is made using a set of small, highly parallel programs and two complex face recognition applications that show what the current advantages and disadvantages of different parallelization methods are. The results show that parallelization tools are not able to parallelize code automatically without substantial human involvement. Therefore it is more profitable to parallelize by hand. The outcome of the study is a number of guidelines on how to parallelize a program and a set of requirements that serves as a basis for designing an automatic parallelization tool for embedded systems. / Det finns ett behov av att minska kostnaderna för portning av legacykod till olika inbyggda system. Ett sådant system är de flerkärniga systemen som möjliggör högre prestanda med lägre energiförbrukning och är en populär lösning i inbyggda system. I denna rapport har jag utfört en utvärdering av ett antal open source-verktyg som hjälper till med arbetet att parallellisera kod. Detta görs med hjälp av små parallelliserbara program och två komplexa ansiktsigenkännings-applikationer som visar vilka för- och nackdelar de olika parallelliseringsmetoderna har. Resultaten visar att parallelliseringsverktygen inte klarar av att parallellisera automatiskt utan avsevärd mänsklig inblandning. Detta medför att det är lönsammare att parallellisera för hand. Utfallet av denna studie är ett antal riktlinjer för hur man ska göra för att parallellisera sin kod, samt ett antal krav som agerar som bas till att designa ett automatiskt parallelliseringsverktyg för inbyggda system.
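The conclusion that hand parallelization pays off rests on the fact that data-parallel loops are straightforward to split manually once their iterations are known to be independent. As a hedged illustration only (not code from the thesis, which concerns embedded multicore platforms), the sketch below hand-parallelizes a pixel loop of the kind found in image processing and face recognition workloads; the class and method names are hypothetical.

```java
import java.util.stream.IntStream;

public class GrayscaleParallel {
    // Convert ARGB pixels to grayscale. Each pixel is independent,
    // so the loop can be split across cores with no synchronization.
    static void toGrayscale(int[] pixels) {
        IntStream.range(0, pixels.length).parallel().forEach(i -> {
            int p = pixels[i];
            int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
            int y = (r * 299 + g * 587 + b * 114) / 1000; // integer luma approximation
            pixels[i] = (p & 0xFF000000) | (y << 16) | (y << 8) | y;
        });
    }
}
```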
|
393 |
Speeding Up Value at Risk Calculations Using Accelerators
af Sandeberg, Jonas, January 2014 (has links)
Calculating Value at Risk (VaR) can be a time-consuming task. Therefore it is of interest to find a way to parallelize this calculation to increase performance. Based on a system built in Java, which hardware is best suited for these calculations? This thesis aims to find which kind of processing unit gives optimal performance when calculating scenario-based VaR. First the differences between the CPU, GPU and coprocessor are examined in a theoretical study. Then multiple versions of a parallel VaR algorithm are implemented for a CPU, a GPU and a coprocessor, trying to make use of the findings from the theoretical study. The performance and ease of programming for each version are evaluated and analyzed. The performance tests show that the CPU was the winner in terms of performance for the chosen VaR algorithm and problem sizes. / Att beräkna Value at Risk (VaR) kan vara tidskrävande. Därför är det intressant att finna möjligheter att parallellisera och snabba upp dessa beräkningar för att förbättra prestandan. Men vilken hårdvara är bäst lämpad för dessa beräkningar? Detta arbete syftar till att för ett system skrivet i Java hitta vilken typ av beräkningsenhet som ger optimal prestanda vid scenariobaserade VaR-beräkningar. Först gjordes en teoretisk undersökning av CPU:n, GPU:n och en coprocessor. Flera versioner av en parallell VaR-algoritm implementerades för en CPU, GPU och en coprocessor där resultaten från undersökningen utnyttjades. Prestandan samt enkelheten att programmera varje version utvärderades och analyserades. De utförda prestandatesterna visar att vinnaren vad gäller prestanda är CPU:n för den valda VaR-algoritmen och de testade problemstorlekarna.
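To make the computation concrete, the sketch below shows what a scenario-based VaR calculation does and why its scenario loop parallelizes well. It is a minimal illustration, not the thesis implementation; the method and parameter names are hypothetical. On a CPU the loop can be expressed with parallel streams as shown, while GPU and coprocessor versions would follow the same per-scenario decomposition.

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class ScenarioVaR {
    /**
     * Scenario-based VaR: value the portfolio under every scenario,
     * sort the simulated losses and read off the requested quantile.
     * positions[i] is the holding in instrument i; scenarios[s][i] is
     * the price change of instrument i in scenario s.
     */
    static double valueAtRisk(double[] positions, double[][] scenarios, double confidence) {
        double[] losses = new double[scenarios.length];
        // Each scenario is valued independently, which is what makes the
        // loop a candidate for CPU threads, a GPU or a coprocessor.
        IntStream.range(0, scenarios.length).parallel().forEach(s -> {
            double pnl = 0.0;
            for (int i = 0; i < positions.length; i++) {
                pnl += positions[i] * scenarios[s][i];
            }
            losses[s] = -pnl;                      // a negative P&L is a loss
        });
        Arrays.sort(losses);
        int index = (int) Math.ceil(confidence * losses.length) - 1;
        return losses[Math.max(index, 0)];         // e.g. the 99% quantile of the losses
    }
}
```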
|
394 |
Implementation and Evaluation of Concurrency on Parallella
Engström, Gustav, Falgert, Marcus, January 2014 (has links)
The question asked is what optimizations can be done when working with the Parallella board from Adapteva and how they differ from other concurrent solutions. Parallella is a small supercomputer with a unique 16-core co-processor that we were to utilize. We have been working on parallelizing image manipulation software and then analyzing the results of the tests performed. The goal is to conclude how to properly utilize the Epiphany accelerator, and also to see how it performs in comparison to other CPUs. This project is a part of the PaPP project, which will utilize Parallella, and the work can be seen as an initial evaluation of the board. We have tested the board to see how it holds up, made our best efforts to adapt to the hardware, and explained our way of working. This report is worth reading for anyone who has little experience with Parallella and wants to learn how well it works and what it is good for. There are descriptions of all libraries used and detailed thoughts on how to implement software solutions for Epiphany. This is a bachelor level project and was performed with no prior knowledge of Parallella.
|
395 |
Unit Testing of Java EE Web Applications
Castillo Patino, Christian, Hamra, Mustafa, January 2014 (has links)
Målet med denna rapport är att utvärdera testramverken Mockito och Selenium för att se om de är väl anpassade för nybörjare som ska enhetstesta och integrationstesta existerande Java EE-webbapplikationer. Rapporten ska också hjälpa till med inlärningsprocessen genom att förse studenterna i kursen IV1201 – Arkitektur och design av globala applikationer med användarvänliga guider. / This report determines whether the Mockito and Selenium testing frameworks are well suited for novice users when unit- and integration-testing existing Java EE web applications in the course IV1201 – Design of Global Applications. The report also provides user-friendly tutorials to help with the learning process.
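As a complement, a minimal sketch of what such a unit test looks like with Mockito and JUnit is given below. The service and DAO are hypothetical stand-ins, not classes from the IV1201 application; the point is only to show the stub-and-verify pattern Mockito is used for.

```java
import static org.mockito.Mockito.*;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class RegistrationServiceTest {

    // Hypothetical collaborator and class under test, standing in for
    // the real Java EE components of the course application.
    public interface AccountDao {
        boolean exists(String username);
        void save(String username);
    }

    public static class RegistrationService {
        private final AccountDao dao;
        public RegistrationService(AccountDao dao) { this.dao = dao; }
        public boolean register(String username) {
            if (dao.exists(username)) return false;
            dao.save(username);
            return true;
        }
    }

    @Test
    public void registersNewUserExactlyOnce() {
        AccountDao dao = mock(AccountDao.class);        // mock replaces the real DAO
        when(dao.exists("alice")).thenReturn(false);    // stub the collaborator's answer
        RegistrationService service = new RegistrationService(dao);

        assertTrue(service.register("alice"));
        verify(dao, times(1)).save("alice");            // verify the expected interaction
    }
}
```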
|
396 |
Evaluation of Code Generation Tools
Sedghi Farooji, Farzad, January 2014 (has links)
Code generation is an important part of today’s software development. Using code generation can increase code quality, ease maintenance and shorten development time. It can be used for the development of different parts of software systems, such as database access layers, communication protocols and their proxies/stubs, user interfaces and many others. Code generators may be ready-to-use products or developed in-house for a project’s specific requirements. There are different tools and environments for the development of code generators. As there are so many different possibilities for code generation solutions, it becomes hard for a developer or team to choose the best solution for their purpose, especially when there are few academic or industrial resources for comparing such solutions or providing the criteria for their comparison. Most of the academic work related to code generation concerns specific software areas like parsers, signal processing and embedded systems, rather than general software development. This report defines a framework for the comparison of code generation solutions, which provides a categorized list of relevant criteria for such a comparison. The list of criteria is gathered by reviewing a set of available code generation solutions and categorized based on software quality attributes, since a code generation solution is software itself. Finally, some of the tools are chosen based on the requirements and applications of the company, and they are compared side by side using the comparison framework.
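To illustrate what a code generator does at its simplest, the toy sketch below emits a Java data class from a field specification; it is illustrative only, unrelated to the tools evaluated in the report, and all names are hypothetical.

```java
public class DtoGenerator {
    // Emit a minimal Java data class from a field specification, the kind of
    // repetitive code a database access layer generator would otherwise produce.
    static String generate(String className, String[][] fields) {
        StringBuilder src = new StringBuilder("public class " + className + " {\n");
        for (String[] f : fields) {                      // f[0] = type, f[1] = name
            src.append("    private ").append(f[0]).append(' ').append(f[1]).append(";\n");
        }
        src.append("}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        System.out.println(generate("Customer",
                new String[][] { { "long", "id" }, { "String", "name" } }));
    }
}
```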
|
397 |
How to Get the Most Out of Your Embedded Hardware While Keeping Development Time to a Minimum: A Comparison of Two Architectures and Two IDEs for Atmel AVR 8-bit Microcontrollers
Arndt, Niclas, January 2014 (has links)
This thesis aims to answer a number of basic questions about microcontroller development:
• What is the potential for writing more efficient program code and is it worth the effort? How could it be done? Could the presumed trade-off between code space and development time be overcome?
• Which microcontroller hardware architecture should you choose?
• Which IDE (development ecosystem) should you choose?
This is an investigation of the above, using separate sets of incremental code changes (improvements) to a simple serial port communication test program. Two generations of Atmel 8-bit AVR microcontrollers (ATmega and ATxmega) and two conceptually different IDEs (BASCOM-AVR and Atmel Studio 6.1) are chosen for the comparison. The benefits of producing smaller and/or faster code are the ability to use smaller (cheaper) devices and to reduce power consumption. A number of techniques for manual program optimization are used and presented, showing that it is the developer skills and the IDE driver library concept and quality that mainly affect code quality and development time, rather than high code quality and low development time being mutually exclusive. The investigation shows that the complexity costs incurred by using memory-wise bigger and more powerful devices with more features and peripheral module instances are surprisingly big. This is mostly seen in the interrupt vector (IV) table space (many and advanced peripherals), the ISR prologue and epilogue (memory size above 64k words), and the program code size (configuration and initialization of peripherals). The 8-bit AVR limitation of having only three memory pointers is found to have consequences for the programming model, making it important to avoid keeping several concurrent memory pointers, so that the compiler doesn’t have to move register data around. This means that the ATxmega probably can’t reap the full benefit of its uniform peripheral module memory layout and the ensuing struct-based addressing model. The test results show that a mixed addressing model should be used for the 8-bit AVR ATxmega, in which “static” (absolute) addressing is better with one (serial port) instance, “structs and pointers” addressing is preferable with three or more instances, and with two it is a draw. This fact is not dependent on the three-pointer limitation, but is likely to be strengthened by it. As a mixed addressing model is necessary for efficient programming, it is clear that the driver library must reflect this, either via alternative implementations or by specifying “interfaces” that the (custom) driver must implement if abstraction to higher-level application code is desired. A GUI-based tool for driver code generation based on developer input is therefore suggested. The translation from peripheral instance number to base address so far used by BASCOM-AVR for ATxmega is expensive, which resulted in a suggestion for a HW-based look-up table that would generally reduce both code size and clock cycle count and possibly enable a common accessing model for ATmega, ATxmega, and ARM. In the IDE evaluation, both alternatives were much appreciated. BASCOM-AVR is found to be a fine productivity-enhancement tool due to its large number of built-in commands for the most commonly used peripherals. Atmel Studio 6.1 suffers greatly in this area from its ill-favored ASF driver library.
For developers familiar with the AVRs, the powerful avr-gcc optimizing compiler and integrated debugger still make it worthwhile adapting application note code and datasheet information, at a significant development time penalty compared to BASCOM-AVR. Regarding ATmega vs. ATxmega, it was decided that both have their place, due to differences in feature sets and number of peripheral instances. ATxmega seems more competitively priced compared to ATmega, but incurs a complexity cost in terms of code size and clock cycles. When it is a draw, ATmega should be chosen.
|
398 |
Multiprocessor Scheduling of Synchronous Data Flow Graphs using Local Search Algorithms
Callerström, Emma, Elfström, Kajsa, January 2014 (has links)
Design space exploration (DSE) is the process of exploring design alternatives before implementing real-time multiprocessor systems. One part of DSE is scheduling the applications the system is developed for and evaluating the performance to ensure that the real-time requirements are satisfied. Many real-time systems today use multiprocessors, and finding the optimal schedule for an application on a multiprocessor system is known to be an NP-hard problem. Such an optimization problem can be time-consuming, which justifies the use of heuristics. This thesis presents an approach for scheduling applications onto multiprocessors using local search algorithms. The applications are represented by SDF graphs and the assumed platform has homogeneous processors without constraints regarding communication delays, memory consumption or buffer sizes. The goal of this thesis project was to investigate whether heuristic search algorithms could find sufficiently good solutions in a reasonable amount of time. Experimental results show that local search algorithms have the potential of contributing to DSE by finding high-performance schedules with reduced search time compared to algorithms trying to find the optimal scheduling solution.
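As a hedged illustration of the local search idea, the sketch below applies plain hill climbing to the mapping of tasks onto homogeneous processors, minimizing the makespan. It ignores the precedence and buffer structure of real SDF graphs, is not the algorithm evaluated in the thesis, and all names are hypothetical.

```java
import java.util.Arrays;
import java.util.Random;

public class LocalSearchScheduler {
    // Makespan of an assignment: the load of the most loaded processor.
    // Precedence and buffer constraints of a real SDF graph are ignored here.
    static long makespan(int[] assign, long[] taskTime, int processors) {
        long[] load = new long[processors];
        for (int t = 0; t < assign.length; t++) load[assign[t]] += taskTime[t];
        return Arrays.stream(load).max().orElse(0);
    }

    // Hill climbing: move one task at a time to another processor and keep
    // the move whenever it lowers the makespan; stop at a local optimum.
    static int[] schedule(long[] taskTime, int processors) {
        Random rnd = new Random(42);                 // fixed seed for repeatability
        int[] assign = new int[taskTime.length];
        for (int t = 0; t < assign.length; t++) assign[t] = rnd.nextInt(processors);
        long best = makespan(assign, taskTime, processors);

        boolean improved = true;
        while (improved) {
            improved = false;
            for (int t = 0; t < assign.length; t++) {
                int current = assign[t];
                for (int p = 0; p < processors; p++) {
                    if (p == current) continue;
                    assign[t] = p;
                    long m = makespan(assign, taskTime, processors);
                    if (m < best) { best = m; current = p; improved = true; }
                    else { assign[t] = current; }    // revert a non-improving move
                }
            }
        }
        return assign;
    }
}
```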
|
399 |
Processuell Kartgenerering
Davidsson, Victor, Lundberg, Anders, January 2014 (has links)
Processuell Generering: Detta projekts syfte var att skapa en algoritm som genom processuell generering skapar höjdkartor för ett spel. Dessa kartor skulle kunna uppfylla en rad kvalitetskrav som sattes upp. För detta genomfördes en förstudie i hur processuell generering fungerar och hur det kan appliceras. Baserat på denna studie producerades därefter en algoritm som kunde skapa tidigare nämnda höjdkarta. Under förstudien studerades olika algoritmer och metoder inom området. De som verkade mest lovande testades via förenklade implementationer. I slutet av förstudien valdes därefter ett antal av de testade algoritmerna och metoderna ut för att medverka i den slutgiltiga implementationen. Den slutgiltiga algoritmen baserades på Voronoidiagram då det var den mest lämpade metoden givet de uppsatta kraven. Denna implementation togs fram, testades och optimerades. Alla krav som sattes upp i början av projektet uppnåddes inte, men de viktigaste kraven implementerades och testades inför den version av algoritmen som slutligen presenterades. För de övriga kraven togs en teoretisk lösning fram och i de flesta fall utvecklades halvfärdiga implementationer. / Procedural Generation: The purpose of this project was to create a terrain map generator using procedural generation, to be used in a game. Specific quality requirements were set that each produced map had to fulfill. To meet the goals that had been set, a preliminary study was conducted on how procedural generation works and how it can be applied. Based on the results from the preliminary study, an algorithm that could produce terrain maps with the desired qualities was developed. During the preliminary study a number of algorithms and methods were examined. For the most promising methods, simple implementations were developed and tested. At the end of the study a number of the algorithms and methods were selected for use in the final implementation. The final implementation was based on Voronoi diagrams, since that proved to be the most suitable method given the set requirements. This implementation was developed, tested and optimized. Not all requirements were met at the end of the project. However, solutions to the most important requirements were developed and successfully tested. The remaining requirements received at the very least a theoretical solution, and in many cases semi-finished implementations were developed.
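To show the basic Voronoi idea in code, the sketch below assigns each cell of a height map the height of its nearest seed point. It is a minimal illustration with hypothetical names, not the project's algorithm; a real generator would also smooth region borders and enforce the quality requirements described above.

```java
import java.util.Random;

public class VoronoiHeightMap {
    // Assign each cell the height of its nearest seed point, producing the
    // plateau-like regions that make Voronoi diagrams convenient for terrain.
    static double[][] generate(int width, int height, int seeds, long randomSeed) {
        Random rnd = new Random(randomSeed);
        int[] sx = new int[seeds], sy = new int[seeds];
        double[] sh = new double[seeds];
        for (int i = 0; i < seeds; i++) {
            sx[i] = rnd.nextInt(width);
            sy[i] = rnd.nextInt(height);
            sh[i] = rnd.nextDouble();                // height of this region, in [0,1)
        }
        double[][] map = new double[height][width];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int nearest = 0;
                long bestDist = Long.MAX_VALUE;
                for (int i = 0; i < seeds; i++) {
                    long dx = x - sx[i], dy = y - sy[i];
                    long d = dx * dx + dy * dy;      // squared Euclidean distance is enough
                    if (d < bestDist) { bestDist = d; nearest = i; }
                }
                map[y][x] = sh[nearest];
            }
        }
        return map;
    }
}
```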
|
400 |
Network Sharing Strategies and Regulations in Europe and the Middle East
Tobal, Marc, January 2014 (has links)
The thesis work intends to tackle network sharing practices in Europe and in the Middle East and to convey the regulatory structure in both regions. To do this, a number of countries have been chosen so as to show the different types of sharing and how the agreements differ depending on their depth. I look at the market conditions and the incentive for a certain depth of sharing, and at the same time I investigate to what extent sharing is permitted by the regulatory authority. Equal attention is given to both Europe and the Middle East in order to draw fair comparisons. Most results given on the Middle East are based on the interviews conducted. The regulatory structure in both regions is presented in order to understand how properly separated the regulations are and in what way the regulatory role differs. In the last part, I investigate whether or not a higher extent of sharing would suit the Arabic countries and whether it is attractive to the market players. The results show that the operators in Europe are sharing all kinds of equipment and a shift towards a full sharing scenario is evident, whereas the sharing agreements in the Middle East are limited to passive sharing or no sharing at all. In terms of what is allowed and what is not, the regulators in the EU have different opinions on how much is allowed, whereas the regulators in the Middle East are fairly neutral towards it, i.e. they neither actively support it nor reject it. In terms of regulations, the regulator's role is to apply the laws of the EU Commission in all the member states of the EU, whereas in the Arabic countries the role is to follow the policy of the Ministry. This has not always allowed for a proper separation of the Ministry and the acting regulator in the Middle East, and because of that some regulators have not been able to conduct their responsibilities freely. When the separation has been successful, the regulator has been seen to give a much clearer stance on network sharing, typically in the form of including requirements with licenses that facilitate sharing, such as in Oman and Jordan. The need to share more infrastructure has become evident through an initiative taken by the biggest operators in the MENA region. At the same time, I have concluded that none of the countries I have studied would benefit from adopting a higher extent of sharing today, because competition is limited and because passive sharing needs to become fully common and co-opetition developed before that can happen. Meanwhile, getting two competitors to go beyond passive sharing will most likely not happen today if they are not compelled by the regulator.
|