  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Proxyserver för passiv informationssökning

Ahlin, Daniel, Jartelius, Martin, Tingdahl, Johanna January 2005 (has links)
In today's society the average person is flooded with information from every direction. This is particularly true on the Internet: consider that a decent search engine at this moment scans 8,058 million home pages. Users who repeatedly return to the same site usually know what they are looking for; the problem is isolating that information from everything embedding it.

We have found one possible solution to this problem, in which users define what information they are looking for on a specific server, and the server is then scanned as they visit it with their browser. The information is saved and made easily accessible to the user, independent of what system they are using.

Our solution is based on a proxy server through which the user connects. The server can be configured as to what information to scan for and where, and in what format the data should be saved. A stand-alone proxy server is not as efficient as building this support into a browser, but it is enough to serve as a proof of concept. Over a high-speed connection to a server on the same network, a user might notice that the proxy slows the connection down, but the difference is a fraction of a second; surfing under normal conditions, the user is very unlikely to be bothered by the proxy. The actual performance cost is the time required to open a second TCP connection for each request, plus a slight overhead from Java's thread synchronization.
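The core of the approach is scanning fetched pages for user-defined patterns on their way through the proxy. The thesis implementation is in Java; the sketch below illustrates only the scanning step, in Python, with hypothetical pattern names and page content chosen for the example.

```python
import re

def scan_page(html, patterns):
    """Scan a fetched page for user-defined regex patterns and collect
    the matches, independent of the markup embedding them. This is the
    filtering a passive-search proxy would apply to each response."""
    found = {}
    for name, pattern in patterns.items():
        found[name] = re.findall(pattern, html)
    return found

# Hypothetical example: a user watching a site for prices and contact addresses.
patterns = {
    "price": r"\d+ kr",
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
}
page = "<html><body>Offer: 199 kr. Contact: info@example.com</body></html>"
matches = scan_page(page, patterns)
```

In the real system this step would run on every response passing through the proxy, with the match sets persisted for the user.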
92

En jämförande studie av IPv4 och IPv6

Harnell, Jonas, Alemayehn, Yonatan January 2005 (has links)
The Internet protocol of today has been in use for over 20 years, and a new version of the protocol has been developed to replace it. This is a direct result of the explosive growth of Internet usage, which brings new demands that need new solutions. This report covers the old protocol, IPv4, and the new protocol, IPv6, and shows what changes have been made to meet users' demands.

The report takes up two important aspects: Internet security and mobility. To show the important changes within these areas, the report compares the old protocol with the new one. Furthermore, the report studies the world's largest test environment to date, with the aim of showing how the protocol behaves in practice and thereby gaining insight into the areas that need to be illustrated. The report concludes with a broad discussion of the future of the two protocols and how this may affect the Internet going forward.
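The most visible change between the two protocols is the jump from 32-bit to 128-bit addresses, along with new address categories such as link-local scope. A small sketch using Python's standard `ipaddress` module (not part of the report) makes the difference concrete:

```python
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")       # documentation-range IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")     # documentation-range IPv6 address

# Address width: 32 bits for IPv4 versus 128 bits for IPv6.
v4_bits = v4.max_prefixlen
v6_bits = v6.max_prefixlen

# IPv6 embeds scopes such as link-local directly in the address space.
link_local = ipaddress.ip_address("fe80::1")
is_ll = link_local.is_link_local
```

The 128-bit space removes the address exhaustion that motivated much of the transition the report describes.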
93

Kriterier för säkra betaltjänster på nätet

Magnusson, Emil January 2006 (has links)
The purpose of this report is to identify the payment methods available for e-commerce and to see whether they fulfill certain requirements. By investigating thirty of the most popular web shops, a few primary services have been identified and reviewed. Security requirements for e-payment systems include authentication, non-repudiation, integrity and confidentiality. Other requirements considered important are usability, flexibility, affordability, reliability, availability, speed of transaction and interoperability. Advantages and disadvantages of each service have been identified to see whether it fulfills the requirements. Surveys of consumer payment habits have also been examined to identify the factors of decisive importance to the usage of payment services. The report further asks whether the requirements are adequate, and which services are likely to remain available in the future.

All services fulfill the requirements of confidentiality and integrity. Usability varies, but most of the methods are easy to use. None of them involve any further expenses for the customer, but authentication adds time to the transaction. The authentication requirement is fulfilled by all services except regular card payment, and authentication prevents all parties from denying participation in the purchase. Availability is the biggest problem, since merchants and issuers do not offer many of the services.

The decisive factor in choosing a service seems to be availability: the most widely offered method, card payment, is also the most commonly used. Because of this, secure card services will still be needed in the future, with the requirement that they are made available to a large number of customers. Availability is likewise a requirement for all of the minor services.
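Two of the security requirements listed above, integrity and authentication, are commonly met with a message authentication code over the transaction data. The sketch below (my illustration, not a mechanism from the report; the key and message are hypothetical) shows the idea with Python's standard `hmac` module. Note that a shared-key MAC does not by itself give non-repudiation, which is why card schemes add cardholder authentication on top:

```python
import hashlib
import hmac

def sign(key, message):
    """Attach an HMAC tag so the receiver can verify integrity (the
    message was not altered) and authentication (it came from a holder
    of the shared key)."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    # compare_digest avoids leaking information through timing.
    return hmac.compare_digest(sign(key, message), tag)

key = b"shared-secret"                      # hypothetical merchant/issuer key
order = b"pay 199 SEK to shop 42"
tag = sign(key, order)
ok = verify(key, order, tag)                          # untampered order
tampered = verify(key, b"pay 999 SEK to shop 42", tag)  # altered amount
```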
94

Efficient Implementation of Concurrent Programming Languages

Stenman, Erik January 2002 (has links)
Dissertation in Computer Science to be publicly examined in Häggsalen, Ångströmlaboratoriet, Uppsala University, on Friday, November 1, 2002 at 1:00 pm for the degree of doctor of philosophy. The examination will be conducted in English.

This thesis proposes and experimentally evaluates techniques for efficient implementation of languages designed for high-availability concurrent systems. The evaluation has been carried out while developing the High Performance Erlang (HiPE) system, a native code compiler for SPARC and x86. The two main goals of the HiPE system are to provide efficient execution of Erlang programs, and to provide a research vehicle for evaluating implementation techniques for concurrent functional programming languages. The thesis focuses on two techniques that enable inter-process optimization through dynamic compilation: a fast register allocator called linear scan, and a memory architecture where processes share memory. The main contributions of the thesis are:

- An evaluation of linear scan register allocation in a different language setting, including the first evaluation of linear scan on the register-poor x86 architecture.
- A description of three heap architectures (private heaps, a shared heap, and a hybrid of the two), with a systematic investigation of implementation aspects and an extensive discussion of the associated performance trade-offs, accompanied by an experimental comparison of the private-heap and shared-heap settings.
- A novel approach to optimizing a concurrent program by merging code from a sender with code from a receiver, presented together with other methods for reducing the overhead of context switching.
- A description of the implementation aspects of a complete and robust native code Erlang system, which makes it possible to test compiler optimizations on real-world programs.
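Linear scan trades allocation quality for speed: it walks the live intervals once in start order instead of building an interference graph. The sketch below is the textbook Poletto and Sarkar formulation in Python, not HiPE's actual allocator; interval names and register counts are invented for the example.

```python
def linear_scan(intervals, num_regs):
    """Linear scan register allocation: walk live intervals in order of
    start point, expire intervals that have ended, and when no register
    is free spill the interval with the furthest end point."""
    intervals = sorted(intervals, key=lambda iv: iv[1])  # (name, start, end)
    active = []                      # intervals currently holding a register
    free = list(range(num_regs))
    alloc, spilled = {}, []
    for name, start, end in intervals:
        # Expire active intervals that ended before this one starts.
        for iv in [iv for iv in active if iv[2] < start]:
            active.remove(iv)
            free.append(alloc[iv[0]])
        if free:
            alloc[name] = free.pop()
            active.append((name, start, end))
        else:
            # Spill heuristic: evict the interval that ends furthest away.
            victim = max(active, key=lambda iv: iv[2])
            if victim[2] > end:
                active.remove(victim)
                alloc[name] = alloc.pop(victim[0])
                spilled.append(victim[0])
                active.append((name, start, end))
            else:
                spilled.append(name)
    return alloc, spilled

# Three overlapping intervals, two registers: one interval must be spilled.
alloc, spilled = linear_scan([("a", 0, 10), ("b", 1, 4), ("c", 2, 6)], 2)
```

The single pass is what makes linear scan attractive for dynamic compilation, where allocation time is part of program run time.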
95

A Study of Combinatorial Optimization Problems in Industrial Computer Systems

Bohlin, Markus January 2009 (has links)
A combinatorial optimization problem is an optimization problem where the number of possible solutions is finite and grows combinatorially with the problem size. Combinatorial problems exist everywhere in industrial systems. This thesis focuses on solving three such problems, arising in two different areas where industrial computer systems are often used.

Within embedded and real-time systems, we investigate the problems of allocating stack memory for a system in which shared stacks may be used, and of estimating the highest response time of a task in a system of industrial complexity. We propose a number of different algorithms to compute safe upper bounds on run-time stack usage whenever the system supports stack sharing. The algorithms have in common that they can exploit commonly available information regarding the timing behaviour of the tasks in the system. Given upper bounds on the individual stack usage of the tasks, it is possible to estimate the worst-case stack behaviour by analysing the possible and impossible preemption patterns. Using relations on offsets and precedences, we form a preemption graph, which is further analysed to find safe upper bounds on the maximal preemption chain in the system. For the special case where all tasks exist in a single static schedule and share a single stack, we propose a polynomial algorithm to solve the problem. For generalizations of this problem, we propose an exact branch-and-bound algorithm for smaller problems and a polynomial heuristic algorithm for cases where the branch-and-bound algorithm fails to find a solution in reasonable time. All algorithms are evaluated in comprehensive experimental studies. The polynomial algorithm is implemented and shipped in the developer tool set for a commercial real-time operating system, Rubus OS.

The second problem we study is how to estimate the highest response time of a specified task in a complex industrial real-time system. The response-time analysis uses a best-effort approach, where a detailed model of the system is simulated on input constructed using a local search procedure. In an evaluation on three different systems, the new algorithm was able to produce higher response times much faster than was previously possible. Since the analysis is based on simulation and measurement, the results are not safe in the sense of always being greater than or equal to the true response time of the system. The value of the method lies instead in making it possible to analyse complex industrial systems that cannot be analysed accurately using existing safe approaches.

The third problem is in the area of maintenance planning: how to dynamically plan maintenance for industrial systems, focusing on industrial gas turbines and rail vehicles. We have developed algorithms and a planning tool that can be used to plan maintenance for gas turbines and other stationary machinery. In such problems, performing several maintenance actions at the same time is often beneficial, since many of these jobs can be done in parallel, which reduces the total downtime of the unit. The core of the problem is therefore how to group (or not group) maintenance activities so that the composite cost of spare parts, labor and production lost to downtime is minimized. We allow each machine to have individual schedules for each component in the system. For rail vehicles, we have evaluated the effect of replanning maintenance when the component maintenance deadline is set to reflect a maximum risk of breakdown under a Gaussian failure distribution. In this model we show, by simulation, that replanning maintenance can reduce the number of maintenance stops as the variance and expected value of the distribution increase.

For the gas turbine maintenance planning problem, we have evaluated the planning software on a real-world scenario from the oil and gas industry and compared it to solutions obtained from a commercial integer programming solver. We estimate that the availability increase from using our planning software is between 0.5 and 1.0%, which is substantial considering that availability is already at 97-98%.
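The stack-sharing bound rests on a simple observation: only tasks that can preempt each other have stack frames live at the same time, so a safe bound is the heaviest chain in the preemption graph. The sketch below is my simplified illustration of that idea (task names, stack sizes, and the edge set are invented), not the thesis's actual algorithm, which also uses offset and precedence relations to prune impossible preemptions:

```python
import functools

def max_stack_usage(stack_use, preempts):
    """Upper-bound shared-stack usage as the heaviest chain in a
    preemption DAG: an edge (a, b) means task b can preempt task a,
    so their stack frames can be live simultaneously."""
    @functools.lru_cache(maxsize=None)
    def heaviest_from(task):
        successors = [b for a, b in preempts if a == task]
        extra = max((heaviest_from(b) for b in successors), default=0)
        return stack_use[task] + extra

    return max(heaviest_from(t) for t in stack_use)

# Hypothetical system: t3 can preempt t2, which can preempt t1;
# t4 can also preempt t1, but nothing preempts t4. Sizes in bytes.
usage = {"t1": 128, "t2": 256, "t3": 64, "t4": 512}
bound = max_stack_usage(usage, [("t1", "t2"), ("t2", "t3"), ("t1", "t4")])
```

Here the heaviest chain is t1 followed by t4 (640 bytes), which is well below the 960 bytes a naive sum of all tasks would reserve. Ruling out preemption patterns tightens the bound in exactly this way.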
96

Integrated Code Generation

Eriksson, Mattias January 2011 (has links)
Code generation in a compiler is commonly divided into several phases: instruction selection, scheduling, register allocation, spill code generation, and, in the case of clustered architectures, cluster assignment. These phases are interdependent; for instance, a decision in the instruction selection phase affects how an operation can be scheduled. We examine the effect of this separation of phases on the quality of the generated code. To study this, we have formulated optimal methods for code generation with integer linear programming, first for acyclic code and then extended to modulo scheduling of loops. In our experiments we compare optimal modulo scheduling, where all phases are integrated, to modulo scheduling where instruction selection and cluster assignment are done in a separate phase. The results show that, for an architecture with two clusters, the integrated method finds a better solution than the non-integrated method for 39% of the instances.

Our algorithm for modulo scheduling iteratively considers schedules with an increasing number of schedule slots. A problem with such an iterative method is that, if the initiation interval is not equal to the lower bound, there is no way to determine whether the solution found is optimal. We have proven that for a class of architectures that we call transfer free, we can set an upper bound on the schedule length; that is, we can prove when a modulo schedule with an initiation interval larger than the lower bound is optimal.

Another code generation problem we study is how to optimize the usage of the address generation unit in simple processors that have very limited addressing modes. Here the subtasks are scheduling, address register assignment, and stack layout. For this problem, too, we compare the results of integrated methods to those of non-integrated methods, and we find that integration is beneficial when only a few (1 or 2) address registers are available.
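The lower bound on the initiation interval mentioned above is commonly taken as the larger of a recurrence-constrained and a resource-constrained bound. The resource part is simple enough to sketch: each functional-unit type must fit all of its operations into the kernel. This is a standard textbook computation, shown here with an invented machine and operation mix, not the thesis's ILP model:

```python
import math

def res_mii(op_counts, unit_counts):
    """Resource-constrained lower bound on the initiation interval
    (ResMII): for each functional-unit type, the loop's operations on
    that unit must be spread across the kernel, so the initiation
    interval is at least ceil(ops / units) for every unit type."""
    return max(math.ceil(op_counts[u] / unit_counts[u]) for u in op_counts)

# Hypothetical loop body: 7 ALU operations on 2 ALUs, 3 memory
# operations on 1 load/store unit.
mii = res_mii({"alu": 7, "mem": 3}, {"alu": 2, "mem": 1})
```

An iterative modulo scheduler starts searching at this bound and increases the interval until a schedule is found, which is exactly why proving optimality above the bound (as the thesis does for transfer-free architectures) matters.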
97

Evolution av beteende för boids

Mabäcker, Petter January 2008 (has links)
This report describes the use of artificial neural networks as a control system for evolving the behaviour of boids. Boids is fundamentally about simulating flocking behaviour by letting autonomous individuals strive toward the same goal. Boids are built on three original rules (avoid collisions, match velocity, and centre the flock), and these rules also form the basis for the boids in this project. Since boid behaviour is normally programmed by hand, this work investigates how the result is affected when the behaviour is instead evolved using neural networks. The work also compares two neural network architectures: reactive and recurrent networks. The results show that the behaviour can be obtained with the chosen technique, although it differs somewhat from the original. The results also show some differences between the two networks, but that they are largely equivalent.
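The three hand-programmed rules that the evolved networks are compared against can be stated compactly. The sketch below is Reynolds' classic formulation in Python, with invented weights and a 2-D toy flock; it illustrates the baseline behaviour, not the project's neural-network controller:

```python
def boid_steering(boid, neighbours, sep_w=1.5, align_w=1.0, coh_w=1.0):
    """One steering update for a single boid from Reynolds' three rules:
    separation (avoid collisions), alignment (match velocity), and
    cohesion (centre the flock). Positions and velocities are 2-D tuples."""
    if not neighbours:
        return (0.0, 0.0)
    n = len(neighbours)
    # Cohesion: steer toward the average position of the neighbours.
    cx = sum(b["pos"][0] for b in neighbours) / n - boid["pos"][0]
    cy = sum(b["pos"][1] for b in neighbours) / n - boid["pos"][1]
    # Alignment: steer toward the average velocity of the neighbours.
    ax = sum(b["vel"][0] for b in neighbours) / n - boid["vel"][0]
    ay = sum(b["vel"][1] for b in neighbours) / n - boid["vel"][1]
    # Separation: steer away from each neighbour's position.
    sx = sum(boid["pos"][0] - b["pos"][0] for b in neighbours)
    sy = sum(boid["pos"][1] - b["pos"][1] for b in neighbours)
    return (sep_w * sx + align_w * ax + coh_w * cx,
            sep_w * sy + align_w * ay + coh_w * cy)

flockmate = {"pos": (4.0, 0.0), "vel": (1.0, 0.0)}
me = {"pos": (0.0, 0.0), "vel": (0.0, 0.0)}
steer = boid_steering(me, [flockmate])
```

In the project, a neural network replaces this weighted sum, and evolution tunes the network instead of the weights being set by hand.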
98

MONOLITH : TCP/IP kommunikation och seriell dataöverföring

Andersson, Mikael, Wessman, Christian January 1998 (has links)
Bofors UwS (Underwater Systems) has begun developing a new centralized system for coordinating all torpedo handlers on board a ship into one central unit. This system must be able to receive data from, and in some cases send data to, the other subsystems on board the ship. To accomplish this, a communication application is needed for the central unit. MONOLITH is a prototype of such an application. Its main purpose is to demonstrate how this can be done and to test units that can control a number of torpedoes. Such a unit is called a TIU, which stands for Torpedo Interface Unit.

MONOLITH is currently capable of communicating with an arbitrary number of TIUs (provided the TIUs use TCP/IP communication) and of receiving navigation data from a GPS (Global Positioning System) receiver. MONOLITH is also prepared for the addition of several other communication devices, such as sonars, radar, and so on.
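GPS receivers of this era typically deliver navigation data as NMEA 0183 sentences over a serial line. The sketch below parses the position out of a GGA sentence in Python; it is my illustration of the data format (using a widely circulated example sentence), not MONOLITH's Java implementation, and it skips checksum validation for brevity:

```python
def parse_gga(sentence):
    """Extract (latitude, longitude) in decimal degrees from a NMEA 0183
    GGA sentence. NMEA encodes latitude as ddmm.mmm and longitude as
    dddmm.mmm, with hemisphere letters in the following fields."""
    fields = sentence.split(",")
    assert fields[0].endswith("GGA"), "not a GGA sentence"

    def to_degrees(value, hemisphere, degree_digits):
        degrees = float(value[:degree_digits])
        minutes = float(value[degree_digits:])
        result = degrees + minutes / 60.0
        return -result if hemisphere in ("S", "W") else result

    lat = to_degrees(fields[2], fields[3], 2)
    lon = to_degrees(fields[4], fields[5], 3)
    return lat, lon

fix = parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
```

A communication application like MONOLITH would read such sentences continuously from the serial port and forward the decoded position to the central unit.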
99

Molnsystem för automatisk blankettinmatning : Blankettutformning &amp; databearbetning / Automatic data entry utilizing the cloud : Form design &amp; data processing

Granberg, Andreas, Olofsson, Johan January 2018 (has links)
This technical report describes a project aimed at developing a cloud-based system whose task is to digitize machine- and handwritten text in scanned forms using Microsoft's Cognitive Services. The resulting system was intended to be of sufficient quality to interpret both machine- and handwritten forms and to extract key values from them. The project resulted in a somewhat fragmented system consisting of a database, a web application, and three Azure-specific resources: a Blob Storage, a Logic App, and a Function App. The project's original specification stated that the system should be developed entirely from cloud-based components; due to time constraints, this requirement was never fully met. Despite the fact that not all components are cloud-based, the system is still fully functional in the sense that it can interpret machine- and handwritten forms and extract key values from them. The forms used were developed alongside the system; their primary purpose is to impose no constraints on the system while still being easily understandable by its users.
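After the recognition service returns the text of a scanned form, the remaining step is pulling out the key values. The sketch below shows one simple way to do that, assuming (as my illustration, not the report's design) that the forms label each field as "Label: value" on its own line; the field names and OCR output are invented:

```python
import re

def extract_fields(lines, keys):
    """Pull labelled key values out of recognized form text, assuming
    each field appears as 'Label: value' on its own line."""
    result = {}
    for line in lines:
        match = re.match(r"\s*([^:]+):\s*(.+)", line)
        if match and match.group(1).strip() in keys:
            result[match.group(1).strip()] = match.group(2).strip()
    return result

# Hypothetical line output from the handwriting-recognition service.
ocr_lines = ["Name: Ada Lovelace", "Date: 2018-05-12", "unrelated noise"]
fields = extract_fields(ocr_lines, {"Name", "Date"})
```

Designing the forms alongside the extraction logic, as the project did, is what makes such a simple labelling convention viable.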
100

Type-directed Generation and Shrinking for Imperative Programming Languages

Bodin, Samuel, Söderman, Joel January 2018 (has links)
Optimizing compilers are large and complex systems, potentially consisting of millions of lines of code. As a result, the implementation of an industry-standard compiler can contain several serious bugs. This is alarming given the critical role that software plays in modern society. We therefore investigate type-directed program generation and shrinking for testing compilers for imperative programming languages. We implement a type-directed generator and shrinker for a subset of C in Haskell. The generator is capable of generating type-correct programs that contain variable mutations and challenging program constructs, such as while-statements and if-statements. We evaluate the quality of the generated programs by measuring statement coverage of a reference interpreter and by comparing the results to a reference test suite. The results show that the generated programs can surpass the test suite in quality by one percentage point of coverage. To test the bug-finding capabilities of the generator and the shrinker, we use them to test seven interpreters and manage to identify a total of seven bugs; testing the same interpreters with the reference test suite finds only three. These results lead us to question the reliability of code coverage as a measure of quality for compiler test suites. Finally, we conclude that type-directed generation and shrinking is a powerful method for testing compilers for imperative programming languages.
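The idea behind type-directed generation is that every production rule is chosen so the result has the requested type, making generated programs type-correct by construction, and that shrinking proposes only candidates of the same type. The thesis implements this in Haskell for a subset of C; the sketch below is a minimal Python illustration over a toy expression language with invented syntax:

```python
import random

def gen_expr(ty, depth, rng):
    """Type-directed generation: every choice is constrained so the
    resulting expression has type `ty`, so the output type-checks by
    construction. Toy grammar: int literals, '+', and '<' comparisons."""
    if depth == 0:
        return str(rng.randint(0, 9)) if ty == "int" else rng.choice(["true", "false"])
    if ty == "int":
        left = gen_expr("int", depth - 1, rng)
        right = gen_expr("int", depth - 1, rng)
        return f"({left} + {right})"
    # bool: a comparison of two ints keeps the result well typed.
    left = gen_expr("int", depth - 1, rng)
    right = gen_expr("int", depth - 1, rng)
    return f"({left} < {right})"

def shrink(expr, ty):
    """Trivial type-preserving shrinker: propose the smallest leaf of
    the same type, so every shrink candidate still type-checks."""
    leaf = "0" if ty == "int" else "false"
    return [leaf] if expr != leaf else []

rng = random.Random(42)
expr = gen_expr("bool", 2, rng)
```

A real shrinker would also try sub-expressions and smaller literals, but the invariant is the same: no candidate may change the expression's type, so every counterexample handed to the developer still compiles.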
