  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

En värdtjänst för mjukvarutvecklingsprojekt : Utveckling av ett verktyg för att effektivisera programmering

Brinnen, David, Nord, Rasmus January 2013 (has links)
Title: A hosting service for software development projects that use the Git revision control system. Learning to use tools that make software development more effective should be self-evident at a higher-education institution as the competitive industry races on. The absence of directives for programming students in Sweden to use source code management (SCM) was the basis for this report. The report describes the development of a hosting service for software development built on the Git SCM, comprising a web application, storage, an API and authentication of students. The project resulted in a hosting service and a small survey of Swedish students' current habits of using SCM during their studies.
262

Impact of Three-Dimensional Indoor Environment on the Performance of Ultra-Dense Wireless Networks

Saber, Khamooshi January 2014 (has links)
With rapidly increasing traffic demand, ultra-dense wireless access networks are expected to be deployed in many buildings in the near future. Performance evaluation of in-building ultra-dense networks is thus of profound importance. Buildings consist of walls and floors in three-dimensional environments, and these walls and floors attenuate radio propagation. However, previous studies on the performance evaluation of wireless networks have mainly focused on open areas under an assumption of two-dimensional environments. In this thesis, we investigate the effects of walls and floors on user data rate when wireless access networks are densely deployed inside a building. We assume a building of a typical shape, and perform Monte Carlo simulations with multiple configurations of different wall and floor losses as well as different numbers of users and base stations per floor. Numerical results indicate that penetration loss due to walls and floors can increase the data rate of both average and five-percentile users, as it tends to better isolate a given base station and its connected users from the signals of others. We also observe that increasing the number of indoor base stations does not necessarily improve received user data rate, because the number of users is limited.
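The Monte Carlo setup the abstract describes can be illustrated with a minimal stand-in simulation. This is not the thesis's simulator: the building dimensions, loss values, transmit power, and the one-wall-per-5-metres heuristic are all assumptions made for the sketch.

```python
import math
import random

random.seed(0)

FLOOR_H = 3.0        # assumed floor height (m)
WALL_LOSS_DB = 10.0  # assumed per-wall penetration loss
FLOOR_LOSS_DB = 15.0 # assumed per-floor penetration loss
TX_DBM = 20.0        # assumed base-station transmit power
NOISE_DBM = -95.0    # assumed noise floor

def path_gain_db(a, b):
    """Distance-based loss plus wall/floor penetration loss (assumed model)."""
    d = math.dist(a, b) + 0.1
    walls = int(math.hypot(a[0] - b[0], a[1] - b[1]) // 5)  # ~one wall per 5 m
    floors = int(abs(a[2] - b[2]) // FLOOR_H)
    return -(40.0 + 30.0 * math.log10(d)) - walls * WALL_LOSS_DB \
        - floors * FLOOR_LOSS_DB

def mean_rate(n_bs, n_users, floors=3):
    """One Monte Carlo drop: random users and base stations in a 50 x 20 m
    building; each user connects to the strongest base station, all others
    count as interference, and the rate is the Shannon bound (bit/s/Hz)."""
    bss = [(random.uniform(0, 50), random.uniform(0, 20), f * FLOOR_H)
           for f in range(floors) for _ in range(n_bs)]
    rates = []
    for _ in range(n_users):
        u = (random.uniform(0, 50), random.uniform(0, 20),
             random.randrange(floors) * FLOOR_H)
        rx = [TX_DBM + path_gain_db(u, b) for b in bss]
        s = max(rx)
        interference = sum(10 ** (p / 10) for p in rx) - 10 ** (s / 10)
        sinr = 10 ** (s / 10) / (interference + 10 ** (NOISE_DBM / 10))
        rates.append(math.log2(1 + sinr))
    return sum(rates) / len(rates)

print(mean_rate(n_bs=4, n_users=20))
```

Sweeping WALL_LOSS_DB and FLOOR_LOSS_DB over many drops illustrates the qualitative effect the abstract reports: higher penetration loss isolates cells from each other's interference, which can raise per-user SINR and data rate.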
263

Implementing Confidence-based Work Stealing Search in Gecode

Eklöf, Patrik January 2014 (has links)
Constraint programming is a field whose goal is to solve extremely large problems defined by a set of restrictions. One example is generating CPU instructions from source code: a compiler must choose the instructions that best match the source code, schedule them to minimize execution time, and possibly also minimize power consumption. The difficulty lies in that there is no single good way to approach the problem, since all parameters depend on each other. For example, if the compiler minimizes the number of instructions, it usually ends up with large, complex instructions that minimize power use; however, these are also harder to schedule efficiently to reduce runtime. Choosing many smaller instructions gives more scheduling flexibility, but draws more power. The compiler must also take caches into account to minimize misses, which cost power and slow down execution, making the whole problem even more complex. To find the best solution to such problems, one must typically explore every possibility and measure which is fastest. This creates a huge space of candidate solutions that takes a tremendous amount of time to explore before a solution that meets the requirements (often the "optimal" one) is found. Typically, these problems are abstracted into search trees that are explored using different techniques. There are two common ways to parallelize the exploration of search trees: coarse-grained parallel search, which splits exploration across threads as far up the tree as possible, near the root; and fine-grained parallel search, which splits the work as far down the tree as possible, so that each thread gets only a small subtree to explore.
Coarse-grained search has the advantage that it can achieve super-linear speedup if the solution is not in the leftmost subtree; otherwise, it wastes all extra work (compared to DFS). Fine-grained search has the advantage that it always achieves linear speedup, but it can never achieve super-linear speedup. An interesting method known as confidence-based search combines these two approaches. It starts from a set of branch probabilities provided by the user (called a confidence model); the search method uses these probabilities as a guide for how many resources to spend exploring different subtrees. For example, with 10 threads and a probability of 0.8 that a subtree contains a solution, the search method sends 8 threads into that subtree; an alternative way of looking at it is that the search spends 80% of its resources exploring that subtree and the remaining 20% on the rest. As the search finds failed nodes, it updates the probabilities, taking into account that a subtree with more failed nodes is less likely to contain a solution. Periodically the algorithm restarts, and when it does, it uses the updated probabilities as a guide for where to look for solutions. This thesis took on the goal of implementing such a search from scratch for the constraint programming framework Gecode. The resulting engine has a lot of potential, and while not perfect, it showed clear signs of super-linear speedup on some of the problems tested with naïve confidence models.
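The 8-of-10-threads example above can be made concrete with a small sketch. This is plain Python, not Gecode code; the proportional thread split and the exponential-decay probability update are illustrative assumptions, and the function names are made up.

```python
def allocate_threads(confidences, n_threads):
    """Split a thread budget over subtrees in proportion to the confidence
    that each subtree contains a solution."""
    total = sum(confidences)
    shares = [c / total * n_threads for c in confidences]
    alloc = [int(s) for s in shares]
    # hand leftover threads to the largest fractional remainders
    leftovers = sorted(range(len(shares)),
                       key=lambda i: shares[i] - alloc[i], reverse=True)
    for i in leftovers[:n_threads - sum(alloc)]:
        alloc[i] += 1
    return alloc

def update_confidence(confidences, failed_counts, decay=0.9):
    """Discount a subtree's confidence for every failed node found in it,
    then renormalise; used as the guide after a restart."""
    raw = [c * decay ** f for c, f in zip(confidences, failed_counts)]
    total = sum(raw)
    return [r / total for r in raw]

print(allocate_threads([0.8, 0.2], 10))  # → [8, 2], the split from the text
# after 5 failed nodes in the first subtree, confidence shifts right:
print(update_confidence([0.8, 0.2], [5, 0]))
```

On restart, the engine would call `update_confidence` with the failure counts gathered so far and re-run `allocate_threads` on the result, so resources gradually drain out of subtrees that keep failing.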
264

Secure Multicast with Source Authentication for the Internet of Things

Martynov, Nikita January 2014 (has links)
The Internet of Things is a rapidly evolving field of high-end technology and research. Its security is vital to the reliability and safety of future everyday communications. The DTLS protocol is the default protocol for securing unicast communication. A DTLS record-layer extension for multicast in constrained environments is being designed to secure multicast as well. However, the currently proposed DTLS-based multicast does not provide an essential property: source authenticity of the transmitted data. Moreover, the handshake layer is designed to establish pairwise keys only, so there is no way to distribute and manage group keys either. These two shortcomings become the primary design objectives of this thesis, which is conducted in collaboration with Philips. In the thesis, we formulate requirements for securing multicast in a constrained environment based on the company's outdoor lighting scenario with a centralized trust model. We evaluate various source authentication schemes and four key management protocols against the formulated requirements, select two authentication schemes and apply them to our scenario. As a result, we design an extension of DTLS-based multicast with support for ECDSA signatures for source authentication and develop a prototype implementation. Besides that, we determine cryptographic primitives for the TESLA scheme and adapt the scheme to a periodic communication pattern. Finally, we design a lightweight and flexible group key management solution for distributing group keys and public keys from the trusted authority.
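The TESLA idea the abstract refers to — MAC a message now, disclose the MAC key one interval later, and let receivers check the disclosed key against a one-way key chain — can be sketched with the standard library alone. This illustrates the scheme's principle only; it is not the thesis's DTLS extension, and the messages and function names are made up.

```python
import hashlib
import hmac

def key_chain(seed: bytes, n: int):
    """Build a one-way chain so that keys[i] == H(keys[i+1]).
    keys[0] is the public commitment; keys[1..n] are used in order,
    and knowing keys[i] never reveals the not-yet-used keys[i+1]."""
    keys = [hashlib.sha256(seed).digest()]
    for _ in range(n):
        keys.append(hashlib.sha256(keys[-1]).digest())
    keys.reverse()
    return keys

def verify_disclosed_key(commitment: bytes, key: bytes, i: int) -> bool:
    """Hash the disclosed interval-i key forward i times; it must land
    exactly on the public commitment keys[0]."""
    for _ in range(i):
        key = hashlib.sha256(key).digest()
    return hmac.compare_digest(key, commitment)

keys = key_chain(b"group-seed", 4)                       # keys[0] .. keys[4]
msg = b"lamp 17: dim to 40%"                             # hypothetical command
tag = hmac.new(keys[2], msg, hashlib.sha256).digest()    # MAC in interval 2
# one interval later, keys[2] is disclosed; a receiver holding only the
# commitment keys[0] authenticates the key, then the message:
assert verify_disclosed_key(keys[0], keys[2], 2)
assert hmac.compare_digest(tag, hmac.new(keys[2], msg, hashlib.sha256).digest())
print("authenticated")
```

The security argument rests on timing, not asymmetric crypto: a receiver only accepts a MAC if the message arrived before the corresponding key could have been disclosed, which is what makes TESLA attractive for constrained devices compared to per-packet ECDSA signatures.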
265

HW Fault Coverage Analysis

Bardis, Dimitrios January 2014 (has links)
For Ericsson Radio Base Station (RBS) products, very high quality is crucial. To achieve it, the production test must be capable of detecting all potential faults introduced in the production process, so achieving the maximum possible coverage of a HW implementation during the production phase is very important. The major test strategies evaluated in this project are BSCAN (Boundary Scan Testing), FT (Functional Testing) and AOI (Automated Optical Inspection), and the PCB tested under these strategies is the TCU board. The first step is to survey the market for a suitable fault coverage analysis tool with which to test the PCB. Next, a suitable method for using the tool is reported to Ericsson, and the project concludes with a recommendation to Ericsson AB on whether to use the tool.
266

Utvärdering av prestandaoptimeringsverktyg för Android

Cederlund, Mattias January 2014 (has links)
Smart mobile devices have recently taken an ever larger role in everyday life, and a wealth of applications exists for them. An attentive user will notice that performance and user experience can vary greatly between applications. Performance optimization is an important part of the development process for mobile applications, since mobile devices often have far more limited resources than, for example, personal computers. Because performance is complex, with many contributing factors, tools can be used to ease the optimization work. To find the most suitable tools for performance optimization of Android applications, a subset of the tools on the market was evaluated. The evaluation focused on the tools' functionality and effectiveness, with the goal of recommending, based on the results, the tools best suited for use. The results showed that every tool evaluated gave good indications, and performance gains in the test software could be documented with each of them. The most comprehensive tool in terms of functionality was Traceview, a profiling tool that could be used to analyze CPU, layout and response-time performance. A fully comprehensive performance optimization, however, required complementary tools for the area of memory performance. Through this work and the resulting recommendation, Android application developers can improve their performance optimization practices by using suitable and effective tools.
267

Konfigurering av slutartider för ljusdetekterande mjukvara

Lundström, Gustav January 2014 (has links)
This project measures the upper bound on exposure time for laser reflection detection in the software DotDetector. By measuring the exposure time at which distortion appears in a room lit with everyday light, we conclude that the upper bound on exposure time is 100 milliseconds. This value does not change as long as the lighting in the room stays the same. As future work, this project proposes variable upper bounds depending on the secondary lighting in the room, as well as automating the colour masking of the detection algorithm.
268

Automatisk detektering av förutbestämda former i olika miljöer

Forslund, Joakim January 2014 (has links)
This project measures in which environments a predetermined shape can be found with OpenCV and an off-the-shelf web camera.
269

Framtiden för Google Glass : En studie i acceptans av ny teknik

Kilström, Therése, Sjöblom, Caroline January 2014 (has links)
Google Glass is a pair of glasses with extended technical functions, under development by Google X. This bachelor's thesis examines how the product will fare on the market. The study focuses on the security questions that may arise around the product: the privacy aspects for the user and their surroundings, and the handling and spread of data. The second area the report analyzes is human-computer interaction (HCI), where interaction style and appearance were the two weightiest factors. A questionnaire and several kinds of interviews were conducted to compile quantitative and qualitative results. The results of these methods show a skeptical view of the product regarding its appearance, pricing and privacy aspects.
270

Thread Dispatching in Barrelfish

Delikoura, Eirini January 2014 (has links)
Current computer systems are becoming more and more complex. Even commodity computers nowadays have multiple cores, while heterogeneous systems are about to become the mainstream computer architecture. Parallelism, synchronization and scaling are thus becoming issues of great importance that need to be addressed efficiently. In such environments, creating dedicated software and operating systems for performance is becoming difficult. Developing code for just one specific machine can prove both expensive and wasteful, since technology advances so quickly that what is considered state-of-the-art today soon becomes obsolete. The multikernel model and its implementation, the Barrelfish OS, target a range of different architectures and environments, even when these environments "co-exist" on the same system. We present a different approach to loading and executing programs: using our new scheduling policy, we handle tasks rather than threads, balancing workload and building a dynamic environment with respect to scaling and performance. Our goal is to use our findings to establish a more controlled way of using resources.
