61

A scalable architecture for the demand-driven deployment of location-neutral software services

MacInnis, Robert F. January 2010
This thesis presents a scalable service-oriented architecture for the demand-driven deployment of location-neutral software services, using an end-to-end or ‘holistic’ approach to address identified shortcomings of the traditional Web Services model. The architecture presents a multi-endpoint Web Service environment which abstracts over Web Service location and technology and enables the dynamic provision of highly-available Web Services. The model describes mechanisms which provide a framework within which Web Services can be reliably addressed, bound to, and utilized, at any time and from any location. The presented model eases the task of providing a Web Service by taking over deployment and management tasks. It eases the development of consumer agent applications by letting developers program against what a service does, not where it is or whether it is currently deployed. It extends the platform-independent ethos of Web Services by providing deployment mechanisms which can be used independently of implementation and deployment technologies. Crucially, it maintains the Web Service goal of universal interoperability, preserving each actor’s view upon the system so that existing Service Consumers and Service Providers can participate without any modifications to provider agent or consumer agent application code. Lastly, the model aims to enable the efficient consumption of hosting resources by providing mechanisms to dynamically apply and reclaim resources based upon measured consumer demand.
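The central abstraction can be pictured as a registry that resolves a logical service name to a live endpoint, deploying the service on first use and reclaiming it when demand falls. The sketch below is a minimal illustration of that idea only, not the thesis's actual architecture; every class, function and URL is invented.

```python
class ServiceRegistry:
    """Hypothetical registry mapping logical service names to live endpoints.

    A minimal sketch of the binding idea only; all names here are invented
    for illustration.
    """

    def __init__(self, deployer):
        self.deployer = deployer      # callable: service name -> endpoint URL
        self.endpoints = {}           # logical name -> currently live endpoint

    def resolve(self, service_name):
        # Return a live endpoint, deploying the service on demand if no
        # instance is currently running (demand-driven deployment).
        endpoint = self.endpoints.get(service_name)
        if endpoint is None:
            endpoint = self.deployer(service_name)
            self.endpoints[service_name] = endpoint
        return endpoint

    def reclaim(self, service_name):
        # Release hosting resources when measured demand drops.
        self.endpoints.pop(service_name, None)


# The consumer programs against what the service does (its logical name),
# not where it is or whether it is currently deployed.
registry = ServiceRegistry(deployer=lambda name: f"https://host-a.example/{name}")
print(registry.resolve("billing"))   # first use triggers deployment
print(registry.resolve("billing"))   # later calls reuse the running instance
```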
62

Testing scalability of cloud gaming for multiplayer game

Printzell, Dan January 2018
Background. The rendering of games takes a lot of processing power and requires expensive hardware to perform in real time at an acceptable frame-rate. Games often also require an anti-cheat system, which consumes extra resources to continually verify that the game has not been modified. With the help of game streaming these burdens can be removed from the clients. Objectives. The objective of this thesis is to create a game streaming server and client to see whether a game streaming server can scale with the number of cores it has access to. Methods. The research question is answered using the implementation methodology, and an experiment is conducted using that implementation. Two programs are implemented: the server program and the client program. The server implements the management of clients, the game logic, the rendering and the compression. Each client is connected to exactly one server, and a server and its clients together form a game instance; everyone connected to the same server plays in the same instance. The implementation is written in the D programming language, using the ZLib and SDL2 libraries as building blocks. An experiment is then designed in which as many clients as possible connect to the server, and the resulting data is plotted in the results section. Results. The output data shows that the implementation scales, and a formula was fitted to the measurements: f(x) = 8 + 5x − 0.11x². Conclusions. The experiment was successful and showed that the game server scaled with the number of cores that were allocated to it. It does not scale as well as expected, but it is still a success. The test results are limited, as the system was only tested on one setup; more research is needed to test it on more hardware and to find more optimized implementations.
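Taking x as the number of cores and f(x) as the number of clients the server sustains (a reading the abstract implies but does not spell out), the fitted formula can be evaluated directly; the negative quadratic term means the gains taper off and eventually peak:

```python
# Evaluate the thesis's fitted scaling formula f(x) = 8 + 5x - 0.11x^2,
# read here as: x cores sustain f(x) clients.
def supported_clients(cores: float) -> float:
    return 8 + 5 * cores - 0.11 * cores ** 2

# The vertex of the parabola marks where adding cores stops helping:
# x = 5 / (2 * 0.11), about 22.7 cores.
peak_cores = 5 / (2 * 0.11)
print(f"peak at ~{peak_cores:.1f} cores, "
      f"~{supported_clients(peak_cores):.0f} clients")

for cores in (1, 4, 8, 16, 22):
    print(cores, round(supported_clients(cores), 1))
```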
63

Performance and Scaling Analysis of a Hypocycloid Wiseman Engine

January 2014
The slider-crank mechanism is popularly used in internal combustion engines to convert the reciprocating motion of the piston into rotary motion. This research discusses an alternative mechanism, proposed by Wiseman Technology Inc., in which the crankshaft is replaced with a hypocycloid gear assembly. The unique hypocycloid gear arrangement allows the piston and the connecting rod to move in a straight line, creating a perfect sinusoidal motion. To analyze the performance advantages of the Wiseman mechanism, engine simulation software was used. The Wiseman engine with the hypocycloid piston motion was modeled in the software, and the engine's simulated output was compared to that of a conventional engine of the same size. The software was also used to analyze the multi-fuel capabilities of the Wiseman engine using a contra piston; the engine's performance was studied while operating on diesel, ethanol and gasoline. Further, a scaling analysis of future Wiseman engine prototypes was carried out to understand how the performance of the engine is affected by increasing the output power and cylinder displacement. It was found that the existing Wiseman engine produced about 7% less power at peak speeds than a slider-crank engine of the same size; it also produced lower torque and was about 6% less fuel efficient. These results were consistent with dynamometer tests performed in the past. The 4-stroke diesel variant of the same Wiseman engine performed better than the 2-stroke gasoline version as well as the slider-crank engine in all aspects. The Wiseman engine with a contra piston showed poor fuel efficiency while operating on E85 fuel, but it produced higher torque and about 1.4% more power than while running on gasoline. In analyzing the effects of engine size on the Wiseman prototypes, it was found that the engines performed better in terms of power, torque, fuel efficiency and cylinder BMEP as their displacement increased. The 30 horsepower (HP) prototype, while operating on E85, produced the most favorable results in all aspects, and the diesel variant of the same engine proved to be the most fuel efficient. / Dissertation/Thesis / M.S.Tech Mechanical Engineering 2014
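The "perfect sinusoidal motion" claim follows from classical Cardan-circle kinematics: a planet gear of radius r rolling inside a ring gear of radius 2r drives a point on its rim along a straight line as a pure sinusoid, whereas the slider-crank's connecting rod adds higher harmonics. The sketch below illustrates the two motion laws with made-up dimensions, not Wiseman's actual geometry:

```python
import math

def slider_crank_piston(theta, r, l):
    # Conventional slider-crank: crank radius r, connecting-rod length l.
    # The square-root term introduces higher harmonics into the piston
    # motion (compare the shapes, not absolute values: this expression
    # includes the rod-length offset).
    return r * math.cos(theta) + math.sqrt(l**2 - (r * math.sin(theta))**2)

def hypocycloid_piston(theta, r):
    # Cardan-circle hypocycloid: a planet gear of radius r rolling inside
    # a ring gear of radius 2r moves a rim point along a straight line as
    # a pure sinusoid, x = 2r*cos(theta), which is the perfect sinusoidal
    # motion attributed to the Wiseman mechanism.
    return 2 * r * math.cos(theta)

r, l = 0.025, 0.100   # illustrative dimensions in metres, not Wiseman's
for deg in range(0, 181, 45):
    th = math.radians(deg)
    print(f"{deg:3d} deg  slider-crank: {slider_crank_piston(th, r, l):.4f}"
          f"  hypocycloid: {hypocycloid_piston(th, r):.4f}")
```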
64

Resource Allocation Guidelines : Configuring a large telecommunication application

Eriksson, Daniel January 2001
Changing the architecture of the Ericsson Billing Gateway application has been shown to solve the problem with dynamic memory management that was degrading performance. The new architecture, which is centered on processes instead of threads, showed increased performance. It also made it possible to adjust the process/thread configuration to the network topology and hardware. Measurements of different configurations showed the importance of an accurate configuration, and certain guidelines could be established based on the results.
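The thread-versus-process trade-off the thesis measures in the (C++) Billing Gateway can be loosely illustrated in Python, where the shared bottleneck is the interpreter lock rather than a shared allocator; the workload and worker counts below are invented for the illustration, not taken from the thesis:

```python
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def billing_record(n: int) -> int:
    # Invented stand-in for per-record processing; purely CPU-bound.
    return sum(i * i for i in range(n))

def timed(executor_cls, workers: int) -> float:
    start = time.perf_counter()
    with executor_cls(max_workers=workers) as pool:
        list(pool.map(billing_record, [200_000] * 16))
    return time.perf_counter() - start

if __name__ == "__main__":
    # Same work under two concurrency models. In CPython the threads
    # contend on the interpreter lock much as the original C++ threads
    # contended on the shared allocator; processes avoid the shared
    # bottleneck at the cost of more memory and start-up time.
    print("threads:  ", round(timed(ThreadPoolExecutor, 4), 2), "s")
    print("processes:", round(timed(ProcessPoolExecutor, 4), 2), "s")
```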
65

Dependability aspects of COM+ and EJB in multi-tiered distributed systems

Karásson, Robert January 2002
COM+ and Enterprise JavaBeans are two component-based technologies that can be used to build enterprise systems. They are two competing technologies in the software industry today, and choosing which of them a company should use to build its enterprise system is not an easy task. There are many factors to consider, and in this project we evaluate the two technologies with a focus on scalability and the dependability aspects security, availability and reliability. Each technology is evaluated theoretically against these criteria. We use a 4-tier architecture for the evaluation; the center of attention is a persistence layer, which typically resides in an application server, and how it can be realized using each technology. The evaluation results in a recommendation about which technology is the better approach for building a scalable and dependable distributed system: COM+ is considered the better approach for building this kind of multi-tier distributed system.
66

Limitations of Azure in GIS Scalability : A performance and migration study

Bäckström, Jonas January 2012
In this study, the cloud platform Windows Azure has been targeted for test implementations of Geographical Information System (GIS) software in the form of map servers and tile caches. The map servers included were GeoServer, MapNik, MapServer and SharpMap, which together with the tile caches GeoWebCache, MapCache and TileCache were installed on Windows Azure's three different virtual machine roles (Web, Worker and VM). Furthermore, different techniques for scaling applications and internal role communication are presented, followed by four sets of performance tests. The performance tests attempt to highlight the differences in request times, how the different role sizes handle the load from incoming requests, how the different role sizes handle many concurrent TCP connections, and how well the incoming requests are load-balanced between the worker roles. The test implementations showed that all map servers and tile caches were successfully installed in Azure, which leads to the conclusion that Windows Azure is suitable for hosting GIS software with installation requirements similar to those of the previously mentioned software. Four different approaches (Direct mapping, Public Internal Endpoints, Queue and Worker Role Request Broker) are presented, showing how Azure allows different methods for scaling the internal role communication as well as the external client requests. The performance tests provided somewhat inconclusive results due to hardware limitations in the test setup, which made it difficult to draw firm parallels between the final results and the expected values. Minor tendencies toward performance gains can be seen when scaling up the VM size as well as the number of VMs.
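Of the four approaches, the Queue pattern is the easiest to sketch generically: producers drop requests on a shared queue and any number of worker roles pull from it, so load balancing falls out of the queue discipline. The sketch below uses Python's multiprocessing rather than the Azure SDK, so it only illustrates the pattern, not the thesis's actual implementation:

```python
import multiprocessing as mp

def worker_role(requests: "mp.Queue", results: "mp.Queue") -> None:
    # Stand-in for a worker role: pull tile requests off the shared queue,
    # so load spreads across however many workers happen to be running.
    while True:
        tile = requests.get()
        if tile is None:              # sentinel: shut down
            break
        results.put((tile, f"rendered {tile}"))

if __name__ == "__main__":
    requests, results = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=worker_role, args=(requests, results))
               for _ in range(3)]
    for w in workers:
        w.start()
    tiles = ["0/0/0", "1/0/1", "1/1/1", "2/3/1"]
    for tile in tiles:                # the web role would enqueue these
        requests.put(tile)
    for _ in tiles:
        print(results.get())
    for _ in workers:
        requests.put(None)            # one sentinel per worker
    for w in workers:
        w.join()
```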
67

Adapting video compression to new formats

Bordes, Philippe 18 January 2016
New video codecs should be designed with a high level of adaptability in terms of network bandwidth, format scalability (size, color space…) and backward compatibility. This thesis was carried out in this context, within the scope of the HEVC standard development. In the first part, several video coding adaptations that exploit signal properties and take place at bit-stream creation are explored. The study of improved frame partitioning for inter prediction allows a better fit to actual motion boundaries and shows significant gains. This principle is further extended to long-term motion modeling with trajectories. We also show how cross-component correlation statistics and luminance changes between pictures can be exploited to increase coding efficiency. In the second part, post-creation stream adaptations relying on intrinsic stream flexibility are investigated. In particular, a new color gamut scalability scheme addressing color space adaptation is proposed. From this work, we derive color remapping metadata and an associated model that provide a low-complexity, general-purpose color remapping feature. We also explore adaptive resolution coding and how to extend a scalable codec to stream-switching applications. Several of the described techniques have been proposed to MPEG; some have been adopted in the HEVC standard and in the new UHD Blu-ray Disc format. Various techniques for adapting video compression to content characteristics and distribution use cases have been considered; depending on the application requirements, several of them can be selected and combined.
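A color remapping model of this kind can be pictured as a per-component transfer curve described by metadata pivot points. The sketch below shows that generic idea with invented pivot values; it is not the actual HEVC colour-remapping-information syntax:

```python
import bisect

def piecewise_linear_lut(pivots, value):
    # pivots: sorted list of (input, output) pairs defining a 1-D
    # piecewise-linear transfer curve for one colour component, the kind
    # of curve that remapping metadata could describe compactly.
    xs = [p[0] for p in pivots]
    i = max(1, min(bisect.bisect_right(xs, value), len(pivots) - 1))
    (x0, y0), (x1, y1) = pivots[i - 1], pivots[i]
    t = (value - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

# Illustrative curve squeezing a wide-gamut component into a narrower
# target range; the pivot values are invented for the example.
curve = [(0, 0), (64, 48), (192, 176), (255, 255)]
for v in (0, 32, 128, 240):
    print(v, round(piecewise_linear_lut(curve, v), 1))
```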
68

Finding Community Structures In Social Activity Data

Peng, Chengbin 19 May 2015
Social activity data sets are increasing in number and volume. Finding community structure in such data is valuable in many applications. For example, understanding the community structure of social networks may reduce the spread of epidemics or boost advertising revenue; discovering partitions in traffic networks can help to optimize routing and to reduce congestion; finding a group of users with common interests can allow a system to recommend useful items. Among many aspects, quality of inference and efficiency in finding community structures in such data sets are of paramount concern. In this thesis, we propose several approaches to improve community detection in these aspects. The first approach utilizes the concept of K-cores to reduce the size of the problem. The K-core of a graph is the largest subgraph within which each node has at least K connections. We propose a framework that accelerates community detection: it first applies a traditional algorithm, which is relatively slow, to the K-core, and then uses a fast heuristic to infer community labels for the remaining nodes. The second approach is to scale the algorithm to multi-processor systems. We devise a scalable community detection algorithm for large networks based on stochastic block models. It is an alternating iterative algorithm using a maximum likelihood approach. Compared with traditional inference algorithms for stochastic block models, our algorithm can scale to large networks and run on multi-processor systems; its time complexity is linear in the number of edges of the input network. The third approach is to improve quality. We propose a framework for nonnegative matrix factorization that allows the imposition of linear or approximately linear constraints on each factor. One example application is finding community structures in bipartite networks, which is useful in recommender systems. Our algorithms are compared with the results in recent papers, and their quality and efficiency are verified by experiments.
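The first approach is straightforward to sketch with networkx, which ships a k_core routine; the greedy modularity step below is a stand-in for whichever slow, high-quality algorithm is applied to the core, and the majority-vote heuristic for the periphery is one simple choice among several:

```python
import networkx as nx
from networkx.algorithms import community

# Small stand-in for a social network; the thesis targets much larger graphs.
G = nx.karate_club_graph()

# K-core reduction: the largest subgraph in which every node has >= K edges.
core = nx.k_core(G, k=3)
print(f"full graph: {G.number_of_nodes()} nodes, "
      f"3-core: {core.number_of_nodes()} nodes")

# Run the slow, high-quality step on the reduced graph only.
labels = {}
for i, comm in enumerate(community.greedy_modularity_communities(core)):
    for n in comm:
        labels[n] = i

# Fast heuristic for the peripheral nodes: adopt the most common label
# among already-labelled neighbours.
for n in G.nodes:
    if n not in labels:
        neighbour_labels = [labels[m] for m in G[n] if m in labels]
        if neighbour_labels:
            labels[n] = max(set(neighbour_labels), key=neighbour_labels.count)

print(labels)
```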
69

Scalability Guidelines for Software as a Service: Recommendations based on a case analysis of Quinyx FlexForce AB and UCMS Group Ltd.

Rapp, Mikael January 2010
Software as a service (SaaS) has become a common business model for application providers. However, this success has led to scalability issues for service providers: their user base and processing needs can grow rapidly, and it is not always clear how a SaaS provider should optimally scale its service. This thesis summarizes the technological (both software and hardware) solutions to scaling, and also covers financial and managerial aspects of scalability. Unfortunately, there is no existing out-of-the-box solution for managing scalability; every situation and application is currently viewed as a unique problem, although there is plenty of good advice from many sources about scaling. Obviously there are practical solutions to scaling, as there are successful SaaS providers, but it is not clear whether there exist fundamental principles that every SaaS provider could use to address the issue of scalability. This thesis seeks to find such fundamental principles through previous research, articles and, finally, a case analysis of Quinyx FlexForce AB. The thesis concludes that there are many principles for scaling a 3-tier web system and that most of them can be applied by SaaS providers.
70

Scalable Stream Processing and Management for Time Series Data

Mousavi, Bamdad 15 June 2021
There has been enormous growth in the generation of time series data in the past decade. This trend is driven by the widespread adoption of IoT technologies, the data generated by monitoring cloud computing resources, and cyber-physical systems. Although time series data have been a topic of discussion in the data management domain for several decades, this recent growth has brought the topic to the forefront. Many of the time series management systems available today lack the features necessary to manage and process the sheer amount of time series data being generated. In this thesis we strive to examine the field and study the prior work in time series management. We then propose a large system capable of handling time series management end to end, from generation to consumption by the end user. Our system is composed of open-source data processing frameworks. It can collect time series data, perform stream processing over it, store it for immediate and future processing, and create the necessary visualizations. We present the implementation of the system and perform experiments to show its scalability in handling growing pipelines of incoming data from various sources.
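A toy version of the stream-processing stage, tumbling-window averaging over (timestamp, value) events, gives a feel for the kind of operation such a pipeline performs; the readings below are invented, and a real deployment would push this work into the open-source stream-processing frameworks the thesis builds on:

```python
from collections import defaultdict
from statistics import mean

def tumbling_window_avg(events, window_seconds):
    """Group (timestamp, value) events into fixed windows and average them.

    A minimal sketch of one stream-processing operator, not the thesis's
    actual pipeline.
    """
    windows = defaultdict(list)
    for ts, value in events:
        # Align each event to the start of its window.
        windows[ts // window_seconds * window_seconds].append(value)
    return {start: mean(vals) for start, vals in sorted(windows.items())}

# Simulated sensor readings: (unix timestamp, measurement).
readings = [(1000, 21.0), (1012, 21.4), (1031, 22.1),
            (1064, 22.8), (1071, 23.0)]
print(tumbling_window_avg(readings, window_seconds=30))
```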
