331 |
Intrapreneurship in Swedish Technology Consultancy Firms : Influences of the billing per hour mindset on intrapreneurial activities / Intraprenörskap i Svenska Konsultbolag inom Teknik. Bårdén, Sandra; Pärend, Kärte (January 2022)
The financial billing-per-hour model that consultancies use is a successful concept. This financial model has also fostered a mindset in which the structures, culture, and leadership of consultancies revolve around billability. Currently, many consultancies are aiming at business model innovation and are turning their innovation capabilities inward through intrapreneurship. This thesis therefore answers the questions of how intrapreneurship is integrated into the business model and what role structural, cultural, and leadership aspects play with regard to intrapreneurship. The conducted study shows that there are two main ways of structuring the incorporation of intrapreneurship initiatives: structured and hired intrapreneurship. The culture that revolves around the billing-per-hour mindset negatively influences leadership's willingness to supply resources to the initiatives, and in many cases management lacks the knowledge of what is needed to succeed with business model innovation. The study shows that the strategy of the intrapreneurship initiative and the communication and acceptance of the vision are two crucial parameters for how the initiatives are implemented in the business model.
|
332 |
Study of Feasible Cell-Free Massive MIMO Systems in Realistic Indoor Scenarios. Prado Alvarez, Danaisy (14 December 2023)
The massive use of telecommunications demands higher capacity networks. This capacity can be increased by increasing the number of antennas, bandwidth, spectral efficiency, or a combination of these. In response to this, cell-free massive MIMO systems have emerged. These systems aim to offer a ubiquitous and reliable service, relying on a massive number of antennas and adapting the network to users' needs. Cell-free massive MIMO systems have been studied both for frequencies below 6 GHz and in the mmW band, proving to be a good alternative to small cells. However, many issues still require further study. This Thesis addresses the issues concerning cell-free massive MIMO deployments in terms of scalability, power consumption, realistic modeling of deployment scenarios, and design of precoders for such scenarios in the mmW band.
Cell-free massive systems in their canonical form consider that all the APs are connected to a single CPU and that all of them serve all UEs simultaneously. In practice, however, such a system is not feasible for scalability reasons. Therefore, in this Thesis, different clustering solutions are studied and proposed that alleviate the load on both each individual AP and the CPUs, since the total processing load is divided among them. The proposed solutions show better performance than the state-of-the-art solution studied, for all cluster sizes considered and independently of the number of UEs in the scenario.
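To make the user-centric clustering idea concrete, the following Python sketch assigns each UE to the handful of access points with the strongest large-scale channel gains, so that no single AP (or CPU) has to process every user. The layout, gain model and cluster size are illustrative assumptions, not the thesis's actual algorithm or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

n_aps, n_ues, cluster_size = 20, 8, 4   # illustrative sizes

# Random AP and UE positions in a 100 m x 100 m area (assumed layout).
ap_pos = rng.uniform(0, 100, size=(n_aps, 2))
ue_pos = rng.uniform(0, 100, size=(n_ues, 2))

# Simple distance-based large-scale gain (path-loss exponent 3.5, assumed).
dist = np.linalg.norm(ap_pos[:, None, :] - ue_pos[None, :, :], axis=2)
gain = 1.0 / np.maximum(dist, 1.0) ** 3.5

# User-centric clustering: each UE is served only by its strongest APs.
serving_aps = {ue: set(np.argsort(gain[:, ue])[-cluster_size:])
               for ue in range(n_ues)}

# Per-AP load = number of UEs it serves; this is what clustering keeps bounded.
load = np.zeros(n_aps, dtype=int)
for aps in serving_aps.values():
    for ap in aps:
        load[ap] += 1

print("UEs per AP:", load)
print("max load:", load.max(), "vs", n_ues, "in the canonical all-serve-all system")
```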
After the logical network topology considerations, the impact of different physical topology configurations on network performance is analyzed. Specifically, power consumption modeling is studied for fully dedicated, hybrid, and fully serial front-haul. In this sense, some modifications to the traditional power consumption model are suggested in order to obtain more accurate results when serial deployments are analyzed. The obtained results highlight the importance of applying the proposed modifications, which account for the power savings due to the serial connections in a cell-free massive MIMO deployment where each AP transmits the same information (except for the precoding coefficients).
On the other hand, although wider bandwidths are available in the millimeter band, the use of these frequencies brings certain challenges. One of them is modeling the radio channel, since when working with wavelengths on the order of tens of millimeters, any object or surface roughness of the same order can affect the propagation of the wave. Another challenge is to consider the electromagnetic impact of the human body at mmW frequencies. In this sense, this Thesis first proposes some adaptations to the 3GPP body blockage model. The results obtained after the modifications are closer to real measurement values, which makes the adapted model more accurate for considering body blockage at mmW. Secondly, this Thesis presents a radio channel simulation tool based on ray tracing. With this tool, path loss results have been obtained for an indoor scenario that are remarkably close to the actual measurements. The results also show that when the electromagnetic characteristics of the materials are not modeled correctly, or the furniture in an indoor scenario is not taken into account, the simulation results can differ considerably from the actual measurements.
Finally, the design of precoders in cell-free massive MIMO systems in a realistic scenario is addressed. For this purpose, an industrial scenario with specific power requirements is considered. In particular, an optimization problem with different per-antenna power constraints is solved. In this case, the scenario and the radio channel are modeled using the above-mentioned tool. This fact makes it possible to find with high precision the power coefficients to be used by each transmitting antenna to transmit to each user so that the achieved data rate is maximized. / I would like to thank the H2020 Marie Curie Program that has funded this thesis within Project Grant No. 766231 WAVECOMBE - ITN - 2017 / Prado Alvarez, D. (2022). Study of Feasible Cell-Free Massive MIMO Systems in Realistic Indoor Scenarios [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/191375
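As a rough illustration of the per-antenna power-constrained allocation addressed above, the sketch below runs a projected-gradient ascent on a toy, interference-free rate model with a separate power budget per antenna. The channel gains, budgets and the simple rescaling used as a feasibility step are assumptions made for the example; the thesis solves a different, full optimization problem over a ray-traced industrial scenario.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant, n_ue = 6, 3
g = rng.uniform(0.1, 1.0, size=(n_ant, n_ue))   # assumed channel gains
p_max = np.full(n_ant, 1.0)                      # per-antenna power budget (W), assumed
noise = 0.1

def sum_rate(p):
    # Interference-free toy model: rate_u = log2(1 + sum_a g[a,u] p[a,u] / noise).
    snr = (g * p).sum(axis=0) / noise
    return np.log2(1.0 + snr).sum()

def project(p):
    # Feasibility step: non-negative powers, each antenna within its own budget.
    # (Simple per-antenna rescaling, not an exact Euclidean projection.)
    p = np.clip(p, 0.0, None)
    row = p.sum(axis=1)
    over = row > p_max
    p[over] *= (p_max[over] / row[over])[:, None]
    return p

p = project(rng.uniform(0, 0.5, size=(n_ant, n_ue)))
step = 0.05
for _ in range(500):
    snr_den = noise + (g * p).sum(axis=0)          # per-user denominator
    grad = g / (snr_den[None, :] * np.log(2.0))    # d(sum rate)/d p[a,u]
    p = project(p + step * grad)

print("per-antenna power used:", p.sum(axis=1).round(3), "of budget", p_max)
print(f"achieved sum rate: {sum_rate(p):.3f} bit/s/Hz")
```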
|
333 |
Application of Information Theory and Learning to Network and Biological Tomography. Narasimha, Rajesh (08 November 2007)
Studying the internal characteristics of a network using measurements obtained from end hosts is known as network tomography. The foremost challenge in measurement-based approaches is the large size of a network, where only a subset of measurements can be obtained because of the inaccessibility of the entire network. As the network becomes larger, a question arises as to how rapidly the monitoring resources (number of measurements or number of samples) must grow to obtain a desired monitoring accuracy. Our work studies the scalability of the measurements with respect to the size of the network. We investigate the issues of scalability and performance evaluation in IP networks, specifically focusing on fault and congestion diagnosis. We formulate network monitoring as a machine learning problem using probabilistic graphical models that infer network states from path-based measurements. We consider the theoretical and practical management resources needed to reliably diagnose congested or faulty network elements, and provide fundamental limits on the relationships between the number of probe packets, the size of the network, and the ability to accurately diagnose such network elements. We derive lower bounds on the average number of probes per edge using the variational inference technique proposed in the context of graphical models under noisy probe measurements, and then propose an entropy lower (EL) bound by drawing similarities between the coding problem over a binary symmetric channel and the diagnosis problem. Our investigation is supported by simulation results. For the congestion diagnosis case, we propose a solution based on decoding linear error control codes on a binary symmetric channel for various probing experiments. To identify the congested nodes, we construct a graphical model and infer congestion using the belief propagation algorithm. In the second part of the work, we focus on the development of methods to automatically analyze the information contained in electron tomograms, which is a major challenge since tomograms are extremely noisy. Advances in automated data acquisition in electron tomography have led to an explosion in the amount of data that can be obtained about the spatial architecture of a variety of biologically and medically relevant objects with sizes in the range of 10-1000 nm. A fundamental step in the statistical inference from large amounts of data is to segment relevant 3D features in cellular tomograms. Segmentation procedures must work robustly and rapidly in spite of the low signal-to-noise ratios inherent in biological electron microscopy. This work evaluates various denoising techniques and then extracts relevant features of biological interest in tomograms of HIV-1 in infected human macrophages and in Bdellovibrio bacterial tomograms recorded at room and cryogenic temperatures. Our approach represents an important step in automating the efficient extraction of useful information from large datasets in biological tomography and in speeding up the process of reducing gigabyte-sized tomograms to relevant byte-sized data. Next, we investigate automatic techniques for segmentation and quantitative analysis of mitochondria in MNT-1 cells imaged using an ion-abrasion scanning electron microscope, and in tomograms of liposomal doxorubicin formulations (Doxil), an anticancer nanodrug, imaged at cryogenic temperatures.
A machine learning approach that exploits texture features is formulated, and joint image block-wise classification and segmentation is performed by histogram matching, using a nearest-neighbor classifier with the chi-squared statistic as the distance measure.
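A minimal sketch of that last step, block-wise nearest-neighbour classification with the chi-squared distance, is shown below on a synthetic image, with plain intensity histograms standing in for the texture features used in the actual work.

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    # Chi-squared statistic between two normalised histograms.
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def block_histogram(block, bins=16):
    # Intensity histogram of one image block, normalised to sum to 1.
    h, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

def classify_blocks(image, templates, block=8):
    """Label each non-overlapping block with the class of its nearest
    template histogram under the chi-squared distance."""
    rows, cols = image.shape[0] // block, image.shape[1] // block
    labels = np.zeros((rows, cols), dtype=int)
    for i in range(rows):
        for j in range(cols):
            patch = image[i*block:(i+1)*block, j*block:(j+1)*block]
            h = block_histogram(patch)
            dists = [chi2_distance(h, t) for t in templates]
            labels[i, j] = int(np.argmin(dists))
    return labels

# Tiny synthetic demo: dark "background" texture vs bright "feature" texture.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.3, size=(64, 64))
img[16:48, 16:48] = rng.uniform(0.6, 1.0, size=(32, 32))
templates = [block_histogram(rng.uniform(0.0, 0.3, size=(8, 8))),
             block_histogram(rng.uniform(0.6, 1.0, size=(8, 8)))]
print(classify_blocks(img, templates))
```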
|
334 |
Performance Optimisation of Discrete-Event Simulation Software on Multi-Core Computers / Prestandaoptimering av händelsestyrd simuleringsmjukvara på flerkärniga datorer. Kaeslin, Alain E. (January 2016)
SIMLOX is a discrete-event simulation software package developed by Systecon AB for analysing logistic support solution scenarios. To cope with ever larger problems, SIMLOX's simulation engine was recently extended with a parallel execution mechanism in order to take advantage of multi-core processors. However, this extension did not yield the desired reduction in runtime for all simulation scenarios, even though the parallelisation strategy applied had promised linear speedup. Therefore, an in-depth analysis of the limiting scalability bottlenecks became necessary and was carried out in this project. Through the use of a low-overhead profiler and microarchitecture analysis, the root causes were identified: atomic operations causing a high communication overhead, poor locality leading to translation-lookaside-buffer thrashing, and hot spots that consume significant amounts of CPU time. Subsequently, appropriate optimisations to overcome the limiting factors were implemented: eliminating the expensive operations, handling heap memory more efficiently through a scalable memory allocator, and using data structures that make better use of caches. Experimental evaluation on real-world test cases demonstrated a speedup of at least 6.75x on an eight-core processor; most cases even achieved a speedup of more than 7.2x. The various optimisations also lowered run times for sequential execution by 1.5x or more. It can be concluded that achieving nearly linear speedup on a multi-core processor is possible in practice for discrete-event simulation.
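For reference, speedup and parallel efficiency figures of this kind are derived directly from measured runtimes; the sketch below shows the arithmetic with placeholder timings chosen to be consistent with the reported 1.5x sequential and 6.75x eight-core gains (the runtimes themselves are made up).

```python
def speedup(t_baseline: float, t_improved: float) -> float:
    # Classic definition: how many times faster the improved run is.
    return t_baseline / t_improved

def efficiency(s: float, cores: int) -> float:
    # Fraction of ideal linear scaling actually achieved.
    return s / cores

# Placeholder measurements (seconds) for one simulation scenario.
t_seq_before = 600.0      # original sequential run
t_seq_after = 400.0       # after the sequential optimisations (~1.5x)
t_par_8cores = 59.0       # optimised run on 8 cores

s_seq = speedup(t_seq_before, t_seq_after)
s_par = speedup(t_seq_after, t_par_8cores)
print(f"sequential optimisations alone: {s_seq:.2f}x")
print(f"parallel speedup on 8 cores:    {s_par:.2f}x "
      f"(efficiency {efficiency(s_par, 8):.0%})")
```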
|
335 |
Conception des réseaux maillés sans fil à multiples-radios multiples-canaux / Design of Multi-Radio Multi-Channel Wireless Mesh Networks. Benyamina, Djohara (01 1900)
Generally, network design problems consist of selecting the links and vertices of a graph G so that a cost function is optimized and all constraints involving the links and vertices of G are met. A change in the optimization criterion and/or in the set of constraints leads to a new representation of a different problem. In this thesis, we consider the problem of designing infrastructure Wireless Mesh Networks (WMNs), and we show that the design of such networks becomes an optimization problem with multiple objectives, instead of a standard optimization problem (where a single cost function is optimized), in order to take into account many aspects that are often contradictory but nevertheless essential in reality. This thesis, composed of three parts, introduces new models and algorithms for designing WMNs from scratch.

The first part is devoted to the simultaneous optimization of two equally important objectives: cost and network performance in terms of throughput. Three bi-objective models, which differ mainly in the approach used to maximize network performance, are proposed, solved and compared.

The second part deals with the problem of gateway placement, given its impact on network performance and scalability. The concept of hop constraints is introduced into the network design to limit the transmission delay. A novel algorithm based on a clustering approach is proposed to find strategic positions for the gateways that support network scalability and increase performance without significantly increasing the total installation cost.

The final part addresses the problem of network reliability in the presence of single failures. Allowing the installation of redundant components in the design phase can ensure reliable communications, but at the expense of cost and network performance. A new algorithm, based on the theoretical approach of "ear decomposition", is developed to install the minimum number of additional routers needed to tolerate single failures.

In order to solve the proposed models for real-size networks, an evolutionary meta-heuristic algorithm, inspired by nature, is developed. Finally, the proposed models and methods have been evaluated through empirical and discrete-event simulations.
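To illustrate what a clustering-based gateway placement can look like, the sketch below groups mesh routers with a plain k-means-style heuristic and places one gateway per cluster at the router nearest the cluster centroid. The coordinates, cluster count and the distance-based proxy for hop count are assumptions for the example, not the algorithm proposed in the thesis.

```python
import numpy as np

def place_gateways(router_pos, n_gateways, iters=50, seed=0):
    """Cluster mesh routers and put one gateway per cluster at the router
    closest to the cluster centroid (plain k-means-style heuristic)."""
    rng = np.random.default_rng(seed)
    centroids = router_pos[rng.choice(len(router_pos), n_gateways, replace=False)].copy()
    for _ in range(iters):
        # Assign every router to its nearest centroid, then recentre.
        d = np.linalg.norm(router_pos[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for k in range(n_gateways):
            members = router_pos[assign == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    # The router nearest to each centroid hosts that cluster's gateway.
    gateways = [int(np.linalg.norm(router_pos - c, axis=1).argmin()) for c in centroids]
    gw_pos = router_pos[gateways]
    d = np.linalg.norm(router_pos[:, None, :] - gw_pos[None, :, :], axis=2)
    return gateways, d.argmin(axis=1)

rng = np.random.default_rng(1)
routers = rng.uniform(0, 1000, size=(60, 2))    # assumed router coordinates (metres)
gateways, assign = place_gateways(routers, n_gateways=4)
dist_to_gw = np.linalg.norm(routers - routers[np.array(gateways)][assign], axis=1)
print("routers chosen as gateways:", gateways)
print(f"worst router-to-gateway distance: {dist_to_gw.max():.0f} m "
      "(a rough proxy for the hop count a placement induces)")
```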
|
336 |
Vers une gestion coopérative des infrastructures virtualisées à large échelle : le cas de l'ordonnancement / Toward cooperative management of large-scale virtualized infrastructures: the case of scheduling. Quesnel, Flavien (20 February 2013)
The increasing need for computing power has been satisfied by federating more and more computers (called nodes) to build so-called distributed infrastructures. Over the past few years, system virtualization has been introduced into these infrastructures (the software is decoupled from the hardware by packaging it in virtual machines), which has led to the development of software managers in charge of operating these virtualized infrastructures. Most of these managers are highly centralized (management tasks are performed by a restricted set of dedicated nodes). This restricts the scalability of the managers, in other words their ability to react quickly when managing large-scale infrastructures, which are becoming more and more common. During this Ph.D., we studied how to mitigate these concerns; one solution is to decentralize the processing of management tasks, when appropriate. Our work focused in particular on the dynamic scheduling of virtual machines, resulting in the DVMS (Distributed Virtual Machine Scheduler) proposal. We implemented a prototype that was validated by means of simulations (especially with the SimGrid tool) and of experiments on the Grid'5000 test bed. We observed that DVMS was very reactive in scheduling tens of thousands of virtual machines distributed over thousands of nodes. We then took an interest in the perspectives for improving and extending DVMS. The final goal is to build a fully decentralized manager; this goal should be reached through the Discovery initiative, which builds on this work.
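The following toy sketch is not DVMS itself, but it illustrates why decentralising the scheduling helps: nodes are split into small groups and each group resolves its own overloads by migrating virtual machines to the least-loaded node in the group, so no central controller ever has to examine every node. Node counts, capacities and VM sizes are made-up values.

```python
import random

random.seed(0)

N_NODES, VMS_PER_NODE, CAPACITY, GROUP = 1000, 8, 100, 10

# Each node hosts a list of VM CPU demands (toy values).
nodes = [[random.randint(5, 20) for _ in range(VMS_PER_NODE)] for _ in range(N_NODES)]

def load(node):
    return sum(node)

def balance_group(group):
    """Resolve overloads inside one small group of nodes by greedy migration."""
    migrations = 0
    for node in group:
        while load(node) > CAPACITY:
            target = min(group, key=load)
            if target is node or load(target) + min(node) > CAPACITY:
                break                      # nowhere to put a VM inside this group
            vm = min(node)                 # migrate the smallest VM first
            node.remove(vm)
            target.append(vm)
            migrations += 1
    return migrations

# Each group works independently, which is what makes the scheme scale.
groups = [nodes[i:i + GROUP] for i in range(0, N_NODES, GROUP)]
total = sum(balance_group(g) for g in groups)
overloaded = sum(load(n) > CAPACITY for n in nodes)
print(f"{total} migrations, {overloaded} nodes still overloaded out of {N_NODES}")
```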
|
337 |
Cloud application platform - Virtualization vs Containerization : A comparison between application containers and virtual machines. Vestman, Simon (January 2017)
Context. As the number of organizations using cloud application platforms to host their applications increases, the priority of distributing physical resources within those platforms increases simultaneously. The goal is to host a higher quantity of applications per physical server, while at the same time retaining a satisfying rate of performance combined with a certain scalability. The modern needs of customers occasionally also imply an assurance of a certain privacy for their applications.

Objectives. In this study two types of instances for hosting applications in cloud application platforms, virtual machines and application containers, are comparatively analyzed. The investigation aims to expose advantages and disadvantages of each instance type in order to determine which is more appropriate for use in cloud application platforms, in terms of performance, scalability and user isolation.

Methods. The comparison is done on a server running Linux Ubuntu 16.04. The virtual machine is created using DevStack, a development environment for OpenStack, while the application container is hosted by Docker. Each instance runs an Apache web server for handling HTTP requests. The comparison is done by using different benchmark tools for different key usage scenarios while simultaneously observing the resource usage in the respective instance.

Results. The results are produced by investigating the user isolation and resource occupation of each instance, by examining the file system, active process handling and resource allocation after creation. Benchmark tools are executed locally on each instance for a performance comparison of the usage of physical resources. The number of CPU operations executed within a given time is measured in order to determine processor performance, while the speed of read and write operations to main memory is measured in order to determine RAM performance. A file is also transmitted between the host server and the application in order to compare the network performance of the two instances, by examining the transfer speed of the file. Lastly, a set of benchmark tools is executed on the host server to measure the HTTP request handling performance and scalability of each instance. The number of requests handled per second is observed, as well as the resource usage for request handling at an increasing rate of served requests and clients.

Conclusions. The virtual machine is a better choice for applications where privacy is a higher priority, due to its complete isolation and abstraction from the rest of the physical server. Virtual machines perform better in handling a higher quantity of requests per second, while application containers are faster at transferring files over the network. The container requires a significantly lower amount of resources than the virtual machine in order to run and execute tasks, such as responding to HTTP requests. When it comes to scalability, the preferred type of instance depends on the priority of key usage scenarios: virtual machines have a quicker response time for HTTP requests, but application containers occupy fewer physical resources, which makes it possible to run a higher quantity of containers than virtual machines simultaneously on the same physical server.
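A minimal sketch of the request-throughput measurement is shown below: several concurrent clients hammer the HTTP endpoint of the instance under test for a fixed duration, and the completed requests per second are reported. The URL, client count and duration are placeholders; the thesis relied on dedicated benchmark tools rather than a hand-rolled script.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://127.0.0.1:8080/"   # placeholder: address of the VM or container under test
CLIENTS = 16                     # concurrent clients (assumed)
DURATION = 10.0                  # seconds per run (assumed)

def client(deadline):
    """Issue requests back-to-back until the deadline and count completions."""
    done = 0
    while time.time() < deadline:
        try:
            with urlopen(URL, timeout=5) as resp:
                resp.read()
            done += 1
        except OSError:
            pass                 # count only successful responses
    return done

def run_benchmark():
    deadline = time.time() + DURATION
    with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
        counts = list(pool.map(client, [deadline] * CLIENTS))
    return sum(counts) / DURATION

if __name__ == "__main__":
    print(f"~{run_benchmark():.0f} requests/second with {CLIENTS} concurrent clients")
```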
|
338 |
Automatic key discovery for Data Linking / Découverte des clés pour le Liage de Données. Symeonidou, Danai (09 October 2014)
In recent years, the Web of Data has grown significantly and now contains a huge number of RDF triples. Integrating data described in different RDF datasets and creating semantic links among them has become one of the most important goals of RDF applications. These links express semantic correspondences between ontology entities or between data. Among the different kinds of semantic links that can be established, identity links express that different resources refer to the same real-world entity. Comparing the number of resources published on the Web with the number of identity links shows that the goal of building a Web of Data is still not accomplished. Several data linking approaches infer identity links using keys.
Nevertheless, in most datasets published on the Web, the keys are not available, and it can be difficult, even for an expert, to declare them. The aim of this thesis is to study the problem of automatic key discovery in RDF data and to propose new efficient approaches to tackle this problem. Data published on the Web are usually created automatically and thus may contain erroneous information or duplicates, or may be incomplete. Therefore, we focus on developing key discovery approaches that can handle datasets with numerous, incomplete or erroneous information. Our objective is to discover as many keys as possible, even ones that are valid only in subparts of the data. We first introduce KD2R, an approach that allows the automatic discovery of composite keys in RDF datasets that may conform to different schemas. KD2R is able to treat datasets that may be incomplete and for which the Unique Name Assumption holds. To deal with the incompleteness of the data, KD2R proposes two heuristics that offer different interpretations for the absence of data. KD2R uses pruning techniques to reduce the search space. However, this approach is overwhelmed by the huge amounts of data found on the Web. Thus, we present our second approach, SAKey, which is able to scale to very large datasets by using effective filtering and pruning techniques. Moreover, SAKey is capable of discovering keys in datasets where erroneous data or duplicates may exist. More precisely, the notion of almost keys is proposed to describe sets of properties that are not keys due to a few exceptions.
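The brute-force toy below illustrates the almost-key idea on made-up data: a property set is accepted if at most n resources share their value combination with another resource. It deliberately ignores the filtering and pruning that make KD2R and SAKey scale, and its exception count is one simple interpretation of the notion rather than the formal definition used in the thesis.

```python
from itertools import combinations
from collections import Counter

# Toy RDF-like descriptions: resource -> {property: value}.
data = {
    "r1": {"name": "Alice", "birthYear": 1980, "city": "Paris"},
    "r2": {"name": "Bob",   "birthYear": 1980, "city": "Lyon"},
    "r3": {"name": "Alice", "birthYear": 1990, "city": "Lyon"},
    "r4": {"name": "Carol", "birthYear": 1990, "city": "Paris"},
    "r5": {"name": "Carol", "birthYear": 1990, "city": "Paris"},  # looks like a duplicate
}
properties = ["name", "birthYear", "city"]

def exceptions(prop_set):
    """Number of resources whose value combination on prop_set is shared
    with at least one other resource (i.e. not uniquely identified)."""
    combos = Counter(tuple(desc.get(p) for p in prop_set) for desc in data.values())
    return sum(c for c in combos.values() if c > 1)

def almost_keys(max_exceptions):
    """All minimal property sets with at most max_exceptions exceptions."""
    found = []
    for size in range(1, len(properties) + 1):
        for props in combinations(properties, size):
            if any(set(k) <= set(props) for k in found):
                continue                      # a subset is already an almost key
            if exceptions(props) <= max_exceptions:
                found.append(props)
    return found

print("exact keys:   ", almost_keys(0))
print("2-almost keys:", almost_keys(2))
```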
|
339 |
Cloud computing a jeho aplikace / Cloud Computing and its Applications. Němec, Petr (January 2010)
This diploma thesis focuses on Cloud Computing and its possible applications in the Czech Republic. The first part is theoretical and presents an analysis of accessible sources related to the topic. Historical circumstances are given in relation to the evolution of this technology. A few definitions are quoted, followed by a basic taxonomy of Cloud Computing and common models of use. The chapter named Cloud Computing Architecture covers the generally accepted model of this technology in detail. The theoretical part also mentions some of the services built on this technology, and it concludes with the possible uses of Cloud Computing from the customer's and the supplier's perspectives. The practical part of the thesis is divided into sections. The first one presents the results of a questionnaire survey, performed by the author in the Czech Republic, on the usage of Cloud Computing and virtualization services. The second section is a pre-feasibility study focused on providing SaaS services in the area of long-term, safe digital data storage. Finally, the author gives a view on the future of Cloud Computing technology and its possible evolution.
|
340 |
Scalable Trajectory Approach for ensuring deterministic guarantees in large networks / Passage à l'échelle de l'approche par trajectoire dans de larges réseaux. Medlej, Sara (26 September 2013)
In critical real-time systems, such as those used in avionics networks or the nuclear sector, any faulty behavior may endanger lives. Hence, system verification and validation is essential before deployment. In fact, safety authorities ask for deterministic guarantees to be ensured. In this thesis, we are interested in offering temporal guarantees; in particular, we need to prove that the end-to-end response time of every flow present in the network is bounded. This subject has been addressed for many years and several approaches have been developed. After a brief comparison of the existing approaches, the Trajectory Approach appeared to be a good candidate due to the tightness of the bound it offers. This method uses results established by scheduling theory to derive an upper bound. The reasons leading to a pessimistic upper bound are investigated; in the case of FIFO scheduling, the terms that add pessimism to the computed bound are identified. Moreover, since the method must be applied to large networks, it is important to be able to produce results within an acceptable time frame. Hence, a study of the method's scalability was carried out. The analysis shows that the complexity of the computation is due to a recursive and iterative process.
As the numbers of flows and switches increase, the total runtime required to compute the upper bound of every flow present in the network under study grows rapidly. Building on the concept of the Trajectory Approach, we propose to compute an upper bound within a reduced time frame and without significant loss of precision; the result is called the Scalable Trajectory Approach. A tool was developed to compare the bounds obtained with the original Trajectory Approach and with our proposal. After applying it to a reduced-size network composed of 10 switches, simulation results show that the total runtime required to compute the bounds of a thousand flows was reduced from several days to a dozen seconds.
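The Trajectory Approach's own bound is more involved, but the recursive, iterative flavour of such scheduling-theory computations can be seen in the classic worst-case response-time analysis for preemptive fixed-priority scheduling sketched below; the flow parameters are illustrative and the formula is not the one used by the Trajectory Approach.

```python
import math

def response_times(tasks):
    """Worst-case response times for periodic tasks under preemptive
    fixed-priority scheduling (classic iterative fixed-point analysis).

    tasks: list of (C, T) pairs sorted from highest to lowest priority,
    where C is the execution (or transmission) time and T the period.
    Returns None for a task if its bound exceeds its period.
    """
    results = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
            r_next = c_i + interference
            if r_next == r:              # fixed point reached: bound found
                results.append(r)
                break
            if r_next > t_i:             # bound exceeds the period: give up
                results.append(None)
                break
            r = r_next
    return results

# Three illustrative flows (execution time, period) in milliseconds.
flows = [(1.0, 4.0), (2.0, 8.0), (3.0, 12.0)]
for (c, t), r in zip(flows, response_times(flows)):
    print(f"flow C={c} T={t}: worst-case response time = {r}")
```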
|