161 |
Införandet av ny teknologi: Data Virtualization inom nordiska organisationer : En kvalitativ kartläggning över vilka faktorer som är avgörande för införandet av Data Virtualization / The adoption of new technology: Data Virtualization within Nordic organizations : A qualitative mapping of which factors are decisive for the adoption of Data Virtualization. Hygstedt Falk, Engla; Wartmark, Erica. January 2023.
Data är idag en central resurs i många organisationer och mängden data ökar för varje år. Detta kan innebära att organisationer påverkas till att söka efter nya teknologier som kan ge dem möjligheter att fatta datadrivna beslut i större omfattning. Den teknologi som denna studie undersöker är Data Virtualization (DV) av den anledning att det finns ett tydligt forskningsgap gällande varför organisationer väljer att implementera DV. Studien använder TOE-ramverket med syftet att kartlägga vad som gör att organisationer tar beslut om att implementera DV. Studiens datainsamlingsmetod är semistrukturerade intervjuer där fyra respondenter från olika företag i Norden intervjuades. Empirin analyserades för att identifiera vilka faktorer som var betydande för implementeringen av DV. Resultatet visade att nordiska organisationers beslut att implementera DV påverkades av många faktorer som redan har fastställts av tidigare forskning. Dessa faktorer är: relativ fördel, testbarhet, roller och kompetens, stöd från ledning och externt stöd. Slutligen har även två nya faktorer identifierats som betydande för implementationen av DV, vilket var storlek och spridning av data samt i specifika fall omorganisering. / Data is today a central resource in many organizations, and the amount of data grows every year. This can drive organizations to search for new technologies that enable data-driven decision-making on a larger scale. The technology that this study investigates is Data Virtualization (DV), since there is a clear research gap regarding why organizations choose to implement DV. The study uses the TOE framework to map what makes organizations decide to implement DV. Data was collected through semi-structured interviews with four respondents from different companies in the Nordics. The empirical data was analyzed to identify which factors were important for the implementation of DV. The results showed that Nordic organizations' decisions to implement DV were influenced by many factors already established in previous research: relative advantage, testability, roles and skills, top management support, and external support. Finally, two new factors were identified as important for the implementation of DV: size and distribution of data and, in specific cases, reorganization.
|
162 |
Traffic Load Predictions Using Machine Learning : Scale your Appliances a priori. Xirouchakis, Michail. January 2018.
Layer 4-7 network functions (NF), such as Firewall or NAPT, have traditionally been implemented in specialized hardware with little to no programmability and extensibility. The scientific community has focused on realizing this functionality in software running on commodity servers instead. Despite the many advancements over the years (e.g., network I/O accelerations), software-based NFs are still unable to guarantee some key service-level objectives (e.g., bounded latency) for the customer due to their reactive approach to workload changes. This thesis argues that Machine Learning techniques can be utilized to forecast how traffic patterns change over time. A network orchestrator can then use this information to allocate resources (network, compute, memory) in a timely fashion and more precisely. To this end, we have developed Mantis, a control plane network application which (i) monitors all forwarding devices (e.g., Firewalls) to generate performance-related metrics and (ii) applies predictors (moving average, autoregression, wavelets, etc.) to predict future values for these metrics. Choosing the appropriate forecasting technique for each traffic workload is a challenging task. This is why we developed several different predictors. Moreover, each predictor has several configuration parameters which can all be set by the administrator during runtime. In order to evaluate the predictive capabilities of Mantis, we set up a test-bed, consisting of the state-of-the-art network controller Metron [16], a NAPT NF realized in FastClick [6] and two hosts. While the source host was replaying real-world internet traces (provided by CAIDA [33]), our Mantis application was performing predictions in real time, using a rolling window for training. Visual inspection of the results indicates that all our predictors have good accuracy, excluding (i) the beginning of the trace where models are still being initialized and (ii) instances of abrupt change. Moreover, applying the discrete wavelet transform before we perform predictions can improve the accuracy further. / Nätverksfunktioner i lager 4-7 som t.ex. brandväggar eller NAPT har traditionellt implementeras på specialdesignad hårdvara med väldigt få programeringsegenskaper. Forskning inom datakomunikation har fokuserat på att istället möjliggöra dessa funktioner i mjukvara på standardhårdvara. Trots att många framsteg har gjorts inom området under de senaste åren (t.ex. nätverks I/O accelerering), kan inte mjukvarubaserade nätverksfunktioner garantera önskad tjänstenivå för kunderna (t.ex. begränsade latensvärden) p.g.a. det reaktiva tillvägagångsättet när arbetslasten ändras. Den här avhandlingen visar att med hjälp av maskininlärning så går det att förutse hur trafikflöden ändras över tid. Nätverksorkestrering kan sedan användas för att allokera resurser (bandbredd, beräkning, minne) i förväg samt mer precist. För detta ändamål har vi utvecklat Mantis, en nätverksapplikation i kontrolplanet som övervakar alla nätverksenheter för att generera prestandabaserade mätvärden och använder matematiska prediktorer (moving average, autoregression, wavelets, o.s.v.) för att förutse kommande ändringar i dessa värden. Det är en utmaning att välja rätt metod för att skapa prognosen för varje resurs. Därför har vi utvecklat flera olika prediktorer. Dessutom har varje prediktor flera konfigurationsvärden som kan ändras av administratören. 
För att utvärdera Mantis prognoser har vi satt upp ett testnätverk med en av marknadens ledande nätverkskontrollers, Metron [16], en NAPT nätverksfunktion implementerad med FastClick [6] och två testnoder. Den ena noden skickar data hämtad från verklig Internettrafik (erhållen från CAIDA [33]) samtidigt som vår applikation, Mantis, skapar prognoser i realtid. Manuell inspektion av resultaten tyder på att alla våra prediktorer har god precision, förutom början av en spårning då modellerna byggs upp eller vid abrupt ändring. Dessutom kan precisionen ökas ytterligare genom att använda diskret wavelet transformering av värdena innan prognosen görs.
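The abstract above names moving-average, autoregressive and wavelet predictors trained over a rolling window. A minimal sketch of the first two, with arbitrary window length and AR order (the function name and parameters are illustrative assumptions, not Mantis' actual implementation), could look like this:

```python
import numpy as np

def rolling_forecast(series, window=64, order=3):
    """One-step-ahead forecasts of a traffic metric over a rolling window.

    Sketch only: a moving-average predictor and an AR(order) predictor
    fitted by least squares. Mantis' real predictors and settings differ.
    """
    series = np.asarray(series, dtype=float)
    ma_pred, ar_pred = [], []
    for t in range(window, len(series)):
        hist = series[t - window:t]
        ma_pred.append(hist.mean())                       # moving-average forecast
        # Lagged design matrix: row j = [x_j, ..., x_{j+order-1}] predicts x_{j+order}.
        X = np.column_stack([hist[i:len(hist) - order + i] for i in range(order)])
        y = hist[order:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        ar_pred.append(hist[-order:] @ coef)              # autoregressive forecast
    return np.array(ma_pred), np.array(ar_pred)
```

Feeding per-interval packet or byte counts from the monitored devices into predictors of this kind yields the one-step-ahead estimates an orchestrator could act on; as the abstract notes, Mantis can additionally apply a discrete wavelet transform before predicting.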
|
163 |
3D för allmänheten : En kvalitativ studie om hur 3D-kartor påverkar användbarheten och förståelsen hos allmänheten i medborgardialoger inom samhällsbyggnadsprocessen. / 3D for the general public : A qualitative study on how 3D maps affect the usability and understanding of the public in citizen dialogues in the community building process. Zendeli, Ron; Åkerman, Olle. January 2022.
Den svenska regeringen har beslutat om en strategi för IT-politiken, där målet är att Sverige ska vara bäst i världen på att använda digitaliseringens möjligheter. Dessa möjligheter kommer öppna upp för en enklare vardag för privatpersoner och företag. Flera kommuner har idag samarbeten med EU, regioner och kommuner där syftet är att digitalisera samhällsbyggnadsprocessen med hjälp av att utveckla innovativa digitala verktyg. Offentliga utredningar har påvisat brister i hur lösningarna presenteras och att det finns utvecklingspotential. I nuläget presenteras information för allmänheten i form av text och bilder, vilket ger en ofullständig bild av pågående och framtida byggprojekt. Tidigare studier inom andra områden har visat att 3D-teknik kan vara värdefullt på en bred front. Syftet med studien var att undersöka hur 3D-kartor påverkar användbarheten och förståelsen hos allmänheten i medborgardialoger inom samhällsbyggnadsprocessen. Studien berörde även hur processens virtualiserbarhet påverkas. Som teoretiskt ramverk och principer som underlag för studien, användes modellen Process Virtualization Theory (PVT) och användbarhet. Inom det sistnämnda används komponenter som effektivitet, fel, minnesbarhet, lärbarhet och tillfredsställelse för att mäta användbarheten av tjänsterna. PVT-modellen kan användas för att mäta hur lämpad en process är att vara digital och huruvida IT kan möjliggöra en bättre digital process eller inte. Studien har en genomgående kvalitativ ansats som undersökte användbarheten och hur lämpad processen är att vara digital. Användbarhetstest med kompletterande intervjuer utfördes under två separata tillfällen där det ena riktades mot en befintlig webbtjänst och den andra mot en ny webbtjänst med en 3D-karta. Resultatet i användandet av 3D-kartan påvisade att igenkänning och ovana är två återkommande teman där tillfredsställelse uppvisas i form av nyfikenhet och intresse. I jämförelse mellan webbtjänsterna så bidrog 3D-kartan till en ökad visuell förståelse av informationen som presenterades. Å andra sidan gav informationsdokumenten en djupare förståelse. 3D-kartan påverkade den undersökta processen positivt och var framförallt användbar för den visuella förståelsen. Processen var lämpad för att vara digital där representationen var en bidragande faktor för vad IT möjliggjorde. / The Swedish government has decided on a strategy for IT policy where the goal is for Sweden to be the best in the world at utilizing the possibilities of digitalisation. These opportunities will open up for a simpler everyday life for individuals and companies. Several municipalities today have collaborations with the EU, regions and municipalities where the purpose is to digitize the community building process with the help of developing innovative digital tools. Public investigations have shown shortcomings in how the solutions are presented and that there is development potential. At present, information is presented to the public in the form of text and images, which gives an incomplete picture of ongoing and future construction projects. Previous studies in other areas have shown that 3D technology can be valuable on a broad front. The purpose of the study was to investigate how 3D maps affect the public's usefulness and understanding in citizen dialogues in the community building process. The study also touched on how the virtualisability of the process is affected. 
Process Virtualization Theory (PVT) and usability served as the theoretical framework and guiding principles for the study. Within usability, components such as efficiency, errors, memorability, learnability and satisfaction are used to measure the usability of the services. The PVT model can be used to assess how well suited a process is to being digital and whether IT can enable a better digital process. The study takes a consistently qualitative approach, examining usability and how well suited the process is to being digital. Usability tests with supplementary interviews were performed on two separate occasions, one aimed at an existing web service and the other at a new web service with a 3D map. The results from the use of the 3D map showed that recognition and unfamiliarity were two recurring themes, with satisfaction expressed as curiosity and interest. In the comparison between the web services, the 3D map contributed to an increased visual understanding of the information presented, whereas the information documents provided a deeper understanding. The 3D map had a positive effect on the investigated process and was above all useful for visual understanding. The process also proved suitable for being digital, with representation being a contributing factor to what IT made possible.
|
164 |
Virtualized resource management in high performance fabric clusters. Ranadive, Adit Uday. 07 January 2016.
Providing performance and isolation guarantees for applications running in virtualized datacenter environments requires continuous management of the underlying physical resources. For communication- and I/O-intensive applications running on such platforms, the management methods must adequately deal with the shared use of the high-performance fabrics these applications require. In particular, new classes of latency-sensitive and data-intensive workloads running in virtualized environments rely on emerging fabrics like 40+Gbps Ethernet and InfiniBand/RoCE with support for RDMA, VMM-bypass and hardware-level virtualization (SR-IOV). However, the benefits provided by these technology advances are offset by several management constraints: (i) the inability of the hypervisor to monitor the VMs' usage of these fabrics can affect the platform's ability to provide isolation and performance guarantees, (ii) the hypervisor cannot provide fine-grained I/O provisioning or make management decisions for VMs, thus reducing the degree of consolidation that can be supported on the platforms, and (iii) without such support it is harder to integrate these fabrics into emerging cloud computing platforms and datacenter fabric management solutions. This is made particularly challenging for workloads spanning multiple VMs, utilizing physical resources distributed across multiple server nodes and the interconnection fabric.
This thesis addresses the problem of realizing a flexible, dynamic resource management system for virtualized platforms with high performance fabrics. We make the following key contributions:
(i) A lightweight monitoring tool, IBMon, integrated with the hypervisor to monitor VMs' use of RDMA-enabled virtualized interconnects, using memory introspection techniques.
(ii) The design and construction of a resource management system that leverages IBMon to provide performance guarantees to latency-sensitive applications. This system is built on microeconomic principles of supply and demand and can be deployed on a per-node (Resource Exchange) or a multi-node (Distributed Resource Exchange) basis. Fine-grained resource allocations can be enforced through several mechanisms, including CPU capping or fabric-level congestion control.
(iii) Sphinx, a fabric management solution that leverages Resource Exchange to orchestrate the network and provide latency proportionality for consolidated workloads, based on user/application-specified policies.
(iv) Implementation and experimental evaluation using InfiniBand clusters virtualized with the Xen or KVM hypervisor, managed via the OpenFloodlight SDN controller, and using representative data-intensive and latency-sensitive benchmarks.
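Contribution (ii) above rests on supply-and-demand pricing. As a loose illustration only (the price-adjustment rule, budgets and names below are invented assumptions, not the thesis' actual Resource Exchange mechanism), a per-node exchange for fabric bandwidth could look like this:

```python
def resource_exchange(supply, demands, budgets, rounds=50, step=0.05):
    """Toy per-node exchange: iteratively adjust a price until the VMs'
    affordable demand roughly matches the fabric supply, then allocate."""
    price = 1.0
    requests = dict(demands)
    total = sum(requests.values())
    for _ in range(rounds):
        # Each VM requests what it needs, capped by what its budget buys.
        requests = {vm: min(demands[vm], budgets[vm] / price) for vm in demands}
        total = sum(requests.values())
        if abs(total - supply) < 1e-3 * supply:
            break
        # Raise the price when over-subscribed, lower it when under-subscribed.
        price *= 1.0 + step * (total - supply) / supply
    # If still over-subscribed, scale allocations down proportionally.
    scale = min(1.0, supply / max(total, 1e-9))
    return {vm: req * scale for vm, req in requests.items()}

# Example: a 40 Gbps fabric shared by three VMs with different demands/budgets.
alloc = resource_exchange(supply=40.0,
                          demands={"vm1": 25.0, "vm2": 20.0, "vm3": 10.0},
                          budgets={"vm1": 3.0, "vm2": 2.0, "vm3": 1.0})
```

In the thesis the resulting allocations would then be enforced through mechanisms such as CPU capping or fabric-level congestion control, rather than returned as plain numbers as in this sketch.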
|
165 |
Performance scalability of n-tier application in virtualized cloud environments: Two case studies in vertical and horizontal scaling. Park, Junhee. 27 May 2016.
The prevalence of multi-core processors, together with recent advances in virtualization technologies, has enabled horizontal and vertical scaling within a physical node, achieving economical sharing of computing infrastructures as computing clouds. Through hardware virtualization, consolidated servers, each allotted a specific number of cores, run on the same physical node in dedicated Virtual Machines (VMs) to increase overall node utilization, which increases profit by reducing operational costs. Unfortunately, despite the conceptual simplicity of vertical and horizontal scaling in virtualized cloud environments, leveraging the full potential of this technology has presented significant scalability challenges in practice. One of the fundamental problems is performance unpredictability in virtualized cloud environments (ranked fifth among the top 10 obstacles to the growth of cloud computing). In this dissertation, we present two case studies, in vertical and horizontal scaling, that address this challenging problem. In the first case study, we describe concrete experimental evidence of an important source of performance variation: the mapping of virtual CPUs to physical cores. We then conduct an experimental comparative study of three major hypervisors (VMware, KVM, Xen) with regard to their support of n-tier applications running on multi-core processors. In the second case study, we present an empirical study showing that memory thrashing caused by interference among consolidated VMs is a significant source of performance interference that hampers the horizontal scalability of n-tier application performance. We then perform transient event analyses of fine-grained experimental data that link very short bottlenecks caused by memory thrashing to very long response time (VLRT) requests. Furthermore, we provide three practical techniques, VM migration, memory reallocation and soft resource allocation, and show that they can mitigate the effects of performance interference among consolidated VMs.
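As a rough illustration of the transient-event analysis mentioned for the second case study, the sketch below flags very long response time (VLRT) requests and checks whether each one overlaps a fine-grained monitoring interval in which swapping spikes (a proxy for memory thrashing). The thresholds, interval length and data layout are assumptions for illustration, not the dissertation's actual tooling.

```python
def link_vlrt_to_thrashing(requests, swap_pages, interval_ms=50,
                           vlrt_ms=1000, thrash_threshold=500):
    """requests: iterable of (start_ms, response_time_ms) pairs.
    swap_pages: dict mapping interval index -> pages swapped in that interval.
    Returns (VLRT requests that overlap a thrashing interval, total VLRT)."""
    vlrt = [(start, rt) for start, rt in requests if rt >= vlrt_ms]
    hits = 0
    for start, rt in vlrt:
        first = int(start // interval_ms)
        last = int((start + rt) // interval_ms)
        # Does any very short interval during this request show heavy swapping?
        if any(swap_pages.get(i, 0) >= thrash_threshold for i in range(first, last + 1)):
            hits += 1
    return hits, len(vlrt)
```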
|
166 |
Piktavališkos programinės įrangos virtualių mašinų aplinkoje aptikimo metodikos sudarymas ir tyrimas / Development and research of malicious software detection technique in virtual machines environment. Rudzika, Darius. 13 August 2010.
Saugos problemos virtualizuotose aplinkose tampa vis aktualesnės, todėl darbe nagrinėjama piktavališkos programinės įrangos virtualių mašinų aplinkoje aptikimo problematika. Darbe pateikiama: 1) piktavališkos programinės įrangos veikiančios virtualizuotose aplinkose analizė 2) metodikos, piktavališkos programinės įrangos virtualių mašinų aplinkoje aptikimui, sudarymas 3) piktavališkos programinės įrangos virtualioje mašinoje aptikimo, panaudojant sudaryta metodiką, eksperimento rezultatai ir jų priklausomybė nuo virtualios mašinos darbinės atminties dydžio. / Security problems in virtualized environments are becoming increasingly important, so this work examines the detection of malicious software in virtual machine environments. The work presents an analysis of malware types and their presence in virtualized environments. It also presents results of experiments carried out in a real virtual machine environment through modeling, aiming to identify the dependencies between the detection time of a particular malware type, rootkits, and the virtual machine memory size. Rootkits exploit kernel vulnerabilities and gain privileges within any system, virtual or not. The main results of the work are: 1) a malware detection methodology for the virtual environment as the memory size of a virtual machine changes; 2) the dependencies between the virtual machine memory size and rootkit detection time.
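The stated result is a dependency between VM memory size and rootkit detection time. A minimal harness for measuring such a relationship might look as follows; provision_vm and run_detection are hypothetical stand-ins for the test-bed tooling, not functions from the thesis.

```python
import time

def detection_time_by_memory(memory_sizes_mb, provision_vm, run_detection):
    """Measure how long the detection methodology takes as the VM's
    working-memory size grows (sketch of the experiment design only)."""
    results = {}
    for mem_mb in memory_sizes_mb:
        vm = provision_vm(memory_mb=mem_mb)      # hypothetical provisioning hook
        start = time.perf_counter()
        run_detection(vm)                        # hypothetical scan of the VM's memory
        results[mem_mb] = time.perf_counter() - start
    return results

# e.g. detection_time_by_memory([512, 1024, 2048, 4096], provision_vm, run_detection)
```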
|
167 |
Virtualizacijos technologijų pritaikymas debesyje (Cloud) / Virtualization in the cloud. Mardosas, Jonas. 09 July 2011.
Šiame darbe aprašomos technologijos naudojamos debesų kompiuterijos platformose. Pilnai išanalizuojama nemokama debesies platforma Eucalyptus. Bandoma sukurti internetinių puslapių talpinimo paslaugą debesyje (PaaS paslauga), kuria naudotis galėtų daug vartotojų. Taip pat sudaromas planas kaip galėtų atrodyti panašių paslaugų perkėlimas į debesies infrastruktūras. Išnagrinėjus, kokios programinės įrangos reikia tokiai paslaugai teikti, paruošti pavyzdiniai instaliaciniai skriptai, nubraižytos schemos kaip tokia paslauga galėtų veikti ir kokias funkcijas, bei kokią naudą gauna galutinis vartotojas naudodamas tokią paslaugą. Suprojektuota sistema, kuri automatiškai turi rūpintis tokios paslaugos valdymu, bei stebėjimu. Pateikti tokios automatizuotos sistemos kodo pavyzdžiai. / This work describes the technologies used in cloud computing platforms and provides a complete analysis of the free and open cloud platform Eucalyptus. On this platform, an attempt is made to create a web page hosting service in the cloud as a PaaS service that many users could use. The work also outlines a plan for how similar services could be migrated to cloud infrastructure. After examining which software is needed to provide such a service, sample installation scripts were prepared and diagrams drawn showing how the service could operate, what functions it offers, and what benefits the end user gains from using it. A system was designed that automatically handles the management and monitoring of the service, and code examples of this automated system are presented.
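The abstract mentions a system that automatically manages and monitors the hosting service. A bare-bones control loop of that kind could look like the sketch below; the callables stand in for the platform's EC2-compatible API and the thresholds are arbitrary assumptions, so this is not the thesis' actual code.

```python
import time

def manage_web_hosting(get_avg_load, count_instances, launch_instance,
                       terminate_instance, high=0.75, low=0.25, period_s=60):
    """Monitor average load across web VMs and scale the PaaS tier out/in."""
    while True:
        load = get_avg_load()                    # e.g. CPU or request-rate utilisation
        if load > high:
            launch_instance()                    # scale out: start one more web VM
        elif load < low and count_instances() > 1:
            terminate_instance()                 # scale in, but keep at least one VM
        time.sleep(period_s)
```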
|
168 |
Towards a trusted grid architecture. Cooper, Andrew. January 2010.
The malicious host problem is challenging in distributed systems such as grids and clouds. Rival organisations may share the same physical infrastructure. Administrators might deliberately or accidentally compromise users' data. The thesis concerns the development of a security architecture that allows users to place a high degree of trust in remote systems to process their data securely. The problem is tackled through a new security layer that ensures users' data can only be accessed within a trusted execution environment. Access to encrypted programs and data is authorised by a key management service using trusted computing attestation. Strong data integrity and confidentiality protection on remote hosts is provided by the job security manager virtual machine. The trusted grid architecture supports the enforcement of digital rights management controls. Subgrids allow users to define a strong trusted boundary for delegated grid jobs. Recipient keys enforce a trusted return path for job results to help users create secure grid workflows. Mandatory access controls allow stakeholders to mandate the software that is available to grid users. A key goal of the new architecture is backwards compatibility with existing grid infrastructure and data. This is achieved using a novel virtualisation architecture where the security layer is pushed down to the remote host, so it does not need to be pre-installed by the service provider. A new attestation scheme, called origin attestation, supports the execution of unmodified, legacy grid jobs. These features will ease the transition to a trusted grid and help make it practical for deployment on a global scale.
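To make the key-management step concrete, here is a minimal sketch of an attestation-gated key release of the kind the abstract describes; the quote structure, field names and checks are assumptions for illustration and do not reproduce the thesis' protocol.

```python
def release_job_key(quote, expected_measurements, key_store, job_id):
    """Release the decryption key for an encrypted grid job only if the
    remote host attests to the expected trusted execution environment."""
    if not quote.get("signature_valid", False):
        raise PermissionError("attestation signature not verified; key withheld")
    if quote.get("measurements") != expected_measurements:
        raise PermissionError("host software state not trusted; key withheld")
    return key_store[job_id]      # key for the job's encrypted program and data
```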
|
169 |
Erlebbarkeit von Anlagenkomponenten im Kontext Virtuelle Inbetriebnahme in virtuellen Umgebungen / Experiencing plant components in the context of virtual commissioning in virtual environments. Geiger, Andreas; Rehfeld, Ingolf; Rothenburg, Uwe; Stark, Rainer. 10 December 2016.
From the introduction:
"The use of Virtual Reality (VR) methods in factory planning and validation is today 'state of the art' at large manufacturing companies in all phases of the product development process (PEP) (Runde 2012). Virtual Reality enables the early visualization of a development state at full scale. This makes it possible to visualize design or concept drafts, detect errors early, and carry out validations with regard to ergonomics or installation and removal studies (Rademacher, 2014). These validations, and in particular the verification of production systems, are today performed mainly with static models (Westkämper & Runde 2006).
Furthermore, the increasing interconnection and intelligence of production systems in the context of Industrie 4.0 results in highly complex plant control systems. Techniques of functional virtualization are therefore increasingly being used to check the data sources and planning data for the real plant for correctness, completeness, and consistency already during development. ..."
|
170 |
Massively parallel computing for particle physics. Preston, Ian Christopher. January 2010.
This thesis presents methods to run scientific code safely on a global-scale desktop grid. Current attempts to harness the world's idle desktop computers face obstacles such as donor security, portability of code and privilege requirements. Nereus, a Java-based architecture, is a novel framework that overcomes these obstacles and allows the creation of a globally scalable desktop grid capable of executing Java bytecode. However, most scientific code is written for the x86 architecture. To enable the safe execution of unmodified scientific code, we created JPC, a pure Java x86 PC emulator. The Nereus framework is applied to two tasks: a trivially parallel data generation task, BlackMax, and a parallelization and fault tolerance framework, Mycelia. Mycelia is an implementation of the Map-Reduce parallel programming paradigm. BlackMax is a microscopic black hole event generator of direct relevance for the Large Hadron Collider (LHC). The Nereus-based BlackMax adaptation dramatically speeds up the production of data, limited only by the number of desktop machines available.
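Mycelia implements the Map-Reduce paradigm on top of Nereus. The snippet below is only a single-machine illustration of that paradigm (in Mycelia the map and reduce tasks would run as Java bytecode on remote desktop workers); the word-count usage example is invented for illustration.

```python
from collections import defaultdict

def map_reduce(records, map_fn, reduce_fn):
    """Minimal Map-Reduce: apply map_fn to every record, group the emitted
    (key, value) pairs by key, then fold each group with reduce_fn."""
    grouped = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            grouped[key].append(value)
    return {key: reduce_fn(key, values) for key, values in grouped.items()}

# Example: counting words across generated event descriptions.
counts = map_reduce(["black hole event", "dijet event"],
                    lambda line: [(w, 1) for w in line.split()],
                    lambda key, values: sum(values))
```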
|