  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Regulace autonomních zbraňových systémů: Strategie EU a USA / Regulation of Autonomous Weapon Systems: EU and U.S. policy strategies

Ortmann, Matyáš January 2021 (has links)
This diploma thesis deals with autonomous weapon systems (AWS) in connection with the phenomenon of artificial intelligence. Within this topic, the thesis addresses their potential regulation or a complete ban. This pressing question is discussed on the basis of an empirical analysis of the international organizations and state institutions that deal with the matter. The main aim of the thesis is to explain how artificial intelligence and autonomous weapon systems work, to map the development of AWS, and to present the current situation in the context of AWS regulation. Its secondary purpose is to examine and analyze the international debate and the arguments presented regarding the moral and ethical aspects of developing and deploying autonomous weapons. The thesis concludes that no fully autonomous weapon systems are yet operating in the field, but that their development is gaining momentum. Discussions on regulatory measures for AWS are currently under way; so far they have resulted in individual agreements that amount to a form of hybrid regulation. Individual countries approach the topic of AWS regulation according to their technological and economic capabilities...
2

Parent Perspectives of Adolescent Wisdom

Besecker, Zachary 22 April 2022 (has links)
No description available.
3

Large Scale ETL Design, Optimization and Implementation Based On Spark and AWS Platform

Zhu, Di January 2017 (has links)
Nowadays, the amount of data generated by users of an Internet product is increasing exponentially: clickstreams from a website with millions of users, geospatial information from GIS-based apps on Android and iPhone, or sensor data from cars and other electronic equipment. Billions of such events may be produced every day, so it is no surprise that valuable insights can be extracted from them, for instance for monitoring systems, fraud detection, user behaviour analysis and feature verification. Nevertheless, technical issues emerge accordingly: heterogeneity, sheer volume and the varied requirements for using the data across different dimensions make it much harder to design data pipelines and to transform and persist the data in a data warehouse. There are, of course, traditional ways to build ETLs, from mainframes [1] and RDBMSs to MapReduce and Hive. Yet with the emergence and popularization of the Spark framework and AWS, this procedure can evolve into a more robust, efficient, less costly and easier-to-implement architecture for collecting data, building dimensional models and running analytics on massive data sets. Drawing on the advantage of working at a car transportation company, where billions of user behaviour events arrive every day, this thesis contributes an exploratory way of building and optimizing ETL pipelines based on AWS and Spark, and compares it with the current main data pipelines from several aspects. / The amount of data generated by users of Internet products is growing at an exponential rate. There are countless examples of this: the clickstream from websites with millions of users, geospatial information from GIS-based Android and iPhone apps, or data from sensors on autonomous cars. The number of events from these kinds of data can easily reach billions per day, so it is hardly surprising that insights can be extracted from these data streams; for example, automated monitoring systems can be set up or fraud models calibrated efficiently. Handling data at this scale is not entirely problem-free, however; several technical difficulties can easily arise. The data are not always in the same form and may have different dimensions, which makes it considerably harder to design an efficient data pipeline, transform the data and store it persistently in a data warehouse. There are, of course, traditional ways to build ETLs, from mainframes [1] and RDBMSs to MapReduce and Hive. However, with the advent and growing popularity of Spark and AWS, it has become more robust, efficient, cheaper and simpler to implement systems for collecting data, building dimensional models and analysing massive data sets. This thesis contributes to a better understanding of how to build and optimize ETL pipelines based on AWS and Spark, and compares them with the main current data pipelines with respect to various aspects. The thesis benefits from access to a massive data set with billions of user events generated daily by a car transportation company in the Middle East.
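A minimal PySpark sketch of the kind of ETL step described above, assuming hypothetical S3 paths and clickstream column names (the thesis does not publish its code):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumed, illustrative S3 locations -- not taken from the thesis.
RAW_PATH = "s3://example-raw-bucket/clickstream/*/*.json"
WAREHOUSE_PATH = "s3://example-warehouse-bucket/fact_clicks/"

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw JSON clickstream events from S3.
events = spark.read.json(RAW_PATH)

# Transform: light cleaning plus a derived date column used for partitioning.
clicks = (
    events
    .filter(F.col("event_type") == "click")            # hypothetical column
    .withColumn("event_date", F.to_date("timestamp"))  # hypothetical column
    .dropDuplicates(["event_id"])                      # hypothetical key
)

# Load: persist as date-partitioned Parquet, a common dimensional-model layout.
clicks.write.mode("overwrite").partitionBy("event_date").parquet(WAREHOUSE_PATH)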
4

Improving the performance of stream processing pipeline for vehicle data

Gu, Wenyu January 2020 (has links)
The growing amount of position-dependent data (containing both geo-position data (i.e. latitude, longitude) and vehicle/driver-related information) collected from sensors on vehicles poses a challenge to the computer programs that process the aggregate data from many vehicles. While handling this growing amount of data, these programs need to exhibit low latency and high throughput, as otherwise the value of the results of this processing is reduced. As a solution, big data and cloud computing technologies have been widely adopted by industry. This thesis examines a cloud-based processing pipeline that processes vehicle location data. The system receives real-time vehicle data and processes the data in a streaming fashion. The goal is to improve the performance of this streaming pipeline, mainly with respect to latency and cost. The work began by examining the current solution, built on AWS Kinesis and AWS Lambda. A benchmarking environment was created and used to measure the current system's performance. Additionally, a literature study was conducted to find a processing framework that best meets both industrial and academic requirements. After a comparison, Flink was chosen as the new framework and a new solution was designed around it. Next, the performance of the current solution and the new Flink solution was compared using the same benchmarking environment. The conclusion is that the new Flink solution has 86.2% lower latency while supporting triple the throughput of the current system at almost the same cost. / The growing amount of position-dependent data (containing both geo-position data (i.e. latitude, longitude) and vehicle/driver-related information) collected from sensors on vehicles poses a challenge for the computer programs that process the aggregate data from many vehicles. While this growing amount of data is handled, the programs processing it must exhibit low latency and high throughput; otherwise the value of the processing results diminishes. As a solution, big data and cloud computing technologies have been widely adopted by industry. This thesis examines a cloud-based processing pipeline that processes vehicle location data. The system receives vehicle data in real time and processes it in a streaming fashion. The goal is to improve the performance of this streaming pipeline, mainly with respect to latency and cost. The work began by examining the current solution, based on AWS Kinesis and AWS Lambda. A benchmarking environment was created and used to measure the current system's performance. In addition, a literature study was conducted to find a processing framework that best meets both industrial and academic requirements. After a comparison, Flink was chosen as the new framework and a new solution was designed around it. The performance of the current solution and the new Flink solution was then compared in the same benchmarking environment. The conclusion is that the new Flink solution has 86.2% lower latency while supporting triple the throughput of the current system at almost the same cost.
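For context, the ingestion side shared by the Kinesis/Lambda and Flink solutions can be sketched with boto3; the stream name, region and record layout below are assumptions, not details from the thesis:

import json
import boto3

kinesis = boto3.client("kinesis", region_name="eu-west-1")  # assumed region

def publish_position(vehicle_id: str, lat: float, lon: float) -> None:
    """Send one vehicle position record into the (hypothetical) Kinesis stream."""
    record = {"vehicle_id": vehicle_id, "lat": lat, "lon": lon}
    kinesis.put_record(
        StreamName="vehicle-positions",           # assumed stream name
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=vehicle_id,                  # keeps one vehicle's events ordered
    )

publish_position("vehicle-42", 59.3293, 18.0686)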
5

Performance Evaluation of Apache Cassandra using AWS (Amazon Web Services) and GCP (Google Cloud Platform)

Alluri, Gayathri Thanuja January 2022 (has links)
Context: In the field of computer science and communication systems, cloud computing plays an important role in the IT industry; it allows users to start small and add resources when demand grows. AWS (Amazon Web Services) and GCP (Google Cloud Platform) are two different cloud platform providers. Many organizations still rely on structured databases such as MySQL. Structured databases cannot handle huge numbers of requests and large volumes of data efficiently as both increase. To overcome this problem, organizations shift to NoSQL, unstructured databases such as Apache Cassandra and MongoDB. Conclusions: The literature review provided knowledge about cloud computing and the problems that exist in the cloud, which motivated this study evaluating the performance of Cassandra on AWS and GCP. The conclusion from the experiment is that, as the thread count increases, throughput and latency increase gradually up to a thread count of 600 on both clouds. Comparing the two clouds, AWS scales better than GCP in terms of throughput, while GCP scales better than AWS in terms of latency. Keywords: Apache Cassandra, AWS, Google Cloud Platform, Cassandra Stress, Throughput, Latency
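A hedged sketch of how such a thread-count sweep might be driven with the standard cassandra-stress tool; the contact point and operation count are placeholders, not values from the thesis:

import subprocess

NODE = "10.0.0.1"  # placeholder contact point for a cluster node on AWS or GCP

# Sweep thread counts; the abstract reports throughput and latency increasing
# gradually up to 600 threads on both clouds.
for threads in (100, 200, 400, 600):
    cmd = [
        "cassandra-stress", "write", "n=100000",
        "-rate", f"threads={threads}",
        "-node", NODE,
    ]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # throughput and latency are reported by the tool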
6

AWS Flap Detector: An Efficient way to detect Flapping Auto Scaling Groups on AWS Cloud

Chandrasekar, Dhaarini 07 June 2016 (has links)
No description available.
7

Performance Evaluation of MongoDB on Amazon Web Service and OpenStack

Avutu, Neeraj January 2018 (has links)
Context. MongoDB is an open-source, scalable NoSQL database that distributes data over many commodity servers. It avoids a single point of failure by copying and storing the data in different locations. MongoDB uses a master-slave design rather than the ring topology used by Cassandra. Virtualization is the technique of running multiple virtual machines on a single host; it is the fundamental technology that allows cloud computing to share resources among users. Objectives. To study MongoDB and virtualization on AWS and OpenStack. Experiments were conducted to measure the CPU utilization when MongoDB instances are deployed on AWS compared with a physical server arrangement, and to understand the effect of replication on MongoDB in terms of throughput, CPU utilization and latency. Methods. Initially, a literature review was conducted to design the experiment around the stated problems. A three-node MongoDB cluster runs on Amazon EC2 and OpenStack Nova with Ubuntu 16.04 LTS as the operating system. Latency, throughput and CPU utilization were measured with this setup. The procedure was repeated for a five-node MongoDB cluster and a three-node production cluster with the six workload types of YCSB. Results. The virtualization overhead was identified in terms of CPU utilization, and the effects of virtualization on MongoDB were determined in terms of CPU utilization, latency and throughput. Conclusions. It is concluded that latency decreases and throughput increases as the number of nodes increases. Due to replication, an increase in latency was observed.
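To illustrate the replication aspect being measured, a minimal pymongo connection to a three-node replica set might look as follows; the host names, replica-set name and collection are assumptions:

from pymongo import MongoClient, ReadPreference

# Assumed replica-set members; in the experiment these would be EC2 or
# OpenStack Nova instances running Ubuntu 16.04 LTS.
client = MongoClient(
    "mongodb://node1:27017,node2:27017,node3:27017/?replicaSet=rs0"
)

db = client.get_database("ycsb", read_preference=ReadPreference.SECONDARY_PREFERRED)

# Writes go to the primary; replicating them to the secondaries adds the
# latency increase the thesis attributes to replication.
db.usertable.insert_one({"_id": "user1", "field0": "value"})

# With this read preference, reads may be served by a secondary.
print(db.usertable.find_one({"_id": "user1"}))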
8

Verifiering av WRF-modellen över Svalbard / Verification of the WRF Model over Svalbard

Waxegård, Anna January 2011 (has links)
Glaciologists have long observed changes in the glaciers on Svalbard: some are shrinking while others are growing. Melting, with rising sea levels and a potentially altered ocean circulation as consequences, is a scenario that affects people all over the world. These changes may possibly be explained by linking the meteorological conditions in the area to larger-scale circulation changes. The meteorological conditions over Svalbard have been simulated with a regional climate model, WRF (Weather Research and Forecasting), for three domains with resolutions of 24 km, 8 km and 2.7 km. The model was tested in two versions, standard WRF with default process descriptions and WRF with process descriptions adapted to a polar climate, and was driven with ERA-Interim data, a reanalysis of the global weather conditions produced by ECMWF. The WRF results were verified against observations from AWS (Automatic Weather Station) sites. The following parameters are included in the study: temperature, wind speed, specific humidity, incoming and outgoing shortwave radiation, and incoming longwave radiation. Simulations with standard WRF underestimate all of the radiation parameters. The resulting incorrect radiation balance causes standard WRF to simulate temperatures that are too low. The amounts of incoming shortwave and longwave radiation are probably too small because standard WRF simulates too much high cloud and too little low cloud. When the results of downscaling from 24 km to 8 km with standard WRF are analysed, the correlation increases for wind speed and decreases for incoming longwave radiation. The best correlation for wind simulations is obtained with standard WRF at 8 km resolution. For temperature, ERA-Interim correlates better with the observations than the simulations with standard WRF do. A test of the polar-optimized WRF shows that this version of the model predicts the radiation balance over the glaciers better and, as a consequence, produces temperature modelling that agrees better with observations. The polar-optimized WRF simulates less high cloud and more low cloud than standard WRF. Better cloud modelling, combined with a more suitable scheme for shortwave radiation, gives an improved energy balance. Wind simulations at 2.7 km resolution with standard WRF and polar-optimized WRF show reduced correlation and increased scatter compared with simulations at 8 km resolution. This report shows that polar-optimized WRF is a better alternative than standard WRF for simulating meteorological parameters over Svalbard.
9

Performance evaluation of Cassandra in AWS environment: An experiment

SUBBA REDDY GARI, AVINASH KUMAR REDDY January 2017 (has links)
Context. In the field of computer science, cloud computing plays a prominent role: resources hosted on the Internet are used to store, manage and process data. Cloud platforms enable users to perform a large number of computing tasks across remote servers. Several cloud platform providers exist, such as Amazon, Microsoft, Google, Oracle and IBM, and they offer several conventional databases for handling data. Cassandra is a NoSQL database system that can handle unstructured data and scale to a large number of operations per second, even across multiple data centres. Objectives. In this study, the performance of a NoSQL database on the AWS cloud service provider is evaluated. The performance of a three-node Cassandra cluster is evaluated for different EC2 instance configurations, using throughput and CPU utilization as metrics. The main aim of this thesis was to evaluate the performance of Cassandra under various configurations with the YCSB benchmarking tool. Methods. A literature review was conducted to gain more knowledge about the current research area and to identify the metrics needed to evaluate Cassandra's performance. The experiment measured throughput and CPU utilization under the configurations t2.micro, t2.medium and t2.small for 3-node and 6-node clusters using the YCSB benchmarking tool. Results. The results comprise the metrics identified in the literature review, throughput and CPU utilization, plotted as graphs to compare performance across the three configurations; they are presented as two scenarios, one for the 3-node cluster and one for the 6-node cluster. Conclusions. Based on the obtained throughput values, an optimal or sub-optimal configuration of a data centre running multiple Cassandra instances can be chosen such that specific throughput requirements are satisfied.
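A sketch of driving the YCSB workloads from Python; the launcher path, host address and thread count are placeholders, while the instance types tested were t2.micro, t2.medium and t2.small as stated above:

import subprocess

YCSB = "./bin/ycsb"   # placeholder path to the YCSB launcher
HOST = "10.0.0.10"    # placeholder address of one Cassandra node

for workload in ("workloada", "workloadb", "workloadc"):
    for phase in ("load", "run"):
        cmd = [
            YCSB, phase, "cassandra-cql",
            "-P", f"workloads/{workload}",
            "-p", f"hosts={HOST}",
            "-threads", "40",
        ]
        subprocess.run(cmd, check=True)  # YCSB prints throughput and latency summaries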
10

AI Meeting Monitoring

Hansson, Andreas January 2020 (has links)
During the COVID-19 pandemic, questions about the efficiency of meetings have come to the forefront of discussion inside companies. One way to measure efficiency is to measure the interactivity between participants, which first requires that the participants be identified. With the recent surge in machine-learning advances, is this something that can be done using facial and voice recognition? Another field that has risen to prominence is cloud computing. Can machine learning and cloud computing be used to evaluate and monitor a meeting, handling both audio and video streams in real time? The conclusion of this thesis is that artificial intelligence (AI) can be used to monitor a meeting, and that Amazon Web Services (AWS) can be utilized to do so. The choice of the DeepLens, however, was not the best one: hardware like the DeepLens is required, but with better integration with cloud computing and more freedom to use several models for handling both feeds. By using other models to automatically annotate data, the time needed to train a new model can be reduced. With the help of transfer learning on AWS, the data generated during a single meeting is enough to build a model for facial identification and detection.
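One way the face-identification part on the AWS side could be sketched with boto3 and Amazon Rekognition; the collection id, region and image source are assumptions, and the thesis itself fed AWS from DeepLens hardware rather than from local files:

import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-1")  # assumed region

COLLECTION_ID = "meeting-participants"  # hypothetical collection of enrolled faces

def identify_participant(frame_bytes: bytes):
    """Match one video frame against the enrolled faces; return the best match or None."""
    response = rekognition.search_faces_by_image(
        CollectionId=COLLECTION_ID,
        Image={"Bytes": frame_bytes},
        MaxFaces=1,
        FaceMatchThreshold=80.0,
    )
    matches = response.get("FaceMatches", [])
    if not matches:
        return None
    return matches[0]["Face"].get("ExternalImageId")  # the participant's label

with open("frame.jpg", "rb") as f:  # placeholder: a captured meeting frame
    print(identify_participant(f.read()))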
