  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

Neural Networks for the Web Services Classification

Silva, Jesús, Senior Naveda, Alexa, Solórzano Movilla, José, Niebles Núñez, William, Hernández Palma, Hugo 07 January 2020 (has links)
This article introduces an n-gram-based approach to the automatic classification of Web services using a multilayer perceptron-type artificial neural network. Web services contain information that is useful for achieving a classification based on their functionality. The approach relies on word n-grams extracted from the web service description to determine its membership in a category. The experiments carried out show promising results, achieving a classification F-measure of 0.995 using word unigrams (1-grams; features consisting of a single lexical unit) and TF-IDF weighting.
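The core of the approach can be sketched in plain Python: extract word n-grams from each service description, weight them with TF-IDF, and assign the category of the most similar labelled description. A nearest-neighbour comparison stands in here for the article's multilayer perceptron, and the example services and categories are invented for illustration:

```python
import math
from collections import Counter

def word_ngrams(text, n=1):
    """Extract word n-grams (n=1 gives unigrams) from a service description."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def tfidf_vectors(docs, n=1):
    """Compute smoothed TF-IDF weights for word n-grams across a small corpus."""
    grams = [Counter(word_ngrams(d, n)) for d in docs]
    df = Counter(g for c in grams for g in c)
    N = len(docs)
    vecs = []
    for c in grams:
        total = sum(c.values())
        vecs.append({g: (tf / total) * (math.log((1 + N) / (1 + df[g])) + 1)
                     for g, tf in c.items()})
    return vecs

def cosine(a, b):
    dot = sum(a[g] * b[g] for g in a if g in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(description, labeled_docs, n=1):
    """Assign the category of the most similar labelled description
    (a nearest-neighbour stand-in for the article's MLP)."""
    docs = [d for d, _ in labeled_docs] + [description]
    vecs = tfidf_vectors(docs, n)
    scores = [(cosine(vecs[-1], v), lab)
              for v, (_, lab) in zip(vecs[:-1], labeled_docs)]
    return max(scores)[1]
```

A trained MLP would replace the `max(scores)` step with a learned decision over the TF-IDF feature vectors.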
222

IT Product Realization Process towards Marketing Consultancy Automation – Case Study: Customer Value Sweden AB

Paralykidis, Georgios January 2015 (has links)
Management consulting is a growing field [1] that is of great importance for organisations seeking to optimise their performance and develop plans for future improvements. In recent years, many companies use experienced consultants in order to create a strong competitive advantage, withstand the competition, and more easily find solutions to different problems. Customer Value Sweden AB is a start-up consultancy firm that has been operating in the Swedish market in recent years and offers customer database analyses to the e-commerce market. These analyses become valuable tools for retailers by improving marketing activities directed toward present customers. By adopting a segmented marketing strategy, companies can target customers in a differentiated way, thus enhancing their business [2]. This Master's thesis introduces a new, efficient and automated way for the company to increase the capacity and quality of the provided services. It does so by deploying an IT product realisation process that conceptualises, designs and implements a new online web service aiming to automate the report generation process, which until now was carried out manually. This system will try to solve the complexity, availability and performance problems that occur in the current system, and also open up new possibilities for future development and expansion. The key benefits of the new solution in contrast to the old one are highlighted, as well as the process that was followed to realise it.
223

Administration System of Education Information

Bjärlind, Rikard, Stenlo, Alexander January 2015 (has links)
This thesis covers Allastudier.se, a system for the administration of educations in Sweden. The system is run by the news agency Metro as one of its services and is in need of improvement. Allastudier.se aims to gather all educations in Sweden on one site. A problem the site faces is keeping the information about the educations up to date. In the current system, Allastudier.se imports information about the educations from the database of Skolverket (the authority responsible for education in Sweden). Educations that are not handled by Skolverket need to be updated manually by the system administrators. In this thesis, a more efficient way of administrating education information is devised, implemented and discussed. The new system allows customers of Allastudier.se to administer information about their educations directly through an appropriate web service. This increases the effectiveness of the Allastudier.se service and improves the capability of keeping information up to date. The development of the system is done in an advanced development environment at Metro. The work method, frameworks and development tools used are explained and discussed, along with the challenges encountered and the future work that remains for the system. The outcome of the project is a potent web-based content management system which allows customers of Allastudier.se to administer the information about their educations directly.
224

Real-time face recognition using one-shot learning : A deep learning and machine learning project

Darborg, Alex January 2020 (has links)
Face recognition is often described as the process of identifying and verifying people in a photograph by their face. Researchers have recently given this field increased attention, continuously improving the underlying models. The objective of this study is to implement a real-time face recognition system using one-shot learning. “One shot” means learning from one or few training samples. This paper evaluates different methods to solve this problem. Convolutional neural networks are known to require large datasets to reach an acceptable accuracy. This project proposes a method to solve this problem by reducing the number of training instances to one and still achieving an accuracy close to 100%, utilizing the concept of transfer learning.
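The one-shot idea can be illustrated with a minimal sketch: each identity is enrolled from a single image by computing an embedding with a pretrained encoder (the transfer-learning step), and a new face is recognized by nearest-neighbour search under a distance threshold. The toy 3-D embeddings and the threshold below are invented for illustration; a real system would use, e.g., 128-D embeddings from a CNN:

```python
import math

# Toy 3-D embeddings, one per enrolled identity; a real system would obtain
# these from a pretrained CNN encoder (transfer learning).
gallery = {
    "alice": [0.9, 0.1, 0.2],
    "bob":   [0.1, 0.8, 0.3],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(embedding, gallery, threshold=0.6):
    """Return the closest enrolled identity, or None if no gallery embedding
    is within the distance threshold (an unknown face)."""
    name, dist = min(((n, euclidean(embedding, e)) for n, e in gallery.items()),
                     key=lambda t: t[1])
    return name if dist <= threshold else None
```

Because recognition reduces to a distance comparison, adding a new person requires only one enrolled embedding rather than retraining the network.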
225

Choreographing Traffic Services for Driving Assistance

Neroutsos, Efthymios January 2017 (has links)
This thesis project presents the web service choreography approach to the composition of web services. It leverages the CHOReVOLUTION platform, a future-oriented and scalable platform used to design and deploy web service choreographies. Using this platform, a use case in the ITS domain is developed, highlighting the benefits of web service choreography for the development of ITS applications. The necessary web services are designed and their interactions are defined through a choreography diagram that graphically represents how the services should collaborate to fulfil a specific goal. By using the choreography diagram as input to the platform and by registering the web services on a web server, the choreography is deployed over the platform. The resulting choreography is tested in terms of service coordination. It is demonstrated that the platform can generate specific components that are interposed between the services and are able to handle the coordination of the services for the use case created. Moreover, the execution time required to complete the choreography is measured, analysed and reported under different conditions. Finally, it is shown that the execution time varies depending on the data that the services have to process, and that the processing of very large data sets may lead to high execution times.
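The choreography style can be illustrated with a minimal publish/subscribe sketch: each service reacts to other services' events directly, with no central orchestrator driving the flow. The event names and ITS services below are hypothetical, not the ones in the CHOReVOLUTION use case:

```python
class Bus:
    """Tiny publish/subscribe bus: in a choreography, services react to each
    other's events directly -- there is no central orchestrator."""

    def __init__(self):
        self.handlers = {}
        self.log = []  # order in which events occurred

    def subscribe(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)

    def emit(self, event, data):
        self.log.append(event)
        for h in self.handlers.get(event, []):
            h(data)

def wire_services(bus):
    """Hypothetical ITS services collaborating toward one goal: a position
    update triggers a traffic lookup, which triggers driving advice."""
    bus.subscribe("position.updated",
                  lambda d: bus.emit("traffic.requested", d["road"]))
    bus.subscribe("traffic.requested",
                  lambda road: bus.emit("advice.ready", f"slow down on {road}"))
```

Emitting a single `position.updated` event then ripples through the services in sequence, which is the coordination behaviour the generated components enforce on the platform.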
226

Sensor data computation in a heavy vehicle environment : An Edge computation approach

Vadivelu, Somasundaram January 2018 (has links)
In a heavy vehicle, the internet connection is not reliable, primarily because the truck often travels to remote locations where no network may be available. The data generated by the sensors in a vehicle cannot always be sent to the internet when the connection is poor, so it is appropriate to store and perform basic computation on that data in the heavy vehicle itself and send it to the cloud when there is a good network connection. The process of doing computation near the place where data is generated is called edge computing. Scania has its own edge computing solution, which it uses for computations such as preprocessing of sensor data, storing data, etc. Scania's solution is compared with a commercial edge computing platform, AWS (Amazon Web Services) Greengrass, in terms of data efficiency, CPU load, and memory footprint. In the conclusion it is shown that the Greengrass solution works better than the current Scania solution in terms of CPU load and memory footprint; in data efficiency the Scania solution is more efficient, but it is shown that as trucks move towards increasing data sizes the Greengrass solution may prove competitive to the Scania solution. One more topic explored in this thesis is the digital twin. A digital twin is the virtual form of a physical entity, formed by obtaining real-time values from the sensors attached to the physical device. With the help of these sensor values, a system with an approximate state of the device can be framed, which can then act as the digital twin. The digital twin can be considered an important use case of edge computing, and it is realized here with the help of AWS Device Shadow.
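The digital-twin idea can be sketched as a minimal in-memory shadow that tracks reported and desired state and computes their delta, in the spirit of a device-shadow service. This is an illustrative sketch, not the actual AWS Device Shadow API; the sensor names are invented:

```python
class DigitalTwin:
    """Minimal in-memory shadow of a physical device: the twin holds the
    last state the device reported plus the state applications request,
    so work can be queued while the vehicle is offline."""

    def __init__(self):
        self.reported = {}   # last values reported by the real sensors
        self.desired = {}    # values requested by applications

    def report(self, sensor, value):
        """Called when a sensor reading arrives from the vehicle."""
        self.reported[sensor] = value

    def request(self, sensor, value):
        """Called when an application asks the device to change a setting."""
        self.desired[sensor] = value

    def delta(self):
        """Settings the device has not yet applied -- the synchronisation
        work flushed to the vehicle when connectivity returns."""
        return {k: v for k, v in self.desired.items()
                if self.reported.get(k) != v}
```

Applications query the twin's approximate state without contacting the truck, and the delta shrinks to empty once the device catches up.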
227

Testing Lifestyle Store Website Using JMeter in AWS and GCP

Tangella, Ankhit, Katari, Padmaja January 2022 (has links)
Background: As cloud computing has grown over the last decades, several cloud services have become available on the market, and users may prefer those that are more flexible and efficient. We therefore chose to evaluate cloud services in terms of which serves the user better at obtaining the needed data from a chosen website, using JMeter for performance testing. The user interfaces of GCP and AWS are also compared while performing several compute-engine-related operations. Objectives: This thesis aims to test the performance of a website after deploying it on two distinct cloud platforms. After creating instances in AWS, a domain in GCP and a storage bucket, the website files are uploaded into the bucket, and the GCP and AWS instances are connected to the Lifestyle Store website. Performance testing of the selected website is done on both services, and the outcomes are then compared using the testing tool JMeter. Methods: Experimentation is chosen as the research methodology: the website is deployed separately on the two cloud platforms, and JMeter is used to test its performance on both. The results are visualized in aggregate graphs, graphs and summary reports. The metrics are throughput, average response time, median, percentiles and standard deviation. Results: The results are based on JMeter performance testing of the selected website on the two cloud platforms and are shown in the aggregate graph. The graph results determine which service lets users obtain a response from the website for requested data in the shortest amount of time. We considered 500 and 1000 users and, based on the results, compared the metrics throughput, average response time, standard deviation and percentiles; the 1000-user results are compared to determine which cloud platform performs better. Conclusions: According to the results for 1000 users, AWS has a higher throughput than GCP and a lower average response time. Thus, AWS outperforms GCP in terms of performance.
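The reported metrics can be reproduced from raw response times. The sketch below mirrors the aggregation JMeter performs for its summary and aggregate reports, using nearest-rank percentiles; the sample values are invented:

```python
import math
import statistics

def summarize(response_times_ms, duration_s):
    """Aggregate a list of response times (ms) over a test of duration_s
    seconds into the metrics reported by a load-testing tool."""
    times = sorted(response_times_ms)
    n = len(times)
    # nearest-rank percentile: smallest value with at least p% of samples below
    pct = lambda p: times[max(0, math.ceil(p / 100 * n) - 1)]
    return {
        "samples": n,
        "throughput_rps": n / duration_s,          # requests per second
        "average_ms": statistics.mean(times),
        "median_ms": statistics.median(times),
        "p90_ms": pct(90),
        "stdev_ms": statistics.pstdev(times),
    }
```

Higher throughput with a lower average response time, as observed for AWS here, indicates the platform served the same user load faster.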
228

Integrating Description Logics and Action Formalisms for Reasoning about Web Services

Baader, Franz, Lutz, Carsten, Miličić, Maja, Sattler, Ulrike, Wolter, Frank 31 May 2022 (has links)
Motivated by the need for a semantically well-founded and algorithmically manageable formalism for reasoning about Web services, we propose an action formalism that is based on description logics (DLs) but is also firmly grounded in research from the reasoning-about-actions community. Our main contribution is an analysis of how the choice of the DL influences the complexity of standard reasoning tasks such as projection and executability, which are important for Web service discovery and composition.
229

Adaptiv bildladdning i en kontextmedveten webbtjänst

Halldén, Albin, Schönemann, Madeleine January 2014 (has links)
Today, information on the web is consumed via a variety of heterogeneous devices. Factors such as network connection and screen resolution affect which image is the most suitable to deliver to the client: an image in its original condition takes a long time to download on a technically limited device and requires a large amount of data. Since browsing on mobile devices via mobile networks is expected to increase, a solution for adaptive image loading is relevant. The aim of this thesis is to explore whether a web service, consisting of a client and a server, can determine the best-suited image quality to deliver to the client, based on the client's current network performance and screen resolution. A device with a lower screen resolution and a slower network warrants an image of lower quality and lower resolution; the download time is thereby shortened and the data volume reduced, contributing to an improved user experience.
The thesis presents and evaluates several solutions for adaptive image loading. The solutions are based on two parameters, measured with JavaScript: the width of the client's browser window and the latency between the client and the server. These parameters are the basis for the scaling of size and quality that is then applied to the image. The image is provided to the client by one of two delivery methods: "predefined images", where several different versions of the image are stored on the server, and "dynamic images", where the images are rendered on the server in real time by the gd library in PHP, based on the original image. Three types of adaptive image loading (quality adaptation, size adaptation and a combination of both) are investigated with respect to delivery time and the amount of data delivered, and are then evaluated against the base case consisting of the original images.
Using some type of adaptation method is better in 14 out of 15 cases than simply delivering the original images. The best results are given by the combined adaptation method on devices with smaller screen resolutions and slower networks, but it is also beneficial for devices with medium-speed networks and devices that support higher screen resolutions. Both the predefined and the dynamic delivery methods show good results, but since the scalability of the dynamic delivery method with multiple concurrent connections is not known, predefined images are recommended.
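The predefined-images strategy can be sketched as a small selection function: pick the smallest stored variant that still covers the viewport, then step down one more variant on a slow connection. The variant widths and latency threshold below are illustrative assumptions, not the parameters measured in the thesis:

```python
def pick_variant(viewport_width_px, latency_ms, variants=(480, 960, 1920)):
    """Choose a predefined image width from the versions stored on the
    server: scale down for narrow screens, then trade quality for speed
    on a slow connection (thresholds are illustrative)."""
    # smallest stored variant that still covers the viewport
    suitable = [w for w in variants if w >= viewport_width_px]
    idx = variants.index(suitable[0]) if suitable else len(variants) - 1
    if latency_ms > 300 and idx > 0:   # slow network: step down one variant
        idx -= 1
    return variants[idx]
```

In the predefined-images scheme the server simply serves the file matching the returned width; the dynamic scheme would instead render that width on demand from the original.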
230

Tackling the current limitations of bacterial taxonomy with genome-based classification and identification on a crowdsourcing Web service

Tian, Long 25 October 2019 (has links)
Bacterial taxonomy is the science of classifying, naming, and identifying bacteria. The scope and practice of taxonomy has evolved through history with our understanding of life and our growing and changing needs in research, medicine, and industry. As in animal and plant taxonomy, the species is the fundamental unit of taxonomy, but the genetic and phenotypic diversity that exists within a single bacterial species is substantially higher compared to animal or plant species. Therefore, the current "type"-centered classification scheme that describes a species based on a single type strain is not sufficient to classify bacterial diversity, in particular in regard to human, animal, and plant pathogens, for which it is necessary to trace disease outbreaks back to their source. Here we discuss the current needs and limitations of classic bacterial taxonomy and introduce LINbase, a Web service that not only implements current species-based bacterial taxonomy but complements its limitations by providing a new framework for genome sequence-based classification and identification independently of the type-centric species. LINbase uses a sequence similarity-based framework to cluster bacteria into hierarchical taxa, which we call LINgroups, at multiple levels of relatedness and crowdsources users' expertise by encouraging them to circumscribe these groups as taxa from the genus-level to the intraspecies-level. Circumscribing a group of bacteria as a LINgroup, adding a phenotypic description, and giving the LINgroup a name using the LINbase Web interface allows users to instantly share new taxa and complements the lengthy and laborious process of publishing a named species. Furthermore, unknown isolates can be identified immediately as members of a newly described LINgroup with fast and precise algorithms based on their genome sequences, allowing species- and intraspecies-level identification. 
The employed algorithms are based on a combination of the alignment-based algorithm BLASTN and the alignment-free method Sourmash, which builds on k-mers and the MinHash algorithm. The potential of LINbase is shown using examples of plant pathogenic bacteria. / Doctor of Philosophy / Life is always easier when people talk to each other in the same language. Taxonomy is the language that biologists use to communicate about life by 1. classifying organisms into groups, 2. giving names to these groups, and 3. identifying individuals as members of these named groups. When most scientists and the general public think of taxonomy, they think of the hierarchical structure of “Life”, “Domain”, “Kingdom”, “Phylum”, “Class”, “Order”, “Family”, “Genus” and “Species”. However, the basic goal of taxonomy is to allow the identification of an organism as a member of a group that is predictive of its characteristics and to provide a name to communicate about that group with other scientists and the public. In the world of micro-organisms, taxonomy is extremely important since there are an estimated 10,000,000 to 1,000,000,000 different bacterial species. Moreover, microbiologists and pathologists need to consider differences among bacterial isolates even within the same species, a level that the current taxonomic system does not even cover. Therefore, we developed a Web service, LINbase, which uses genome sequences to classify individual microbial isolates. The database at the backend of LINbase assigns Life Identification Numbers (LINs) that express how individual microbial isolates are related to each other above, at, and below the species level. The LINbase Web service is designed to be an interactive web-based encyclopedia of microorganisms where users can share everything they know about micro-organisms, be it individual isolates or groups of isolates, for professional and scientific purposes.
To develop LINbase, efficient computer programs were developed and implemented. To show how LINbase can be used, several groups of bacteria that cause plant diseases were classified and described.
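The alignment-free side of such an identification pipeline can be sketched with a bottom-sketch MinHash over genomic k-mers, in the spirit of Sourmash. The hash function, sketch size and toy sequences below are illustrative assumptions, not LINbase's actual parameters:

```python
import hashlib

def kmers(seq, k=21):
    """All k-length substrings of a genome sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash_sketch(seq, k=21, num_hashes=64):
    """Keep the num_hashes smallest k-mer hashes (a 'bottom' sketch):
    a tiny, fixed-size fingerprint of an arbitrarily large genome."""
    hashes = sorted(int(hashlib.sha1(m.encode()).hexdigest(), 16)
                    for m in kmers(seq, k))
    return set(hashes[:num_hashes])

def estimate_similarity(sk_a, sk_b):
    """Estimate k-mer Jaccard similarity from the bottom of the combined
    sketch -- the basis for clustering genomes into similarity groups."""
    bottom = sorted(sk_a | sk_b)[:max(len(sk_a), len(sk_b))]
    shared = [h for h in bottom if h in sk_a and h in sk_b]
    return len(shared) / len(bottom) if bottom else 0.0
```

Because sketches are small and comparisons are set operations, an unknown isolate's genome can be placed near its closest relatives quickly, with an alignment-based method such as BLASTN refining the result.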
