81

Improving Back-End Service Data Collection

Spik, Charlotta, Ghourchian, Isabel January 2017 (has links)
This project was done for a company called Anchr, which develops a location-based mobile application for listing nearby hangouts in a specified area. To do this, the application integrates a number of services and sends requests to them to check whether any nearby locations are listed. One of these services is Meetup, an application where users can create social events and gatherings. The problem this project aims to solve is that a large number of requests are sent to Meetup's service to fetch information about events so that they can be displayed in the application. This is a problem because only a limited number of requests can be sent within a given time period before the service is locked. As a result, Meetup's service cannot be integrated into the application as currently implemented, since the feature becomes useless once no further requests can be sent. The purpose of this project is therefore to find an alternative way of collecting events from the service without triggering the lock, which would allow the service to be integrated into the application. The hypothesis is that, instead of the current method of repeatedly requesting events, a listener that subscribes to Meetup's event stream can receive updates directly whenever an event is created or updated. The result of the project is a system that listens for events instead of repeatedly sending requests. The locking issue no longer occurs, since no requests are sent to Meetup's service.
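The shift from polling to listening can be illustrated with a short sketch; the streaming URL and event fields below are placeholders rather than Meetup's actual API, which is assumed here to expose newline-delimited JSON events.

```python
import json
import requests

# Hypothetical streaming endpoint; the real Meetup API may differ.
STREAM_URL = "https://stream.example.com/open_events"

def listen_for_events(on_event):
    """Consume a long-lived HTTP stream and invoke a callback per event,
    instead of polling the REST API and risking the request limit."""
    with requests.get(STREAM_URL, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:          # skip keep-alive heartbeats
                continue
            on_event(json.loads(line))

def handle(event):
    # Field names are illustrative only.
    print(event.get("name"), event.get("venue", {}).get("city"))

if __name__ == "__main__":
    listen_for_events(handle)
```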
82

Interconnection of Two Different Payment Systems / Sammankoppling av två olika betalningssystem

Ammouri, Kevin, Cho, Kangyoun January 2019 (has links)
Mobile money, a means of transferring payments via mobile devices, has become increasingly popular. The demand for convenient financial products or services is a crucial factor in why innovative developers want to incorporate mobile money into existing financial products and services. The goal is to provide convenient financial services that enable customers to quickly send and receive money between two mobile payment platforms. The Swedish blockchain company Centiglobe is looking for a system through which payments can be made conveniently between two mobile payment platforms, specifically Alipay and M-PESA. This thesis sought to develop such a system by using the application programming interfaces (APIs) provided by Alipay and M-PESA, coupled with Centiglobe's blockchain, to facilitate payments between an Alipay user and an M-PESA user. Solving this problem began with a literature study of previous work related to the topic and a reading of the extensive API documentation provided by Alipay and Safaricom's Daraja (Safaricom being the developer of M-PESA). Next, a flowchart was created and used as a guide throughout the development of the system. The system was tested through integration testing, and its performance was determined by measuring the execution time of a cross-system payment. A one-way transfer system was developed: Alipay users can make a payment to M-PESA users, but not the reverse. The integration testing shows that the system is a feasible solution. The execution time of a payment is relatively short (~9.1 seconds), so the performance is adequate. The conclusion is that this system is a viable solution for incorporating Alipay and M-PESA as mobile payment services. Moreover, the system partially facilitates person-to-person payments between them, subject to the limitations of the Alipay API. In addition, this system provides a foundation for other inter-platform mobile payment solutions.
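A highly simplified sketch of the one-way flow the thesis describes, with placeholder client objects rather than the real Alipay, M-PESA (Daraja), or Centiglobe APIs:

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    sender_alipay_id: str
    recipient_msisdn: str        # M-PESA accounts are addressed by phone number
    amount: float
    currency: str = "USD"

def cross_system_payment(alipay, ledger, mpesa, req: PaymentRequest) -> str:
    """One-way Alipay -> M-PESA transfer: collect on Alipay, settle the transfer
    on a blockchain ledger, then disburse via M-PESA. All three client objects
    are hypothetical wrappers, injected so each platform's API stays behind
    its own interface."""
    charge_id = alipay.collect(req.sender_alipay_id, req.amount, req.currency)
    tx_hash = ledger.record_transfer(charge_id, req.amount, req.currency)
    return mpesa.disburse(req.recipient_msisdn, req.amount, reference=tx_hash)
```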
83

A Performance Based Comparative Study of Different APIs Used for Reading and Writing XML Files

Gujarathi, Neha 08 October 2012 (has links)
No description available.
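No abstract is available, but a comparison of this kind typically times tree-based against streaming XML APIs; a minimal, illustrative sketch using Python's standard library (the file path and element tag are placeholders):

```python
import time
import xml.etree.ElementTree as ET

def time_tree_parse(path, tag="record"):
    """DOM-style: build the whole tree in memory, then iterate."""
    start = time.perf_counter()
    root = ET.parse(path).getroot()
    count = sum(1 for _ in root.iter(tag))
    return count, time.perf_counter() - start

def time_stream_parse(path, tag="record"):
    """Streaming: process elements as they are parsed and free them immediately."""
    start = time.perf_counter()
    count = 0
    for _, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == tag:
            count += 1
        elem.clear()
    return count, time.perf_counter() - start

if __name__ == "__main__":
    for fn in (time_tree_parse, time_stream_parse):
        print(fn.__name__, fn("data.xml"))   # "data.xml" is a placeholder input file
```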
84

Masterdata och API / Masterdata and API

Alvin, Axel, Axelborn, Lukas January 2022 (has links)
Today's society depends on a constant flow of information and data. Companies and organisations often hold huge amounts of data, ranging from customer and staff records to sales statistics and patient records. The pace of change has been very fast, and many companies and organisations have not had the time or resources to keep their systems up to date to handle these huge amounts of data. In this thesis, the task has been to link databases from multiple systems to make their maintenance and management easier. These systems generally process the same type of data (personnel data divided into groups in the form of units), but name it in different ways, for example with different IDs. As a result, the data is unrelated in a way that makes it very difficult to determine which units correspond to each other, as they have no common denominator. As a solution, two additional databases were created and linked to the other systems through an API, where the data is linked by being assigned a common ID, a master ID. In this way, users and developers can easily search for an object in one system and get back all the data for the corresponding objects in the other systems. In addition, a semi-automated system was created in the form of a user interface used for linking objects.
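A minimal sketch of the master-ID mechanism described above, using SQLite; the table, column, and system names are illustrative, not the actual schema used in the thesis:

```python
import sqlite3

def init(db):
    db.executescript("""
        CREATE TABLE IF NOT EXISTS master (master_id INTEGER PRIMARY KEY);
        CREATE TABLE IF NOT EXISTS id_map (
            master_id INTEGER REFERENCES master(master_id),
            system    TEXT NOT NULL,     -- which source system the local ID comes from
            local_id  TEXT NOT NULL,
            UNIQUE (system, local_id)
        );
    """)

def link(db, system, local_id, master_id=None):
    """Attach a system-specific ID to a master ID, creating one if needed."""
    if master_id is None:
        master_id = db.execute("INSERT INTO master DEFAULT VALUES").lastrowid
    db.execute("INSERT OR IGNORE INTO id_map VALUES (?, ?, ?)",
               (master_id, system, local_id))
    return master_id

def lookup(db, system, local_id):
    """Return every (system, local_id) pair sharing this object's master ID."""
    row = db.execute("SELECT master_id FROM id_map WHERE system=? AND local_id=?",
                     (system, local_id)).fetchone()
    if row is None:
        return []
    return db.execute("SELECT system, local_id FROM id_map WHERE master_id=?",
                      (row[0],)).fetchall()

db = sqlite3.connect(":memory:")
init(db)
m = link(db, "hr_system", "emp-1001")
link(db, "payroll", "P-77", master_id=m)
print(lookup(db, "payroll", "P-77"))   # both system IDs for the same object
```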
85

Ransomware Detection Using Windows API Calls and Machine Learning

Karanam, Sanjula 31 May 2023 (has links)
Ransomware is an ever-growing issue that has affected individuals and corporations since its inception, leading to losses on the order of billions each year. This research builds upon the existing body of work on ransomware detection for Windows-based platforms through behavioral analysis using sandboxing techniques and classification using machine learning (ML), with the predefined function calls, known as API (Application Programming Interface) calls, made by ransomware and benign samples used as classification features. The primary aim is to study how the frequency of API calls made by ransomware samples, spanning a large number of ransomware families with varied behavior, and by benign samples affects the classification accuracy of various ML algorithms. In an experiment comparing frequency-of-API-call inputs with binary inputs that record only whether an API call occurred, a quantitative analysis of the ML classification algorithms showed that considering the frequency of API calls marginally improves the ransomware recall rate. The secondary research question aims to justify the ML classification of ransomware through behavioral analysis of ransomware and goodware in the context of the API calls that had the greatest effect on classification. This research provides meaningful insights into the runtime behavior of ransomware and goodware, and into how that behavior, including API calls and their frequencies, aligns with the ML-based classification of ransomware. / Master of Science / Ransomware is an ever-growing issue that has affected individuals and corporations since its inception, leading to losses on the order of billions each year. It infects a user machine, encrypts user files or locks the user out of their machine, or both, demanding a ransom in exchange for decrypting or unlocking the data. Analyzing ransomware, either statically or behaviorally, is a prerequisite for building detection and countering mechanisms. Behavioral analysis of ransomware is the basis for this research, wherein ransomware is analyzed by executing it in a safe sandboxed environment, such as a virtual machine, to avoid infecting a real user machine, and its runtime characteristics are extracted for analysis. Among these characteristics, the predefined function calls, known as API (Application Programming Interface) calls, made to the system by ransomware serve as the basis for classifying ransomware and benign software. After analyzing ransomware samples across various families, as well as benign samples, in a sandboxed environment, and using API calls as features, the curated dataset was fed to a set of ML algorithms that can extract useful information from the dataset and make classification decisions without human intervention. The research considers the importance of API call frequency for classification accuracy and identifies the most important APIs for classification, along with their potential use in the context of ransomware and goodware, to justify the ML classification. Zero-day detection, which refers to testing the accuracy of trained ML models on unknown ransomware samples and families, was also performed.
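A condensed sketch of the feature construction and frequency-versus-binary comparison described above; the API names, toy samples, and use of scikit-learn's random forest are illustrative assumptions, not the thesis's exact setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each sample maps an API name to how often it was called during the sandbox run.
samples = [
    {"CryptEncrypt": 120, "FindFirstFileW": 300, "DeleteFileW": 250},  # ransomware-like
    {"CreateFileW": 40, "ReadFile": 35, "RegQueryValueExW": 10},       # benign-like
]
labels = np.array([1, 0])

apis = sorted({name for s in samples for name in s})

def to_matrix(samples, binary=False):
    """Frequency features, or binary presence/absence features for comparison."""
    X = np.array([[s.get(a, 0) for a in apis] for s in samples], dtype=float)
    return (X > 0).astype(float) if binary else X

for binary in (False, True):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(to_matrix(samples, binary=binary), labels)
    # With a real dataset, cross-validated recall would be compared here;
    # two samples are only enough to show the shape of the pipeline.
    print("binary" if binary else "frequency",
          clf.predict(to_matrix(samples, binary=binary)))
```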
86

Integrationsmotor : En studie i datainhämtning och visualisering för Gävle kommun / Integration Engine: A Study of Data Collection and Visualization for Gävle Municipality

Al-Hadeethi, Asaad January 2024 (has links)
This thesis examines data collection and visualization for the TEIS (Tietoevry Integration Server) integration engine at Gävle Municipality. The aim is to analyze and visualize the integrations performed by TEIS in order to improve the understanding of its processes and data flows. The work began with collecting data from the TEIS API using Postman. Through various API calls, relevant information about workspaces, folders, integrations, processes, and triggers was extracted. A web application was then developed in React to visualize this data. The application allows hierarchical navigation through the workspaces and presents detailed information about the various components in TEIS. To streamline data management and improve the application's performance, a database was integrated to store information about triggers, integrations, and processes. This reduced the time needed to match triggers with their respective integrations from several minutes to just a few seconds. A survey was conducted among system developers at Gävle Municipality to evaluate the web application. The results showed that the application met the users' needs and expectations, although some suggestions for improvement were raised, such as adding filter functions and clickable charts. In conclusion, the method and solution for collecting and visualizing data from TEIS proved effective. The web application provides a comprehensive and clear view of the TEIS integrations, improving Gävle Municipality's ability to monitor and understand the system's performance and potential problem areas.
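A rough sketch of the harvest-and-cache idea described above; the endpoint paths and JSON field names are placeholders, since the real TEIS API is not documented here:

```python
import sqlite3
import requests

BASE = "https://teis.example.local/api"   # placeholder URL; the real TEIS endpoints differ

def init(db):
    db.executescript("""
        CREATE TABLE IF NOT EXISTS integrations (id TEXT PRIMARY KEY, name TEXT);
        CREATE TABLE IF NOT EXISTS triggers (
            id TEXT PRIMARY KEY, name TEXT, integration_id TEXT
        );
    """)

def harvest(db, token):
    """Cache integrations and triggers locally so that matching a trigger to its
    integration becomes a local SQL join instead of repeated API calls."""
    headers = {"Authorization": f"Bearer {token}"}
    for item in requests.get(f"{BASE}/integrations", headers=headers, timeout=30).json():
        db.execute("INSERT OR REPLACE INTO integrations VALUES (?, ?)",
                   (item["id"], item.get("name")))
    for item in requests.get(f"{BASE}/triggers", headers=headers, timeout=30).json():
        db.execute("INSERT OR REPLACE INTO triggers VALUES (?, ?, ?)",
                   (item["id"], item.get("name"), item.get("integrationId")))

def triggers_for(db, integration_name):
    return db.execute(
        """SELECT t.name FROM triggers t
           JOIN integrations i ON i.id = t.integration_id
           WHERE i.name = ?""", (integration_name,)).fetchall()
```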
87

Integration av Lokal LLM i Webbläsartillägg för Förbättrad Personlig Dataanalys : Utveckling av Webbläsartillägg som Använder Lokal LLM för säker hantering av privat Information / Integration of a Local LLM in a Browser Extension for Improved Personal Data Analysis: Development of a Browser Extension Using a Local LLM for Secure Handling of Private Information

Maeedi, Adam January 2024 (has links)
This thesis presents the development of a browser extension that integrates a local language model, specifically GPT4All, to improve the management and analysis of private and academic information directly in the user's browser. The aim is to offer a more personal and secure user experience by applying advanced technologies to protect user privacy and data confidentiality. Through technical explanations and a methodological account, the thesis describes the creation of a functional browser extension using technologies such as HTML, CSS, and JavaScript, together with a local installation of the language model to handle data from Ladok. The results indicate that the extension can efficiently process and analyze information, contributing to improved academic information management. The project underlines the potential of local language models in the development of digital tools, with particular emphasis on ethical and security aspects. Future research is encouraged to explore further applications and effects of these technologies in different user domains.
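A small sketch of the core idea, keeping the analysis on the user's machine; the model file name and prompt are placeholders, the gpt4all Python bindings are assumed to be installed, and the thesis itself implements the extension side in HTML, CSS, and JavaScript:

```python
from gpt4all import GPT4All   # assumes the gpt4all Python bindings are installed

MODEL_FILE = "gpt4all-model.gguf"   # placeholder; use whichever model file is installed locally

def summarize_locally(text: str) -> str:
    """Run the analysis entirely on the user's machine, so private data
    (e.g. records fetched from Ladok) never leaves the device."""
    model = GPT4All(MODEL_FILE)
    prompt = f"Summarize the following study record in two sentences:\n\n{text}"
    return model.generate(prompt, max_tokens=200)

if __name__ == "__main__":
    print(summarize_locally("Course: Algorithms, Grade: A, Credits: 7.5"))
```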
88

Concurrency Analysis and Mining Techniques for APIs

Santhiar, Anirudh January 2017 (has links) (PDF)
Software components expose Application Programming Interfaces (APIs) as a means to access their functionality, and facilitate reuse. Developers use APIs supplied by programming languages to access the core data structures and algorithms that are part of the language framework. They use the APIs of third-party libraries for specialized tasks. Thus, APIs play a central role in mediating a developer's interaction with software, and the interaction between different software components. However, APIs are often large, complex and hard to navigate. They may have hundreds of classes and methods, with incomplete or obsolete documentation. They may encapsulate concurrency behaviour that the developer is unaware of. Finding the right functionality in a large API, using APIs correctly, and maintaining software that uses a constantly evolving API are challenges that every developer runs into. In this thesis, we design automated techniques to address two problems pertaining to APIs: (1) concurrency analysis of APIs, and (2) API mining. Specifically, we consider the concurrency analysis of asynchronous APIs, and mining of math APIs to infer the functional behaviour of API methods. The problem of concurrency bugs such as race conditions and deadlocks has been well studied for multi-threaded programs. However, developers have been eschewing a pure multi-threaded programming model in favour of asynchronous programming models supported by asynchronous APIs. Asynchronous programs and multi-threaded programs have different semantics, due to which existing techniques to analyze the latter cannot detect bugs present in programs that use asynchronous APIs. This thesis addresses the problem of concurrency analysis of programs that use asynchronous APIs in an end-to-end fashion. We give operational semantics for important classes of asynchronous and event-driven systems. The semantics are designed by carefully studying real software and serve to clarify subtleties in scheduling. We use the semantics to inform the design of novel algorithms to find races and deadlocks. We implement the algorithms in tools, and show their effectiveness by finding serious bugs in popular open-source software. To begin with, we consider APIs for asynchronous event-driven systems supporting programmatic event loops. Here, event handlers can spin event loops programmatically in addition to the runtime's default event loop. This concurrency idiom is supported by important classes of APIs including GUI, web browser, and OS APIs. Programs that use these APIs are prone to interference between a handler that is spinning an event loop and another handler that runs inside the loop. We present the first happens-before based race detection technique for such programs. Next, we consider the asynchronous programming model of modern languages like C#. In spite of providing primitives for the disciplined use of asynchrony, C# programs can deadlock because of incorrect use of blocking APIs along with non-blocking (asynchronous) APIs. We present the first deadlock detection technique for asynchronous C# programs. We formulate necessary conditions for deadlock using a novel program representation that represents procedures and continuations, control flow between them and the threads on which they may be scheduled. We design a static analysis to construct the program representation and use it to identify deadlocks. Our ideas have resulted in research tools with practical impact.
Sparse Racer, our tool to detect races, found 13 previously unknown use-after-free bugs in KDE Linux applications. Dead Wait, our deadlock detector, found 43 previously unknown deadlocks in asynchronous C# libraries. Developers have fixed 43 of these races and deadlocks, indicating that our techniques are useful in practice to detect bugs that developers consider worth fixing. Using large APIs effectively entails finding the right functionality and calling the methods that implement it correctly, possibly composing many API elements. Automatically inferring the information required to do this is a challenge that has attracted the attention of the research community. In response, the community has introduced many techniques to mine APIs and produce information ranging from usage examples and patterns, to protocols governing the API method calling sequences. We show how to mine unit tests to match API methods to their functional behaviour, for the specific but important class of math APIs. Math APIs are at the heart of many application domains ranging from machine learning to scientific computations, and are supplied by many competing libraries. In contrast to obtaining usage examples or identifying correct call sequences, the challenge in this domain is to infer API methods required to perform a particular mathematical computation, and to compose them correctly. We let developers specify mathematical computations naturally, as a math expression in the notation of interpreted languages (such as Matlab). Our unit test mining technique maps subexpressions to math API methods such that the method's functional behaviour matches the subexpression's executable semantics, as defined by the interpreter. We apply our technique, called MathFinder, to math API discovery and migration, and validate it in a user study. Developers who used MathFinder finished their programming tasks twice as fast as their counterparts who used the usual techniques like web and code search, and IDE code completion. We also demonstrate the use of MathFinder to assist in the migration of Weka, a popular machine learning library, to a different linear algebra library.
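The math-API mapping idea can be illustrated with NumPy; the expression and the chosen methods below are just an example of the kind of behavioural match MathFinder infers, not output from the tool:

```python
import numpy as np

# A Matlab-style specification such as  x = inv(A) * b  describes *what* to compute.
# Mining unit tests can reveal API methods whose behaviour matches each subexpression,
# e.g. that numpy.linalg.solve(A, b) matches inv(A) * b while avoiding an explicit inverse.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

x_literal = np.linalg.inv(A) @ b      # direct transliteration of the expression
x_mapped = np.linalg.solve(A, b)      # behaviourally equivalent API method

assert np.allclose(x_literal, x_mapped)
print(x_mapped)
```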
89

IHAL and Web Service Interfaces to Vendor Configuration Engines

Hamilton, John, Darr, Timothy, Fernandes, Ronald, Sulewski, Joe, Jones, Charles October 2010 (has links)
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California / In this paper, we present an approach towards achieving standards-based multi-vendor hardware configuration. This approach uses the Instrumentation Hardware Abstraction Language (IHAL) and a standardized web service Application Programming Interface (API) specification to allow any Instrumentation Support System (ISS) to control instrumentation hardware in a vendor neutral way without requiring non-disclosure agreements or knowledge of proprietary information. Additionally, we will describe a real-world implementation of this approach using KBSI's InstrumentMap application and an implementation of the web service API by L-3 Communications Telemetry East.
90

Behaviour-based virus analysis and detection

Al Amro, Sulaiman January 2013 (has links)
Every day, the growing number of viruses causes major damage to computer systems, which many antivirus products have been developed to protect against. Regrettably, existing antivirus products do not provide a full solution to the problems associated with viruses. One of the main reasons is that these products typically use signature-based detection, so the rapid growth in the number of viruses means that many signatures have to be added to their signature databases each day. These signatures then have to be stored in the computer system, where they consume increasing amounts of memory. Moreover, a large database also slows the search for signatures and hence affects the performance of the system. As the number of viruses continues to grow, ever more space will be needed in the future. There is thus an urgent need for a novel and robust detection technique. One of the most encouraging recent developments in virus research is the use of formulae, which provide alternatives to classic virus detection methods. The proposed research uses temporal logic and behaviour-based detection to detect viruses. Interval Temporal Logic (ITL) will be used to generate virus specifications, properties and formulae based on the analysis of the behaviour of computer viruses, in order to detect them. Tempura, the executable subset of ITL, will be used to check whether good or bad behaviour occurs, with the help of ITL descriptions and system traces. The process will also use AnaTempura, an integrated workbench tool for ITL that supports our system specifications. AnaTempura will offer validation and verification of the ITL specifications and provide runtime testing of these specifications.
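A toy illustration of checking a behavioural specification against a system trace; this only mimics the flavour of ITL's "always" operator in Python and is not Tempura or AnaTempura, and the event names are invented:

```python
# A recorded system trace: a sequence of (operation, argument) events from a monitored process.
trace = [
    ("open",   "report.docx"),
    ("read",   "report.docx"),
    ("write",  "report.docx.locked"),
    ("delete", "report.docx"),
]

def always(prop, trace):
    """ITL-style 'always' (box): the property must hold on every suffix of the trace."""
    return all(prop(trace[i:]) for i in range(len(trace)))

def no_encrypt_and_delete(suffix):
    """Toy behavioural rule: after reading a document, the same process should not
    both write a derived copy of it and delete the original (a ransomware-like pattern)."""
    if suffix and suffix[0][0] == "read":
        name = suffix[0][1]
        rest = suffix[1:]
        wrote_copy = any(op == "write" and name in arg for op, arg in rest)
        deleted = any(op == "delete" and arg == name for op, arg in rest)
        return not (wrote_copy and deleted)
    return True

print("suspicious" if not always(no_encrypt_and_delete, trace) else "ok")  # -> suspicious
```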
