21

Design and Implementation of Centrally-Coordinated Peer-to-Peer Live-streaming

Roverso, Roberto January 2011 (has links)
In this thesis, we explore the use of a centrally-coordinated peer-to-peer overlay as a possible solution to the live-streaming problem. Our contribution lies in showing that such an approach is indeed feasible, provided that a number of key challenges are met. The motivation behind exploring an alternative design is that, although a number of approaches have been investigated in the past, e.g. mesh-pull, tree-push, hybrids, and best-of-both-worlds mesh-push, no consensus has been reached on the best solution to the problem of peer-to-peer live streaming, despite current deployments and reported successes. In the proposed system, we model sender/receiver peer assignments as an optimization problem. Optimized peer selection based on multiple utility factors, such as bandwidth availability, delay, and connectivity compatibility, makes it possible to achieve large source-bandwidth savings and provide a high quality of user experience. Clear benefits of our approach are observed when Network Address Translation constraints are present on the network. We have addressed key scalability issues of our platform by parallelizing the heuristic at the core of our optimization engine and by implementing the resulting algorithm on commodity Graphics Processing Units (GPUs). The outcome is a Linear Sum Assignment Problem (LSAP) solver for time-constrained systems which produces near-optimal results and can be used for any instance of LSAP, i.e. not only in our system. As part of this work, we also present our experience with Network Address Translator (NAT) traversal in peer-to-peer systems. Our contribution in this context is threefold. First, we provide a semi-formal model of state-of-the-art NAT behaviors. Second, we use our model to show which NAT combinations can theoretically be traversed and which cannot. Last, for each of the combinations, we state which traversal technique should be used. Our findings are confirmed by experimental results on a real network. Finally, we address the problem of reproducibility in the testing, debugging, and evaluation of our peer-to-peer application. We achieve this by providing a software framework which can be transparently integrated with any existing software and which handles concurrency, system time, and network events in a reproducible manner. / QC 20110426
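The sender/receiver assignment described above can be illustrated with an off-the-shelf LSAP solver. The sketch below uses SciPy rather than the GPU-parallelized heuristic developed in the thesis, and the utility matrix is invented for the example:

```python
# A minimal sketch of the sender/receiver assignment idea, using SciPy's
# LSAP solver. Utility values are hypothetical placeholders for the
# bandwidth/delay/NAT-compatibility scores described in the abstract.
import numpy as np
from scipy.optimize import linear_sum_assignment

# utility[i][j]: estimated utility of sender i serving receiver j.
utility = np.array([
    [0.9, 0.3, 0.5],
    [0.4, 0.8, 0.1],
    [0.6, 0.2, 0.7],
])

# linear_sum_assignment minimizes cost, so negate the utilities to maximize.
senders, receivers = linear_sum_assignment(-utility)
for s, r in zip(senders, receivers):
    print(f"sender {s} -> receiver {r} (utility {utility[s, r]:.1f})")
```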
22

JavaFX Scene Graph Object Serialization

Khodabandehloo, Elmira January 2013 (has links)
Data visualization is used to analyze and perceive patterns in data. One use case of visualization is to graphically represent and compare simulation results. At Ericsson Research, a visualization platform based on JavaFX 2 is used to visualize simulation results. Three configuration files are required to create an application based on the visualization tool: XML, FXML, and CSS. The current problem is that, in order to set up a visualization application, the three configuration files must be written by hand, which is a very tedious task. The purpose of this study is to reduce the amount of work required to construct a visualization application by providing a serialization function which makes it possible to save the layout (FXML) of the application at run-time, based solely on the scene graph. In this master's thesis, frameworks that might ease the implementation of generic FXML serialization have been investigated, and the most promising alternative according to a number of evaluation metrics has been identified. Then, using a design science research method, an algorithm is proposed which is capable of generic object/bean serialization to FXML based on a number of features and requirements. Finally, the implementation is evaluated through a set of test cases. The evaluation comprises an analysis of the serialization results and tests, and a comparison of the expected and actual results using unit testing and test-coverage measurements. The evaluation results for each serialization function show that the serialized output is similar to the original files; hence the proposed algorithm provides the desired serialization functionality for the specific features of FXML needed for this platform, provided that the tests covered every aspect of the serialization functionality. / Data visualization is used to analyze and perceive patterns in data. One use case for visualization is to graphically represent and compare simulation results. At Ericsson Research, a visualization platform based on JavaFX 2 has been developed to visualize simulation results. Three configuration files are required to create an application based on this visualization platform: XML, FXML, and CSS. The current problem is that, in order to develop a new application, the three configuration files must be written by hand, which requires a great deal of development time. The purpose of this study is to reduce the amount of work required to construct a visualization application by providing a serialization function that makes it possible to save the application's layout to an FXML file while the program is running, solely by extracting information from the scene graph of the graphical interface. In this master's thesis, a number of software libraries and APIs that could ease the development of a generic FXML serialization function have been analyzed, and the most promising alternatives according to a number of evaluation metrics have been identified. Using an iterative, design-oriented research method, an algorithm has been designed that is capable of serializing generic Java objects, or Java beans, to FXML. The proposed algorithm has then been evaluated through automated software tests. The evaluation consists of: analysis of the serialization results, design of test cases, and comparison of the expected and actual results using unit tests and measured code coverage. The evaluation shows that the serialization algorithm produces results that correspond to the original FXML files, which were designed to verify different parts of the FXML standard. The proposed serialization algorithm is therefore considered to fulfill the parts of the FXML specification that were required and considered in this thesis.
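The reflective bean-serialization idea is language-agnostic and can be sketched outside JavaFX. The toy below walks an object graph and emits FXML-like markup; the `Button` class and its properties are hypothetical, and a real FXML serializer must additionally handle builders, default properties, and event handlers:

```python
# A sketch of reflective object-to-markup serialization: readable properties
# become attributes, list-valued properties become nested child elements.
import xml.etree.ElementTree as ET

class Button:
    def __init__(self, text, width, children=None):
        self.text = text
        self.width = width
        self.children = children or []

def serialize(obj):
    elem = ET.Element(type(obj).__name__)
    for name, value in vars(obj).items():
        if isinstance(value, list):          # nested scene-graph children
            for child in value:
                elem.append(serialize(child))
        else:                                # simple values become attributes
            elem.set(name, str(value))
    return elem

root = Button("OK", 80, [Button("Cancel", 100)])
print(ET.tostring(serialize(root), encoding="unicode"))
```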
23

GRAPHITE: An Extensible Graph Traversal Framework for Relational Database Management Systems

Paradies, Marcus, Lehner, Wolfgang, Bornhövd, Christof 25 August 2022 (has links)
Graph traversals are a basic but fundamental ingredient for a variety of graph algorithms and graph-oriented queries. To achieve the best possible query performance, they need to be implemented at the core of a database management system that aims at storing, manipulating, and querying graph data. Increasingly, modern business applications demand native graph query and processing capabilities for enterprise-critical operations on data stored in relational database management systems. In this paper we propose an extensible graph traversal framework (GRAPHITE) as a central graph processing component on a common storage engine inside a relational database management system. We study the influence of the graph topology on the execution time of graph traversals and derive two traversal algorithm implementations specialized for different graph topologies and traversal queries. We conduct extensive experiments on GRAPHITE for a large variety of real-world graph data sets and input configurations. Our experiments show that the proposed traversal algorithms differ by up to two orders of magnitude for different input configurations and therefore demonstrate the need for a versatile framework to efficiently process graph traversals on a wide range of different graph topologies and types of queries. Finally, we highlight that the query performance of our traversal implementations is competitive with those of two native graph database management systems.
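Graph traversals of the kind GRAPHITE accelerates reduce to primitives such as breadth-first search. The sketch below is a minimal level-synchronous BFS over a toy adjacency map, standing in for the relational storage engine described in the paper:

```python
# A minimal level-synchronous BFS traversal over an adjacency map.
from collections import deque

def bfs_levels(adj, start):
    """Return {vertex: hop distance} for all vertices reachable from start."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adj.get(v, ()):
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

adj = {1: [2, 3], 2: [4], 3: [4], 4: []}
print(bfs_levels(adj, 1))   # {1: 0, 2: 1, 3: 1, 4: 2}
```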
24

Increasing the Efficiency of CyberKnife Cancer Treatments by Faster Robot Traversal Paths / Förbättring av effektiviteten i CyberKnife-cancerbehandlingar genom snabbare robotvägar

Hagström, Theodor January 2023 (has links)
Cancer remains a significant global challenge, constituting one of the leading causes of death worldwide. With an aging population, the demand for cancer treatments is increasing. Nevertheless, due to technological advancements, cancer mortality rates are declining. This study contributes to these advancements, focusing specifically on radiation therapy, a crucial technology widely used today. Since the invention of radiation therapy, there has been significant research and progress in the field. One such advancement is the CyberKnife® system (Accuray Incorporated, Sunnyvale, CA, USA), a fully robotic radiotherapy device that enables precise patient treatments. Its flexibility allows for the delivery of high-quality plans, but treatment times can be quite long, leading to adverse effects for both patients and healthcare providers. This thesis introduces algorithms aimed at reducing the robot traversal time of the CyberKnife technology. These algorithms are incorporated into an existing optimization framework for treatment planning, with their effectiveness evaluated across various patient cases. Significant reductions in treatment times were observed for some patient cases, while satisfactory plan quality was maintained, primarily due to more efficient traversal paths for the CyberKnife robot. The increased efficiency of the robot can also be leveraged to create treatment plans with more irradiation directions, increasing the treatment quality in some cases. / Cancer remains a significant global challenge and is one of the leading causes of death worldwide. With an aging population, the demand for cancer treatments is increasing. Despite this, cancer mortality is declining thanks to technological advances. This study contributes to these advances, with a particular focus on radiation therapy, a crucial technology in widespread use today. Since the invention of radiation therapy, there has been significant research and development in the field. One such advance is the CyberKnife® system (Accuray Incorporated, Sunnyvale, CA, USA), a fully robotic radiotherapy machine that enables precise treatment of patients. Its flexibility makes it possible to deliver high-quality plans, but treatment times can be long, leading to adverse effects for patients as well as healthcare providers. This thesis introduces algorithms aimed at reducing the traversal time of the CyberKnife robot. These algorithms are integrated into an existing optimization framework for treatment planning, and their effectiveness is evaluated on various patient cases. Significant reductions in treatment times were observed for some patient cases, while satisfactory plan quality was maintained, primarily owing to more efficient traversal paths for the CyberKnife robot. This increased efficiency also enables the creation of treatment plans with more irradiation directions, which improved treatment quality in some cases.
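Ordering the robot's irradiation nodes to shorten traversal is a path-optimization problem related to the traveling salesman problem. The sketch below shows a simple nearest-neighbor ordering heuristic on made-up coordinates; the thesis algorithms are integrated into a full treatment-planning optimizer rather than a standalone routine like this:

```python
# An illustrative nearest-neighbor heuristic for ordering irradiation nodes
# so the robot visits each node via a short path. Coordinates are invented.
import math

def nearest_neighbor_order(points, start=0):
    unvisited = set(range(len(points))) - {start}
    order = [start]
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

nodes = [(0, 0), (5, 1), (1, 1), (4, 4)]
print(nearest_neighbor_order(nodes))  # [0, 2, 1, 3]
```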
25

考慮網站結構之使用者網站漫遊行為的研究 / Efficient Mining of Web Traversal Walks with Site Topology

李華富, Lee, Hua-Fu Unknown Date (has links)
With the development of the World Wide Web, Web sites attract large numbers of users. Analyzing the browsing behavior shared by most users of a site not only helps in designing and updating the site structure, but also enables effective personalized services for users with similar browsing behavior. Existing research on user browsing behavior mostly mines path traversal patterns or sequential page patterns. We therefore propose a new kind of browsing pattern, the Web traversal walk, together with two algorithms, AM and PM, for mining frequent user traversal-walk patterns. Algorithm AM is designed for the case where the data volume is too large to fit into main memory; it follows the spirit of the Apriori algorithm to discover frequent traversal walks. Algorithm PM is designed for the case where the transformed data fits in main memory; it builds a tree structure in memory to further compress the large volume of data in the database and uses this tree to progressively mine all frequent user traversal walks. Under the experimental assumptions, both AM and PM exhibit linear execution efficiency and scalability. / With the progressive expansion in the size and complexity of Web sites on the World Wide Web, much research has been done on the discovery of useful and interesting Web traversal patterns. Most existing approaches focus on mining path traversal patterns or sequential patterns. In this paper, we present a new pattern, the Web traversal walk, for Web traversal-pattern mining. A Web traversal walk is the complete trail of a user's traversal behavior in a single Web site. Web traversal walk mining is more helpful for understanding and predicting Web site access patterns. Two efficient algorithms (AM and PM) are proposed to discover Web traversal walks. Algorithm PM is used when the database fits in main memory, while AM is used when it does not. AM is developed based on the Apriori property to discover all the frequent Web traversal walks from Web logs. In algorithm PM, a tree structure is constructed in memory from the Web logs, and the frequent Web traversal walks are generated from this tree structure. Experimental results show that the proposed methods perform well in efficiency and scalability.
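The level-wise idea behind algorithm AM can be sketched as Apriori-style mining of frequent contiguous sub-walks: if no walk of length k is frequent, no longer walk can be. The toy below uses invented sessions and omits the site-topology checks described in the paper:

```python
# Level-wise mining of frequent contiguous sub-walks from Web sessions.
from collections import Counter

def contiguous_subwalks(session, k):
    # Distinct length-k contiguous sub-walks of one session (set = support count).
    return {tuple(session[i:i + k]) for i in range(len(session) - k + 1)}

def frequent_walks(sessions, min_support):
    frequent, k = {}, 1
    while True:
        counts = Counter()
        for s in sessions:
            for w in contiguous_subwalks(s, k):
                counts[w] += 1
        level = {w: c for w, c in counts.items() if c >= min_support}
        if not level:               # Apriori property: no longer walk can be frequent
            return frequent
        frequent.update(level)
        k += 1

logs = [["A", "B", "C"], ["A", "B", "D"], ["A", "B", "C"]]
print(frequent_walks(logs, min_support=2))
```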
26

Worst-case delay analysis of core-to-IO flows over many-cores architectures / Analyse des délais pire cas des flux entre coeur et interfaces entrées/sorties sur des architectures pluri-coeurs

Abdallah, Laure 05 April 2017 (has links)
Many-core architectures are more attractive than multi-core systems for designing real-time systems, because they are easier to master and can integrate a larger number of applications, potentially of different criticality levels. In embedded real-time systems, these architectures can be used as processing elements within a backbone network, since they provide many Input/Output interfaces such as Ethernet controllers and DDR-SDRAM memory interfaces. Applications of different criticality levels can thus be allocated to them. These applications communicate with one another over the many-core's Network-on-Chip (NoC), and with sensors and actuators via the Ethernet interface. To guarantee the real-time constraints of these applications, worst-case transmission delays (WCTT) must be computed both for flows between cores ("inter-core" flows) and for flows between cores and I/O interfaces ("core-to-I/O" flows). Several NoCs targeting hard real-time systems have been designed using specific hardware extensions. However, none of these extensions are currently available in commercial NoC architectures, which rely on wormhole switching with round-robin arbitration. With this switching strategy, various kinds of interference can occur between flows on the NoC. Moreover, the mapping of critical and non-critical application tasks affects the contention experienced by core-to-I/O flows. These core-to-I/O flows cross two networks of different speeds: the NoC and Ethernet. On the NoC, the allowed packet size is much smaller than the Ethernet frame size, so when an Ethernet frame is transmitted over the NoC it is split into several packets; the frame is removed from the Ethernet interface buffer only once all of its data has been transmitted. Unfortunately, NoC congestion adds further delays to packet transmission, and the Ethernet interface buffer has limited capacity, so this behavior can lead to Ethernet frames being dropped. The idea is therefore to analyze the worst-case transmission delays on the NoC and to reduce the delays of core-to-I/O flows in order to avoid this dropping problem. In this thesis, we show that the pessimism of existing WCTT computation methods and existing mapping strategies leads to dropped Ethernet frames due to internal NoC congestion. Properties of wormhole-switched networks were defined and validated in order to better account for conflicts between flows. A task-mapping strategy that takes I/O communications into account was then proposed, aiming to reduce the contention of flows originating from the I/O interface and hence their WCTTs. The results obtained with the computation method defined in this thesis show that flow WCTT values can be reduced by up to 50% compared with those obtained by existing computation methods. Furthermore, experimental results on real avionics applications show significant improvements in core-to-I/O flow transmission delays, up to 94%, without a significant impact on inter-core flows. These improvements are due to the proposed allocation strategy, which places applications so as to reduce the impact of non-critical flows on critical flows. These WCTT reductions for core-to-I/O flows prevent Ethernet frames from being dropped. / Many-core architectures are more promising hardware for designing real-time systems than multi-core systems, as they should enable easier, better-mastered integration of a higher number of applications, potentially of different criticality levels. In embedded real-time systems, these architectures will be integrated within backbone Ethernet networks, as they mostly provide Ethernet controllers as Input/Output (I/O) interfaces. Thus, a number of applications of different criticality levels can be allocated on the Network-on-Chip (NoC) and be required to communicate with sensors and actuators. However, the worst-case behavior of the NoC for both inter-core and core-to-I/O communications must be established. Several NoCs targeting hard real-time systems, made of specific hardware extensions, have been designed. However, none of these extensions are currently available in commercially available NoC-based many-core architectures, which instead rely on wormhole switching with round-robin arbitration. Using this switching strategy, interference patterns can occur between direct and indirect flows on many-core architectures. Besides, the mapping over the NoC of both critical and non-critical applications has an impact on the network contention these core-to-I/O communications exhibit. These core-to-I/O flows (coming from the Ethernet interface of the NoC) cross two networks of different speeds: NoC and Ethernet. On the NoC, the size of allowed packets is much smaller than the size of Ethernet frames. Thus, once an Ethernet frame is transmitted over the NoC, it is divided into many packets. When all the data corresponding to this frame have been received by the DDR-SDRAM memory on the NoC, the frame is removed from the buffer of the Ethernet interface. In addition, congestion on the NoC, due to wormhole switching, can delay these flows, and the buffer in the Ethernet interface has a limited capacity. This behavior may therefore lead to Ethernet frames being dropped. The idea is thus to analyze the worst-case transmission delays on the NoC and reduce the delays of the core-to-I/O flows. In this thesis, we show that the pessimism of existing Worst-Case Traversal Time (WCTT) computation methods and existing mapping strategies leads to dropped Ethernet frames due to internal congestion in the NoC. Thus, we demonstrate properties of such NoC-based wormhole networks to reduce the pessimism when modeling flows in contention. Then, we propose a mapping strategy that minimizes the contention of core-to-I/O flows in order to solve this problem. We show that WCTT values can be reduced by up to 50% compared with current state-of-the-art real-time packet-schedulability analyses. These results are due to the modeling of the real impact of the flows in contention in our proposed computation method. Besides, experimental results on real avionics applications show significant improvements in core-to-I/O flow transmission delays, up to 94%, without significantly impacting the transmission delays of core-to-core flows. These improvements are due to our mapping strategy, which allocates the applications in such a way as to reduce the impact of non-critical flows on critical flows. These reductions in the WCTT of core-to-I/O flows prevent Ethernet frames from being dropped.
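As a deliberately simplified illustration of a WCTT-style bound for wormhole switching with round-robin arbitration (not the refined analysis developed in the thesis), one can charge, at each hop, one packet service time per directly contending flow; all numbers below are hypothetical:

```python
# Toy per-flow traversal-time bound: base path latency plus one packet
# service time per directly contending flow at each hop. This ignores the
# indirect-interference refinements the thesis addresses.
def wctt(path_hops, hop_latency, packet_time, contenders_per_hop):
    base = path_hops * hop_latency + packet_time
    interference = sum(n * packet_time for n in contenders_per_hop)
    return base + interference

# 4-hop core-to-I/O flow, 2 cycles per hop, 16-cycle packets,
# with 1, 0, 2 and 1 contending flows on successive hops.
print(wctt(4, 2, 16, [1, 0, 2, 1]))  # 8 + 16 + 4*16 = 88 cycles
```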
27

Efficient multi-class object detection with a hierarchy of classes / Détection efficace des objets multi-classes avec une hiérarchie des classes

Odabai Fard, Seyed Hamidreza 20 November 2015 (has links)
In this thesis, we present a new multi-class detection approach based on a hierarchical traversal of simultaneously learned classifiers. For greater robustness and speed, we propose using a tree of object classes. Our detection model is learned by combining ranking and classification constraints in a single optimization problem. Our convex formulation allows a search algorithm to be used to speed up execution. We evaluated our algorithm on the PASCAL VOC benchmarks (2007 and 2010). Compared with the one-versus-all approach, our method improves performance for 20 classes and is 10x faster. / Recent years have witnessed a competition in autonomous navigation for vehicles, boosted by advances in computer vision. On-board cameras are capable of understanding the semantic content of the environment. A core component of such a system is to localize and classify objects in urban scenes, and there is a need for multi-class object detection systems. Designing such an efficient system is a challenging and active research area, with applications in autonomous driving, object search in images, and video surveillance. The number of object classes varies depending on the task. Datasets for object detection started out containing only one class, e.g. the popular INRIA Person dataset; nowadays, we witness an expansion of datasets consisting of more training data and more object classes. This thesis proposes a solution for efficiently learning a multi-class object detector. The task of such a system is to localize all instances of the target object classes in an input image. We distinguish between three major efficiency criteria. First, detection performance measures the accuracy of detection. Second, we strive for low execution times at run-time. Third, we address the scalability of our novel detection framework: the two previous criteria should scale suitably with the number of input classes, and the training algorithm has to take a reasonable amount of time when learning with these larger datasets. Although single-class object detection has seen considerable improvement over the years, it remains a challenge to create algorithms that work well with any number of classes. Most works on this subject extend single-class detectors to work with multiple classes, but remain hardly flexible with respect to new object descriptors and do not consider all three criteria at the same time. Others take a more traditional approach, iteratively executing a single-class detector for each target class, which scales linearly in training time and run-time. To tackle these challenges, we present a novel framework in which, for an input patch during detection, the closest class is ranked highest, and background labels are rejected as negative samples. The detection goal is to find the highest-scoring class. To this end, we derive a convex problem formulation that combines ranking and classification constraints. The accuracy of the system is improved by hierarchically arranging the classes into a tree of classifiers: the leaf nodes represent the individual classes, and the intermediate nodes, called super-classes, recursively group these classes together. The super-classes benefit from the shared knowledge of their descendant classes. All these classifiers are learned in a joint optimization problem along with the previously mentioned constraints.
The increased number of classifiers works against rapid execution times. However, the formulation of the detection goal naturally allows an adapted tree-traversal algorithm to progressively search for the best class while rejecting background samples early in the detection process, consequently reducing the system's run-time. Our system balances detection performance and speed-up. We further experimented with feature reduction to decrease the overhead of applying the high-level classifiers in the tree. The framework is transparent to the object descriptor used; we implemented the histogram of oriented gradients and the deformable part model, both introduced in [Felzenszwalb et al., 2010a]. The capabilities of our system are demonstrated on two challenging datasets containing different object categories that are not necessarily semantically related. We evaluate both the detection performance with different numbers of classes and the scalability with respect to run-time. Our experiments show that this framework fulfills the requirements of a multi-class object detector and highlight the advantages of structuring class-level knowledge.
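The best-first tree traversal with early background rejection can be sketched as follows; the node scoring functions stand in for the learned ranking/classification models, and the class tree is invented for the example:

```python
# Best-first traversal of a classifier tree: score super-classes first,
# descend only into promising branches, prune low-scoring (background)
# branches early. Nodes are (label, score_fn, children) triples.
import heapq
from itertools import count

def detect(root, patch, reject_threshold=0.0):
    tie = count()                      # tie-breaker so the heap never compares nodes
    heap = [(-root[1](patch), next(tie), root)]
    best_label, best_score = None, reject_threshold
    while heap:
        neg, _i, (label, _fn, children) = heapq.heappop(heap)
        score = -neg
        if score <= best_score:        # prune background / dominated branches
            continue
        if not children:               # leaf: a concrete object class
            best_label, best_score = label, score
            continue
        for child in children:
            heapq.heappush(heap, (-child[1](patch), next(tie), child))
    return best_label, best_score     # (None, threshold) if everything rejected

toy = ("root", lambda p: 0.9, [
    ("vehicle", lambda p: 0.8, [
        ("car", lambda p: 0.7, []),
        ("bus", lambda p: 0.2, []),
    ]),
    ("animal", lambda p: 0.1, []),     # pruned once a better leaf is found
])
print(detect(toy, patch=None))         # ('car', 0.7)
```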
28

Efektivní metoda čtení adresářových položek v souborovém systému Ext4 / An Efficient Way to Allocate and Read Directory Entries in the Ext4 File System

Pazdera, Radek January 2013 (has links)
The goal of this thesis is to improve the performance of sequential directory traversal in the ext4 file system. The HTree data structure currently used to implement directories in ext4 handles random accesses to a directory very well, but it is not optimized for sequential traversal. This thesis provides an analysis of the problem. It first studies the implementation of the ext4 file system and of the related Linux kernel subsystems. A set of tests was created to evaluate the performance of the current directory-index implementation. Based on the results of these tests, a solution was designed and subsequently implemented in the Linux kernel. The thesis concludes with an evaluation of the benefits of the new implementation and a comparison of its performance with other Linux file systems.
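The access-pattern problem can be approximated from user space: stat-ing a hashed (HTree) directory in readdir order produces near-random inode-table accesses, while sorting entries by inode number first makes those reads largely sequential. A minimal sketch, assuming a hypothetical directory path:

```python
# Sort directory entries by inode number before stat-ing them, so reads of
# the inode table are mostly sequential rather than random.
import os

def stat_all(path):
    entries = sorted(os.scandir(path), key=lambda e: e.inode())
    return [(e.name, e.stat(follow_symlinks=False).st_size) for e in entries]

print(stat_all("/tmp")[:5])   # path is illustrative
```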
29

Web Penetration Testing: Finding and evaluating vulnerabilities in a web page based on C#, .NET and Episerver

Lundquist Amir, Ameena, Khudur, Ivan January 2022 (has links)
Today's society is highly dependent on functional and secure digital resources, to protect users and to deliver different kinds of services. To achieve this, it is important to evaluate the security of such resources, to find vulnerabilities and handle them before they are exploited. This study aimed to determine whether web applications based on C#, .NET, and Episerver had vulnerabilities, by performing different penetration tests and a security audit. The penetration tests utilized were SQL injection, Cross-Site Scripting, HTTP request tampering, and Directory Traversal attacks. These attacks were performed using Kali Linux and the Burp Suite tool on a specific web application. The results showed that the web application could withstand the penetration tests without disclosing any personal or sensitive information. However, the web application returned many different types of HTTP error status codes, which could potentially reveal areas of interest to a hacker. Furthermore, the security audit showed that it was possible to access the admin page of the web application with nothing more than a username and password. It was also found that having access to the URL of a user's invoice file was all that was needed to access it. / Today's society is highly dependent on functional and secure digital resources, both to protect users and to deliver various kinds of services. To achieve this, it is important to evaluate the security of such resources in order to find vulnerabilities and handle them before they are exploited. This study aims to determine whether web applications based on C#, .NET, and Episerver have vulnerabilities, by performing various penetration tests and a security audit. The penetration tests used were SQL injection, Cross-Site Scripting, HTTP request tampering, and Directory Traversal attacks. These attacks were carried out with the Kali Linux and Burp Suite tools on a specific web application. The results showed that the web application withstood the penetration tests without disclosing any personal or sensitive information. However, the web application returned many different types of HTTP error status codes, which could potentially reveal areas of interest to a hacker. Furthermore, the security audit showed that it was possible to access the web application's admin page with nothing more than a username and password. It also turned out that the URL of a user's invoice file was all that was needed to access it.
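A directory-traversal probe of the kind used in the study can be sketched with the `requests` library; the target URL and parameter name are hypothetical, and such probes must only be run against systems you are authorized to test:

```python
# Send classic directory-traversal payloads and look for tell-tale content
# in the response. Endpoint and parameter are invented for illustration.
import requests

TARGET = "https://example.test/download"      # hypothetical endpoint
payloads = ["../../etc/passwd", "..%2f..%2fetc%2fpasswd"]

for p in payloads:
    # Build the URL by hand so already-encoded payloads are not re-encoded,
    # as params= would do.
    r = requests.get(f"{TARGET}?file={p}", timeout=5)
    # A 200 response containing "root:" would suggest the traversal succeeded.
    print(p, r.status_code, "root:" in r.text)
```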
30

Approximate Action Selection For Large, Coordinating, Multiagent Systems

Sosnowski, Scott T. 27 May 2016 (has links)
No description available.
