161 |
Robust Code Generation using Large Language Models : Guiding and Evaluating Large Language Models for Static Verification – Al-Mashahedi, Ahmad; Ljung, Oliver. January 2024 (has links)
Background: Generative AI has achieved rapid and widespread acclaim in the short period since the inception of recent models that have opened up opportunities not previously possible. Large Language Models (LLMs), a subset of generative AI, have become an essential part of code generation for software development. However, there is always a risk that the generated code does not fulfill the programmer's intent and contains faults or bugs that can go unnoticed. To that end, we propose that verification of generated code should increase its quality and trust. Objectives: This thesis aims to investigate the generation of code that is both functionally correct and verifiable by implementing and evaluating four prompting approaches and a reinforcement learning solution to increase robustness within code generation, using unit-test and verification rewards. Methods: We used a Rapid Literature Review (RLR) and Design Science methodology to get a solid overview of the current state of robust code generation. From the RLR and related works, we evaluated the following four prompting approaches: Base prompt, Documentation prompting, In-context learning, and Documentation + In-context learning on two datasets: MBPP and HumanEval. Moreover, we fine-tuned one model using Proximal Policy Optimization (PPO) for this novel task. Results: We measured the functional correctness and static verification success rates, among other metrics, for the four proposed approaches on eight model configurations, including the PPO fine-tuned LLM. Our results show that for the MBPP dataset, on average, In-context learning had the highest functional correctness at 29.4% pass@1, Documentation prompting had the highest verifiability at 8.48% verifiable@1, and In-context learning had the highest rate of functionally correct, verifiable code at 3.2% pass@1 & verifiable@1. Moreover, the PPO fine-tuned model showed an overall increase in performance across all approaches compared to the pre-trained base model.
Conclusions: We found that In-context learning on the PPO fine-tuned model yielded the best overall results across most metrics compared to the other approaches. The PPO fine-tuned model with In-context learning resulted in 32.0% pass@1, 12.8% verifiable@1, and 5.0% pass@1 & verifiable@1. Documentation prompting was better for verifiable@1 on MBPP, but it did not perform as well on the other metrics. Documentation prompting + In-context learning sat between Documentation prompting and In-context learning performance-wise, while Base prompt performed the worst overall. For future work, we envision several improvements to PPO training, including but not limited to training on Nagini documentation and utilizing expert iteration to create supervised fine-tuning datasets to improve the model iteratively. / Background: Generative AI has achieved rapid and widespread popularity in the short time since the launch of language and image models that have opened up new possibilities. Large Language Models (LLMs), a branch of generative AI, have become an important part of software development for code generation. However, there is always a risk that the generated code does not fulfill the programmer's intent and contains faults or bugs that may go undetected. To counter this, we propose formal verification of the generated code, which should increase its quality and thereby the trust placed in it. Objectives: The aim of this thesis is to investigate the generation of code that is both functionally correct and verifiable by implementing and evaluating four prompting approaches as well as a novel reinforcement learning solution, in order to increase robustness in code generation through unit-test and verification rewards. Methods: We used a Rapid Literature Review (RLR) and Design Science methodology to get a solid overview of the current state of robust code generation.
From the RLR and related works, we evaluated the following four prompting approaches: Base prompt, Documentation prompting, In-context learning, and Documentation + In-context learning. In addition, we fine-tuned a model with Proximal Policy Optimization (PPO) for this task. Results: We measured functional correctness and verification success statistics, along with other metrics, for the four proposed prompting approaches on eight model configurations, including the PPO fine-tuned LLM. Our results on the MBPP dataset show that, on average, In-context learning had the highest functional correctness at 29.4% pass@1, Documentation prompting had the highest verifiability at 8.48% verifiable@1, and In-context learning had the most functionally correct verifiable code at 3.2% pass@1 & verifiable@1. In addition, the PPO fine-tuned model showed consistent improvements over the pre-trained base model. Conclusions: We found that In-context learning with the PPO fine-tuned model yielded the best overall results across most metrics compared to the other approaches. The PPO fine-tuned model with In-context learning resulted in 32.0% pass@1, 12.8% verifiable@1, and 5.0% pass@1 & verifiable@1. Documentation prompting was better for verifiable@1, but it did not perform as well on the other metrics. Documentation + In-context learning fell between Documentation prompting and In-context learning in performance. Base prompt performed worst of the evaluated approaches. For future work, we envision several improvements to the training of the PPO model. These include, but are not limited to, training on Nagini documentation and the use of expert iteration to build a dataset for iteratively improving the model.
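The pass@1 and verifiable@1 rates reported above are instances of the pass@k family of metrics. The thesis does not publish its evaluation code, so the following is an illustrative sketch of the standard unbiased pass@k estimator rather than the authors' implementation:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: given n generated samples of which
    c pass (all unit tests, or the static verifier for verifiable@k),
    estimate the probability that at least one of k randomly drawn
    samples passes: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer failing samples than draws: some draw must pass
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With k = 1 this reduces to the plain fraction of passing samples, c / n.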
|
162 |
A requirements engineering approach for the development of web applications – Valderas Aranda, Pedro José. 07 May 2008 (has links)
One of the most important problems that Web Engineering set out to solve when it emerged was the lack of techniques for specifying the requirements of Web applications.
Although several proposals have been presented that provide methodological support for the development of Web applications, most of them focus mainly on defining conceptual models that represent a Web application in an abstract way; the activities related to requirements specification are only vaguely addressed by these proposals. Moreover, traditional requirements specification techniques do not provide adequate support for characteristics specific to Web applications, such as navigation.
This thesis presents a Requirements Engineering approach for specifying the requirements of Web applications. This approach includes mechanisms based on the task metaphor to specify not only the requirements related to the structural and behavioural aspects of a Web application, but also those related to navigational aspects.
However, a requirements specification is of little use if we are not able to transform it into the appropriate software artefacts. This is a classic problem that the Software Engineering community has tried to solve since its beginnings: how to move from the problem space (user requirements) to the solution space (design and implementation) following a clear and precise methodological guide.
This thesis presents a strategy that, based on graph transformations and supported by a set of tools, allows us to automatically transform task-based requirements specifications into Web conceptual schemas. Furthermore, this strategy has been integrated with a Web Engineering method with automatic code generation capabilities. This integration allows us to provide a mechanism / Valderas Aranda, PJ. (2008). A requirements engineering approach for the development of web applications [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1997
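As an illustration only (not this thesis's actual transformation rules), the spirit of mapping a task-based requirements specification onto a navigational schema can be sketched as a tiny model-to-model transformation; the dictionary shape below is a made-up stand-in for the real task metamodel:

```python
def tasks_to_navigation(tasks):
    """Derive one navigational node per task, linking it to the nodes
    of its subtasks (toy model-to-model transformation sketch)."""
    return [{"node": t["name"], "links": list(t.get("subtasks", []))}
            for t in tasks]
```

A real graph-transformation engine would match structural patterns rather than relabel dictionaries, but the input/output relationship is the same in kind.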
|
163 |
A web engineering approach for the development of business process-driven web applications – Torres Bosch, María Victoria. 04 August 2008 (links)
Nowadays, the World Wide Web has become the most common platform for developing corporate applications. These applications are known as Web applications and, among other functions, must support the Business Processes (BPs) defined by the corporations.
This thesis presents a Web Engineering method that enables the systematic modelling and construction of Web applications that support the execution of BPs. In this work, BPs are conceived from a broader point of view than that taken by other Web Engineering methods: the processes addressed include both short and long processes. Roughly speaking, this broader conception makes it possible to consider processes that involve different participants (people and/or systems) who cooperate to achieve a particular goal. In addition, depending on the type of process being executed (short or long), the user's interaction with the system must be adapted to each case.
The method presented in this thesis has been developed on the basis of Model-Driven Software Development. Accordingly, the method proposes a set of models for representing the different aspects that characterize Web applications supporting the execution of BPs. Once the system has been represented in the corresponding models, applying model transformations yields other models (model-to-model transformations) and even the code that represents the modelled system in terms of an implementation language (model-to-text transformations).
The method proposed in this thesis is supported by a tool called BIZZY. This tool has been developed in the Eclipse environment and covers the development process from the modelling phase to code generation. In particular, the generated code targets the Tapestry Web framework (a framework for building Web applications in Java) and WS-BPEL, / Torres Bosch, MV. (2008). A web engineering approach for the development of business process-driven web applications [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/2933
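The model-to-text step described above can be pictured with a toy generator. The entity-model shape and the emitted Java skeleton here are invented for illustration; they are not BIZZY's actual templates:

```python
def model_to_text(entity):
    """Emit a Java class skeleton from a simple entity model
    (toy model-to-text transformation)."""
    fields = "\n".join(f"    private {jtype} {name};"
                       for name, jtype in entity["attributes"])
    return f"public class {entity['name']} {{\n{fields}\n}}"
```

Real model-to-text tools use template languages over a metamodel, but the input (model) and output (implementation-language text) have exactly this relationship.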
|
164 |
Code Generation from Large API Specifications with Open Large Language Models : Increasing Relevance of Code Output in Initial Autonomic Code Generation from Large API Specifications with Open Large Language Models – Lyster Golawski, Esbjörn; Taylor, James. January 2024 (has links)
Background. In software systems defined by extensive API specifications, autonomic code generation can streamline the coding process by replacing repetitive, manual tasks such as creating REST API endpoints. Using large language models (LLMs) to generate source code comprehensively on the first try requires refined prompting strategies to ensure output relevancy, a challenge that grows as API specifications become larger. Objectives. This study aims to develop and validate a prompting orchestration solution for LLMs that generates more relevant, non-duplicated code than a single comprehensive prompt, without refactoring previous code. Additionally, the study evaluates the practical value of the generated code for developers at Ericsson familiar with the target application that uses the same API specification. Methods. Employing a prototyping approach, we develop a solution that produces more relevant, non-duplicated code than a single prompt with locally hosted LLMs for the target API at Ericsson. We perform a controlled experiment running the developed solution and a single prompt to collect the outputs. Using the results, we conduct interviews with Ericsson developers about the value of the AI-generated code. Results. The study identified a prompting orchestration method that generated 427 relevant lines of code (LOC) on average in the best-case scenario, compared to 66 LOC with a single comprehensive prompt. Additionally, 66% of the developers interviewed preferred using the AI-generated code as a starting point over starting from scratch when developing applications for Ericsson, and 66% preferred starting from the AI-generated code over code generated from the same API specification via Swagger Codegen. Conclusions.
With the right prompting orchestration method, locally hosted LLMs can generate substantially more relevant code from large API specifications, without refactoring of previously generated code, than a single comprehensive prompt. The value of the generated code is that it can currently serve as a good starting point for further software development.
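One way to picture such an orchestration is to split a large OpenAPI-style specification into one prompt per endpoint instead of issuing a single comprehensive prompt. The spec shape below follows OpenAPI's paths object; the prompt template is a hypothetical placeholder, not the prompts used in the study:

```python
def per_endpoint_prompts(spec: dict, template: str) -> list:
    """Build one focused prompt per (path, method) pair in an
    OpenAPI-style spec, so each LLM call sees only one endpoint."""
    prompts = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            prompts.append(template.format(method=method.upper(),
                                           path=path,
                                           summary=op.get("summary", "")))
    return prompts
```

Each prompt can then be sent in sequence, with previously generated endpoints excluded, which is one plausible way to avoid the duplicated output a single large prompt tends to produce.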
|
165 |
Formatabhängige hochdynamische Bewegungen mit Servoantrieben / Automatic SPS source code generation for high-speed motions – Nolte, Rainer. 08 June 2017 (links) (PDF)
OPTIMUS MOTUS (R) is a graphical editor for modelling, optimizing and testing complex motion sequences, and finally exporting them as function blocks for the PLC world. PLC motion programs can thus be developed and modified considerably faster than with manual programming. The motion quality known from cam mechanism engineering thereby also becomes available for servo drives. Debugging is eliminated because the source code is generated automatically.
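As a flavour of the cam-technique motion quality referred to above (an illustrative standard profile, not OPTIMUS MOTUS itself), the classic 3-4-5 polynomial motion law gives zero velocity and zero acceleration at both ends of a stroke, which is what makes such profiles gentle on servo drives:

```python
def poly345(x: float) -> float:
    """Normalized 3-4-5 polynomial motion law on x in [0, 1]:
    s(0) = 0, s(1) = 1, with s'(x) and s''(x) vanishing at both ends."""
    return 10 * x**3 - 15 * x**4 + 6 * x**5
```

A motion designer scales x by the cycle time and s by the stroke length to obtain the physical displacement profile.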
|
166 |
ANNarchy: a code generation approach to neural simulations on parallel hardware – Vitay, Julien; Dinkelbach, Helge Ülo; Hamker, Fred Henrik. 07 October 2015 (links) (PDF)
Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which makes it easy to define and simulate rate-coded and spiking networks, as well as combinations of both. The Python interface has been designed to be close to the PyNN interface, while neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that efficiently performs the simulation on the chosen parallel hardware (multi-core system or graphics processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions.
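The kind of rate-coded dynamics such a simulator compiles to C++ can be sketched with a single leaky rate neuron integrated by the explicit Euler method. This is a toy stand-in for the generated code, not ANNarchy's actual output:

```python
def simulate_rate_neuron(inp: float, tau: float = 10.0,
                         dt: float = 1.0, steps: int = 200) -> float:
    """Euler integration of tau * dr/dt = -r + inp: the firing
    rate r converges exponentially to the input drive inp."""
    r = 0.0
    for _ in range(steps):
        r += dt / tau * (-r + inp)
    return r
```

The point of code generation is precisely that users write only the equation (here tau * dr/dt = -r + inp) and the simulator produces the optimized loop.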
|
167 |
Desenvolvimento formal de aplicações para smartcards (Formal development of smartcard applications) – Gomes, Bruno Emerson Gurgel. 01 June 2012 (has links)
Previous issue date: 2012-06-01 / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Smart card applications represent a growing market. Usually this kind of application manipulates and stores critical information that requires some level of security, such as financial or confidential information. The quality and trustworthiness of smart card software can be improved through a rigorous development process that embraces formal techniques of software engineering. In this work we propose the BSmart method, a specialization of the B formal method dedicated to the development of smart card Java Card applications. The method describes how a Java Card application can be generated from a B refinement process of its formal abstract specification. The development is supported by a set of tools, which automates the generation of some required refinements and the translation to the Java Card client (host) and server (applet) applications. With respect to verification, the method's development process was formalized and verified in the B method, using the Atelier B tool [Cle12a]. We emphasize that the Java Card application is translated from the last stage of refinement, named implementation. This translation process was specified in ASF+SDF [BKV08], describing the grammar of both languages (SDF) and the code transformations through rewrite rules (ASF). This specification was an important support during the translator development and contributes to the tool documentation. We also emphasize the KitSmart library [Dut06, San12], an essential component of BSmart, containing models of all 93 classes/interfaces of the Java Card API 2.2.2, of Java/Java Card data types, and of machines that can be useful for the specifier but are not part of the standard Java Card library. In order to validate the method, its tool support and KitSmart, we developed an electronic passport application following the BSmart method. We believe that the results reached in this work contribute to Java Card development, allowing the generation of complete (client and server components) Java Card applications that are less subject to errors. / Smart card applications represent a market that grows every year. Typically, these applications manipulate and store information that requires security guarantees, such as monetary values or confidential information. The quality and security of smart card software can be improved through a rigorous development process employing formal software engineering techniques. In this work we propose the BSmart method, a specialization of the formal method B dedicated to the development of smart card applications in the Java Card language. The method describes, in a series of steps, how a smart card application can be generated from refinements of its formal specification. The development is supported by a set of tools that automate the generation of part of the refinements and the translation into the Java Card client (host) and server (applet) applications. The specification and refinement process described in the method was itself formalized and verified using the B method, with the aid of the Atelier B tool [Cle12a]. The Java Card application is translated from the last refinement step, called the implementation. The specification of this translation was written in the ASF+SDF language [BKV08]: first the grammars of the B and Java languages were described (SDF), and in a later stage the transformations from B to Java Card were specified through term rewriting rules (ASF). This approach was an important aid during the translation process, besides serving the purpose of documenting it. The KitSmart library [Dut06, San12] deserves mention as an essential component of the BSmart method: it includes B models of all 93 classes/interfaces of the Java Card API version 2.2.2, of the Java and Java Card data types, and of machines that may be useful to the specifier but are not present in the standard API. To validate the method, its tool set and the KitSmart library, an electronic passport application was developed following the BSmart method. The results achieved in this work contribute to smart card development by enabling the generation of complete (client and server) Java Card applications that are less subject to faults.
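The flavour of the ASF+SDF rewrite rules can be conveyed with a toy rule that rewrites a B operation signature into a Java method skeleton. The dictionary shape and the emitted text are illustrative inventions, not BSmart's real translation rules:

```python
def rewrite_operation(op: dict) -> str:
    """Toy term-rewriting step: map a B operation name and its
    parameters onto a Java method skeleton."""
    params = ", ".join(f"int {p}" for p in op["params"])
    return f"public void {op['name']}({params}) {{ /* from refinement */ }}"
```

Real ASF rules pattern-match over the SDF parse trees of both languages rather than formatting strings, but the direction of the transformation is the same.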
|
168 |
Sub-Polyhedral Compilation using (Unit-)Two-Variables-Per-Inequality Polyhedra / Compilation sous-polyédrique reposant sur des systèmes à deux variables par inégalité – Upadrasta, Ramakrishna. 13 March 2013 (has links)
Our study of sub-polyhedral compilation is dominated by the introduction of the notion of sub-polyhedral affine scheduling, for which we propose a technique using (U)TVPI sub-polyhedra. In this framework, we introduce algorithms capable of constructing under-approximations of the constraint systems resulting from affine scheduling problems. This technique relies on simple polynomial-time algorithms for approximating an arbitrary polyhedron by a (U)TVPI polyhedron. Our algorithms are generic enough to apply to many scheduling, parallelization and loop optimization problems, reducing their time complexity to polynomial functions. We also introduce a code generation method using sub-polyhedral algorithms, taking advantage of the low complexity of (U)TVPI sub-polyhedra. In this framework, we show how to reduce the complexity associated with the most popular code generators, bringing several exponential factors down to polynomial functions. Many of these techniques are evaluated experimentally. For this, we implemented a modified version of the PLuTo compiler, capable of parallelizing and optimizing loop nests for multi-core architectures using affine transformations, in particular tiling. We show that a majority of the kernels of the Polybench (2.0) suite can be handled by our scheduling technique while preserving the feasibility of the polyhedra during under-approximation. Using systems approximated by sub-polyhedra leads to asymptotic gains in complexity, which translate into significant reductions in compilation time compared to a reference linear programming solver.
We also verify that the code generated by our sub-polyhedral parallelization prototype is competitive with the performance of the code generated by PLuTo. / The goal of this thesis is to design algorithms that run with better complexity when compiling or parallelizing loop programs. The framework within which our algorithms operate is the polyhedral model of compilation, which has been successful in the design and implementation of complex loop nest optimizers and parallelizing compilers. The algorithmic complexity and scalability limitations of this framework remain one important weakness. We address them by introducing sub-polyhedral compilation using (Unit-)Two-Variables-Per-Inequality or (U)TVPI polyhedra, namely polyhedra with restricted constraints of the type a·x_i + b·x_j ≤ c (±x_i ± x_j ≤ c). A major focus of our sub-polyhedral compilation is the introduction of sub-polyhedral scheduling, where we propose a technique for scheduling using (U)TVPI polyhedra. As part of this, we introduce algorithms that can be used to construct under-approximations of the systems of constraints resulting from affine scheduling problems. This technique relies on simple polynomial-time algorithms to under-approximate a general polyhedron by a (U)TVPI polyhedron. These under-approximation algorithms are generic enough that they can be used for many kinds of loop parallelization scheduling problems, reducing each of their complexities to asymptotically polynomial time. We also introduce sub-polyhedral code generation, where we propose algorithms that exploit the improved complexities of (U)TVPI sub-polyhedra in polyhedral code generation. Here we show that the exponential complexities associated with the widely used polyhedral code generators can be reduced to polynomial time using (U)TVPI sub-polyhedra. The sub-polyhedral scheduling techniques are evaluated in an experimental framework.
For this, we modify the state-of-the-art PLuTo compiler, which can parallelize for multi-core architectures using permutation and tiling transformations. We show that, using our scheduling technique, the under-approximations yield polyhedra that are non-empty for 10 out of 16 benchmarks from the Polybench (2.0) kernels. Solving the under-approximated system leads to asymptotic gains in complexity, and shows practically significant improvements when compared to a traditional LP solver. We also verify that code generated by our sub-polyhedral parallelization prototype matches the performance of PLuTo-optimized code when the under-approximation preserves feasibility.
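The (U)TVPI restriction described above (at most two variables per inequality, with unit coefficients in the UTVPI case) is a purely syntactic property of the constraint matrix, so it is cheap to check. A minimal sketch, taking coefficient rows only (constants omitted):

```python
def is_tvpi(rows):
    """A constraint a·x <= c is TVPI iff its coefficient row has at
    most two nonzero entries; a system is TVPI iff every row is."""
    return all(sum(1 for a in row if a != 0) <= 2 for row in rows)

def is_utvpi(rows):
    """UTVPI additionally restricts every coefficient to -1, 0 or +1."""
    return is_tvpi(rows) and all(a in (-1, 0, 1) for row in rows for a in row)
```

The thesis's contribution is not this check but the under-approximation: replacing rows that fail it with (U)TVPI rows that cut away part of the polyhedron while hopefully keeping it non-empty.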
|
169 |
Řízení stejnosměrného bezkartáčového motoru za podmínek ztráty napájecího napětí / Brushless dc motor control under power loss condition – Šedivý, Jozef. January 2014 (links)
This master's thesis deals with the implementation of a safety function for an electric actuator, which consists in controlling a BLDC motor after a loss of supply voltage, when the actuator is driven by a built-in spring. The entire motor control is designed in the Matlab-Simulink environment using a technique called model-based design. Source code was then obtained through automatic code generation, deployed in a real actuator, and tested under real conditions. The aim of these tests was to verify the practical feasibility of deploying the developed algorithms in real, commercially available products.
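A discrete PI controller with output saturation is the kind of elementary block that model-based design tools turn into embedded C code. The following is a generic sketch of such a block, not the thesis's actual controller:

```python
def make_pi(kp: float, ki: float, dt: float, limit: float):
    """Return a stateful PI step function: u = kp*e + ki*integral(e),
    with the output clamped to [-limit, limit]."""
    integral = 0.0
    def step(error: float) -> float:
        nonlocal integral
        integral += error * dt
        u = kp * error + ki * integral
        return max(-limit, min(limit, u))
    return step
```

A production controller would also need integrator anti-windup when the output saturates; code generators emit that logic from the corresponding model blocks.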
|
170 |
Home Devices Mediation using ontology alignment and code generation techniques / La médiation d'interaction entre les équipements domestiques basés sur l'alignement d'ontologies et la génération du code – El Kaed, Charbel. 13 January 2012 (links)
Plug-and-play protocols coupled with software architectures are making our homes ubiquitous. Home devices that support these protocols can be automatically detected, configured and invoked for a given task. Currently, several protocols coexist in the home, but interactions between devices cannot be put into action unless the devices support the same protocol. In addition, the applications that orchestrate these devices must know the names of the services and devices in advance. Yet each protocol defines a standard profile per device type. Consequently, two devices of the same type with the same functions but supporting different protocols publish interfaces that are often semantically equivalent but syntactically different, which limits applications to interacting with a single similar service. In this work, we present a method based on ontology alignment and automatic proxy generation to achieve dynamic service adaptation. / Ubiquitous systems imagined by Mark Weiser are emerging thanks to the development of embedded systems and plug-n-play protocols like Universal Plug and Play (UPnP), Intelligent Grouping and Resource Sharing (IGRS), the Devices Profile for Web Services (DPWS) and Apple Bonjour. Such protocols follow the service-oriented architecture (SOA) paradigm and allow automatic device and service discovery in a home network. Once devices are connected to the local network, applications deployed, for example, on a smart phone, a PC or a home gateway discover the plug-n-play devices and act as control points. The aim of such applications is to orchestrate the interactions between the devices, such as lights, TVs and printers, and their corresponding hosted services to accomplish a specific daily human task like printing a document or dimming a light.
Devices supporting a plug-n-play protocol announce their hosted services, each in its own description format and data content. Even similar devices supporting the same services represent their capabilities in a different representation format and content. Such heterogeneity, along with the diversity of the protocol layers, prevents applications from using any available equivalent device on the network to accomplish a specific task. For instance, a UPnP printing application cannot interact with an available DPWS printer on the network to print a document. Designing applications to support multiple protocols is time consuming, since developers must implement the interaction with each device profile and its own data description. Additionally, the deployed application must use multiple protocol stacks to interact with the devices. Moreover, application vendors and telecom operators need to orchestrate devices through a common application layer, independently of the protocol layers and the device descriptions. To accomplish interoperability between plug-n-play devices and applications, we propose a generic approach which consists in automatically generating proxies based on an ontology alignment. The alignment contains the correspondences between two equivalent device descriptions. These correspondences represent the proxy behaviour, which is used to provide interoperability between an application and a plug-n-play device. For instance, the generated proxy announces itself on the network as a standard UPnP printer and controls the DPWS printer. Consequently, the UPnP printing application interacts transparently with the generated proxy, which adapts and transfers the invocations to the real DPWS printer. We implemented a prototype as a proof of concept and evaluated it on several real, equivalent UPnP and DPWS devices.
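The generated proxy described above can be pictured as a thin forwarding object driven by an alignment table. Class and operation names below are invented for illustration and are not the thesis's actual generated code:

```python
class GeneratedProxy:
    """Expose one protocol's operation names (e.g. UPnP's) and forward
    each call to the equivalent operation on the real device (e.g. a
    DPWS printer), using ontology-alignment correspondences."""
    def __init__(self, alignment: dict, target):
        self._alignment = alignment   # e.g. {"Print": "PrintJob"}
        self._target = target
    def invoke(self, operation: str, *args):
        return getattr(self._target, self._alignment[operation])(*args)
```

In the real system the proxy is generated and deployed automatically once an alignment between the two device descriptions has been computed; the application never sees the protocol mismatch.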
|