  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Robotic 3D Printing of sustainable structures / Robot 3D-printing med hållbara strukturer

Alkhatib, Tammam January 2023 (has links)
This bachelor thesis aims to integrate and evaluate a 3D printing robotic cell at the Smart Industry Group (SIG) lab at Linnaeus University (LNU). A sustainable structure consisting of wood fiber polymer composites was 3D printed with an industrial robot. Sustainable 3D printing material can be recycled or burned for energy afterwards; the material used in this thesis stems from certified forests. The objective is to utilise this technology in manufacturing courses and research projects at the SIG lab at LNU. This objective is achieved by creating an operation manual and a video tutorial in this thesis. The integration and evaluation process involves offline robot programming, simulation, and practical experiments on the 3D printing robotic cell.
162

Simulating Large Scale Memristor Based Crossbar for Neuromorphic Applications

Uppala, Roshni 03 June 2015 (has links)
No description available.
163

On the Viability of Digital Cash in Offline Payments

Holgersson, Joakim, Enarsson, John January 2022 (has links)
Background. As the financial systems around the world become more digitized with the use of card and mobile payments, we see a decreasing willingness to accept cash payments in many countries. These digital payments require a stable network connection to be processed in real time. In rural areas, or during times of crisis when network connections may be unavailable, there is a need for a payment method that works offline. Paper cash is preferred by some because of its anonymous nature, and with the realization of blind signatures the concept of digital cash was constructed. Digital cash is a digitized version of traditional paper cash that values payer privacy and can be spent while both parties are offline, using smart cards or other mobile devices. Unlike physical paper cash, digital cash is, without additional mitigations, easily copied and forged, as it consists only of information. Objectives. The objective of this work is to determine the viability of digital cash as a replacement for or complement to today's paper cash. The results describe our findings on what technologies are necessary to securely exchange digital cash offline, as well as whether arbitrary payment amounts can be exchanged efficiently and between users of different banks. Methods. This work consists of threat modeling to identify the technologies necessary to securely exchange digital cash and what they accomplish, together with an extensive literature study and theoretical evaluations of state-of-the-art digital cash schemes. Results. The results show that digital cash can be constructed and exchanged securely, with various optional features that make it more or less resemble its physical counterpart. With payer anonymity at the center and the inevitable risk of fraudulent users double-spending coins, the identified technologies do their best to reduce the cost-effectiveness of double-spending.
Cryptographic solutions and hard-to-tamper-with hardware are the two key technologies for this. Advancements in cryptography have enabled more efficient storage and spending of digital cash with compact wallets and divisible digital cash. Conclusions. Digital cash has been a theoretical concept for almost four decades and is becoming more secure and efficient as it is reconstructed using more modern cryptographic solutions. Depending on the requirements of the payment system, some schemes support arbitrary-amount payments in constant time, payments between users of different banks, or transferability, and some can run efficiently on privacy-assuring hard-to-tamper-with hardware. No scheme can do it all, but this work shines a light on some important considerations useful for a future practical implementation of digital cash. / Bakgrund. Samtidigt som betalningar sker mer digitalt med hjälp av betalkort och mobiltelefoner ser vi hur färre försäljare accepterar kontanter som betalningsmedel. De här digitala betalningarna kräver stabil nätverksuppkoppling för att genomföras, och på avlägsna platser och under krissituationer kan den här uppkopplingen bli otillgänglig - vilket leder till ett behov av offline-betalningar. Kontanter används av några på grund av deras anonyma natur, och med förverkligandet av blinda signaturer växte konceptet om digitala kontanter fram. Digitala kontanter är som det låter: en digital variant av kontanter som försöker uppnå samma anonymitet samt kunna överföras medan båda parter är offline, med hjälp av betalkort eller andra mobila enheter. Till skillnad från fysiska kontanter kan dessa digitala mynt utan speciella åtgärder lätt kopieras och förfalskas eftersom de enbart består av information. Syfte. Syftet med det här arbetet är att ta reda på huruvida digitala kontanter kan ersätta eller fungera som ett komplement till dagens kontanter, samt ta reda på vilka möjligheter det finns för en implementation av ett sådant system idag.
Resultatet ska beskriva våra upptäckter om vilka tekniker som behövs för att på ett säkert sätt kunna överföra digitala kontanter offline, samt våra upptäckter om huruvida godtyckliga summor kan överföras på ett effektivt sätt och mellan kunder av olika banker. Metod. Metoden vi använder består av att konstruera en hotmodell för att identifiera nödvändiga tekniker för att på ett säkert sätt kunna överföra digitala kontanter och kunna redogöra för vilka funktioner de uppfyller. Arbetet innefattar även en omfattande litteraturstudie och teoretiska utvärderingar av toppmoderna digitala kontantsystem. Resultat. Resultatet visar att digitala kontanter kan konstrueras för att överföras säkert med flera frivilliga funktioner som gör att överföringarna mer eller mindre liknar sin fysiska motsvarighet. Genom att värna om ärliga betalares anonymitet och med en oundviklig risk för dubbelspendering gör de identifierade teknikerna sitt bästa för att minska betalningstider och incitamentet att dubbelspendera, med hjälp av kryptering och speciell svårmanipulerad hårdvara. Slutsatser. Digitala kontanter har funnits som ett teoretiskt koncept i snart fyra decennier och blir snabbt säkrare samt effektivare när de byggs om och baseras på nya krypteringslösningar. Beroende på vilka krav man har på sitt betalningssystem kan de byggas för att överföra godtyckliga summor i konstant tidskomplexitet, mellan användare av olika banker, överföras flera gånger likt vanliga kontanter eller med hjälp av svårmanipulerad hårdvara. Inget system kan göra allt idag, och det här arbetet kan hjälpa den som vill bygga ett produktionssystem med vilka avväganden som kan göras.
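The blind signatures the abstract credits with making digital cash possible can be illustrated with Chaum's classic RSA construction: the bank signs a coin without ever seeing its serial number. A toy sketch with tiny fixed primes and no padding or hashing, so it is purely illustrative and in no way secure:

```python
# Toy RSA blind signature (Chaum-style), the primitive behind early digital
# cash. Illustration only: tiny primes, no padding/hashing -- NOT secure.
import math
import secrets

# Bank's toy RSA key: modulus n = p*q, public exponent e, private exponent d.
p, q = 1009, 1013
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

def blind(m, n, e):
    """Payer blinds coin serial m with a random factor r, gcd(r, n) == 1."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            return (m * pow(r, e, n)) % n, r

def sign(blinded, n, d):
    """Bank signs the blinded message; it never learns m."""
    return pow(blinded, d, n)

def unblind(s_blind, r, n):
    """Payer removes the blinding factor, leaving a valid signature on m."""
    return (s_blind * pow(r, -1, n)) % n

def verify(m, s, n, e):
    return pow(s, e, n) == m % n

m = 4242                          # coin serial number (must be < n)
b, r = blind(m, n, e)
s = unblind(sign(b, n, d), r, n)  # valid bank signature on m
assert verify(m, s, n, e)
```

The unblinding step works because (m * r^e)^d = m^d * r (mod n), so dividing out r leaves m^d, an ordinary RSA signature on the serial the bank never saw.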
164

The trends in the offline password-guessing field : Offline guessing attack on Swedish real-life passwords / Trenderna inom fältet för offline-gissning av lösenord : Offline-gissningsattack på svenska verkliga lösenord

Zarzour, Yasser, Alchtiwi, Mohamad January 2023 (has links)
Password security is one of the most critical aspects of IT security, as password-based authentication is still the primary authentication method. Unfortunately, our passwords are subject to different types of weaknesses and various types of password-guessing attacks. The first objective of this thesis is to provide a general overview of the trends in offline password-guessing tools, methods, and techniques. The study shows that the most cited tools are Hashcat, John the Ripper, Ordered Markov ENumerator (OMEN), and PassGAN. Methods are evolving and becoming more sophisticated with the emergence of deep learning and neural networks. Unlike methods and tools, techniques have not undergone significant development; dictionary and rule-based attacks remain the most used techniques. The second objective of this thesis is to explore to what extent Swedish personal names are used in real-life passwords, and an experiment is conducted for this purpose. The experiment results show that about 26% of Swedish users use their personal names when they create passwords, making those passwords vulnerable to easy guessing by password-guessing tools. Furthermore, a simple analysis of the resulting password recovery file is performed in terms of password length and complexity. The resulting numbers show that more than half of the guessed passwords are shorter than eight characters, indicating non-compliance with the recommendations from standards organizations. In addition, the results show a weak mix of letters, digits, and special characters, indicating that many Swedish users do not maintain sufficient diversity when composing their passwords. This means less password complexity, making passwords an easy target to guess. This study may serve as a quick reference for getting an overview of trends in the password-guessing field.
On the other hand, the resulting rate of Swedish personal names in Swedish password leaks may draw the attention of actors active in information security to improving password security measures in Sweden. / Lösenordssäkerhet är en av de mest kritiska aspekterna av IT-säkerhet eftersom lösenordsbaserad autentisering fortfarande är den viktigaste metoden för autentisering. Tyvärr är våra lösenord föremål för olika typer av svagheter och olika typer av lösenordsgissningsattacker. Det första syftet med detta arbete är att ge en allmän uppfattning om trenderna inom verktyg, metoder och tekniker angående offline lösenordsgissning. Studien visar att Hashcat, John the Ripper, Ordered Markov ENumerator (OMEN) och PassGAN är de mest citerade verktygen. Metoderna utvecklas alltmer och blir mer sofistikerade genom framväxande “deep learning” och neurala nätverk. Till skillnad från metoder och verktyg är tekniker inte föremål för stor utveckling; “dictionary”-attacker och “rule-based”-attacker är överst bland använda tekniker. Det andra syftet är att utforska i vilken utsträckning svenska personnamn används i verkliga lösenord, och ett experiment genomförs för detta ändamål. Resultaten av experimentet visar att cirka 26 % av svenska användare använder sina personnamn när de skapar lösenord, vilket gör lösenorden sårbara för enkel gissning med hjälp av lösenordsgissningsverktyg. Dessutom utförs en enkel analys av den resulterande lösenordsåterställningsfilen vad gäller lösenordslängd och komplexitet. De resulterande siffrorna visar att mer än hälften av de gissade lösenorden är kortare än åtta tecken, vilket är en indikation på att de inte följer rekommendationerna från standardorganisationer. Resultaten visar också en svag kombination av bokstäver, siffror och specialtecken, vilket indikerar att många svenskar inte upprätthåller tillräcklig variation när de komponerar sina lösenord.
Detta innebär mindre lösenordskomplexitet, vilket gör lösenord till ett mål för enkel gissning. Arbetet kan fungera som en snabbreferens för att få en överblick över trender inom lösenordsgissningsfältet. Å andra sidan kan den resulterande andelen svenska personnamn i svenska lösenordsläckor uppmärksamma de aktiva aktörerna i samhället gällande informationssäkerhet på att förbättra lösenordssäkerhetsåtgärderna i Sverige.
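The 26% figure above comes from matching recovered passwords against Swedish personal names, a measurement that reduces to a case-insensitive substring scan. A minimal sketch of that kind of measurement; the leak sample and name list below are invented for illustration and are not the thesis's data:

```python
# Minimal sketch: share of passwords embedding a known personal name.
# The sample data is hypothetical, for illustration only.
def name_hit_rate(passwords, names):
    """Fraction of passwords containing any of the given names,
    matched case-insensitively as a substring."""
    names_lc = [name.lower() for name in names]
    hits = sum(
        1 for pw in passwords
        if any(name in pw.lower() for name in names_lc)
    )
    return hits / len(passwords) if passwords else 0.0

# Hypothetical leak sample and name list.
leak = ["Erik1987!", "sommar2023", "annaBanana", "qwerty123"]
names = ["Anna", "Erik", "Maria"]
rate = name_hit_rate(leak, names)   # 2 of 4 passwords contain a name -> 0.5
```

A real analysis would run over the cracked entries of a password recovery file and a full register of Swedish first names, but the counting logic is the same.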
165

Offline Reinforcement Learning for Downlink Link Adaption : A study on dataset and algorithm requirements for offline reinforcement learning. / Offline Reinforcement Learning för nedlänksanpassning : En studie om krav på en datauppsättning och algoritm för offline reinforcement learning

Dalman, Gabriella January 2024 (has links)
This thesis studies offline reinforcement learning as an optimization technique for downlink link adaptation, one of many control loops in radio access networks. The work studies the impact of the quality of pre-collected datasets, in terms of how well the data covers the state-action space and whether it was collected by an expert policy or not. Data quality is evaluated by training three different algorithms: Deep Q-networks, Critic regularized regression, and Monotonic advantage re-weighted imitation learning. Performance is measured for each combination of algorithm and dataset, and their need for hyperparameter tuning and their sample efficiency are studied. The results showed Critic regularized regression to be the most robust, because it could learn well from any of the datasets used in the study and did not require extensive hyperparameter tuning. Deep Q-networks required careful hyperparameter tuning, but paired with the expert data it managed to reach rewards as high as the agents trained with Critic regularized regression. Monotonic advantage re-weighted imitation learning needed data from an expert policy to reach a high reward. In summary, offline reinforcement learning can perform successfully in a telecommunication use case such as downlink link adaptation. Critic regularized regression was the preferred algorithm because it performed well with all three datasets presented in the thesis. / Denna avhandling studerar offline reinforcement learning som en optimeringsteknik för nedlänks länkanpassning, vilket är en av många kontrollcyklar i radio access networks. Arbetet undersöker inverkan av kvaliteten på förinsamlade dataset, i form av hur mycket datan täcker state-action-rymden och om den samlats in av en expertpolicy eller inte. Datakvaliteten utvärderas genom att träna tre olika algoritmer: Deep Q-nätverk, Critic regularized regression och Monotonic advantage re-weighted imitation learning.
Prestanda mäts för varje kombination av algoritm och dataset, och deras behov av hyperparameterinställning och effektiv användning av data studeras. Resultaten visade att Critic regularized regression var mest robust, eftersom den lyckades lära sig mycket från alla dataseten som användes i studien och inte krävde omfattande hyperparameterinställning. Deep Q-nätverk krävde noggrann hyperparameterinställning, och tillsammans med expertdata lyckades den nå högst prestanda av alla agenter i studien. Monotonic advantage re-weighted imitation learning behövde data från en expertpolicy för att lyckas lära sig problemet. Det dataset som var mest framgångsrikt var expertdatan. Sammanfattningsvis kan offline reinforcement learning vara framgångsrik inom telekommunikation, specifikt nedlänks länkanpassning. Critic regularized regression var den föredragna algoritmen eftersom den var stabil och kunde prestera bra med alla tre olika dataset som presenterades i avhandlingen.
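Critic regularized regression, the most robust of the three algorithms above, trains its policy by weighted behavior cloning: each dataset action is up-weighted according to its estimated advantage under the learned critic. A minimal sketch of the two weighting rules commonly associated with CRR ("binary" and "exp"), applied to hypothetical advantage estimates; this is the weighting step only, not a full training loop:

```python
# CRR-style cloning weights from critic advantage estimates A(s, a).
# Sketch of the weighting step only; advantage values are hypothetical.
import numpy as np

def crr_weights(advantages, mode="binary", beta=1.0, max_weight=20.0):
    """binary: imitate only actions the critic estimates beat the
    behavior policy (A > 0).
    exp: softer, exponentially advantage-weighted imitation, clipped
    at max_weight to keep the loss bounded."""
    a = np.asarray(advantages, dtype=float)
    if mode == "binary":
        return (a > 0).astype(float)
    if mode == "exp":
        return np.minimum(np.exp(a / beta), max_weight)
    raise ValueError(f"unknown mode: {mode}")

adv = np.array([-1.0, 0.5, 2.0])      # hypothetical A(s, a) values
w_bin = crr_weights(adv, "binary")    # [0., 1., 1.]
w_exp = crr_weights(adv, "exp")       # smooth weights, clipped at 20
```

The binary rule filters the dataset down to seemingly good actions, which is why CRR can still learn from mixed-quality data, while plain imitation (as in MARWIL-style methods) leans on the data already being near-expert.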
166

Test and Validation of Web Services

Cao, Tien Dung 06 December 2010 (has links)
Nous proposons dans cette thèse des approches de test pour la composition de services web. Nous nous intéressons aux tests unitaire et d'intégration d'une orchestration de services web. L'aspect de vérification d'exécution en ligne est aussi considéré. Nous définissons une plateforme de test unitaire pour l'orchestration de services web qui comprend une architecture de test, une relation de conformité et deux approches de test basées sur le modèle des machines à états finis étendues temporisées : l'approche offline, où les activités de test comme la génération de cas de test temporisés, l'exécution des tests et l'assignation du verdict sont appliquées en séquence, tandis que ces activités sont appliquées en parallèle dans l'approche online. Pour le test d'intégration d'une orchestration, nous combinons deux approches : active et passive. Au début, l'approche active est utilisée pour activer une nouvelle session d'orchestration par l'envoi d'un message de requête SOAP. Ensuite, tous les messages d'entrée et de sortie de l'orchestration sont collectés et analysés par l'approche passive. Pour l'aspect de vérification d'exécution en ligne, nous nous intéressons à la vérification qu'une trace respecte, ou non, un ensemble de contraintes, appelées règles. Nous avons proposé d'étendre le langage Nomad en définissant des contraintes sur chaque action atomique et un ensemble de corrélations de données entre les actions pour définir des règles pour les services web. Ce langage nous permet de définir des règles portant sur le temps futur et passé, et d'utiliser les opérateurs NOT, AND, OR pour combiner plusieurs conditions dans le contexte d'une règle. Ensuite, nous proposons un algorithme pour vérifier l'exactitude d'une séquence de messages en parallèle avec le moteur de collecte de traces. / In this thesis, we propose testing approaches for web service composition. We focus on unit and integration testing of an orchestration of web services, as well as the runtime verification aspect.
We defined a unit testing framework for an orchestration that is composed of a test architecture, a conformance relation and two proposed testing approaches based on the Timed Extended Finite State Machine (TEFSM) model: offline, in which test activities such as timed test case generation, test execution and verdict assignment are applied sequentially, and online, in which these activities are applied in parallel. For integration testing of an orchestration, we combine two approaches: active and passive. Firstly, the active approach is used to start a new session of the orchestration by sending a SOAP request. Then all communicating messages among the services are collected and analyzed by a passive approach. On the runtime verification aspect, we are interested in the correctness of an execution trace with respect to a set of defined constraints, called rules. We have proposed to extend the Nomad language by defining constraints on each atomic action (fixed conditions) and a set of data correlations between the actions to define rules for web services. This language allows us to define rules over future and past time, and to use the operators NOT, AND, OR to combine several conditions in the context of a rule. Afterwards, we proposed an algorithm to check the correctness of a message sequence in parallel with the trace collection engine; specifically, this algorithm verifies message by message without storing them.
167

Methodology to estimate building energy consumption using artificial intelligence / Méthodologie pour estimer la consommation d’énergie dans les bâtiments en utilisant des techniques d’intelligence artificielle

Paudel, Subodh 22 September 2016 (has links)
Les normes de construction pour des bâtiments de plus en plus économes en énergie (BBC) nécessitent une attention particulière. Ces normes reposent sur l'amélioration des performances thermiques de l'enveloppe du bâtiment, associée à un effet capacitif des murs augmentant la constante de temps du bâtiment. La prévision de la demande en énergie de bâtiments BBC est plutôt complexe. Ce travail aborde cette question par la mise en œuvre d'intelligence artificielle (IA). Deux approches de mise en œuvre ont été proposées : « all data » et « relevant data ». L'approche « all data » utilise la totalité de la base de données. L'approche « relevant data » consiste à extraire de la base de données un jeu de données représentant le mieux possible les prévisions météorologiques en incluant les phénomènes inertiels. Pour cette extraction, quatre modes de sélection ont été étudiés : le degré jour (HDD), une modification du degré jour (mHDD) et des techniques de reconnaissance de chemin : distance de Fréchet (FD) et déformation temporelle dynamique (DTW). Quatre techniques d'IA sont mises en œuvre : réseau de neurones (ANN), machine à vecteurs de support (SVM), arbre de décision (DT) et forêt aléatoire (RF). Dans un premier temps, six bâtiments ont été numériquement simulés (consommations entre 25 et 86 kWh/m².an) : l'approche « relevant data » reposant sur le couple (DTW, SVM) donne les prévisions avec le moins d'erreur. Appliquée aux mesures du bâtiment de l'École des Mines de Nantes, l'approche « relevant data » (DTW, SVM) reste performante. / High-energy-efficiency building standards (such as low energy buildings, LEB) to improve building consumption have drawn significant attention. Building standards basically focus on improving the thermal performance of the envelope and a high heat capacity, thus creating a higher thermal inertia.
However, the LEB concept introduces a large time constant as well as a large heat capacity, resulting in a slower rate of heat transfer between the interior of the building and the outdoor environment. Therefore, it is challenging to estimate and predict the thermal energy demand of such LEBs. This work focuses on artificial intelligence (AI) models to predict the energy consumption of LEBs. We consider two kinds of AI modeling approaches: “all data” and “relevant data”. The “all data” approach uses all available data, while “relevant data” uses a small representative day dataset and addresses the complexity of the building's non-linear dynamics by introducing past-day climatic impact behavior. This extraction is based either on simple physical understanding: Heating Degree Day (HDD) and modified HDD, or on pattern recognition methods: Fréchet distance and Dynamic Time Warping (DTW). Four AI techniques have been considered: Artificial Neural Network (ANN), Support Vector Machine (SVM), Boosted Ensemble Decision Tree (BEDT) and Random Forest (RF). In a first part, numerical simulations for six buildings (heat demand in the range [25 – 85 kWh/m².yr]) have been performed. The “relevant data” approach with (DTW, SVM) shows the best results. Real data from the building “Ecole des Mines de Nantes” prove that the approach remains relevant.
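Dynamic Time Warping, one of the two pattern-recognition criteria used above to select the "relevant data" days, scores two daily profiles as similar even when one is shifted or stretched in time. A minimal textbook sketch of the distance (not the thesis's implementation), on hypothetical daily temperature profiles:

```python
# Minimal textbook Dynamic Time Warping (DTW) distance between two
# numeric sequences, O(len(a) * len(b)) time and space.
def dtw_distance(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j]: best alignment cost of prefixes a[:i] and b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Hypothetical daily profiles: the same pattern delayed by one step
# scores far lower under DTW than under a naive point-wise comparison.
day_a = [10, 12, 15, 14, 11]
day_b = [10, 10, 12, 15, 14]
```

For the "relevant data" selection, each historical day would be scored against the forecast-day profile with such a distance and the closest days kept as the training set.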
168

Research and realization of assistant off-line programming system for thermal spraying / Recherche et réalisation du système assistant de la programmation hors ligne en projection thermique

Chen, Chaoyue 16 December 2016 (has links)
La technologie de programmation hors ligne permet la génération de trajectoires complexes en projection thermique. Dans le laboratoire LERMPS, une extension logicielle appelée « Thermal Spray Toolkit » (T.S.T.) a été développée pour assister la programmation hors ligne dans le domaine de la projection thermique. Cependant, des efforts sont encore attendus pour améliorer sa fonctionnalité. L'objectif de cette thèse est donc d'améliorer l'application de la programmation hors ligne en projection thermique. Selon la procédure d'application, les travaux de cette thèse se composent de trois parties. Premièrement, les efforts sont consacrés à l'amélioration du module « PathKit » de T.S.T., afin d'optimiser la fonctionnalité de génération de trajectoire. L'algorithme de génération de trajectoire sur substrat courbe a été étudié pour assurer un pas de balayage constant. Une nouvelle trajectoire en « spirale d'Archimède » a été développée pour réparer les défauts par projection à froid. La réparation d'une pièce d'aluminium présentant un défaut a été réalisée pour valider la trajectoire en spirale d'Archimède. Deuxièmement, des modélisations ont été développées pour simuler l'épaisseur du dépôt en 2D et en 3D. Elles sont ensuite intégrées dans le logiciel RobotStudio™ comme un module individuel dit « ProfileKit ». Le « ProfileKit 2D » peut évaluer les effets des paramètres opératoires sur le profil du dépôt, puis optimiser les paramètres. Dans le « ProfileKit 3D », l'épaisseur du dépôt peut être simulée selon la trajectoire et la cinématique du robot. Les fonctionnalités sont validées par un dépôt de forme trapézoïdale élaboré par projection à froid avec des pas de balayage variés. Enfin, l'analyse cinématique du robot a été étudiée pour optimiser la performance du robot pendant le processus de projection.
Afin de mieux évaluer la performance du robot, un paramètre global « overall parameter » (OP), moyenne pondérée des écarts-types des vitesses articulaires, est introduit pour mesurer la complexité de la trajectoire du robot. Ensuite, l'optimisation du montage de la torche ainsi que l'optimisation de la disposition de la pièce sont étudiées par l'analyse cinématique du robot et le paramètre OP. Le résultat montre que l'optimisation cinématique peut efficacement améliorer la performance du robot pour maintenir la vitesse prédéfinie. / The offline programming technology provides the possibility to generate complex robot trajectories in the thermal spray process. In the LERMPS laboratory, an add-in software called “Thermal Spray Toolkit” (T.S.T.) has been developed to assist offline programming in the field of thermal spray. However, efforts are still expected to improve the functionality of this software. The aim of this study is to improve the application of offline programming technology in the thermal spray process. According to the procedure of offline programming in thermal spray, the work of this thesis consists of three parts. Firstly, efforts have been dedicated to improving the module “PathKit” in T.S.T., which aims to improve the functionality of trajectory generation. The algorithm of trajectory generation for curved substrate surfaces was improved to maintain a constant scan step. A novel Archimedean spiral trajectory was developed for damaged-component recovery by cold spray. An experiment depositing an Al5056 coating on a manually manufactured workpiece with a crater defect was carried out to validate the effects of the spiral trajectory with adapted nozzle speed. Secondly, numerical models were developed to simulate the coating thickness distribution in 2D and 3D, and then integrated in RobotStudio™ as an individual module named “ProfileKit”.
In “ProfileKit 2D”, it is possible to evaluate the effects of operating parameters on the coating profile and optimize the parameters. In “ProfileKit 3D”, the coating thickness distribution can be simulated based on the nozzle trajectory and robot kinematics data. The functionalities were validated by a trapezoid cold-sprayed coating. Finally, kinematic analysis was used to provide optimization methods for a better robot performance in thermal spraying. In order to better evaluate the robot performance, an overall parameter (OP), the weighted mean of the standard deviations of the joint speeds, was introduced to measure the complexity of a robot trajectory. Afterwards, the optimal nozzle mounting method as well as the optimal workpiece placement were investigated using the kinematic analysis and the overall parameter. The result shows that kinematic optimization can effectively improve the robot's ability to maintain the predefined speed.
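The overall parameter (OP) is defined above as the weighted mean of the standard deviations of the joint speeds. A short sketch of that metric; the uniform default weights and the sample speed traces are assumptions for illustration, since the thesis's exact weighting is not given here:

```python
# Sketch of the "overall parameter" (OP): weighted mean of the per-joint
# standard deviations of joint speed along a trajectory. Weights and
# sample data are hypothetical.
import numpy as np

def overall_parameter(joint_speeds, weights=None):
    """joint_speeds: shape (n_samples, n_joints), speeds recorded along a
    trajectory. Steadier joint speeds (a simpler trajectory for the robot)
    yield a lower OP."""
    v = np.asarray(joint_speeds, dtype=float)
    stds = v.std(axis=0)              # one std deviation per joint
    if weights is None:
        weights = np.ones(v.shape[1])  # uniform weighting by default
    return float(np.average(stds, weights=np.asarray(weights, dtype=float)))

steady = [[1.0, 2.0], [1.0, 2.0], [1.0, 2.0]]   # constant joint speeds
jerky  = [[1.0, 2.0], [0.0, 3.0], [2.0, 1.0]]   # fluctuating joint speeds
```

Comparing candidate torch mountings or workpiece placements then amounts to simulating the joint speeds for each setup and preferring the one with the lower OP.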
169

Factors influencing the choice to shop online : a psychological study in a South African context

De Swardt, Maray Annelise 25 November 2008 (has links)
As the Internet and online shopping are growing at a very fast pace worldwide, investigating this phenomenon within a South African context is crucial, considering that it is a relatively new trend in this country. Typical of new trends and phenomena is the absence of research already conducted, resulting in a lack of existing literature. Very few studies have examined the factors and reasons that entice South Africans to utilise this modern shopping channel, and even fewer have used an in-depth, qualitative approach. To assist in filling this void, this research study examines people's reasons for taking up or not taking up online shopping, from a South African perspective. A snowball sampling method was used to identify participants fitting the predetermined sample criteria, and in-depth qualitative interviews were conducted with all participants. The theoretical approach used in the analysis was social constructionism. Findings are presented by means of constructions identified during the data analysis; these indicated that saving time, the convenience of products being increasingly available and accessible, and being able to make price comparisons easily are the main advantages of online shopping. The main disadvantages were not being able to touch and feel products, and the absence of a salesperson. Limitations of the research are discussed, along with recommendations for online retailers and future research. / Dissertation (MA)--University of Pretoria, 2008. / Psychology / unrestricted
170

Una aproximación offline a la evaluación parcial dirigida por narrowing

Ramos Díaz, J. Guadalupe 06 May 2008 (has links)
Narrowing-driven partial evaluation (NPE) is a powerful technique for the specialization of rewriting systems, i.e., for the first-order component of many functional (logic) declarative languages such as Haskell, Curry or Toy. Partial evaluators fall into two broad categories, online and offline, according to the moment at which termination issues of the specialization process are considered. Online partial evaluators are usually more precise, since more information is available to them. Offline partial evaluators commonly proceed in two stages: the first stage processes a program (e.g., to identify those function calls that can be unfolded without risk of non-termination) and adds annotations to guide the partial computations; then a second stage, the partial evaluation proper, only has to obey the annotations, so the specializer is much faster than in the online approach. This thesis presents a new narrowing-driven partial evaluation scheme that is more efficient and ensures termination in the offline style. To this end, we identify a characterization of quasi-terminating programs that we call "non-increasing". In such programs, needed-narrowing computations contain only a finite set of distinct terms (modulo variable renaming). Quasi-termination matters because its presence is usually a sufficient condition for the termination of the specialization process. However, the class of quasi-terminating programs is very restrictive, so we introduce an algorithm that accepts inductively sequential programs (a much broader class, over which needed narrowing is defined) and annotates those parts that violate the non-increasing characterization. Para procesar de mane
Para procesar de mane / Ramos Díaz, JG. (2007). Una aproximación offline a la evaluación parcial dirigida por narrowing [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1888 / Palancia
