About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Developing learner autonomy through the use of a revised learner training programme (RLTP) in King Mongkut's University of Technology Thonburi

Darasawang, Pornapit. Unknown Date (has links) (PDF)
Thesis (Ph.D.)--University of Edinburgh, 2000.
2

Joint Learning of Syntax and Semantics

Ercegovcevic, Milos January 2013 (has links)
This thesis addresses the problem of machine learning of unobserved levels of abstraction in a shallow semantic representation. We remove assumptions that are commonly made when semantically annotating linguistic resources, such as the fixed number of semantic roles in PropBank, and instead learn the key linguistic elements of this annotation (semantic frames, verbs, lexical and syntactic classes) at varying levels of abstraction. The model is implemented using latent grammars, and the induced structures can be used for semantic role labeling (SRL) in several languages with accuracy comparable to other current approaches. Moreover, we show that these structures are very close to the abstractions that can be observed in FrameNet. The overall result is a language-independent, feature-free model of semantic information that produces interpretable structures and whose usefulness is empirically verified on the SRL task.
3

Att lära sig svenska i Spanien : En kvalitativ studie om lärarnas resonemang kring elevers utveckling av det svenska språket vid en svensk utlandsskola / Learning Swedish in Spain : A qualitative study of teachers' reasoning about students' development of the Swedish language at a Swedish school abroad

Åberg, Anna January 2012 (has links)
The aim of this paper is to investigate how teachers at a Swedish school abroad reason about their work on developing the Swedish language, given that the students live in a non-Swedish-speaking country while attending a Swedish school. Furthermore, the study examines the teachers' attitudes towards the role the Swedish language plays in the school. Research questions: What views do the teachers express regarding the students' language development? How do the teachers reason about the role of the Swedish language in the school? In order to answer these questions, qualitative methods have been used in the form of observations and interviews. The study is based on a social constructivist perspective, with the idea that reality is socially constructed, and is inspired by Vygotskij's theories of children's language development, which present a sociocultural perspective. The results show that the Swedish language has a great influence in the school and that the teachers' attitudes towards the Swedish language are of major importance for encouraging the students' language development. The teachers and the principal stress the importance of working with the Swedish language, since the students live in a non-Swedish-speaking country and therefore mainly come into contact with it at school or at home. The teachers work actively to develop the students' language skills through reading Swedish literature, producing their own texts and consistently speaking Swedish.
4

Ανάπτυξη συστήματος αναγνώρισης πολυ-γλωσσικών φωνημάτων για τις ανάγκες της αυτόματης αναγνώρισης γλώσσας / Development of a multilingual phoneme recognition system for automatic language identification

Γιούρα, Ευδοκία 04 February 2008 (has links)
This thesis presents a robust, language-independent phoneme recognizer for the needs of Automatic Language Identification. The system is based on: 1. MFCCs for the acoustic-spectral representation of phonemes, 2. vector quantization for creating the training codebooks of the system, and 3. Probabilistic Neural Networks (PNNs) for training the system and classifying unknown phonemes.
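To make the described pipeline concrete, here is a minimal sketch, not the system built in the thesis: MFCC frames are extracted per recording, each phoneme's training frames are vector-quantised into a codebook, and an unknown segment is scored with a simple Parzen-window (PNN-style) classifier. The use of librosa and scipy, the data layout and all parameter values are illustrative assumptions.

import numpy as np
import librosa
from scipy.cluster.vq import kmeans2

def mfcc_frames(wav_path, n_mfcc=13):
    """Return one MFCC vector per analysis frame for a speech file."""
    y, sr = librosa.load(wav_path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # shape: (frames, n_mfcc)

def train_codebooks(train_files_by_phoneme, codebook_size=32):
    """Vector-quantise each phoneme's training frames into a small codebook."""
    codebooks = {}
    for phoneme, files in train_files_by_phoneme.items():
        frames = np.vstack([mfcc_frames(f) for f in files])
        centroids, _ = kmeans2(frames, codebook_size, minit='points')
        codebooks[phoneme] = centroids
    return codebooks

def pnn_classify(frames, codebooks, sigma=2.0):
    """Parzen-window (PNN-style) scoring: sum Gaussian kernels between the
    unknown frames and each phoneme's codebook, return the best-scoring phoneme."""
    scores = {}
    for phoneme, centroids in codebooks.items():
        d2 = ((frames[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        scores[phoneme] = np.exp(-d2 / (2 * sigma ** 2)).sum()
    return max(scores, key=scores.get)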
5

Codificação automática das causas de morte e seleção da causa básica de morte: a adaptação para o Brasil do software Iris / Automatic coding cause of death and selection for underlying cause of death: an adaptation of Iris software to Brazil

Martins, Renata Cristófani 27 July 2012 (has links)
Introduction - One way to increase the quality of cause-of-death statistics is to use computers to apply the coding rules systematically. The Iris software is one system available for this purpose. Its main characteristics are that it follows the international mortality rules and that it is language independent. Objective - To produce a Portuguese-language dictionary for Iris and assess its completeness for coding causes of death. Methods - The dictionary was created from two sources: the electronic file of volume 1 of ICD-10 and the thesaurus of the Classificação Internacional de Atenção Primária. Iris V4.0.34 was used, and the codes assigned to the death certificates by the Programa de Aprimoramento das Informações de Mortalidade no Município de São Paulo (PRO-AIM) of the Secretaria Municipal de Saúde de São Paulo served as the manual coding reference. Whenever Iris could not code the causes of death, adjustments were made to the dictionary or to the standardization table.
Results - Iris is able to code the causes of death and select the underlying cause of death, both automatically; it is recent software under constant refinement; it is language independent, and to use it in a given country only a dictionary of causes of death needs to be built. In the test evaluating the first version of the Portuguese dictionary, Iris showed satisfactory performance. It coded 50.6 per cent of death certificates directly and, after adjustments and additions to the dictionary and the standardization table, the software coded all lines in 94.44 per cent of the certificates. Of the certificates not fully coded, 89.19 per cent contained a diagnosis from Chapter XX of ICD-10. Iris showed 63.1 per cent agreement on paired death certificates considering all causes of death at the full four-character ICD-10 code level. Conclusion - Making adjustments to the dictionary or to the standardization table is part of the dictionary development process, and this process is ongoing. With new versions of Iris and improvements in the coding of external causes, progress will be made towards making it more compatible with the Brazilian reality. In addition, future versions of Iris with a more developed dictionary can meet the need for automatic coding and improve the accuracy of cause-of-death data for public health studies.
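To illustrate the kind of lookup such a dictionary enables, the following is a minimal, hypothetical sketch, not the Iris implementation or the Brazilian dictionary described above: each certificate line is normalized through a standardization table and then matched against a dictionary mapping diagnostic expressions to ICD-10 codes. The entries and function names are invented for illustration.

# standardization table: spelling variants / abbreviations -> canonical form
STANDARDIZATION = {
    "infarto agudo do miocardio": "infarto do miocardio",
    "iam": "infarto do miocardio",
}

# dictionary: canonical diagnostic expression -> ICD-10 code
DICTIONARY = {
    "infarto do miocardio": "I21.9",  # acute myocardial infarction, unspecified
    "pneumonia": "J18.9",             # pneumonia, unspecified organism
}

def code_certificate(lines):
    """Return the ICD-10 code found for each certificate line, or None if no match."""
    coded = []
    for line in lines:
        text = line.strip().lower()
        text = STANDARDIZATION.get(text, text)  # normalize before lookup
        coded.append(DICTIONARY.get(text))
    return coded

# Example: a two-line certificate (Part I)
print(code_certificate(["IAM", "pneumonia"]))  # ['I21.9', 'J18.9']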
6

Resource Lean and Portable Automatic Text Summarization

Hassel, Martin January 2007 (has links)
Today, with digitally stored information available in abundance, even for many minor languages, this information must by some means be filtered and extracted in order to avoid drowning in it. Automatic summarization is one such technique, where a computer summarizes a longer text to a shorter, non-redundant form. Apart from the major languages of the world, there are many languages for which large bodies of data aimed at language technology research are largely lacking. There may also be no resources available to develop such bodies of data, since doing so is usually time-consuming, requires substantial manual labor and is hence expensive. Nevertheless, there will still be a need for automatic text summarization for these languages in order to subdue this constantly increasing amount of electronically produced text. This thesis thus sets the focus on automatic summarization of text and the evaluation of summaries using as few human resources as possible. The resources that are used should, to as high an extent as possible, already exist, not be specifically aimed at summarization or the evaluation of summaries and, preferably, have been created as part of natural literary processes. Moreover, the summarization systems should be easy to assemble using only a small set of basic language processing tools, again not specifically aimed at summarization or evaluation. The summarization system should thus be nearly language independent, so as to be quickly ported between different natural languages. The research put forth in this thesis mainly concerns three computerized systems: one for nearly language-independent summarization (the HolSum summarizer), one for the collection of large-scale corpora (the KTH News Corpus), and one for summarization evaluation (the KTH eXtract Corpus). These three systems represent three different aspects of transferring the proposed summarization method to a new language. One aspect is the actual summarization method and how it relates to the highly irregular nature of human language and to the differences in traits among language groups. This aspect is discussed in detail in Chapter 3. This chapter also presents the notion of "holistic summarization", an approach to self-evaluative summarization that weighs the fitness of the summary as a whole, by semantically comparing it to the text being summarized, before presenting it to the user. This approach is embodied in the text summarizer HolSum, which is presented in this chapter and evaluated in Paper 5. A second aspect is the collection of large-scale corpora for languages where few or none exist. Such corpora are needed on the one hand for building the language model used by HolSum when comparing summaries on semantic grounds, and on the other hand to provide a large enough sample of (written) language use to guarantee that the randomly selected subcorpus used for evaluation is representative. This topic is briefly touched upon in Chapter 4 and detailed in Paper 1. The third aspect is, of course, the evaluation of the proposed summarization method on a new language. This aspect is investigated in Chapter 4. Evaluations of HolSum have been run on English as well as on Swedish, using both well-established data and evaluation schemes (English) and corpora gathered "in the wild" (Swedish). During the development of the latter corpus, which is discussed in Paper 4, evaluations of a traditional sentence-ranking text summarizer, SweSum, have also been run. These can be found in Papers 2 and 3.
This thesis thus contributes a novel approach to highly portable automatic text summarization, coupled with methods for building the corpora needed both for training and for evaluation on the new language.
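As an editorial illustration of the holistic selection idea, the following minimal sketch ranks whole candidate extracts by how close each is to the full text, rather than scoring sentences in isolation. HolSum itself builds on a word-space (random indexing) model; the bag-of-words cosine and the exhaustive search over sentence combinations below are simplified stand-ins, workable only for short texts.

import itertools
import math
import re
from collections import Counter

def bow(text):
    """Bag-of-words vector as a word -> count mapping."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    common = set(a) & set(b)
    num = sum(a[w] * b[w] for w in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def holistic_summary(sentences, k=3):
    """Pick the k-sentence extract whose vector is closest to the whole text."""
    k = min(k, len(sentences))
    doc_vec = bow(" ".join(sentences))
    best = max(itertools.combinations(sentences, k),
               key=lambda cand: cosine(bow(" ".join(cand)), doc_vec))
    return " ".join(best)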
7

Konzistence lingvistických anotací / Consistency of Linguistic Annotation

Aggarwal, Akshay January 2020 (has links)
This thesis attempts to correct some errors and inconsistencies in different treebanks. The inconsistencies can be related to linguistic constructions, failures of the annotation guidelines, failure to understand the guidelines on the annotator's part, or random errors made by annotators, among others. We propose a metric to attest the POS annotation consistency of different treebanks in the same language when the annotation guidelines remain the same. We offer solutions to some previously identified inconsistencies within the scope of the Universal Dependencies project, and check the viability of a proposed inconsistency detection tool in a low-resource setting. The solutions discussed in the thesis are language-neutral and intended to work efficiently across multiple languages.
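The abstract does not spell out the metric itself, so the sketch below is only a hypothetical example of how POS annotation consistency between two treebanks sharing guidelines could be quantified: for every word form occurring in both, compare its tag distributions and average the agreement, weighted by frequency. The function names and the weighting scheme are assumptions for illustration, not the thesis's metric.

from collections import Counter, defaultdict

def tag_distributions(tokens):
    """tokens: iterable of (word_form, pos_tag) pairs -> {form: Counter(tags)}."""
    dists = defaultdict(Counter)
    for form, tag in tokens:
        dists[form.lower()][tag] += 1
    return dists

def pos_consistency(tokens_a, tokens_b):
    """Return a score in [0, 1]; 1.0 means identical tag usage for shared forms."""
    da, db = tag_distributions(tokens_a), tag_distributions(tokens_b)
    shared = set(da) & set(db)
    if not shared:
        return 0.0
    total_weight = 0.0
    agreement = 0.0
    for form in shared:
        ca, cb = da[form], db[form]
        na, nb = sum(ca.values()), sum(cb.values())
        tags = set(ca) | set(cb)
        # total-variation distance between the two tag distributions for this form
        tv = 0.5 * sum(abs(ca[t] / na - cb[t] / nb) for t in tags)
        weight = na + nb  # weight forms by how often they occur
        agreement += weight * (1.0 - tv)
        total_weight += weight
    return agreement / total_weight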
8

Genetic Algorithms in the Brill Tagger : Moving towards language independence

Bjerva, Johannes January 2013 (has links)
The viability of using rule-based systems for part-of-speech tagging was revitalised when a simple rule-based tagger was presented by Brill (1992). This tagger is based on an algorithm which automatically derives transformation rules from a corpus, using an error-driven approach. In addition to performing on par with state-of-the-art stochastic systems for part-of-speech tagging, it has the advantage that the automatically derived rules can be presented in a human-readable format. In spite of its strengths, the Brill tagger is quite language dependent, and performs much better on languages similar to English than on languages with richer morphology. This issue is addressed in this paper by defining rule templates automatically, with a search that is optimised using Genetic Algorithms. This allows the Brill GA-tagger to search a large space for templates which in turn generate rules appropriate for various target languages, with the added advantage of removing the need for researchers to define rule templates manually. The Brill GA-tagger performs significantly better (p < 0.001) than the standard Brill tagger on all 9 target languages (Chinese, Japanese, Turkish, Slovene, Portuguese, English, Dutch, Swedish and Icelandic), with an error rate reduction of between 2% and 15% for each language.
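The following is a simplified sketch of that search, not the thesis implementation: individuals are sets of rule templates encoded as (feature, offset) conditions, and fitness would be the held-out accuracy of a Brill tagger trained with those templates. The encoding, operators and parameters are assumptions; the training-and-evaluation step is left abstract as a fitness callable.

import random

FEATURES = ("tag", "word")
OFFSETS = (-3, -2, -1, 1, 2, 3)

def random_template(max_conditions=2):
    """A template is a tuple of (feature, context offset) conditions."""
    n = random.randint(1, max_conditions)
    return tuple((random.choice(FEATURES), random.choice(OFFSETS)) for _ in range(n))

def random_individual(n_templates=20):
    return [random_template() for _ in range(n_templates)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1):
    return [random_template() if random.random() < rate else t for t in ind]

def evolve(fitness, pop_size=30, generations=40):
    """fitness(individual) -> held-out accuracy of a Brill tagger trained with
    these templates; higher is better. Fitness is re-evaluated each generation
    for clarity; a real run would cache scores."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)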
9

A Secure Infrastructural Strategy for Safe Autonomous Mobile Agents

Giansiracusa, Michelangelo Antonio January 2005 (has links)
Portable languages and distributed paradigms have driven a wave of new applications and processing models. One of the most promising, certainly judging from its early marketing, but disappointing in its limited uptake, is the mobile agent execution and data processing model. Mobile agents are autonomous programs which can move around a heterogeneous network such as the Internet, crossing through a number of different security domains, and perform some work at each visited destination as partial completion of a mission for their agent user. Despite their promise as a technology and paradigm to drive global electronic services (i.e. any Internet-driven-and-delivered service, not solely e-commerce related activities), their uptake on the Internet has been very limited. Chief among the reasons for the paradigm's practical under-achievement is that there is no ubiquitous framework for using Internet mobile agents, and non-trivial security concerns abound for the two major stakeholders (mobile agent users and mobile agent platform owners). While both stakeholders have security concerns with the dangers of the mobile agent processing model, most investigators in the field are of the opinion that protecting mobile agents from malicious agent platforms is more problematic than protecting agent platforms from malicious mobile agents. Traditional cryptographic mechanisms are not well suited to counter the bulk of the threats associated with the mobile agent paradigm, due to the untrusted hosting of an agent and its intended autonomous, flexible movement and processing. In our investigation, we identified that the large majority of the research undertaken on mobile agent security to date has taken a micro-level perspective. By this we mean research focused solely on either of the two major stakeholders, and even then often only on improving measures to address one security issue dear to the stakeholder, for example mobile agent privacy (for agent users) or access control to platform resources (for mobile agent platform owners). We decided to take a more encompassing, higher-level approach in tackling mobile agent security issues. In this endeavour, we developed the beginnings of an infrastructural approach to not only reduce the security concerns of both major stakeholders, but bring them transparently to a working relationship. Strategic utilisation of both existing distributed system trusted third parties (TTPs) and novel mobile agent paradigm-specific TTPs is fundamental in the infrastructural framework we have devised. Besides designing an application- and language-independent framework for supporting a large-scale Internet mobile agent network, our Mobile Agent Secure Hub Infrastructure (MASHIn) proposal encompasses support for flexible access control to agent platform resources. A reliable means to track the location and processing times of autonomous Internet mobile agents is discussed, with fault-tolerant handling support to work around unexpected processing delays. Secure, highly effective (in comparison to existing mechanisms) strategies for providing mobile agent privacy, execution integrity, and stakeholder confidence scores were devised, all of which fit comfortably within the MASHIn framework. We have deliberately considered the interests, without bias, of both stakeholders when designing our solutions. In relation to mobile agent execution integrity, we devised new criteria for assessing the robustness of existing execution integrity schemes.
Whilst none of the existing schemes analysed met a large number of our desired properties for a robust scheme, we identified that the objectives of Hohl's reference-states scheme were the most admirable, particularly real-time, in-mission execution integrity checking. Subsequently, we revised Hohl's reference-states protocols to fit into the MASHIn framework, and were able not only to overcome the two major limitations identified in his scheme, but also to meet all of our desired properties for a robust execution integrity scheme (given an acceptable decrease in processing efficiency). The MASHIn offers a promising new perspective for future mobile agent security research and indeed a new framework for enabling safe and autonomous Internet mobile agents. Just as an economy cannot thrive without diligent care given to micro- and macro-level issues, we do not see the security prospects of mobile agents (and ultimately the prospects of the mobile agent paradigm) advancing without diligent research on both levels.
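As a toy illustration of the general idea behind execution-integrity checking, and explicitly not the MASHIn protocols or Hohl's reference-states scheme, the sketch below has each visited platform append a hash-chained record of the agent's intermediate state, which a trusted hub can later check for internal consistency. Real schemes additionally bind each link to the visiting host with signatures and TTP commitments; a bare hash chain only detects inconsistent records, not wholesale rewriting of the trail.

import hashlib
import json

def chain_step(prev_digest, host_id, agent_state):
    """Digest covering the previous link, the visited host and the agent state."""
    payload = json.dumps({"prev": prev_digest, "host": host_id,
                          "state": agent_state}, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def record_visit(trail, host_id, agent_state):
    """Each host appends its link to the agent's trail."""
    prev = trail[-1]["digest"] if trail else ""
    trail.append({"host": host_id, "state": agent_state,
                  "digest": chain_step(prev, host_id, agent_state)})

def verify_trail(trail):
    """Hub-side check: recompute every link and confirm the chain is unbroken."""
    prev = ""
    for entry in trail:
        if entry["digest"] != chain_step(prev, entry["host"], entry["state"]):
            return False
        prev = entry["digest"]
    return True

# Example itinerary: two hosts process the agent, the hub verifies afterwards.
trail = []
record_visit(trail, "host-A", {"quotes": [102.5]})
record_visit(trail, "host-B", {"quotes": [102.5, 99.0]})
print(verify_trail(trail))  # True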
