About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Säkerhetsutvärdering certifikatserver i stället för aktiva kort / Security evaluation certificate server instead of smartcard

Jensen, Jonas January 2005
Businesses and organizations use computer networks to a greater extent than ever before, especially for business-critical purposes. This increases the demand for security in all systems, against both internal and external threats, and raises the demands on the authentication methods in use, which today are normally passwords or some kind of smart card.

I perform a literature study that investigates the possibility of increasing the security of user authentication without the use of extra hardware. The method uses a server that stores all of a user's cryptographic keys centrally in order to achieve stronger security. This report builds on a previous report that implemented this solution as a test; here I question the security of that system. I then give an architecture proposal in which this method is used to authenticate users and to make cryptographic resources available to them.

The conclusion of this report is that it is possible, with comparative ease, to increase security without investing in new hardware. The solution will not, however, match a smart card solution in security level. This means that the method described in this thesis is suitable for organizations that either do not need security as strong as smart cards provide, or that want a good solution without being forced to use external hardware.
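A minimal sketch of the kind of architecture the abstract describes, assuming a server that holds users' private keys and signs on their behalf after password authentication. The KeyServer class, its methods, and the fixed demo salt are invented for illustration and are not taken from the thesis:

```python
# Hypothetical sketch of a certificate/key server: private keys never
# leave the server; clients authenticate with a password and request
# signatures. Names and the key-handling scheme are illustrative only.
import hashlib
import hmac
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa


class KeyServer:
    def __init__(self):
        self._keys = {}       # username -> RSA private key, held centrally
        self._passwords = {}  # username -> salted password hash

    def enroll(self, username: str, password: str) -> None:
        salt = b"demo-salt"  # use a random per-user salt in practice
        self._passwords[username] = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), salt, 100_000)
        self._keys[username] = rsa.generate_private_key(
            public_exponent=65537, key_size=2048)

    def sign(self, username: str, password: str, message: bytes) -> bytes:
        salt = b"demo-salt"
        digest = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), salt, 100_000)
        if not hmac.compare_digest(digest, self._passwords[username]):
            raise PermissionError("authentication failed")
        # The user obtains a signature without ever holding the key,
        # which is the trade-off against smart cards discussed above.
        return self._keys[username].sign(
            message,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256())


server = KeyServer()
server.enroll("jonas", "correct horse battery staple")
sig = server.sign("jonas", "correct horse battery staple", b"hello")
print(len(sig))  # 256 bytes for a 2048-bit RSA key
```

Note the security dependence this illustrates: everything rests on the password and on the server's integrity, which is why the thesis concludes the approach cannot match smart cards.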
2

Human-like Behaviour in Real-Time Strategy Games : An Experiment With Genetic Algorithms

Olofsson, Fredrik, Andersson, Johan W. January 2003
If a computer game company wants to stay competitive, it must offer something extra. For many years, this extra has often been synonymous with better graphics. Lately, and thanks to the Internet, the focus has shifted in favour of more multi-player support. This also means that the requirements on one-player games increase. Our proposal for meeting these new requirements is to make future game AI more human-like. One way to achieve this is believed to be the use of learning AI techniques, such as genetic algorithms and neural networks. In this thesis we present the results from an experiment aimed at testing strategy game AI. Test persons played against traditional strategy game AI, a genetic algorithm AI, and other humans, to see if they experienced any differences in the behaviour of the opponents.
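To make the genetic-algorithm side concrete, here is a minimal sketch that evolves a vector of strategy weights against a fitness function. The four-gene encoding, the selection scheme, and the fitness function are invented for the example and do not come from the thesis:

```python
# Minimal genetic algorithm sketch: evolve strategy-weight vectors.
# The encoding and fitness function are stand-ins; the experiment in
# the thesis evaluated genomes by playing actual games.
import random

GENES = 4            # e.g., weights for attack, defend, expand, economy
POP, GENERATIONS = 30, 50

def fitness(genome):
    # Stand-in for "play a game and score the outcome": reward a fixed
    # target mix so the example runs self-contained.
    target = [0.5, 0.2, 0.2, 0.1]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, rate=0.1):
    return [min(1.0, max(0.0, g + random.gauss(0, 0.1)))
            if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENES)  # single-point crossover
    return a[:cut] + b[cut:]

population = [[random.random() for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]                 # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print([round(g, 2) for g in max(population, key=fitness)])
```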
3

Compression automatique de phrases : une étude vers la génération de résumés / Automatic sentence compression : towards abstract summarization

Molina Villegas, Alejandro 30 September 2013
This dissertation presents a new approach to automatic summarization, one of the long-standing challenges of Natural Language Processing (NLP): after half a century of research, no one has yet managed to automatically produce summaries comparable in quality to those written by humans. Research on automatic summarization has split into two broad categories, extractive and abstractive summarization. In the extractive approach, sentences are ranked so that the best ones make up the final summary; but the selected sentences often still carry secondary information, so a finer-grained analysis is needed. We propose an automatic sentence compression method based on eliminating discourse segments inside sentences. From a large manually annotated corpus, we trained a linear model that predicts the deletion of a segment from simple features, balancing three criteria: informativity (relevance of content), grammaticality (quality of content), and compression rate (length). To measure the informativity of segments, we use a technique inspired by statistical physics, textual energy; for grammaticality, we use probabilistic language models. Applied to documents in Spanish, the method is able to generate correct summaries with compressed sentences. Finally, we evaluate the produced summaries with a Turing test, asking human judges whether they can distinguish human-written from machine-produced summaries. The results raise several interesting points about summarization by sentence compression: the task shows a high degree of subjectivity, and there is no single optimal compression but several possible correct ones. We therefore consider that these results open a discussion about the subjectivity of informativity and its influence on automatic summarization.
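A sketch of the scoring idea described above: each candidate discourse segment gets a linear keep-score from informativity and grammaticality features, and the weakest segments are dropped until a target compression rate is met. The Segment fields, the weights, and the example feature values are placeholders; in the thesis the model is learned from an annotated corpus and informativity comes from textual energy:

```python
# Sketch of compression by segment deletion: score each segment with a
# linear model and drop the weakest until the target rate is reached.
# Weights and feature values are placeholders, not learned parameters.
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    informativity: float     # e.g., textual energy rescaled to [0, 1]
    grammatical_loss: float  # LM-based penalty incurred if removed

WEIGHTS = {"informativity": 1.0, "grammatical_loss": 0.7}

def keep_score(seg: Segment) -> float:
    # Higher score = more worth keeping in the compressed sentence.
    return (WEIGHTS["informativity"] * seg.informativity
            + WEIGHTS["grammatical_loss"] * seg.grammatical_loss)

def compress(segments: list[Segment], rate: float = 0.7) -> str:
    budget = rate * sum(len(s.text) for s in segments)
    kept, used = [], 0
    for seg in sorted(segments, key=keep_score, reverse=True):
        if used + len(seg.text) <= budget:
            kept.append(seg)
            used += len(seg.text)
    kept.sort(key=segments.index)  # restore original segment order
    return " ".join(s.text for s in kept)

sentence = [
    Segment("The committee approved the plan", 0.9, 0.9),
    Segment(", which had been debated for months,", 0.3, 0.2),
    Segment("after a final vote on Tuesday", 0.5, 0.4),
]
print(compress(sentence, rate=0.7))
```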
4

Spam on the phone - VoIP and its biggest weakness : Studies about the users’ willingness to offer personal information in order to avoid VoIP spam

Putz, Daniel Robert January 2007
It is very probable that VoIP will soon replace the ordinary telephone. Besides all the advantages of a digital voice connection, it is linked to the danger of spam on the telephone. Many approaches have been developed to solve the problem of VoIP spam. Because some of these solutions are based on access to personal information about the users, a broad discussion about the best and most ethical approach has started.

This thesis analyzes the users' point of view on the VoIP spam problem and the extent of users' willingness to offer private information in order to avoid VoIP spam. It presents results from qualitative and quantitative research, as well as proposals for the most realistic and most promising VoIP spam solutions, based on the results of that research.

The main results showed that users were not willing to offer private information to companies, and that they were not willing to pay any amount of money for VoIP spam solutions. Users held governmental organisations and telephone operators responsible for finding a solution against VoIP spam.
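To illustrate why such solutions need personal information at all, here is a minimal sketch of one common family of VoIP-spam (SPIT) defenses: admission decisions based on the callee's contact list plus a shared reputation score. The data sources, thresholds, and the challenge step are invented for the example, not taken from the thesis:

```python
# Illustrative SPIT-filter sketch: known contacts ring through,
# unknown callers are checked against a community reputation score,
# and low-reputation callers get a challenge before the phone rings.
def admit_call(caller: str, contacts: set[str],
               reputation: dict[str, float],
               threshold: float = 0.5) -> str:
    if caller in contacts:
        return "ring"                    # personally known to the user
    score = reputation.get(caller, 0.0)  # community reports, 0..1
    if score >= threshold:
        return "ring"
    return "challenge"  # e.g., an audio CAPTCHA before ringing

contacts = {"sip:alice@example.org"}
reputation = {"sip:bob@example.org": 0.8, "sip:spam@example.org": 0.05}
for caller in ["sip:alice@example.org", "sip:bob@example.org",
               "sip:spam@example.org"]:
    print(caller, "->", admit_call(caller, contacts, reputation))
```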
5

Autonomous Systems in Society and War : Philosophical Inquiries

Johansson, Linda January 2013
The overall aim of this thesis is to look at some philosophical issues surrounding autonomous systems in society and war. These issues can be divided into three main categories. The first, discussed in papers I and II, concerns ethical issues surrounding the use of autonomous systems, where the focus in this thesis is on military robots. The second issue, discussed in paper III, concerns how to make sure that advanced robots behave in an ethically adequate way. The third issue, discussed in papers IV and V, has to do with agency and responsibility. Another issue, somewhat aside from the philosophical, has to do with coping with future technologies and developing methods for dealing with potentially disruptive technologies. This is discussed in papers VI and VII. Paper I systemizes some ethical issues surrounding the use of UAVs in war, with the laws of war as a backdrop. It is suggested that the laws of war are too wide and might be interpreted differently depending on which normative moral theory is used. Paper II is about future, more advanced autonomous robots, and whether the use of such robots can undermine the justification for killing in war. The suggestion is that this justification is substantially undermined if robots are used to replace humans to a high extent. Papers I and II both suggest revisions or additions to the laws of war. Paper III provides a discussion on one normative moral theory, ethics of care, connected to care robots. The aim is twofold: first, to provide a plausible and ethically relevant interpretation of the key term care in ethics of care, and second, to discuss whether ethics of care may be a suitable theory to implement in care robots. Paper IV discusses robots connected to agency and responsibility, with a focus on consciousness. The paper has a functionalistic approach, and it is suggested that robots should be considered agents if they can behave as if they are, in a moral Turing test. Paper V is also about robots and agency, but with a focus on free will. The main question is whether robots can have free will in the same sense as we consider humans to have free will when holding them responsible for their actions in a court of law. It is argued that autonomy with respect to norms is crucial for the agency of robots. Paper VI investigates the assessment of socially disruptive technological change. The coevolution of society and potentially disruptive technologies makes decision guidance on such technologies difficult. Four basic principles are proposed for such decision guidance, involving interdisciplinary and participatory elements. Paper VII applies the results from paper VI, and a workshop, to autonomous systems, a potentially disruptive technology. A method for dealing with potentially disruptive technologies is developed in the paper.
6

Umělá inteligence a kompozice vážné hudby / Artificial Intelligence and Classical Music Composition

Jouza, Vojtěch January 2020
This thesis deals with artificial intelligence composing classical music and with ways of evaluating its performance with listeners. The text provides the first overview of experiments conducted on the basis of the so-called Turing test and, building on an analysis of primary sources, suggests possible methodological improvements. Finally, we propose an alternative test that rejects the philosophical implications of the Turing test and, in contrast to the original experiment, also provides room for a music-theoretical analysis of the generated works.
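To show how such a listening experiment is typically scored, here is a minimal sketch treating each listener judgment ("human" or "machine") as a Bernoulli trial and testing whether listeners beat chance. The counts are made up, SciPy is assumed available, and the thesis does not prescribe this particular analysis:

```python
# Minimal analysis sketch for a music Turing test: can listeners tell
# machine-composed pieces from human ones better than chance? The
# counts below are invented for illustration.
from scipy.stats import binomtest

correct, trials = 61, 100  # correct identifications out of all trials
result = binomtest(correct, trials, p=0.5, alternative="greater")
print(f"accuracy={correct / trials:.2f}, p-value={result.pvalue:.4f}")
# A small p-value means listeners reliably distinguish the sources,
# i.e., the generator fails this operational "Turing test".
```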
7

Computation as Strange Material : Excursions into Critical Accidents

Lagerkvist, Love January 2021
Waking up in a world where everyone carries a miniature supercomputer, interaction designers find themselves in their forerunners' dreams. Faced with the reality of planetary-scale computing, we have to confront the task of articulating approaches responsive to this accidental ubiquity of computation. This thesis attempts such a formulation by defining computation as a strange material, a plasticity shaped equally by its technical properties and the mode of production by which it is continuously re-produced. The definition is applied through a methodology of excursions: participatory explorations into two seemingly disparate sites of computation, connected in the ways they manifest a labor of care. First, we visit the social infrastructures that constitute the Linux kernel, examining strange entanglements of programming and care in the world's largest design process. This is followed by a tour into the thorny lands of artificial intelligence, situated in the smart replies of LinkedIn. Here, we investigate the fluctuating border between the artificial and the human, with participants performing AI and formulating new Turing tests in the process. These excursions afford an understanding of computation as fundamentally re-produced through interaction, a strange kind of affective work, the understanding of which is crucial if we aim to disarm the critical accidents of our present future.
8

Validation of individual consciousness in strong artificial intelligence : an African theological contribution

Forster, Dion Angus 30 June 2006 (has links)
The notion of identity has always been central to the human person's understanding of self. The question "who am I?" is fundamental to the human being. Answers to this question have come from a wide range of academic disciplines. Philosophers, theologians, scientists, sociologists and anthropologists have all sought to offer some insight. The question of individual identity has traditionally been answered from two broad perspectives. The objectivist approach has sought to answer the question through empirical observation: you are a mammal, you are a Homo sapiens, you are male, you are African, etc. The subjectivist approach has sought to answer the question through phenomenological exploration: I understand myself to be sentient, I remember my past, I feel love, etc. A recent development in the field of computer science has, however, shown a shortcoming in both of these approaches. Ray Kurzweil, a theorist in strong artificial intelligence, suggests the possibility of an interesting identity crisis. He suggests that if a machine could be programmed and built to accurately and effectively emulate a person's conscious experience of being 'self', it could lead to a crisis of identity. In an instance where the machine and the person it is emulating can neither be objectively distinguished (i.e., both display the same characteristics of the person in question) nor subjectively distinguish themselves (i.e., both believe themselves to be the person in question, since both have an experience of being that person, based on memory, emotion, understanding and other subjective realities), how is the true identity of the individual validated? What approach can be employed to distinguish which of the two truly is the person in question and which is the emulation of that person? This research investigates this problem and presents a suggested solution to it. The research begins with an investigation of the claims of strong artificial intelligence and discusses Ray Kurzweil's hypothetical identity crisis. It also discusses various approaches to consciousness and identity, showing both their value and their shortfalls within the scope of this identity conundrum. In laying the groundwork for the solution offered in this thesis, the integrative theory of Ken Wilber is presented as a model that draws on the strengths of the objectivist and subjectivist approaches to consciousness, yet also emphasises the need for an approach which is not based on individual data alone (i.e., the objectivist "you are", or the subjectivist "I am"). Rather, it requires an intersubjective knowing of self in relation to others. The outcome of this research project is an African theological approach to self-validating consciousness in strong artificial intelligence. This takes the form of an African theology of relational ontology. The contribution falls within the ambit of Christian anthropology and Trinitarian theology, stressing the Christian belief that true identity is both shaped by, and discovered in, relationship with others. The clearest expression of this reality is to be found in the African saying Umuntu ngumuntu ngabantu (a person is a person through other persons). / Systematic Theology / D.Th.
