  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Employing and improving machine learning for detection of phishing URLs

Yaitskyi, Andrii January 2022 (has links)
Background: Phishing is a social engineering technique that fools users by impersonating a trusted party in order to steal their personal data. Phishing often spreads through email services, and browsers are not always able to block phishing URLs. The problem persists and is not decreasing, so open issues remain to be addressed. Objectives: The object of research is the method of processing and detecting phishing URLs. This study aims to identify the possible assumptions for automating the processing and detection of phishing URLs, to find out how the efficiency and the detection of phishing URLs can be improved, and to understand which machine learning algorithms are best suited for detecting phishing URLs. Methods: A preliminary study showed that the available data were not sufficient and that a better result could be achieved with more efficient methods, so machine learning was chosen, and a quantitative study was carried out to understand which machine learning algorithm is best to use in further work. The subject of research is methods and means of processing and detecting phishing URLs; the research methods are analysis, observation, modeling and experimental research. Results: The results show a higher percentage than in the algorithm comparison. The automation procedure was achieved, and the accuracy of phishing URL detection improved considerably, reaching 98.417%. Compared to manual analysis of phishing URLs and to the other algorithms, this is a better result. Conclusions: Challenges remain in handling phishing URLs and in achieving efficient and better detection. However, further research is needed to find out how the detection of phishing URLs can be improved further.
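As an illustration of the kind of input such a machine-learning pipeline consumes, the sketch below extracts a few lexical features commonly used in phishing-URL detection. This is a hedged illustration, not the thesis's actual feature set: the example URL and the choice of features are assumptions.

```python
# Illustrative sketch (not the thesis's pipeline): a few lexical features
# often used when classifying URLs as phishing or benign.
from urllib.parse import urlparse

def lexical_features(url: str) -> dict:
    """Return a small set of lexical features computed from a URL."""
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "url_length": len(url),              # phishing URLs tend to be long
        "num_dots": host.count("."),         # many subdomain levels
        "num_hyphens": host.count("-"),      # hyphenated brand imitations
        "num_digits": sum(c.isdigit() for c in url),
        "has_at_symbol": "@" in url,         # '@' hides the real host
        "has_https": parsed.scheme == "https",
    }

# A typical deceptive pattern: the real host is example.net, not paypal.com.
feats = lexical_features("http://paypal.com.secure-login.example.net/verify?id=123")
```

A classifier would be trained on vectors of such features over labeled benign and phishing URLs.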
32

Möjligheten till föreläggande mot Internetleverantörer enligt 53 b § URL : – Under vilka omständigheter kan en Internetleverantör åläggas att blockera sina kunders tillgång till tjänster/webbsidor som används för att begå immaterialrättsintrång? / The possibility of injunctions against Internet service providers according to 53 b § URL : – In which circumstances is it possible to enjoin an Internet service provider to block their customers' access to services/web pages which are used to commit intellectual property infringement?

Wallin, Alexander, Brynolf, Daniel January 2016 (has links)
This thesis examines, first, whether an Internet service provider has accessory liability under the Swedish Penal Code (BrB) when, in its role as an intermediary, it gives its customers access via its Internet connection service to services/web pages that are used to commit intellectual property infringement under current law, and second, whether the Swedish legal position is compatible with the directives issued by the EU in this area. The thesis investigates the possibility of an injunction under penalty of a fine according to 53 b § URL, in relation to legislation, preparatory works and EU directives. Both Swedish case law in the area and a preliminary ruling from the Court of Justice of the EU are examined. In an attempt to understand current Swedish law, a brief comparison is also made with Danish law and the ability of Danish courts to issue such injunctions against Internet service providers. Based on the provision on injunctions in 53 b § URL, it should not be possible for a Swedish court to issue an injunction against an Internet service provider that merely provides the Internet connection service to its customers. It follows from the Swedish preparatory works that issuing such an injunction requires both knowledge of the infringements and a contract-like relationship between the Internet service provider and the infringer. We finally conclude that even if Article 8(3) of the InfoSoc Directive can be considered to have been implemented correctly according to its wording, the application of 53 b § URL in accordance with the preparatory works is currently such that it is likely to conflict with the purpose of Article 8(3). In our view, this should mean that an interpretation of the article in conformity with the directive not only can but must be applied by Swedish courts. Thus, in our view, an Internet service provider can, by being prohibited from contributing to infringement, be ordered under 53 b § URL to block its customers' access to services/web pages that are used to commit intellectual property infringement, even where the provider contributes only by providing the Internet connection service.
33

Analyse du DNS et analyse sémantique pour la détection de l'hameçonnage / DNS and semantic analysis for phishing detection

Marchal, Samuel 22 June 2015 (has links)
Phishing is a modern swindle that targets users of electronic communications and aims to persuade them to perform actions for another's benefit. Since phishing attacks rely mostly on social engineering, and most phishing vectors leverage links represented by domain names and URLs, we introduce new solutions to cope with phishing. These solutions rely on the lexical and semantic analysis of the composition of domain names and URLs. Both of these resource pointers are created and obfuscated by phishers to trap their victims. Hence, we demonstrate in this document that phishing domain names and URLs present similarities in their lexical and semantic composition that differ from the composition of legitimate domain names and URLs. We use this characteristic to build models representing the composition of phishing URLs and domain names, using machine learning techniques and natural language processing models. The built models are used for several applications, such as the identification of phishing domain names and phishing URLs, the rating of phishing URLs, and the prediction of domain names used in phishing attacks. All the introduced techniques are assessed on ground truth data and show their efficiency by meeting speed, coverage and reliability requirements. This document shows that lexical and semantic analysis can be applied to domain names and URLs, and that this application is relevant for detecting phishing attacks.
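The lexical/semantic decomposition described above can be sketched as word segmentation of concatenated domain labels. The tiny wordlist and the greedy longest-match strategy below are illustrative assumptions, not the actual method or vocabulary used in the thesis.

```python
# Hedged sketch of the idea behind lexical/semantic domain-name analysis:
# splitting a concatenated label into dictionary words, whose presence and
# meaning can then feed a phishing model.
WORDS = {"secure", "bank", "login", "account", "update", "pay", "online"}

def segment(label: str, words=WORDS):
    """Greedy longest-match segmentation of a domain label into known words."""
    tokens, i = [], 0
    while i < len(label):
        for j in range(len(label), i, -1):   # try the longest candidate first
            if label[i:j] in words:
                tokens.append(label[i:j])
                i = j
                break
        else:                                # no word matched: emit one char
            tokens.append(label[i])
            i += 1
    return tokens

print(segment("securebanklogin"))  # → ['secure', 'bank', 'login']
```

Labels dominated by persuasive words like "secure" or "login" are a classic lexical signal of phishing domains.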
34

Medhjälpare till brott mot URL? : Vem och när anses man vara medhjälpare? / Accessory to crimes against the Swedish Copyright Act (URL)? : Who is considered an accessory, and when?

Jansizian, George January 2011 (has links)
The Pirate Bay was convicted by the Svea Court of Appeal on 26 November 2010 for aiding crime against the Swedish Copyright Act, on the grounds that the service promoted the sharing of copyrighted material without the authors' consent. The wording in chapter 23, section 4, second paragraph of the Swedish Penal Code reads: "Punishment prescribed in this Code for a specific act shall be imposed not only on the person who carried out the deed but also on anyone who furthered it by advice or deed. The same shall apply to acts punishable under another law or statute for which imprisonment is prescribed." Nowadays there are several services of a similar nature, such as the search engine Google and the video streaming service Youtube. These services have not been tested under Swedish law, since they are protected by the Swedish E-Commerce Act. Google and Youtube actively take action to prevent the occurrence of copyrighted material published without the authors' consent. Nevertheless, a considerable amount of copyrighted material has most probably been made available to the public without the authors' consent. These companies can in theory establish a policy against the proliferation of unauthorized material sufficient to offset the damage to the copyright owners, and thereby still enjoy the title of information society service under the E-Commerce Act, since the social benefits outweigh the injury caused. The Court of Appeal's judgment is now a landmark for similar services when it comes to the interpretation of aiding crime against the Swedish Copyright Act. It is nevertheless important that the Supreme Court of Sweden and the Court of Justice of the EU define the blurry boundary between the definition of information society services and the interpretation of the wording in chapter 23, section 4, second paragraph of the Swedish Penal Code.
35

Removing DUST using multiple alignment of sequences

Rodrigues, Kaio Wagner Lima 21 September 2016 (has links)
FAPEAM - Fundação de Amparo à Pesquisa do Estado do Amazonas / A large number of URLs collected by web crawlers correspond to pages with duplicate or near-duplicate content. These duplicate URLs, generically known as DUST (Different URLs with Similar Text), adversely impact search engines, since crawling, storing and using such data imply a waste of resources, the building of low-quality rankings and poor user experiences. To deal with this problem, several studies have proposed methods to detect and remove duplicate documents without fetching their contents. To accomplish this, the proposed methods learn normalization rules that transform all duplicate URLs into the same canonical form. This information can be used by crawlers to avoid fetching DUST. A challenging aspect of this strategy is to efficiently derive the minimum set of rules that achieves large reductions with the smallest false-positive rate. As most methods are based on pairwise analysis, the quality of the rules is affected by the criterion used to select the examples and by the availability of representative examples in the training sets. To avoid processing large numbers of URLs, these methods employ techniques such as random sampling or restricting the search for DUST to within individual sites, which prevents the generation of rules involving multiple DNS names. As a consequence, current methods are very susceptible to noise and, in many cases, derive rules that are overly specific. In this thesis, we present a new approach to deriving quality rules that takes advantage of a multi-sequence alignment strategy. We demonstrate that a full multi-sequence alignment of URLs with duplicated content, performed before the generation of the rules, can lead to very effective rules. Experimental results demonstrate that our approach achieved larger reductions in the number of duplicate URLs than our best baseline on two different web collections, while being much faster. We also present a distributed version of our method, using the MapReduce framework, and demonstrate its scalability by evaluating it on a set of 7.37 million URLs.
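A minimal sketch of what applying a learned URL normalization rule looks like in practice: duplicate URLs are rewritten to one canonical form so a crawler can skip re-fetching them. The regex-based rule representation, the example.com URLs and the `normalize` helper are assumptions for illustration, not the rule format derived by the thesis's multi-sequence alignment approach.

```python
# Illustrative sketch: a normalization rule collapses DUST (different URLs
# with similar text) into one canonical URL before crawling.
import re

# Hypothetical learned rule: "story.php?id=N" and "story/N" serve the same page.
rule = (re.compile(r"^http://example\.com/story\.php\?id=(\d+)$"),
        r"http://example.com/story/\1")

def normalize(url: str, rules=(rule,)) -> str:
    """Rewrite url to canonical form using the first matching rule."""
    for pattern, template in rules:
        if pattern.match(url):
            return pattern.sub(template, url)
    return url

# Both duplicate variants collapse to a single canonical URL.
seen = set()
for u in ["http://example.com/story.php?id=42", "http://example.com/story/42"]:
    seen.add(normalize(u))
```

A crawler consulting `seen` would fetch the page once instead of twice; the hard part, per the abstract, is learning a small rule set that generalizes without false positives.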
36

A MACHINE LEARNING BASED WEB SERVICE FOR MALICIOUS URL DETECTION IN A BROWSER

Hafiz Muhammad Junaid Khan 12 December 2019 (has links)
Malicious URLs pose serious cyber-security threats to Internet users. It is critical to detect malicious URLs so that they can be blocked from user access. In the past few years, several techniques have been proposed to differentiate malicious URLs from benign ones with the help of machine learning. Machine learning algorithms learn trends and patterns in a data set and use them to identify anomalies. In this work, we attempt to find generic features for detecting malicious URLs by analyzing two publicly available malicious URL data sets. To achieve this, we identify a list of substantial features that can be used to classify all types of malicious URLs. We then select the most significant lexical features using Chi-Square and ANOVA based statistical tests. The effectiveness of these feature sets is then tested using a combination of single and ensemble machine learning algorithms. We build a machine learning based real-time malicious URL detection system, exposed as a web service, to detect malicious URLs in a browser. We implement a Chrome extension that intercepts a browser's URL requests and sends them to the web service for analysis, and the web service itself, which classifies a URL as benign or malicious using the saved ML model. We also evaluate the performance of our web service to test whether the service is scalable.
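The Chi-Square feature-selection step described above can be illustrated on a 2x2 contingency table: a binary lexical feature scores highly when it splits malicious and benign URLs unevenly. The counts below are invented for illustration, and the hand-rolled statistic is a sketch of the test, not the toolchain used in the thesis.

```python
# Hedged sketch of chi-square feature scoring for malicious-URL detection.
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 contingency table:
       rows = feature present/absent, columns = malicious/benign.
       a = present & malicious, b = present & benign,
       c = absent  & malicious, d = absent  & benign."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

# Hypothetical counts: '@' appears in 40/50 malicious and 5/50 benign URLs,
# so the feature is strongly associated with the label.
score_at = chi_square_2x2(40, 10, 5, 45)

# A feature independent of the label scores zero.
score_flat = chi_square_2x2(25, 25, 25, 25)
print(score_at > score_flat)  # → True
```

In a real pipeline the highest-scoring features are kept and the rest dropped before training the single and ensemble classifiers.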
37

Informační systém laboratoře inteligentních systémů / Information System of Laboratory of Intelligent Systems

Kundrát, Miloš January 2009 (has links)
The goal of my master's thesis is an online reservation and inventory information system for the property of the laboratory of the Department of Intelligent Systems at the Faculty of Information Technology (DIS FIT). The system records and manages all property of the laboratory and collects detailed information (description and utilization of the property, related photos and associated documents). It makes it possible to manage not only the property but also the associated documents in electronic form, and it contains a full-featured reservation system with the possibility of entering and handling reservations and loans. The system is divided into an administrator part and a user part. A search program unit is part of the DIS FIT system: it makes it possible to look up documents online on the Internet, with minimal effort from the authorized user, on the basis of keywords (author's name, document title, and others), and to store documents directly into the system's database.
38

Generátor vědeckých webových portálů / Scientific Web Portal Generator

Kundrát, Miloš January 2009 (has links)
The entire project, of which this thesis is a part, consists of a GUI, inter-process communication, the software's communication with the user, and a script for automatically parsing selected entities using a method of extracting semantic information from marked-up text. The last-named part is the main content of my thesis. The thesis's objective is a script (prototype) which, given an input XML file with the names of research workers, returns an output XML file with the URLs of their home pages and of pages listing their publications. A great deal of my thesis is devoted to assembling first-rate test data and, finally, to a thorough statistical analysis of the behaviour of the resulting script that I have programmed. This analysis reports the percentage success rate and the speed of the designed script. The script will be prepared for integration into the common project.
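The script's contract described above (XML of researcher names in, XML of home-page URLs out) might be sketched as follows. The element names, the `resolve` helper, and the lookup table standing in for the actual web search step are assumptions for illustration, not the thesis's implementation.

```python
# Illustrative sketch of the described pipeline: read an XML list of
# researcher names, look up each one, and emit XML with any found URLs.
import xml.etree.ElementTree as ET

# Hypothetical lookup table standing in for the real web-search step.
FOUND = {"Jane Doe": "http://example.org/~jdoe"}

def resolve(input_xml: str) -> str:
    """Map <researcher> names in input_xml to homepage URLs, as XML."""
    root = ET.fromstring(input_xml)
    out = ET.Element("results")
    for person in root.iter("researcher"):
        entry = ET.SubElement(out, "researcher", name=person.text)
        url = FOUND.get(person.text)
        if url:                              # omit the element when not found
            ET.SubElement(entry, "homepage").text = url
    return ET.tostring(out, encoding="unicode")

result = resolve("<researchers><researcher>Jane Doe</researcher></researchers>")
```

The success-rate analysis in the thesis would then amount to counting, over the test data, how often a `homepage` element could be filled in correctly.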
39

3D-skrivarteknik, mode och framställning av exemplar för privat bruk inom upphovsrätten : En modernisering av upphovsrättslagstiftningen i takt med teknikens framfart / 3D printing technology, fashion and reproduction for private use in copyright law : A modernisation of the Copyright Act as a result of technological progress

Andersson, Madelene January 2020 (has links)
Abstract: Copyright is seen everywhere in society. It is structured to accommodate several perspectives: a balance between protecting the individual copyright owner's rights and encouraging creativity on both an individual and a societal level. The copyright owner cannot be granted an overly extensive protection at the expense of the other interests. For this reason, a copy of someone else's copyright-protected work may be made as long as the copying is for private use. Copying is therefore something that the copyright owner must endure, but in some cases the right to copy for private use has to be constrained. A potential future problem arises at the point when a private person can 3D print in their own home. The rapid development of 3D printing makes the technology more capable and gives it increasing influence over society. Today, however, an advanced 3D printer is expensive, so it is still unusual for private persons to have one at home. For the fashion industry, the technology comes with advantages but also some disadvantages. For some years, 3D printers have been able to print jewelry, watches and shoes. Lately, the technology can also print fashion products in materials such as leather and textiles. If the technology continues to develop as it does today, there is a chance, or a risk, of private persons having access to 3D printers at a "printing house" or in their own home, similar to a traditional paper printer today. The possibility of printing whatever a person wants will be a threat to manufacturing companies and their retailers: a person can avoid purchasing a product on the market by producing his own, making a copy of someone else's copyright-protected work with the disclaimer that the copy is for private use. The technological development will materialize challenges that the legislator has to consider and respond to, especially in the area of copyright. The day a copy can be made as cheaply and as quickly as purchasing the product on the market, the principal rule regarding private copying must be limited; otherwise, manufacturing companies and their retailers will be threatened and risk being outcompeted. The conclusion can be drawn that an extended private copying levy, in combination with protection of technical measures and a requisite of reasonable use in art. 5 of the InfoSoc Directive and 12 § URL, can solve the problem of 3D printers and copying for private use. Whether the problems with 3D printing and copyright will become a reality depends on the future development of the technology.
40

Fisk- och fågelpredations påverkan på den bentiska makroevertebratfaunans sammansättning under tidig vår i Tåkern / The impact of fish and waterfowl predation on the composition of the benthic macroinvertebrate fauna during early spring in Lake Tåkern

Saarinen Claesson, Per January 2012 (has links)
Predation is one of many factors that shape the structure of the macroinvertebrate community in lakes, wetlands and watercourses. Earlier studies have not examined how fish and waterfowl predation affect macroinvertebrates during shorter periods in spring. I performed an exclosure study in the shallow eutrophic Lake Tåkern, located in the western part of Östergötland County, Sweden. The experiment ran for three weeks (1-21 April 2012), when the water temperature was low and the density of migrating diving ducks was high. The cages consisted of 130-liter plastic boxes (length × height × width: 79 × 57 × 42 cm) and covered three of the four treatments: general predation (open cages), bird exclusion (net with mesh size 90 × 45 mm) and no predation (net with mesh size 1 × 1 mm); in the fourth treatment, which controlled for cage effects, samples were collected outside the cages. The cages were placed in blocks of three, one for each of the three caged treatments, at eight different locations in the experimental area. No significant difference was detected between the treatments in the diversity (Shannon diversity index: F(3,25) = 1.39, P = 0.27; Simpson diversity index: F(3,25) = 1.47, P = 0.25; ANOVA), taxonomic richness (F(3,25) = 1.74, P = 0.19) or density (F(2,25) = 0.41, P = 0.75) of the macroinvertebrate fauna. The lack of effect of fish predation is most likely explained by the low water temperature during the experimental period, which reduces fish foraging rates. The lack of effect of waterfowl predation is most likely due to a low density of diving ducks, despite the season.
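The one-way ANOVA used above to compare treatments can be illustrated by computing the F statistic by hand. The group samples below are invented stand-ins for invertebrate densities under three treatments; they do not reproduce the reported values.

```python
# Hedged sketch of a one-way ANOVA F statistic, computed from scratch.
def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over a list of samples."""
    k = len(groups)                                  # number of treatments
    n = sum(len(g) for g in groups)                  # total observations
    grand = sum(sum(g) for g in groups) / n          # grand mean
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Near-identical groups give a small F: no detectable treatment effect,
# as in the study's result.
f_null = one_way_anova_f([[10, 12, 11], [11, 13, 12], [10, 11, 13]])

# Strongly separated groups give a large F: a clear treatment effect.
f_effect = one_way_anova_f([[1, 2, 1], [10, 11, 10], [20, 21, 20]])
```

The F statistic is then compared against the F distribution with (k - 1, n - k) degrees of freedom, which is where the reported F(3,25) and P values come from.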
