  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Nanofluorures de métaux à structures hiérarchisées / Nanofluorides of metals with hierarchized structures

Doubtsof, Léa 06 December 2016 (has links)
Hierarchical structures of iron and nickel fluorides combined with carbonaceous or metallic matrices were obtained by two fluorination routes: gas-solid fluorination of nanoparticles with pure molecular fluorine, and fluorination in liquid media using NH4F as the fluorinating agent. The resulting nanostructures were characterized by standard techniques: electron microscopy, vibrational spectroscopy (infrared and Raman), and thermogravimetric analysis. Particular attention was paid to determining both the global and the local structure by X-ray diffraction, using Rietveld refinement of the diffraction patterns and Pair Distribution Function (PDF) analysis of patterns recorded at a synchrotron. The synthesis conditions and formation mechanisms of several assemblies could thus be understood: 0D core-shell nickel/nickel fluoride particles, 1D double-walled carbon nanotubes filled with iron fluoride, and 3D single- and multi-walled carbon nanotubes decorated with flower-like nickel fluoride nanoparticles. Finally, the nanostructures most favorable to lithium-ion diffusion (the core-shell and flower-like structures) were tested as cathode materials in lithium secondary batteries.
62

PDF shopping system with the lightweight currency protocol

Wang, Yingzhuo 01 January 2005 (has links)
This project is a web application for two types of bookstores: an E-Bookstore and a PDF-Bookstore. Both sell documents; however, the E-Bookstore does not use a currency. The PDF-Bookstore sells PDF documents and issues a lightweight currency called Scart. Customers can sell their PDF documents to earn Scart and buy PDF documents by paying with Scart.
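The abstract does not describe the lightweight currency protocol's internals, so the following is only an illustrative sketch of the Scart bookkeeping it implies; the class and method names are hypothetical, not part of the project.

```python
# Hypothetical sketch of Scart bookkeeping: customers earn Scart by selling
# PDF documents and spend it to buy them. Illustrative only; the actual
# lightweight-currency protocol is not specified in the abstract.
class ScartLedger:
    def __init__(self):
        self.balances = {}  # account name -> Scart balance

    def credit(self, account, amount):
        # A customer earns Scart by selling a PDF document to the store.
        self.balances[account] = self.balances.get(account, 0) + amount

    def debit(self, account, amount):
        # A customer spends Scart to buy a PDF document.
        if self.balances.get(account, 0) < amount:
            raise ValueError("insufficient Scart balance")
        self.balances[account] -= amount

ledger = ScartLedger()
ledger.credit("alice", 10)       # Alice sells a document for 10 Scart
ledger.debit("alice", 4)         # ...and buys one for 4 Scart
print(ledger.balances["alice"])  # 6
```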
63

Automatic conversion of PDF-based, layout-oriented typesetting data to DAISY: potentials and limitations

Nikolaus, Ulrich, Dobroschke, Julia January 2009 (has links)
Only two percent of new books released in Germany are professionally edited for visually impaired people. However, more and more print publications are made available to the public in digital formats through online content delivery platforms like “libreka!”. The automatic conversion of such contents into DAISY would considerably increase the number of publications available in accessible formats. Still, most data available on “libreka!” is published as non-tagged PDF. In this paper, we examine the feasibility of automatically converting “libreka!”-based content into DAISY and analyze the capabilities and limitations of current conversion tools.
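The core difficulty named above is that a non-tagged PDF yields only plain text, so document structure must be guessed heuristically before DAISY/DTBook markup can be generated. The following is a deliberately minimal sketch of that principle; real converters (and the DTBook grammar) are far more elaborate, and the heading heuristic here is invented for illustration.

```python
# Toy illustration: recover structure from untagged text and emit a
# DTBook-like fragment. The length-based heading heuristic is an assumption,
# not how production PDF-to-DAISY converters actually work.
def lines_to_dtbook(lines):
    out = ["<level1>"]
    for line in lines:
        stripped = line.strip()
        if not stripped:
            continue
        # Crude heuristic: short lines without terminal punctuation
        # are treated as headings.
        if len(stripped) < 40 and not stripped.endswith((".", "!", "?")):
            out.append(f"  <h1>{stripped}</h1>")
        else:
            out.append(f"  <p>{stripped}</p>")
    out.append("</level1>")
    return "\n".join(out)

print(lines_to_dtbook(["Chapter 1", "This is the first paragraph of the book."]))
```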
64

Entwicklung eines Konzepts für die Erstellung und Bearbeitung intertextueller Dokumente unter Beachtung kontextadäquater Gestaltung / Development of a concept for creating and editing intertextual documents with regard to context-appropriate design

Klenner, Michael 27 January 2021 (has links)
Nowadays it is important in many situations to absorb knowledge quickly. Text- and image-dominated media are frequently used to convey such content. The choice of a suitable medium depends on the prevailing communication situation, since the effect of a medium depends on situational conditions. For the best possible communication of content, the documents used must therefore be designed appropriately for their context, accounting for the potentials and constraints of a given reception situation. If the same content is to be conveyed in parallel in different reception contexts, several documents with individual, situation-specific adaptations must consequently be created. These documents differ in how they present their content, but because the content is similar they overlap intertextually to a certain degree. The goal of this work was to develop a technical concept with which such intertextual documents can be created and maintained for different reception contexts with reduced effort. First, aspects of digital text production were examined with reference to theories from media studies and linguistics, analyzing to what extent different communication situations benefit from specially adapted carrier media, which factors influence the creation of such context-specific documents, and in which textual criteria they differ. Subsequently, exemplary intertextual materials were examined in an empirical case study: 36 combinations of digital presentation slides and accompanying handouts were subjected to a pairwise document comparison, with the goal of determining how the texts differ and which intertextual relations exist.
For this investigation, methods and instruments of quantitative linguistics were employed and further developed. The method is based on a computer-assisted comparison of intertextual texts available in PDF format. In this context, the software PDF-Visual-Extractor was developed, which can extract text fragments from PDF files for a linguistic and media-oriented analysis and collect metrics for various textual criteria. The results showed that every document pair exhibits intertextual overlap, but that the measured intertextual share varies strongly across the corpus. In addition, intertextual text segments are almost always formatted differently in the two media. Building on this, technologies capable of producing intertextually related documents were evaluated. Finally, the identified weaknesses of these technologies served as the starting point for developing a new concept, which combines a single-source repository with media-specific editors. As proof of functionality, a technical software prototype was presented and validated. This work contributes to research in two areas: first, a methodology for measuring intertextuality was developed and applied to a corpus of presentation slides and associated handouts; second, a new approach to producing intertextual documents was presented, which puts the linguistic and medial design freedom of the documents at its center.
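One simple way to quantify the kind of intertextual overlap measured in the case study is the share of word n-grams one document has in common with another. This is only an illustration of the comparison idea; the thesis's PDF-Visual-Extractor operates on PDF text fragments and collects richer criteria than this.

```python
# Sketch: fraction of doc_a's word trigrams that also occur in doc_b,
# as a crude intertextuality score. Example texts are invented.
def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def intertextual_share(doc_a, doc_b, n=3):
    a, b = ngrams(doc_a, n), ngrams(doc_b, n)
    if not a:
        return 0.0
    return len(a & b) / len(a)  # fraction of doc_a's n-grams also in doc_b

slides = "media effects depend on the reception situation"
handout = "media effects depend strongly on the situation at hand"
print(round(intertextual_share(slides, handout), 2))  # 0.2
```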
65

Accurately extracting information from a finite set of different report categories and formats / Precis extraktion av information från ett begränsat antal rapporter med olika struktur och format på datan

Holmbäck, Jonatan January 2023 (has links)
POC Sports (hereafter simply POC) is a company that manufactures gear and accessories for winter sports as well as cycling. Their mission is to “Protect lives and reduce the consequences of accidents for athletes and anyone inspired to be one”. To do so, a lot of care needs to be put into making their equipment as protective as possible while still maintaining the desired functionality. To aid in this, their vendor companies run standardized tests to evaluate their products, and the results of these tests are compiled into reports for POC. The problem is that the different companies use different styles and formats, which can be classified into a finite set of categories, to convey this information. This project therefore aimed to provide a tool that POC can use to identify a report's category and then accurately extract the relevant data from it. An accuracy score was used as the metric to evaluate the tool with respect to extracting the relevant data, and development and evaluation proceeded in two evaluation rounds. Additional metrics were used to evaluate a number of existing tools: whether they were open source, how easy they are to set up, their pricing, and how much of the task they could cover.
A proof-of-concept tool was realized and demonstrated an accuracy of 97%, which was considered adequate compared to the minimum required accuracy of 95%. However, due to the available time and resources, the sample size was limited, so this accuracy may not extend to the entire population with a confidence level higher than 75%. The results of evaluating the iterative improvements in the tool suggest that, by addressing issues as they are found, an acceptable score can be achieved for a large fraction of the general population. Additionally, it would be beneficial to keep a catalog of the recurring solutions applied to different problems, so that they can be reused for similar problems, improving extensibility and generalizability. To build on the work performed in this thesis, the next steps might be to look into similar problems for other formats and to examine how different PDF generators affect the ability to extract and process data present in PDF reports.
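The link the abstract draws between a small sample, a 97% observed accuracy, and a modest confidence level can be made concrete with a standard binomial interval. The thesis's exact statistical method is not stated, so the Wilson score bound below is just one conventional way to express the sample-size effect, with invented example counts.

```python
import math

# Wilson score lower bound for an observed success proportion: with only
# ~33 test reports, a 97% observed accuracy gives a noticeably lower bound
# on population accuracy. z ~= 1.15 corresponds to roughly 75% two-sided
# confidence (illustrative choice, not the thesis's stated method).
def wilson_lower_bound(successes, n, z=1.15):
    p = successes / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / denom

# e.g. 32 of 33 reports extracted correctly (~97% observed accuracy)
print(wilson_lower_bound(32, 33) > 0.90)  # True
```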
66

Expanding KTH's Canvas ecosystem to support additional automated services : Automating the injection of theses and their metadata into a digital archive / Utöka KTHs Canvas-ekosystem för att stödja ytterligare automatiserade tjänster : Automatisera injektionen av avhandlingar och deras metadata i ett digitalt arkiv

Fallahian, Shayan, Zioris, Konstantinos January 2020 (has links)
Whenever a student submits the final version of their thesis, a series of processes is triggered to finalize and archive the report. These processes are often handled inefficiently, resulting in excessive manual labor and costs that could be prevented through automation. This report describes a solution that automates the series of processes that follow a final thesis submission. By utilizing the information available in a Canvas course and the content of the submitted thesis, much of the manual cut-and-paste effort is avoided. Entering this data into DiVA is done through automated browser interaction, as DiVA does not offer an application programming interface that could be used.
The conclusion is that it is possible to automate this process with a headless browser. However, the automated parsing of the PDF version of the thesis proved to be inconsistent, which makes the extracted data inconsistent. With some improvements to the parsing module, the entire process could be fully automated.
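The parsing fragility described above typically comes from heuristics of the following kind: guessing metadata fields from the text of a title page, where field layout varies between document templates. The names and patterns below are illustrative only, not KTH's actual cover-page format or the thesis's parsing module.

```python
import re

# Toy metadata guesser for a thesis title page: take the first non-empty
# line as the title and look for a labelled author line. Real cover pages
# vary, which is exactly why such parsing tends to be inconsistent.
def guess_metadata(first_page_text):
    lines = [l.strip() for l in first_page_text.splitlines() if l.strip()]
    meta = {"title": lines[0] if lines else ""}
    for line in lines:
        m = re.match(r"(?:Author|Av):\s*(.+)", line)
        if m:
            meta["author"] = m.group(1)
    return meta

page = "Expanding the Canvas ecosystem\nAuthor: Shayan Fallahian\nStockholm 2020"
print(guess_metadata(page))
```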
67

Increased evasion resilience in modern PDF malware detectors : Using a more evasive training dataset / När surnar filen? : Obfuskeringsresistens vid detektion av skadliga PDF-filer

Ekholm, Oscar January 2022 (has links)
The large-scale usage of the PDF format, coupled with its versatility, has made it an attractive target for carrying and deploying malware. Traditional antivirus software struggles against new malware and PDF's vast obfuscation options. In the search for better detection systems, machine-learning-based detectors have been developed. Although their approaches vary (some strictly examine structural features of the document, whereas others examine the behavior of embedded code), they generally share high accuracy against the evaluation data they have been tested on. However, structural machine-learning-based PDF malware detectors have been found to be weak against the targeted evasion attempts present in more sophisticated malware. Such evasion attempts typically exploit knowledge of what the detection system associates with 'benign' and 'malicious' to emulate benign features, or exploit a bug in the implementation, with the purpose of evading the detector. Since the introduction of such evasion attacks, more structural detectors have been developed without mitigations against them. This thesis aggregates the existing knowledge of evasion strategies and applies them against a reproduction of a recent detection system not previously tested for evasion, finding that it is susceptible to various evasion techniques. Additionally, the reproduced detector is experimentally trained on a combination of the standard data and the recently published CIC-Evasive-PDFMal2022 dataset, which contains malware samples with evasive properties. The evasion-trained detector is tested against the same set of evasion attacks. The results of the two detectors are compared, concluding that supplementing the training data with evasive samples yields a more evasion-resilient detector.
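The central idea above (adding evasive samples to the training data shifts what the detector learns) can be shown with a deliberately tiny classifier. The two-dimensional feature vectors below are invented stand-ins for structural PDF features; the thesis's detector and feature set are far more complex.

```python
# Toy nearest-centroid "detector" over invented structural feature vectors.
# An evasive sample that mimics benign structure fools the model trained
# only on standard data, but not the model whose training set also
# contains evasive malware.
def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, benign_c, malicious_c):
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return "malicious" if dist(x, malicious_c) < dist(x, benign_c) else "benign"

benign = [[1.0, 0.0], [0.9, 0.1]]
malicious = [[0.0, 1.0], [0.1, 0.9]]
evasive = [[0.8, 0.5], [0.7, 0.6]]  # malware mimicking benign structure

x = [0.75, 0.55]  # an evasive sample to classify
before = classify(x, centroid(benign), centroid(malicious))
after = classify(x, centroid(benign), centroid(malicious + evasive))
print(before, after)  # benign malicious
```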
68

Study of the Higgs boson decay H → ZZ(∗) → 4ℓ and inner detector performance studies with the ATLAS experiment

Selbach, Karoline Elfriede January 2014 (has links)
The Higgs mechanism is the last piece of the Standard Model (SM) to be confirmed experimentally; it is responsible for giving mass to the electroweak W± and Z bosons. Experimental evidence for the Higgs boson is therefore important and is currently explored at the Large Hadron Collider (LHC) at CERN. The ATLAS experiment (A Toroidal LHC ApparatuS) analyses a wide range of physics processes from collisions produced by the LHC at a centre-of-mass energy of 7-8 TeV and a peak luminosity of 7.73×10³³ cm⁻²s⁻¹. This thesis concentrates on the discovery and mass measurement of the Higgs boson. The analysis using the H → ZZ(∗) → 4ℓ channel is presented, where ℓ denotes electrons or muons. Statistical methods with non-parametric models are successfully cross-checked with parametric models. The per-event errors studied to improve the mass determination decrease the total mass uncertainty by 9%. The other main focus is the performance of the initial, and possible upgraded, layouts of the ATLAS inner detector. The silicon cluster size, channel occupancy and track separation in jets are analysed for a detailed understanding of the inner detector. The inner detector is exposed to high particle fluxes and is crucial for tracking and vertexing. The simulation of the detector performance is improved by adjusting the cross talk of adjacent hit pixels and the Lorentz angle in the digitisation. To prepare the ATLAS detector for upgrade conditions, the performance is studied with pile-up of up to 200. Several possible layout configurations were considered before converging on the baseline one used for the Letter of Intent. This includes increased granularity in the Pixel and SCT detectors and additional silicon detector layers. This layout was validated to accomplish the design target of an occupancy < 1% throughout the whole inner detector. The H → ZZ(∗) → 4ℓ analysis benefits from the excellent momentum resolution, particularly for leptons down to pT = 6 GeV.
The current inner detector is designed to provide momentum measurements of low-pT charged tracks with a resolution of σ(pT)/pT = 0.05% × pT [GeV] ⊕ 1% over a range of |η| < 2.5. The discovery of a new particle in July 2012, compatible with the Standard Model Higgs boson, included the 3.6σ excess of events observed in the H → ZZ(∗) → 4ℓ channel at 125 GeV. The per-event error was studied using a narrow mass range concentrated around the signal peak (110 GeV < mH < 150 GeV). The error on the four-lepton invariant mass is derived, and its probability density function (pdf) is multiplied by the conditional pdf of the four-lepton invariant mass given the error. Applying a systematics model dependent on the true mass of the discovered particle, a new fitting machinery was developed to exploit additional statistical methods for the mass measurement, resulting in a discovery with 6.6σ at mH = 124.3 +0.6/−0.5 (stat) +0.5/−0.3 (syst) GeV and μ = 1.7 ± 0.5 using the full 2011 and 2012 datasets.
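The per-event-error treatment described above can be written schematically as a likelihood in which each event's estimated mass resolution enters both directly and as a conditioning variable. The notation below is illustrative; the analysis's actual model is more detailed.

```latex
% Schematic per-event-error likelihood: the pdf of the estimated per-event
% resolution \sigma_i multiplied by the conditional pdf of the four-lepton
% invariant mass given that resolution.
\mathcal{L}(m_H) \;=\; \prod_{i \,\in\, \mathrm{events}}
  p\!\left(\sigma_i\right)\,
  p\!\left(m_{4\ell}^{\,i} \,\middle|\, m_H,\, \sigma_i\right)
```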
69

The Affective PDF Reader

Radits, Markus January 2010 (has links)
The Affective PDF Reader is a PDF reader combined with affect recognition systems. The aim of the project is to research a way to provide the reader of a PDF with real-time visual feedback while reading the text, in order to influence the reading experience in a positive way. The visual feedback is given in accordance with the analyzed emotional state of the person reading the text; this is done by capturing and interpreting affective information with a facial expression recognition system. Further enhancements would include analysis of voice as well as gaze-tracking software, to be able to use the point of gaze when rendering the visualizations. The idea of the Affective PDF Reader mainly arose from admitting that the way we read text on computers, mostly with frozen and dozed-off faces, is somehow an unsatisfactory state, or moreover a lonesome process and a poor form of communication. This work is also inspired by the significant progress and efforts in recognizing emotional states from video and audio signals, and the new possibilities that arise from them. The prototype system provided visualizations of footprints in different shapes and colours, controlled by captured facial expressions, to enrich the textual content with affective information. The experience showed that visual feedback controlled by facial expressions can bring another dimension to the reading experience if the feedback is done in a frugal and non-intrusive way, and that the involvement of the users can be enhanced.
70

A measurement of the W boson charge asymmetry with the ATLAS detector

Whitehead, Samuel Robert January 2012 (has links)
Uncertainties on the parton distribution functions (PDFs), in particular those of the valence quarks, can be constrained at LHC energies using the charge asymmetry in the production of W± bosons. This thesis presents a measurement of the electron-channel lepton charge asymmetry using 497 pb⁻¹ of data recorded with the ATLAS detector in 2011. The measurement is included in PDF fits using the HERAPDF machinery and is found to have some constraining power beyond that of existing W charge asymmetry measurements.
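The lepton charge asymmetry measured above is conventionally defined from the charge-separated differential cross sections, here written differential in the lepton pseudorapidity η_ℓ:

```latex
% Standard definition of the lepton charge asymmetry in W production.
A_\ell(\eta_\ell) \;=\;
  \frac{\mathrm{d}\sigma(W^{+}\!\to\ell^{+}\nu)/\mathrm{d}\eta_\ell
      \;-\; \mathrm{d}\sigma(W^{-}\!\to\ell^{-}\bar{\nu})/\mathrm{d}\eta_\ell}
       {\mathrm{d}\sigma(W^{+}\!\to\ell^{+}\nu)/\mathrm{d}\eta_\ell
      \;+\; \mathrm{d}\sigma(W^{-}\!\to\ell^{-}\bar{\nu})/\mathrm{d}\eta_\ell}
```

Because W⁺ production at the LHC is driven largely by u-valence quarks and W⁻ by d-valence quarks, this ratio is sensitive to their relative PDFs, which is why it enters the fits mentioned above.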
