561

#Historia : Metadata som resurs i historieforskning / #Historia: Metadata as a resource in history research

Boström, Hanna January 2020
Under 2000-talet har det producerats och spridits mängder av betydande forskning som publicerats via databaser. En betydelsefull länk i kunskapsspridningen utgörs av akademin som i dag står för den största andelen av vetenskapliga publikationer. I denna historiografiskt inriktade undersökning kartläggs och undersöks en del av svensk historieforskning och historieskrivning som ägt rum under 2000-talet. Den vetenskapliga disciplin som undersöks inom det humanistiska fältet är historievetenskapen, avgränsat till de resultat av forskning som studenter gjort runtom på de svenska universiteten och högskolorna, i ämnet historia. Källmaterialet består av studentuppsatser som publicerats i databasen Digitala vetenskapliga arkivet, DiVA vilket i dag ses som det nationellt mest använda systemet för publikationsdata, med över 400 tusen publicerade fulltexter varav antal nedladdade uppgår över 53 miljoner gånger. Genom empiriska och teoretiska studier och bruket av både kvantitativa och kvalitativa metoder analyseras metadata, för att ge svar och resultat över frågan om vad studenter i det svenska utbildningssystemet, på universitet och högskolor skriver historia om under 2000-talet. För att få fram svar fungerade bibliometri som kunskapsområde och frågan om vilka nyckelord som dominerar och var de mest frekvent använda i taggningen (definitionerna) av forskningsresultaten ställdes. Delfrågan om hur bruket av nyckelord ser ut över tid användes för att få fram och se trend över resultat. Teoretiskt ramverk i undersökningen och läsning av de kvantitativa resultaten utgick från Kuhns teori om paradigm. Resultat visar att Genus, Historiebruk, Arkeologi, Historiemedvetande, Historiedidaktik, Identitet, Osteologi, Andra världskriget, Diskursanalys, Kalla kriget, Samer, Laborativ arkeologi och Utbildningshistoria utgör några ledande sakområden som studenterna skrivit historia om under 2000-talet. Resultat visar också att det nationella paradigmet är ledande för studenternas historieforskning, även om USA, Sovjetunionen, Jugoslavien, Japan, Finland, Sápmi och Israel förekommer frekvent. Avslutningsvis visade föreliggande undersökning att metadata kan användas som resurs i historieforskning samtidigt som det historiska perspektivet vidgas. / During the 2000s, a large body of significant research has been produced and disseminated through databases. An important link in this dissemination of knowledge is academia, which today accounts for the largest share of scientific publications. This historiographically oriented study maps and examines a part of the Swedish history research and history writing that took place during the 2000s. The scientific discipline investigated within the humanities is the science of history, limited to the research results that students have produced at Swedish universities and university colleges in the subject of history. The source material consists of student essays published in the database Digitala vetenskapliga arkivet (DiVA), today regarded as the most widely used national system for publication data, with over 400,000 published full texts, which have been downloaded more than 53 million times. Through empirical and theoretical studies, using both quantitative and qualitative methods, metadata is analyzed to answer the question of what students in the Swedish education system, at universities and university colleges, wrote history about during the 2000s.
To obtain answers, bibliometrics served as the underlying field of knowledge, and the question of which keywords dominated and were most frequently used in the tagging (the definitions) of the research results was posed. A sub-question on how keyword use develops over time was used to reveal trends in the results. The theoretical framework of the study, and the reading of the quantitative results, was based on Kuhn's theory of paradigms. The results show that Gender, Use of History, Archeology, Historical Consciousness, History Didactics, Identity, Osteology, the Second World War, Discourse Analysis, the Cold War, the Sami, Laboratory Archeology and Educational History are some of the leading subject areas that students wrote history about during the 2000s. The results also show that the national paradigm dominates the students' history research, although the United States, the Soviet Union, Yugoslavia, Japan, Finland, Sápmi and Israel occur frequently. In conclusion, the present study showed that metadata can be used as a resource in history research while broadening the historical perspective.
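
As a minimal illustration of the kind of bibliometric keyword analysis described above (the record structure and all values are hypothetical, not taken from the thesis):

    from collections import Counter

    # Hypothetical metadata records, as might be exported from DiVA.
    records = [
        {"year": 2004, "keywords": ["Genus", "Historiebruk"]},
        {"year": 2012, "keywords": ["Genus", "Andra världskriget"]},
        {"year": 2019, "keywords": ["Historiedidaktik", "Genus"]},
    ]

    # Overall keyword frequencies, answering the main question.
    overall = Counter(kw for r in records for kw in r["keywords"])
    print(overall.most_common(5))

    # Frequencies per year, for the sub-question on trends over time.
    by_year = {}
    for r in records:
        by_year.setdefault(r["year"], Counter()).update(r["keywords"])
    for year in sorted(by_year):
        print(year, by_year[year].most_common(3))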
562

Multimedia Forensics Using Metadata

Ziyue Xiang (17989381) 21 February 2024
The rapid development of machine learning techniques makes it possible to manipulate or synthesize video and audio information while introducing nearly undetectable artifacts. Most media forensics methods analyze the high-level data (e.g., pixels from video, temporal signals from audio) decoded from compressed media data. Since media manipulation or synthesis methods usually aim to improve the quality of such high-level data directly, acquiring forensic evidence from these data has become increasingly challenging. In this work, we focus on media forensics techniques that use the metadata in media formats, which includes container metadata and coding parameters in the encoded bitstream. Since many media manipulation and synthesis methods do not attempt to hide metadata traces, it is possible to use them for forensics tasks. First, we present a video forensics technique using metadata embedded in MP4/MOV video containers. Our proposed method achieved high performance in video manipulation detection, source device attribution, social media attribution, and manipulation tool identification on publicly available datasets. Second, we present a transformer neural network based MP3 audio forensics technique using low-level codec information. Our proposed method can localize multiple compressed segments in MP3 files, with higher localization accuracy than other methods. Third, we present an H.264-based video device matching method. This method can determine whether two video sequences were captured by the same device, even if the method has never encountered the device before. Our proposed method achieved good performance in a threefold cross-validation scheme on a publicly available video forensics dataset containing 35 devices. Fourth, we present a Graph Neural Network (GNN) based approach for the analysis of MP4/MOV metadata trees. The proposed method is trained using Self-Supervised Learning (SSL), which increases its robustness and makes it capable of handling missing or unseen data. Fifth, we present an efficient approach to computing spectrogram features directly from MP3 compressed audio signals. The proposed approach decreases the complexity of speech feature computation by ~77.6% and saves ~37.87% of MP3 decoding time. The resulting spectrogram features lead to higher synthetic speech detection performance.
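
For context, MP4/MOV containers store metadata as a sequence of size-prefixed boxes ("atoms"); a minimal sketch of walking the top-level boxes is shown below. This illustrates the container structure only and is not the author's implementation:

    import struct

    def top_level_boxes(path):
        # Walk the top-level boxes ("atoms") of an MP4/MOV file and
        # yield (box type, declared size) without decoding payloads.
        with open(path, "rb") as f:
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                size, box_type = struct.unpack(">I4s", header)
                header_len = 8
                if size == 1:
                    # Size 1 means a 64-bit size follows the type field.
                    size = struct.unpack(">Q", f.read(8))[0]
                    header_len = 16
                yield box_type.decode("latin-1"), size
                if size == 0:
                    break  # size 0 means the box runs to end of file
                f.seek(size - header_len, 1)  # skip payload to next box

    # The order and presence of boxes such as ftyp, moov, free and mdat
    # is itself a signal of the producing or editing software.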
563

An XML-based Multidimensional Data Exchange Study / 以XML為基礎之多維度資料交換之研究

王容, Wang, Jung Unknown Date
在全球化趨勢與Internet帶動速度競爭的影響下,現今的企業經常採取將旗下部門分散佈署於各地,或者和位於不同地區的公司進行合併結盟的策略,藉以提昇其競爭力與市場反應能力。由於地理位置分散的結果,這類企業當中通常存在著許多不同的資料倉儲系統;為了充分支援管理決策的需求,這些不同的資料倉儲當中的資料必須能夠進行交換與整合,因此需要有一套開放且獨立的資料交換標準,俾能經由Internet在不同的資料倉儲間交換多維度資料。然而目前所知的跨資料倉儲之資料交換解決方案多侷限於逐列資料轉換或是以純文字檔案格式進行資料轉移的方式,這些方式除缺乏效率外亦不夠系統化。在本篇研究中,將探討多維度資料交換的議題,並發展一個以XML為基礎的多維度資料交換模式。本研究並提出一個基於學名結構的方法,以此方法發展一套單一的標準交換格式,並促成分散各地的資料倉儲間形成多對多的系統化映對模式。以本研究所發展之多維度資料模式與XML資料模式間的轉換模式為基礎,並輔以本研究所提出之多維度中介資料管理功能,可形成在網路上通用且以XML為基礎的多維度資料交換過程,並能兼顧效率與品質。本研究並開發一套雛型系統,以XML為基礎來實作多維度資料交換,藉資證明此多維度資料交換模式之可行性,並顯示經由中介資料之輔助可促使多維度資料交換過程更加系統化且更富效率。 / Motivated by globalization and the speed competition brought by the Internet, enterprises nowadays often deploy their departments in different locations, or merge and ally with companies located in different regions, to improve their competitiveness and responsiveness. As a result, a number of data warehouse systems typically coexist in such a geographically distributed enterprise. To meet distributed decision-making requirements, the data in these different data warehouses must be exchangeable and integrable. An open, vendor-independent, and efficient standard for exchanging data between data warehouses over the Internet is therefore an important issue. However, current solutions for cross-warehouse data exchange are limited to record-by-record conversion or the transfer of plain-text files, which is neither efficient nor systematic. In this research, issues in multidimensional data exchange are studied and an XML-based Multidimensional Data Exchange Model is developed. In addition, a generic-construct-based approach is proposed that introduces a single, consistent standard exchange format and enables many-to-many systematic mapping between distributed data warehouses. Based on the transformation model developed between the multidimensional data model and the XML data model, and enhanced by the multidimensional metadata management function proposed in this research, a general-purpose XML-based multidimensional data exchange process over the web is made both efficient and high in quality. Moreover, an XML-based prototype system for exchanging multidimensional data is developed, which shows that the proposed multidimensional data exchange model is feasible and that, with the aid of metadata, the multidimensional data exchange process becomes more systematic and efficient.
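
A minimal sketch of what a generic-construct-based exchange document could look like, built with Python's standard library (the element names Cube, Dimension, Measure and Cell are invented for illustration; the thesis's actual exchange format is not reproduced here):

    import xml.etree.ElementTree as ET

    # Hypothetical generic constructs: a cube with dimensions, measures,
    # and cells, serialized to XML for transfer between warehouses.
    cube = ET.Element("Cube", name="Sales")
    dims = ET.SubElement(cube, "Dimensions")
    for name, levels in [("Time", "Year>Quarter>Month"),
                         ("Region", "Country>City")]:
        ET.SubElement(dims, "Dimension", name=name, hierarchy=levels)
    measures = ET.SubElement(cube, "Measures")
    ET.SubElement(measures, "Measure", name="Amount", type="decimal")
    cells = ET.SubElement(cube, "Cells")
    ET.SubElement(cells, "Cell", Time="2005-Q1", Region="Taipei",
                  Amount="12500")

    print(ET.tostring(cube, encoding="unicode"))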
564

以專家策略為本的交易夥伴搜尋輔助 / Expert-strategy-based support for trading partner search

鍾豐謙 Unknown Date
近十年內網際網路迅速興起並蓬勃發展,對我們生活的各個層面造成劇烈的影響,並掀起電子商務的熱潮。目前最引人注目的焦點是B2B,利用網路的特性來降低成本,縮短供應鏈,加速產品生命週期。產業已注意到協同商務中之供應網絡管理,並探討企業間之商務管理所需之方法與資訊技術,以尋求新的企業營運模式。在相關發展中,WWW上之商務資料交換,更是目前發展之重點,我們的終極目標是一個跨產業且進入門檻小的全球性交易平台,ebXML因其可能帶來跨產業協同商務之平台架構遂成為產業矚目之對象。 另一個網路所帶來的問題是資訊爆炸。當人們才剛開始享受網路世界的多采多姿,馬上卻又得面臨資訊氾濫的夢魘。如何利用智慧型的方法,提昇搜尋的效率與提高資訊的效果,是我們所關心的。搜尋引擎的演算法發展已到極致,但在搜尋策略的輔助上仍有發展的空間。 本研究回顧電子商務的緣由與發展,提出web service與ebXML應用的跨產業網路交易平台,並設計以5W1H的方式儲存專家經驗與策略,透過查詢擴充的機制,達成搜尋策略與結果的改善,並在這個電子商務架構平台的註冊機制與儲存庫(registry/repository)上運作,讓代理人理解企業之需求與期望,進而完成企業間交易夥伴的尋找,以達成動態供應鏈之實現。 關鍵字:XML,ebXML,web service,UDDI,註冊機制與儲存庫,資訊檢索,搜尋策略,5W1H,後設資料 / Starting from the concept of B2B e-commerce in general, the aim of this thesis is to propose and test a method for supporting the matching of trading partners, in particular those who follow ebXML. First, this research presents a study of the areas where XML may make significant contributions. To avoid the pitfalls that earlier e-commerce efforts have experienced, we ought to understand the evolution of e-commerce so that the target supports can be derived from lessons learned. With these caveats in mind, the next step is to clarify the characteristics and requirements of a generic B2B framework. Based on this survey, the framework of ebXML, considered the state-of-the-art e-business technology, is clarified. To this end, the research addresses not only the problem domain and original concepts but also the technology requirements. The ebXML architecture, as well as relevant initiatives, viz. SOAP, WSDL and UDDI, are then examined in search of potential ebXML-based solutions. In comparison to RosettaNet, ebXML can provide more efficient and effective searching and matching of trading partners on an electronic marketplace. Among others, the author emphasizes research into a hybrid of ebXML and so-called web service technologies. To realize this concept, a searching and matching mechanism aided by experts' strategies, based on a 5W1H knowledge schema, is developed in this research. Last but not least, the 5W1H knowledge schema is applied, in other words serving as metadata, to organize and store experts' heuristics and intelligence in a so-called strategy base, so that experts' strategies can be used to expand keywords and refine user queries at run time, thus providing more efficient and effective matching results. Keywords: XML, ebXML, web service, UDDI, registry/repository, information retrieval, searching strategy, 5W1H, metadata
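
A minimal sketch of the idea of a 5W1H strategy base used for query expansion (the slots, terms and expansions are invented for illustration; the thesis's actual strategy base and registry integration are not reproduced here):

    # Hypothetical strategy base: expert heuristics filed under 5W1H slots.
    strategy_base = {
        "what": {"bearings": ["ball bearings", "roller bearings"]},
        "where": {"asia": ["Taiwan", "Japan", "South Korea"]},
        "who": {"supplier": ["manufacturer", "OEM", "distributor"]},
    }

    def expand_query(slot, term):
        # Expand a user term with expert-provided alternatives for that slot.
        expansions = strategy_base.get(slot, {}).get(term.lower(), [])
        return [term] + expansions

    # A registry/repository search would then be issued with the expanded
    # keyword sets rather than the raw user terms.
    print(expand_query("what", "bearings"))
    print(expand_query("where", "Asia"))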
565

Surveillance électronique et métadonnées : vers une nouvelle conception constitutionnelle du droit à la vie privée au Canada? / Electronic surveillance and metadata: towards a new constitutional conception of the right to privacy in Canada?

Thibeault, Alexandre 03 1900
Ce mémoire traite de la portée de la protection constitutionnelle du droit à la vie privée informationnelle au Canada, au regard de la surveillance électronique gouvernementale à grande échelle des métadonnées des communications électroniques, à des fins de sécurité nationale. Il est soutenu, après une présentation de l’importance démocratique de la vie privée, de même que de la nature et de la portée de certaines activités gouvernementales de surveillance électronique, que le cadre d’analyse du « Biographical core », qui conditionne l’étendue de la protection de la vie privée informationnelle en droit constitutionnel canadien, est susceptible d’inclure les métadonnées des communications électroniques. Cette position est appuyée par un argumentaire juridique fondé sur les règles d’interprétation et la jurisprudence constitutionnelle pertinente. Cet argumentaire se trouve renforcé par le potentiel considérablement révélateur des métadonnées, des particularités propres aux activités de surveillance électronique analysées, ainsi que des implications non-juridiques soulevées par ces dernières. / This master’s thesis focuses on the scope of the Canadian constitutional protection of the right to privacy, in view of wide-scale governmental electronic surveillance of electronic communications metadata, conducted for national security purposes. It is argued, following a presentation of the democratic importance of privacy as well as of the nature and extent of certain governmental electronic surveillance activities, that the « Biographical core » analytical framework, which governs the scope of the protection granted to informational privacy in Canadian constitutional law, is likely to include electronic communications metadata. This position is directly supported by the relevant rules of constitutional interpretation and case law, and is reinforced by the considerably revealing potential of metadata, by the particular characteristics of the electronic surveillance activities analyzed, and by the non-legal implications they raise for privacy.
566

Objektově-relační rámec pro PHP / Object-Relational Framework for PHP

Hudec, Michal Unknown Date
The objective of this work is to design and implement an object-relational framework for PHP. The framework is able to map objects to traditional relational database tables. In this work, an appropriate solution for specifying metadata is presented; this metadata describes how an object can be stored in a relational database. The framework itself is able to store, load and query any object data in a relational database. The object-relational framework has been designed for simple portability among various database systems.
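
To illustrate metadata-driven object-relational mapping (the thesis targets PHP; this Python sketch with invented names only shows the idea of metadata describing how objects map to tables):

    # Hypothetical mapping metadata: class attributes -> table columns.
    mapping = {
        "Person": {
            "table": "persons",
            "columns": {"id": "id", "name": "full_name", "born": "birth_year"},
        }
    }

    def insert_sql(obj_class, obj):
        # Build a parameterized INSERT statement from the mapping metadata;
        # placeholders keep the SQL portable across database drivers.
        meta = mapping[obj_class]
        cols = [meta["columns"][attr] for attr in obj]
        placeholders = ", ".join(["?"] * len(cols))
        return (f"INSERT INTO {meta['table']} ({', '.join(cols)}) "
                f"VALUES ({placeholders})", list(obj.values()))

    sql, params = insert_sql("Person", {"id": 1, "name": "Ada", "born": 1815})
    print(sql, params)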
567

Modelovanje i implementacija sistema za podršku vrednovanju publikovanih naučno-istraživačkih rezultata / Modeling and implementation of system for evaluation of published research outputs

Nikolić Siniša 26 April 2016
Cilj – Prvi cilj istraživanja je kreiranje modela podataka i implementacija informacionog sistema zasnovanog na modelu za potrebe vrednovanja publikovanih naučno-istraživačkih rezultata. Model bi bio primenjen u CRIS UNS informacionom sistemu, kao podrška sistemu vrednovanja.
Drugi cilj istraživanja je utvrđivanje u kojoj meri i na koji način se može automatizovati proces evaluacije koji se zasniva na različitim pravilima i pravilnicima.
Metodologija – Kako bi se definisalo proširenje CERIF modela neophodno je bilo identifikovati različite aspekte podataka koji su prisutni u evaluaciji naučno-istraživačkih publikacija. Stoga, zarad potreba istraživanja, odabrana su i analizirana dokumenta koja predstavljaju različite nacionalne pravilnike, okvire i smernice za evaluaciju.
Za modelovanje specifikacije arhitekture sistema za vrednovanje korišćeni su CASE alati koji su bazirani na objektno-orijentisanoj metodologiji (UML 2.0). Za implementaciju proširenja CERIF modela u CRIS UNS sistemu korišćena je Java platforma i tehnologije koje olakšavaju kreiranje veb aplikacija kao što su AJAX, RichFaces, JSF itd. Pored navedene opšte metodologije za razvoj softverskih sistema korišćeni su primeri dobre prakse u razvoju informacionih sistema. To se pre svega odnosi na principe korišćene u razvoju institucionalnih repozitorijuma, bibliotečkih informacionih sistema, informacionih sistema naučno-istraživačke delatnosti, CRIS sistema, sistema koji omogućuju evaluaciju podataka itd.
Ekspertski sistem koji bi podržao automatizaciju procesa evaluacije po različitim pravilnicima odabran je na osnovu analize postojećih rešenja za sisteme bazirane na pravilima i pregleda naučne literature.
Rezultati – Analizom nacionalnih pravilnika i smernica dobijen je skup podataka na osnovu kojeg je moguće evaluirati publikovane rezultate po odabranim pravilnicima.
Razvijen je model podataka kojim se predstavljaju svi podaci koji učestvuju u procesu evaluacije i koji je kompatibilan sa CERIF modelom podataka.
Predloženi model je moguće implementirati u CERIF kompatibilnim CRIS sistemima, što je potvrđeno implementacijom informacionog sistema za vrednovanje publikovanih naučno-istraživačkih rezultata u okviru CRIS UNS.
Ekspertski sistem baziran na pravilima može biti iskorišćen za potrebe automatizacije procesa evaluacije, što je potvrđeno predstavom i implementacijom SRB pravilnika u Jess sistemu baziranom na pravilima.
Praktična primena – Zaključci proizašli iz analize pravilnika (npr. poređenje sistema i definisanje metapodataka za vrednovanje) se mogu primeniti pri definisanju modela podataka za CERIF sisteme i za sisteme koji nisu CERIF orijentisani.
Sistem za podršku vrednovanju publikovanih naučno-istraživačkih rezultata je implementiran kao deo CRIS UNS sistema koji se koristi na Univerzitetu u Novom Sadu čime je obezbeđeno vrednovanje publikovanih naučno-istraživačkih rezultata za različite potrebe (npr. promocije u naučna i istraživačka zvanja, dodele nagrada i materijalnih sredstava, finansiranje projekata, itd.), po različitim pravilnicima i komisijama.
Vrednost – Dati su metapodaci na osnovu kojih se vrši vrednovanje publikovanih rezultata istraživanja po raznim nacionalnim pravilnicima i smernicama. Dat je model podataka i proširenje CERIF modela podataka kojim se podržava vrednovanje rezultata istraživanja u CRIS sistemima. Posebna prednost pomenutih modela je nezavisnost istih od implementacije sistema za vrednovanje rezultata istraživanja. Primena predloženog proširenja CERIF modela u CRIS sistemima praktično je pokazana u CRIS sistemu Univerziteta u Novom Sadu. Sistem za vrednovanje koji se bazira na proširenju CERIF modela pruža i potencijalnu interoperabilnost sa sistemima koji CERIF model podržavaju. Implementacijom informacionog sistema za vrednovanje, vrednovanje naučnih publikacija je postalo olakšano i transparentnije. Potvrda koncepata da se ekspertski sistemi bazirani na pravilima mogu koristiti za automatizaciju vrednovanja otvara totalno novi okvir za implementaciju informacionih sistema za podršku vrednovanja postignutih rezultata istraživanja. / Aim – The first aim of the research was the creation of a data model and the implementation of an information system based on the proposed model for the purpose of evaluating published research outputs. The model is applied in the CRIS UNS information system to support evaluation.
The second objective was to determine the manner and extent to which the evaluation process, which is based on different rules and different rulebooks, could be automated.
Methodology – In order to define the extension of the CERIF model, it was necessary to identify the various aspects of data relevant to the evaluation of scientific research publications. Therefore, documents representing different national regulations, frameworks and guidelines for evaluation were selected and analyzed.
For modeling the system architecture, CASE tools based on object-oriented methodology (UML 2.0) were used. To implement the extension of the CERIF model within the CRIS UNS system, the Java platform and technologies that facilitate the creation of web applications, such as AJAX and RichFaces, were used. In addition to this general methodology for the development of software systems, best-practice examples from information systems development were also applied. This primarily refers to the principles used in the development of institutional repositories, library information systems, information systems for the scientific research domain, CRIS systems, systems that enable data evaluation, etc.
The expert system supporting automation of the evaluation process under different rulebooks was selected based on an analysis of existing rule-based systems and an examination of the scientific literature.
Results – By analyzing the national rulebooks and guidelines, a pool of data was gathered that serves as a basis for evaluating published results under any of the analyzed rulebooks.
A data model was developed by which all data involved in the evaluation process can be represented. The proposed model is CERIF-compatible.
The proposed model can be implemented in CERIF-compatible CRIS systems, which was confirmed by the implementation of an information system for the evaluation of published scientific research results in CRIS UNS.
An expert system based on rules can be used to automate the evaluation process, which was confirmed by representing and implementing the Serbian rulebook in the Jess rule-based system.
Practical application – The conclusions drawn from the analysis of rulebooks (e.g. the comparison of systems and the definition of metadata for evaluation) can be applied in defining data models both for CERIF systems and for systems that are not CERIF-oriented.
The system for supporting the evaluation of published scientific research results was implemented as part of the CRIS UNS system used at the University of Novi Sad, thus providing evaluation of published scientific research results for different purposes (e.g. promotion to scientific and research titles, assignment of awards and material resources, financing of projects, etc.), according to different rulebooks and commissions.
Value – Metadata is provided on the basis of which published research results are evaluated under various national rulebooks and guidelines. A data model and an extension of the CERIF data model that support the evaluation of research results within CRIS systems are given. A particular advantage of these models is their independence from the implementation of the evaluation system. The application of the proposed extension of the CERIF model in CRIS systems is demonstrated in practice in the CRIS system of the University of Novi Sad. The system that implements the extension of the CERIF model also provides potential interoperability with systems that support the CERIF model. With the implementation of the information system for evaluation, the evaluation of scientific publications has become easier and more transparent. The confirmation of the concept that rule-based expert systems can be used to automate evaluation opens a whole new framework for the implementation of information systems supporting the evaluation of research results.
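
A minimal sketch of rule-based evaluation in the spirit described above (written in Python rather than Jess; the category names echo the Serbian rulebook's M-categories, but the conditions and point values here are invented for illustration):

    # Hypothetical rules: each maps publication facts to (category, points).
    rules = [
        (lambda p: p["type"] == "journal" and p["impact_factor"] >= 1.0,
         ("M21", 8.0)),
        (lambda p: p["type"] == "journal", ("M23", 3.0)),
        (lambda p: p["type"] == "conference" and p["international"],
         ("M33", 1.0)),
    ]

    def evaluate(pub):
        # Return the first matching outcome, as a rule engine would fire
        # the highest-priority applicable rule.
        for condition, outcome in rules:
            if condition(pub):
                return outcome
        return ("unclassified", 0.0)

    print(evaluate({"type": "journal", "impact_factor": 2.3}))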
568

MKM – ein Metamodell für Korpusmetadaten / MKM – a metamodel for corpus metadata

Odebrecht, Carolin 11 September 2018
Korpusdokumentation wird in dieser Arbeit als eine Voraussetzung für die Wiederverwendung von Korpora und als ein Bestandteil des Forschungsdatenmanagements verstanden, welches unter anderem die Veröffentlichung und Archivierung von Korpora umfasst. Verschiedene Forschungsdaten stellen ganz unterschiedliche Anforderungen an die Dokumentation und können auch unterschiedlich wiederverwendet werden. Ein geeignetes Anwendungsbeispiel stellen historische Textkorpora dar, da sie in vielen Fächern als empirische Grundlage für die Forschung genutzt werden können. Sie zeichnen sich im Weiteren durch vielfältige Unterschiede in ihrer Aufbereitung und durch ein komplexes Verhältnis zu der historischen Vorlage aus. Die Ergebnisse von Transkription und Normalisierung müssen als eigenständige Repräsentationen und Interpretationen im Vergleich zur Vorlage verstanden werden. Was müssen Forscherinnen und Forscher über ihr Korpus mit Hilfe von Metadaten dokumentieren, um dessen Erschließung und Wiederverwendung für andere Forscherinnen und Forscher zu ermöglichen? Welche Funktionen übernehmen dabei die Metadaten? Wie können Metadaten modelliert werden, um auf alle Arten von historischen Korpora angewendet werden zu können? Die Arbeit und ihre Fragestellung sind fest in einem interdisziplinären Kontext verortet. Für die Beantwortung der Forschungsfragen wurden Erkenntnisse und Methoden aus den Fachbereichen der Korpuslinguistik, der historischen Linguistik, der Informationswissenschaft sowie der Informatik theoretisch und empirisch betrachtet und für die Entwicklung eines Metamodells für Korpusmetadaten fruchtbar gemacht. Das im Rahmen dieser Arbeit in UML entwickelte Metamodell für Korpusmetadaten modelliert Metadaten von historischen textbasierten Korpora aus einer technisch-abstrakten, produktorientierten und überfachlichen Perspektive und ist in einer TEI-Spezifikation mit Hilfe der TEI-eigenen Modellierungssprache ODD realisiert. / Corpus documentation is a requirement for enabling corpus reuse scenarios and a part of research data management, which covers, among other things, data publication and archiving. Different types of research data make differing demands on corpus documentation, and may be reused in various ways. Historical corpora represent an interesting and challenging use case because they are the foundation for empirical studies in many disciplines and show a great variety of reuse possibilities, of data creation, and of data annotation. Furthermore, the relation between the historical corpus and the historical original is complex. The transcription and normalisation of historical texts must be understood as independent representations and interpretations in their own right. Which kind of metadata information, then, must be included in corpus documentation in order to enable intellectual access and reuse scenarios? What role do metadata play? How can metadata be designed to be applicable to all types of historical corpora? These research questions can only be addressed with the help of an interdisciplinary approach, considering findings and methods of corpus linguistics, historical linguistics, information science and computer science. The metamodel developed in this thesis models metadata of historical text-based corpora from a technical, abstract, and interdisciplinary point of view with the help of UML. It is realised as a TEI specification using TEI's own modelling language, ODD.
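
A minimal sketch of the distinction the metamodel insists on, between the historical original and its derived representations (class and field names are invented for illustration; the actual metamodel is specified in UML and realised as a TEI ODD specification):

    from dataclasses import dataclass, field

    @dataclass
    class HistoricalSource:
        # The historical original, documented separately from the corpus.
        title: str
        date: str
        repository: str

    @dataclass
    class CorpusDocument:
        # The corpus text is an interpretation of the source: transcription
        # and normalization are independent representations in their own right.
        source: HistoricalSource
        transcription_guidelines: str
        normalization_rules: str
        annotation_layers: list = field(default_factory=list)

    doc = CorpusDocument(
        source=HistoricalSource("Predigtsammlung", "ca. 1450", "example archive"),
        transcription_guidelines="diplomatic",
        normalization_rules="modernized spelling",
        annotation_layers=["pos", "lemma"],
    )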
569

Gestion des risques appliquée aux systèmes d’information distribués / Risk management applied to distributed information systems

Lalanne, Vincent 19 December 2013
Dans cette thèse nous abordons la gestion des risques appliquée aux systèmes d’information distribués. Nous traitons des problèmes d’interopérabilité et de sécurisation des échanges dans les systèmes DRM et nous proposons la mise en place de ce système pour l’entreprise: il doit nous permettre de distribuer des contenus auto-protégés. Ensuite nous présentons la participation à la création d’une entreprise innovante qui met en avant la sécurité de l’information, avec en particulier la gestion des risques au travers de la norme ISO/IEC 27005:2011. Nous présentons les risques liés à l’utilisation de services avec un accent tout particulier sur les risques autres que les risques technologiques; nous abordons les risques inhérents au cloud (défaillance d’un provider, etc...) mais également les aspects plus sournois d’espionnage et d’intrusion dans les données personnelles (Affaire PRISM en juin 2013). Dans la dernière partie nous présentons un concept de DRM d’Entreprise qui utilise les métadonnées pour déployer des contextes dans les modèles de contrôle d’usage. Nous proposons une ébauche de formalisation des métadonnées nécessaires à la mise en œuvre de la politique de sécurité et nous garantissons le respect de la réglementation et de la loi en vigueur. / In this thesis we discuss risk management applied to distributed information systems. We address problems of interoperability and of securing exchanges within DRM systems, and we propose implementing such a system for the enterprise: it must allow us to distribute self-protected content. We then present our participation in the creation of an innovative company that emphasizes information security, in particular risk management through the ISO/IEC 27005:2011 standard. We present the risks related to the use of services, with a particular focus on risks other than technological ones: we address the risks inherent in the cloud (failure of a provider, etc.) but also the more insidious aspects of espionage and intrusion into personal data (the PRISM affair of June 2013). In the last part we present a concept of Enterprise DRM that uses metadata to deploy contexts in usage control models. We propose a draft formalization of the metadata necessary for implementing the security policy, and we guarantee compliance with the applicable regulations and law.
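
A minimal sketch of a metadata-driven usage-control decision of the kind described (attribute names and values are invented; the thesis's formalization is only drafted there, not reproduced here):

    # Hypothetical usage-control metadata attached to a protected document:
    # each action is permitted only in certain contexts.
    document_metadata = {
        "classification": "internal",
        "allowed_actions": {
            "view": {"department": {"R&D", "Legal"}},
            "print": {"department": {"Legal"}},
        },
    }

    def is_permitted(action, context):
        # Permit the action only if the user's context matches the metadata.
        constraints = document_metadata["allowed_actions"].get(action)
        if constraints is None:
            return False
        return context.get("department") in constraints["department"]

    print(is_permitted("view", {"department": "R&D"}))    # True
    print(is_permitted("print", {"department": "R&D"}))   # False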
570

Web semântica : uma análise focada no uso de metadados / Semantic web: an analysis focused on the use of metadata

Alves, Rachel Cristina Vesu. January 2005
Orientador: Plácida Leopoldina Ventura Amorim da Costa Santos / Banca: Silvana Ap. B. Gregório Vidotti / Banca: Edberto Ferneda / Resumo: Atualmente a nossa sociedade, denominada sociedade da informação, vem sendo caracterizada pela valorização da informação, pelo uso cada vez maior de tecnologias de informação e comunicação e pelo crescimento exponencial dos recursos informacionais disponibilizados em diversos ambientes, principalmente na Web. Essa realidade trouxe algumas mudanças no acesso automatizado às informações. Se por um lado temos uma grande quantidade de recursos informacionais disponibilizados, por outro temos como conseqüência problemas relacionados à busca, localização, acesso e recuperação dessas informações em ambientes digitais. Nesse contexto, o problema que originou essa pesquisa está relacionado com a dificuldade na busca e na recuperação de recursos informacionais digitais na Web e a ausência de tratamento adequado para a representação informacional desses recursos. O maior desafio para a comunidade científica no momento está na identificação de padrões e métodos de representação da informação, ou seja, na construção de formas de representação do recurso informacional de maneira a proporcionar sua busca e recuperação de modo mais eficiente. Assim, a proposição apontada nesse trabalho como solução do problema refere-se ao estabelecimento da Web Semântica e a aplicação de padrões de metadados para a representação da informação, pois são consideradas como iniciativas importantes para proporcionar uma melhor estruturação e representação dos recursos informacionais em ambientes digitais. Com uma metodologia baseada na análise exploratória e descritiva do tema a partir da literatura disponível, apresenta-se uma análise da Web Semântica como uma nova proposta para organização dos recursos informacionais na Web e as ferramentas tecnológicas que permeiam sua construção, com enfoque no uso de metadados como elemento fundamental para proporcionar... (Resumo completo, clicar acesso eletrônico abaixo). / Abstract: Nowadays our society, known as the information society, has been characterized by the valorization of information, by the increasing use of information and communication technologies, and by the exponential growth of informational resources made available in various environments, mainly on the Web. This reality has brought some changes to automated access to information. If, on the one hand, we have a large quantity of informational resources available, on the other we consequently have problems related to the search, localization, access and retrieval of this information in digital environments. In this context, the problem that originated this research is related to the difficulty of searching for and retrieving digital informational resources on the Web, and the lack of adequate treatment for the informational representation of these resources. The biggest challenge for the scientific community at the moment is the identification of patterns and methods for representing information, that is, the construction of forms of representation of the informational resource that allow it to be searched and retrieved more efficiently.
Thus, the proposition put forward in this work as a solution to the problem refers to the establishment of the Semantic Web and the application of metadata standards for the representation of information, since these are considered important initiatives for providing better structuring and representation of informational resources in digital environments. With a methodology based on an exploratory and descriptive analysis of the theme from the available literature, an analysis is presented of the Semantic Web as a new proposal for organizing informational resources on the Web, together with the technological tools that permeate its construction, focusing on the use of metadata as the fundamental element for providing a better representation of the informational resources available on the Web, and their. / Mestre
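
As an illustration of the kind of metadata standard at issue, a Dublin Core-style description serialized as RDF/XML (Dublin Core is a typical example of such a standard; the record values are invented and the identifier URL is a placeholder):

    # Build a minimal RDF/XML record with Dublin Core elements by hand.
    record = {
        "dc:title": "Web semântica: uma análise focada no uso de metadados",
        "dc:creator": "Alves, Rachel Cristina Vesu",
        "dc:date": "2005",
        "dc:language": "pt",
    }

    rdf = ['<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"',
           '         xmlns:dc="http://purl.org/dc/elements/1.1/">',
           '  <rdf:Description rdf:about="http://example.org/thesis/570">']
    for prop, value in record.items():
        rdf.append(f"    <{prop}>{value}</{prop}>")
    rdf += ["  </rdf:Description>", "</rdf:RDF>"]
    print("\n".join(rdf))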
