  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
131

Automatizovaná rekonstrukce webových stránek / Automatic Webpage Reconstruction

Serečun, Viliam January 2018 (has links)
Legal institutions often need verifiable evidence of web content. This thesis addresses the problem of web page reconstruction and archiving, with the primary goal of providing an open source solution that satisfies the requirements of such institutions. The work presents two main products. The first is a framework that serves as a fundamental building block for developing web scraping and web archiving applications. The second is a web application prototype that demonstrates how the framework is used. The application's output is a MAFF archive file comprising the reconstructed web page, a screenshot of the page, and a table of meta information. This table records details about the collected data, server information such as the IP address and port of the machine hosting the original web page, and a timestamp.
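The meta information table described in the abstract (source URL, server IP address and port, and capture timestamp) can be sketched roughly as follows. This is an illustrative outline, not the thesis framework itself; the function and field names are hypothetical:

```python
import socket
from datetime import datetime, timezone
from urllib.parse import urlparse

def build_capture_metadata(url, content):
    """Assemble a provenance record to store alongside an archived page.

    Resolves the server's IPv4 address, infers the port from the URL
    scheme when none is given, and records a UTC capture timestamp.
    """
    parsed = urlparse(url)
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    return {
        "url": url,
        "server_ip": socket.gethostbyname(parsed.hostname),
        "server_port": port,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "content_length": len(content),
    }
```

A real archiving pipeline would record this dictionary next to the page snapshot and screenshot inside the archive file.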
132

Systémy monitorování kvality elektrické energie / Distributed power quality monitoring systems

Pithart, Jan January 2008 (has links)
This master's thesis deals with the parameters of electric power and options for measuring them, with a focus on the parameters required by standards. Several available distributed power quality monitoring systems are described. The following chapter discusses the design and implementation of a low-cost monitor with remote control for low-voltage networks. The thesis also covers the use of a MySQL database for archiving measured values and the implementation of a web application for on-line presentation of the measurements.
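The archiving scheme the abstract describes, measured values written to a database and read back for on-line presentation, can be sketched as below. The thesis uses MySQL; sqlite3 stands in here so the example is self-contained, and the table and column names are invented for illustration:

```python
import sqlite3
from datetime import datetime, timezone

def init_archive(conn):
    """Create the table that stores time-stamped measurements."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS measurements ("
        " ts TEXT NOT NULL, channel TEXT NOT NULL, value REAL NOT NULL)"
    )

def archive_sample(conn, channel, value, ts=None):
    """Insert one measured value, e.g. an RMS voltage reading."""
    ts = ts or datetime.now(timezone.utc).isoformat()
    conn.execute(
        "INSERT INTO measurements VALUES (?, ?, ?)", (ts, channel, value)
    )

def recent_samples(conn, channel, limit=10):
    """Return the newest readings for on-line presentation."""
    cur = conn.execute(
        "SELECT ts, value FROM measurements WHERE channel = ?"
        " ORDER BY ts DESC LIMIT ?", (channel, limit)
    )
    return cur.fetchall()
```

A web front end would call `recent_samples` to render the latest values; swapping sqlite3 for a MySQL client changes only the connection setup, since the SQL is plain.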
133

Open Access an der Technischen Universität Chemnitz

Thümer, Ingrid 25 October 2006 (has links)
With this special issue of the newsletter 2/2006, complementing the Rector's circular 02/2006, the University Library provides the members of TU Chemnitz with further information on Open Access (OA). The library welcomes the resolution of the Rectorate and Senate to support Open Access at TU Chemnitz, which states: the Rectorate and Senate of TU Chemnitz strongly urge the university's authors to deposit their scientific publications as preprint or postprint versions on the university's publication server MONARCH, provided no legal objections from publishers stand in the way; and the Rectorate and Senate encourage the scientists of TU Chemnitz to publish in existing Open Access journals. As a service provider for researchers and students, the library of TU Chemnitz has felt the effects of the serials crisis acutely: a drastic reduction of its journal holdings since the mid-1990s has been the result, a situation researchers have long lamented. The University Library is convinced that, in the long term and with worldwide support for Open Access, this development can be corrected and the crisis in the system of scholarly communication overcome. Establishing the principle of open access, however, requires the active participation of every individual producer of scientific information. Decisive for the successful implementation of the planned Open Access activities at TU Chemnitz are acceptance among the researchers and, above all, their active support.
134

Dlouhodobé uchování webového obsahu / Long-term Preservation of Web Content

Kvasnica, Jaroslav January 2016 (has links)
This work describes the long-term preservation of digital documents, particularly websites. Its aim is to explain long-term preservation, define the differences between various approaches, and describe options for the long-term preservation of web content, such as migration and emulation, together with the risks and challenges of these strategies. It discusses the new problems that the goal of long-term preservation raises, outlines possible solutions, and describes the situation in selected significant foreign institutions. The main aim of the work is a detailed analysis of the long-term preservation strategy of the National Library of the Czech Republic, the only institution engaged in preserving the Czech web. The process of data preparation, metadata creation, and data storage in the long-term repository of the Czech National Library is thoroughly described, including examples and their explanation. Future steps for long-term preservation in the Czech Web Archive are articulated in the conclusion.
135

Autenticita a digitální informace / Authenticity and Digital Information

Cubr, Ladislav January 2017 (has links)
The dissertation focuses on the authenticity of digitized books in the context of their life cycle (production, preservation, access). First, the OAIS high-level conceptual framework for the lifecycle management of digital documents maintained by organizations is introduced, followed by a description of current practice in managing the lifecycle of digitized books. Relevant conceptualizations of the authenticity of digital documents are then introduced, analyzed, and reviewed. Based on these findings, a framework for analyzing authenticity is established and used to identify authenticity requirements for digitized books and to develop a domain-specific conceptualization of their authenticity, including a detailed analysis of the risks that threaten authenticity during the lifecycle management of digitized books. Selected topics of this conceptualization then serve as the source for the next step: developing a recommended practice for maintaining the authenticity of digitized books. This practice is further specified for one partial solution to the problem of maintaining the authenticity of digital documents throughout their life cycle, namely a persistent identification system.
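One common building block of such a practice, fixity information bound to a persistent identifier, can be sketched as follows. This illustrates the general idea of checksum-based authenticity verification, not the dissertation's specific recommended practice, and the record layout and identifier are hypothetical:

```python
import hashlib

def make_record(persistent_id, data):
    """Create a preservation record binding a persistent identifier
    to a SHA-256 fixity value computed over the object's bytes."""
    return {"pid": persistent_id, "sha256": hashlib.sha256(data).hexdigest()}

def verify_fixity(record, data):
    """Recompute the checksum and compare it with the stored value.

    A mismatch signals a possible loss of authenticity: corruption,
    or an undocumented modification of the digitized object.
    """
    return hashlib.sha256(data).hexdigest() == record["sha256"]
```

In a repository, such records would be created at ingest and re-verified on a schedule, so that any silent change to the stored bytes is detected.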
136

Rechteklärung für OA-Zweitveröffentlichungen – das Serviceangebot der SLUB Dresden: Session 6: Rechtliche Aspekte des Open Access, Open-Access-Tage 2013

Di Rosa, Elena 09 October 2013 (has links)
Presentation given at the Open Access Days 2013, session "Rechtliche Aspekte des Open Access" (Legal Aspects of Open Access): The legal dimension of Open Access is already evident in the Budapest Open Access Initiative, the Bethesda Statement on Open Access Publishing, and the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities: scholarly works should be not only accessible but also reusable. In the implementation of the green road to Open Access, such reuse can rarely be realized, because scholarly authors, through copyright transfer agreements, mostly transfer exclusive rights of use to publishers. The secondary publication right currently under political discussion would give scholarly authors a legally secure way to make their works available in repositories and would thus make an important contribution to promoting Open Access. Against the background of the ongoing discussion about a science-friendly copyright law and the establishment of an inalienable secondary publication right, the session is devoted to the legal aspects of Open Access. Topics presented and discussed include the opportunities and challenges of non-exclusive collecting societies in the scholarly domain, as well as the status and prospects of the secondary publication right. In addition, the practical implementation of the green road is examined: using the example of SLUB Dresden and the DINI-Zertifikat 2013, the handling of legal questions is outlined and discussed with the participants. 
Talk 1: "C3S: Cultural Commons Collecting Society – auch ein Modell für den Textbereich?", Michael Weller (Europäische EDV-Akademie des Rechts, Merzig/Saar). Talk 2: "Neues gesetzliches Zweitveröffentlichungsrecht – Update zu den Anforderungen an Bibliotheken und Wissenschaftseinrichtungen", Thomas Hartmann (Max Planck Digital Library, München). Talk 3: "Rechteklärung für OA-Zweitveröffentlichungen – das Serviceangebot der SLUB Dresden", Elena Di Rosa (Sächsische Landesbibliothek - Staats- und Universitätsbibliothek, Dresden). Talk 4: "DINI-Zertifikat 2013 – Neuerungen im Abschnitt Rechtliche Aspekte", Michaela Voigt (Sächsische Landesbibliothek - Staats- und Universitätsbibliothek, Dresden).
137

En förbisedd skatt av svenskt kulturarv : Kulturarw³ och dess värde för forskningen / An Overlooked Treasure of Swedish Cultural Heritage : Kulturarw³ and its Value for Scientific Research

Skjöldebrand Lefevre, Caroline January 2023 (has links)
This master's thesis examines a user's ability to use the Swedish national web archive Kulturarw³ for research purposes, and aims to identify potential areas of improvement in working with it. The research questions are: 1. How does Kulturarw³ operate? 2. What are the main factors affecting Kulturarw³'s structure and function? 3. What capabilities exist for researchers and students to use Kulturarw³ in their research, and are there potential areas of improvement in the web archive's user capabilities? The author analyzed the web archive using institutional theory in organization studies, loosely structured after Staffan Furusten's model of the outside world, in order to explain why the web archive looks the way it does today. Understanding the archive helps illuminate why the potential areas of improvement identified may or may not be feasible for Kulturarw³ to implement. The author conducted email, in-person, and digital interviews with the staff responsible for Kulturarw³ at the Swedish National Library, Kungliga biblioteket. A draft of guidelines concerning Kulturarw³ from Kungliga biblioteket and a video interview at Internetmuseum with one of the founders of the web archive were also used as source material. The author concluded that Kulturarw³ is a national web archive with a long history, and that its functions and limitations are complex. Its operation has changed greatly over its lifetime because of the surrounding environment. Several main factors affecting Kulturarw³ were identified; Swedish laws, international charters and initiatives, collaborations with and relations to other web archives, the use of open-source software, and the impact of digitalization are discussed in detail. 
Kulturarw³'s long history of archiving the Swedish web makes it a valuable and plentiful source for research, and its collections and functions should be sufficient for qualitative research. Yet at present the web archive is too inaccessible to live up to users' expectations, which makes it an unviable option for research purposes. Unfortunately, there is little Kulturarw³ can currently change to make itself more accessible. The lack of readily available information also keeps users from using the web archive at full efficiency. There are many opportunities for KB to better inform its users of the archive's value and capabilities, and increased collaboration with Swedish research institutions would benefit both researchers and the web archive in the long run.
138

Geo-Locating Tweets with Latent Location Information

Lee, Sunshin 13 February 2017 (has links)
As part of our work on the NSF funded Integrated Digital Event Archiving and Library (IDEAL) project and the Global Event and Trend Archive Research (GETAR) project, we collected over 1.4 billion tweets using over 1,000 keywords, key phrases, mentions, or hashtags, starting from 2009. Since many tweets talk about events (with useful location information), such as natural disasters, emergencies, and accidents, it is important to geo-locate those tweets whenever possible. Due to possible location ambiguity, finding a tweet's location often is challenging. Many distinct places have the same geoname, e.g., "Greenville" matches 50 different locations in the U.S.A. Frequently, in tweets, explicit location information, like geonames mentioned, is insufficient, because tweets are often brief and incomplete. They have a small fraction of the full location information of an event due to the 140 character limitation. Location indicative words (LIWs) may include latent location information, for example, "Water main break near White House" does not have any geonames but it is related to a location "1600 Pennsylvania Ave NW, Washington, DC 20500 USA" indicated by the key phrase 'White House'. To disambiguate tweet locations, we first extracted geospatial named entities (geonames) and predicted implicit state (e.g., Virginia or California) information from entities using machine learning algorithms including Support Vector Machine (SVM), Naive Bayes (NB), and Random Forest (RF). Implicit state information helps reduce ambiguity. We also studied how location information of events is expressed in tweets and how latent location indicative information can help to geo-locate tweets. We then used a machine learning (ML) approach to predict the implicit state using geonames and LIWs. We conducted experiments with tweets (e.g., about potholes), and found significant improvement in disambiguating tweet locations using a ML algorithm along with the Stanford NER. 
Adding the state information predicted by our classifiers increased the chance of finding the state-level geo-location unambiguously by up to 80%. We also studied over 6 million tweets (three mid-sized and two large collections about water main breaks, sinkholes, potholes, car crashes, and car accidents), covering 17 months. We found that up to 91.1% of tweets have at least one type of location information (geo-coordinates or geonames) or LIWs. We also demonstrated that in most cases adding LIWs helps geo-locate tweets with less ambiguity using a geo-coding API. Finally, we conducted additional experiments with the five tweet collections and found significant improvement in disambiguating tweet locations using an ML approach with geonames and all LIWs present in the tweet texts as features. / Ph. D.
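The state-prediction step described above can be illustrated with a minimal Naive Bayes classifier over tweet tokens. This stdlib-only sketch stands in for the SVM/NB/RF models and the Stanford NER pipeline the dissertation actually evaluates, and the toy training data is invented:

```python
import math
from collections import Counter, defaultdict

class StateNB:
    """Multinomial Naive Bayes with Laplace smoothing: predicts a U.S.
    state label from geonames and location indicative words (LIWs)."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # state -> token counts
        self.class_counts = Counter()            # state -> document count
        self.vocab = set()

    def fit(self, tweets, states):
        for text, state in zip(tweets, states):
            self.class_counts[state] += 1
            for tok in text.lower().split():
                self.word_counts[state][tok] += 1
                self.vocab.add(tok)

    def predict(self, text):
        total_docs = sum(self.class_counts.values())
        tokens = text.lower().split()
        best_state, best_lp = None, float("-inf")
        for state, n_docs in self.class_counts.items():
            lp = math.log(n_docs / total_docs)  # class prior
            denom = sum(self.word_counts[state].values()) + len(self.vocab)
            for tok in tokens:
                # Laplace (add-one) smoothing handles unseen tokens.
                lp += math.log((self.word_counts[state][tok] + 1) / denom)
            if lp > best_lp:
                best_state, best_lp = state, lp
        return best_state
```

In the dissertation's setting, the predicted state is then combined with extracted geonames, so that an ambiguous name like "Greenville" can be resolved to the matching state's candidate.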
139

企業資訊生命週期管理策略之研究 / Enterprise Information Life Cycle Management Strategy Research

黃順安, Huang, Shun An Unknown Date (has links)
With the prevalence of the Internet in recent years, information has grown explosively, whether through enterprise e-business and e-commerce services, the rise of the digital home, or innovations in network application services such as video blogs. Beyond circulating over the network, this information confronts individuals and enterprises alike, both users and service providers, with the task of managing enormous information storage services. With the rapid growth of digital information, the size and number of files keep increasing. Advances in information technology have made storage media more diverse and ever larger in capacity: a single SATA disk holds 500 GB, and a Blu-ray disc holds up to 100 GB. Yet according to an IDC survey, global information exploded in 2006: the year's digital data, including photos, audio and video files, e-mail, web pages, instant messaging, and mobile phone data, amounted to 161 billion GB, so growth in storage capacity never seems to catch up with the growth of information. Enterprises whose IT machine rooms are scattered across branch offices face duplicated investment in staff and equipment and the difficulty of decentralized management. With the arrival of broadband networks, consolidating IT infrastructure into enterprise data centers has become the trend; the government's information reorganization in Taiwan, for example, plans to consolidate its machine rooms into 13+1 data centers. Building a data center along the trend of consolidation and virtualization centralizes the storage systems, and with them the enterprise's information; such large volumes of information and storage demand effective storage management. According to SNIA statistics, about 80% of the information in a storage system has not been accessed within 30 days. Infrequently used, unimportant information not only wastes storage space but also indirectly degrades access efficiency, so within limited high-end online storage, less-used information should be moved to lower-tier storage systems, and unused information should be archived for preservation. Information also has a life cycle. This study divides it into four stages: the introduction stage, when information is created; the golden maturity stage, when it is actively used; the decline stage, when it is consulted only for reference; and the final stage, when it is disposed of and archived. By classifying information by value, distinguishing its importance to the enterprise, and integrating the evolution of the information life cycle, an information lifecycle management strategy can be established that assists the enterprise from the creation, access, backup, replication, and securing of information through archiving to deletion, optimizing storage protection and access efficiency, ensuring uninterrupted information services, and obtaining the best return on storage investment.
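The tiering idea behind these lifecycle stages, moving less-used information to cheaper storage based on last-access age, can be sketched as below. The thresholds and tier names are invented for illustration; in practice they would come from the enterprise's own ILM policy:

```python
from datetime import datetime, timezone

# Hypothetical thresholds (days since last access) mapping files onto
# the four lifecycle stages discussed above.
TIERS = [
    (30, "online"),     # introduction and golden maturity: fast primary storage
    (180, "nearline"),  # decline stage: cheaper secondary storage
    (1825, "archive"),  # final stage: archival preservation
]

def assign_tier(last_access, now):
    """Pick a storage tier from how long ago the file was last accessed."""
    age_days = (now - last_access).days
    for max_age, tier in TIERS:
        if age_days <= max_age:
            return tier
    return "delete-candidate"  # past retention: eligible for disposal
```

An ILM engine would run such a policy periodically over file metadata, migrating data between tiers as it ages through the stages.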
140

基於行動影像標籤的飲食管理系統之可行性分析 / An Evaluation of Mobile Personal Dietary Management Application Based On Photograph Annotation

何浩瑋, Ho, Hao Wei Unknown Date (has links)
Now that camera phones are ubiquitous, people increasingly rely on readily available mobile devices to record their lives, and photography has become one of the main ways of doing so. This study provides users with a personalized dietary management system so that, while recording meals with images, they can change their dietary intake behavior and become healthier. Drawing on health locus of control and social cognitive theory, the system uses image metadata to increase self-efficacy, giving users a convenient way to record meals and a better grasp of their own nutritional status; the records also serve as cues for later recall of eating habits. For the experimental design, six participants were invited to use the dietary management system developed in this study for two weeks, realistically simulating everyday dietary recording. Self-efficacy questionnaires administered before and after the experiment, together with the dietary records themselves, served as the basis for evaluation. The results show that the system not only raises individual self-efficacy but also changes dietary intake behavior, helping users record their daily lives while eating more healthily.
