401

Digitalizace, popis pomocí metadat a jejich formáty / Digitalization, metadata description and metadata formats

Hutař, Jan January 2012 (has links)
(EN) This thesis is dedicated to the processes of digitization and metadata description, as well as the links that connect them. In recent years, another topic has become relevant for both processes: the logical long-term preservation of digital objects. Long-term preservation depends on metadata and therefore on the processes of digitization, during which important metadata is created. The first introductory chapter of the thesis briefly describes, with an emphasis on metadata, digitization and the theoretical and practical problems of long-term preservation of digital objects. The OAIS reference model is also analysed, since it underpins digital preservation and current preservation metadata standards; OAIS is also important for the shape and functionality of digital repositories. Metadata is also the topic of the next chapter of the thesis. General metadata use and its development are discussed, with emphasis on administrative, technical and preservation metadata. The following chapter focuses on the use of metadata in the National Library of the Czech Republic. It describes the evolution during two periods leading up to the present, including comments on how long-term preservation has been reflected in the metadata standards used. The second-to-last part...
402

Privacy Paradox : En kvalitativ studie om svenskars medvetenhet och värnande om integritet / Privacy Paradox : A qualitative study on Swedes' awareness and protection of privacy

Harzdorf, Hjördis, Talal Abdulrahman, Hanin, Duric, Sumejja January 2019 (has links)
Genom digitalisering av samhället och teknologins utveckling har marknadsföringsstrategier progressivt reformerats, från att uppmärksamma produkter mot konsumenten till att istället sätta konsumenten i fokus. Genom avancerade algoritmer, Business Intelligence och digitala DNA-spår har det blivit möjligt att individualisera och rikta marknadsföring mot konsumentens intressen och även förutse individens konsumentbeteende. Samtidigt uttrycker individer ett stort värde för anonymitet och integritet online. Trots detta fortsätter konsumenter att frivilligt lämna sin persondata, främst via olika kundklubbar, internet och sociala medier. Detta beteende påvisar en så kallad “privacy paradox”. Privacy paradox syftar på medvetenhet och oro kring utgivandet av persondata samtidigt som man agerar annorlunda. Avsikten med denna studie var att utforska om fenomenet privacy paradox existerar inom svenska konsumenters handlingar och konsumentens medvetenhet kring användning av personlig data för riktad marknadsföring online. Det empiriska materialet i denna studie består av semi-strukturerade intervjuer med sju olika respondenter gällande deras medvetenhet, tillit och integritet online. Resultatet analyserades med hjälp av den tematiska strategin för att lättare identifiera beteendemönster som respondenterna uppvisade. Slutligen besvaras fenomenet privacy paradox hos svenska konsumenter genom tre forskningsfrågor: 1. “Hur medvetna är svenska konsumenter om den information som de delar med sig av, i synnerhet inom riktad marknadsföring?” 2. “Hur mycket värnar svenska konsumenter om sin integritet?” 3. “Påvisar svenska konsumenter privacy paradox och varför?”. Majoriteten av respondenterna var medvetna om personliga uppgifter online, dock varierade medvetenheten om vad för information som fanns tillgänglig både för privata användare och verksamheter. Man sa sig även värna om sin integritet men ens handlingar stödde inte detta till fullo.
Med hjälp av denna studie fann man att fenomenet privacy paradox existerar hos de svenska konsumenter som deltog i denna studie. Anledningar till detta var bland annat att man inte vill bli exkluderad från samhället och det kognitiva förtroendet till verksamheter. Man litar på att de gör rätt för sig. Värnande om integritet visades genom att man minskade mängden personinformation som andra privatpersoner kunde komma åt. En annan anledning som uppkom var svårigheten i att bryta vanor och beteendemönster. Därför fortsätter man agera på samma sätt som tidigare, trots ny kunskap samt GDPR. Respondenter hade olika nivåer av förståelse för riktad marknadsföring. Det majoriteten inte var medvetna om var mängden av lagrad information samt hur den samlas in, t.ex. genom cookies. / Through the digitalisation of society and technological development, marketing strategies have progressively been reformed, from drawing attention to products to instead placing the consumer at the centre of attention. Advanced algorithms, Business Intelligence and digital DNA tracing have enabled individualised marketing targeted at consumer interests and made it possible to predict consumer behaviour. Meanwhile, individuals place great value on anonymity and privacy online. Despite this, consumers keep sharing their personal data voluntarily, primarily through customer clubs, the internet and social media. This behaviour demonstrates a so-called “privacy paradox”: awareness of and concern about sharing personal data, while still sharing it. The purpose of this study was to examine whether the privacy paradox exists in Swedish consumers' actions, and how aware consumers are of the use of personal data for targeted online marketing.
The empirical material in this study consists of semi-structured interviews with seven respondents regarding their awareness, trust and privacy online. The results were analysed thematically to identify behavioural patterns among the respondents. Lastly, the privacy paradox among Swedish consumers is addressed through three research questions: 1. “How aware are Swedish consumers of the information they share, particularly in targeted marketing?” 2. “How much do Swedish consumers care about their privacy?” 3. “Do Swedish consumers exhibit the privacy paradox, and why?”. The majority of the respondents were aware that personal information exists online, but awareness of what kind of information is available to both private users and organisations varied. While respondents said they wanted to protect their privacy, their actions did not fully support this. This study concludes that the privacy paradox exists among the Swedish consumers who participated. Reasons include the unwillingness to be excluded from society and cognitive trust in organisations: respondents trust that they do the right thing. Respondents protected their privacy by reducing the amount of personal information other individuals could access. Another reason brought up was the difficulty of changing habits and behaviour; respondents therefore continued acting as before, despite new knowledge and the GDPR. Respondents showed different levels of understanding of targeted marketing, but the majority were not aware of the amount of stored information or how it is collected, for example through cookies.
403

Qualitative analysis of challenges in geodata management : An interview study analysing challenges of geodata management in Swedish companies and public authorities / Kvalitativ analys av utmaningar inom geodataförvaltning : En intervjustudie som analyserar utmaningar inom geodataförvaltning bland svenska företag och myndigheter

Kalhory, Josef January 2022 (has links)
As the datasphere constantly grows, so does the need for proper management of this data in order to minimise inefficiencies in its use. Geodata is no exception to this need. The purpose of this thesis was to investigate the current challenges of geodata management in Swedish companies and public authorities through qualitative analysis by interviews. Geodata and GIS users from the public and private sector made up the pool of interviewees, and a total of 20 interviews were conducted. Despite a large diversity of daily tasks, from data transfer for a customer in a system change process to updating attributes in NIS tools, all of the interviewees faced some degree of challenge with respect to the management of geodata. The results showed that the main challenges concerned inadequate or missing quality in geodata and its metadata, as well as unclear locations of these datasets. A scarcity of common understanding of geodata and GIS systems among the colleagues of geodata and GIS users causes these colleagues to often deliver incorrect, poorly formatted or low-quality geodata and metadata. The large number of geodata file formats also contributes to confusion among geodata and GIS users and non-users, which directly and indirectly causes inefficiency. It was determined that challenges of geodata management are widespread in the Swedish public and private sector. Possible solutions would be to simplify geodata with fewer file formats, as well as better and clearer coordination at organisational levels. Educating non-geodata and GIS users in the workforce, as well as in higher educational institutions with majors related to geodata, was also suggested as a possible way to minimise these challenges.
404

Incorporating Metadata Into the Active Learning Cycle for 2D Object Detection / Inkorporera metadata i aktiv inlärning för 2D objektdetektering

Stadler, Karsten January 2021 (has links)
In the past years, Deep Convolutional Neural Networks have proven very useful for 2D Object Detection in many applications. These types of networks require large amounts of labeled data, which can be increasingly costly for companies deploying these detectors in practice if the data quality is lacking. Pool-based Active Learning is an iterative process of collecting subsets of data to be labeled by a human annotator and used for training, in order to optimize performance per labeled image. The detectors used in Active Learning cycles are conventionally pre-trained with a small subset, approximately 2% of the available data, labeled uniformly at random. This is something I challenged in this thesis by using image metadata. Since many Machine Learning models are a "jack of all trades, master of none", and it is hard to train models that generalize to the whole data domain, it can be interesting to develop a detector for a certain target metadata domain. A simple Monte Carlo method, Rejection Sampling, can be implemented to sample according to a metadata target domain. This requires a target and a proposal metadata distribution. The proposal metadata distribution is a parametric model, in the form of a Gaussian Mixture Model, learned from the training metadata. The parametric model for the target distribution can be learned in a similar manner, however from a target dataset. In this way, only the training images with metadata most similar to the target metadata distribution are sampled. This sampling approach was employed and tested with a 2D Object Detector: Faster R-CNN with a ResNet-50 backbone. The Rejection Sampling approach was tested against conventional uniform random sampling and a classical Active Learning baseline: Min Entropy Sampling. The performance was measured and compared on two different target metadata distributions inferred from a specific target dataset.
With a labeling budget of 2% for each cycle, the maximum Mean Average Precision at 0.5 Intersection over Union on the target set was calculated for each cycle. My proposed approach has a 40% relative performance advantage over uniform random sampling in the first cycle, and 10% after 9 cycles. Overall, my approach required only 37% of the labeled data to beat the next best tested sampler: conventional uniform random sampling. / De senaste åren har djupa neurala faltningsnätverk visat sig vara mycket användbara för 2D-objektdetektering i många applikationer. Den här typen av nätverk behöver stora mängder etiketterad data, något som kan innebära ökade kostnader för företag som använder dem, om kvaliteten på etiketterna är bristfällig. Pool-baserad aktiv inlärning är en iterativ process som innebär insamling av delmängder data som ska etiketteras av en människa och användas för träning, för att optimera prestanda per etiketterad bild. Detektorerna som används i aktiv inlärning är konventionellt sett förtränade med en mindre delmängd data, ungefär 2% av all tillgänglig data, etiketterad enligt slumpen. Det här är något jag utmanade i det här arbetet genom att använda bildmetadata. Med motiveringen att många maskininlärningsmodeller presterar sämre på större datadomäner, eftersom det kan vara svårt att lära detektorer stora datadomäner, kan det vara intressant att utveckla en detektor för en särskild metadata-måldomän. För att samla in data enligt en metadata-måldomän kan en enkel Monte Carlo-metod, Rejection Sampling, implementeras. Det skulle behövas en mål-metadatadistribution och en faktisk metadatadistribution. Den faktiska metadatadistributionen skulle vara en parametrisk modell i form av en Gaussisk blandningsmodell som är tränad på träningsdatans metadata. Den parametriska modellen för mål-metadatadistributionen skulle kunna tränas på liknande sätt, fast från mål-datasetet.
På detta sätt skulle endast träningsbilder med metadata mest lik mål-datadistributionen samlas in. Den här samplingsmetoden utvecklades och testades med en 2D-objektdetektor: Faster R-CNN med ResNet-50 som bildegenskapsextraktor. Rejection Sampling-metoden testades mot konventionell likformig slumpmässig sampling och en klassisk aktiv inlärningsmetod: Minimum Entropi-sampling. Prestandan mättes och jämfördes mellan två olika mål-metadatadistributioner som togs fram från ett specifikt mål-dataset. Med en etiketteringsbudget på 2% för varje cykel beräknades den maximala medelvärdesprecisionen vid 0.5 snitt över union för mål-datasetet. Min metod har 40% bättre prestanda än slumpmässig likformig insamling i första cykeln, och 10% efter 9 cykler. Överlag behövde min metod endast 37% av den etiketterade datan för att slå den näst bästa samplingsmetoden: slumpmässig likformig insamling.
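The rejection-sampling scheme summarized in the abstract above can be sketched in a few lines. The sketch below is a hypothetical illustration, not the thesis code: single Gaussians stand in for the thesis's Gaussian Mixture Models, the two-dimensional "metadata" is synthetic, and all names are invented. Each training item x is accepted with probability p(x) / (M·q(x)), where q is fit to the training-pool metadata and p to the target-set metadata.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D "metadata" vectors (e.g. time-of-day, brightness):
# a large training pool and a smaller target set shifted toward (1.5, 1.5).
train_meta = rng.normal(0.0, 1.0, size=(2000, 2))
target_meta = rng.normal(1.5, 0.5, size=(500, 2))

def fit_gaussian(x):
    """Fit a single multivariate Gaussian (stand-in for a GMM)."""
    return x.mean(axis=0), np.cov(x, rowvar=False)

def log_density(x, mu, cov):
    """Log-density of each row of x under N(mu, cov)."""
    d = x - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum("ij,jk,ik->i", d, inv, d)
    return -0.5 * (quad + logdet + x.shape[1] * np.log(2 * np.pi))

mu_q, cov_q = fit_gaussian(train_meta)   # proposal q: training metadata
mu_p, cov_p = fit_gaussian(target_meta)  # target p: target metadata

# Rejection sampling: accept item i with probability p(x_i) / (M * q(x_i)),
# with the envelope constant M taken as the max density ratio over the pool.
log_ratio = log_density(train_meta, mu_p, cov_p) - log_density(train_meta, mu_q, cov_q)
log_M = log_ratio.max()
accept = rng.random(len(train_meta)) < np.exp(log_ratio - log_M)
selected = np.flatnonzero(accept)  # indices of training images to label
```

The accepted subset's metadata concentrates near the target distribution, which is the effect the abstract describes: labeling effort is spent on pool images that resemble the target domain.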
405

Energibesparing med automatiserad inneklimat- och ventilationsstyrning – drivkrafter och barriärer

Selhammer, Andreas January 2022 (has links)
Energiförbrukningen i Sveriges fastigheter uppgår till närmare 40% av den totala primärenergin för Sverige. Denna siffra förväntas öka ytterligare under den kommande 20-årsperioden. Av denna energimängd motsvarar 67% byggnadens operativa fas. I denna studie undersöks hur energiförbrukningen kan minskas genom att införa mer byggnadsautomation och högre automationsgrad. Detta för att få fastigheter att i ökad grad anpassa sina energibehov efter faktiska rådande behov i stället för mer statiska driftfall. En litteraturgranskning inom forskningsfältet utfördes mot forskningsfråga 1: Finns det en korrelation mellan energibesparing och automationsgrad i inneklimat- och ventilationsstyrning? Här har tekniker som building management system (BMS) och building energy management system (BEMS) påvisat besparingar runt 30% vid införande. Vidare har tekniker som digitala tvillingar påvisat besparingar mellan 6,2%-21,5% samt lägre påverkan på fastighetens brukare, genom ett mer prediktivt underhåll och bättre förhandsanalyser av energieffektiva utfall innan implementering. Även artificiell intelligens (AI) påvisade goda energibesparingar vid införande, med energibesparingar mellan 14%-44%. Här indikerar studien att det finns problem med implementeringen. Detta har sin härkomst dels i felaktigt konstruerad metadata, dels i för få sensorer som ger AI ett för litet beslutsunderlag att arbeta mot. Här har studier funnit att AI införd på en för dålig dataplattform kan bli direkt kontraproduktiv och öka energiförbrukningen i stället för att sänka den. Dock framkommer det att det finns en positiv koppling mellan energibesparing och automationsgrad, då besparingen ligger i fastighetens förmåga att adaptera sig till rådande omständigheter. Forskningsfråga 2 avser: Vad föreligger det för hinder och drivkrafter för ökad implementering av automationsgrad i inneklimat- och ventilationsstyrning?
Gällande barriärer och hinder påvisar svaren från enkätundersökningen utförd i denna studie att det förekommer främst kunskapshinder och ekonomiska hinder för vidare implementering av automation inom fastigheterna. Vidare kan det utrönas att förvaltare och drifttekniker arbetar mer aktivt med energiledningsfrågor än de övriga skråna som undersöks i denna undersökning. Här visar svaren att framför allt styrentreprenörerna och konsulterna i högre grad behöver informera om den nytta deras lösningar kan erbjuda för energikonservering, på ett sätt som mottagaren förstår och kan relatera till, för att motivera prisskillnader initialt i byggprocessen och med detta försöka överbrygga energiparadoxen, där kostnadseffektiva och energieffektiva lösningar uteblir som en konsekvens. / Energy consumption originating from Sweden's buildings and facilities amounts to almost 40% of the country's total primary energy. This figure is expected to increase further over the next 20 years. Of this, 67% corresponds to the operational phase of the building. This study examines how this energy consumption can be reduced by adding a higher degree of building automation, so that properties increasingly adapt their energy needs to the actual prevailing needs instead of more static operation. A literature review in the research field was performed against research question 1: Is there a correlation between energy saving and degree of automation in indoor climate and ventilation control? Here, technologies such as building management systems (BMS) and building energy management systems (BEMS) have demonstrated savings of around 30% upon introduction. Technologies such as digital twins have demonstrated savings between 6.2% and 21.5%, as well as a lower effect on occupants' comfort, through better predictive maintenance and preliminary analysis of energy and comfort outcomes before real-life implementation.
AI also showed good energy-saving potential, with energy reductions between 14.4% and 44.36%. However, as this study states, problems can occur with the implementation. These originate partly in incorrectly constructed metadata and partly in a lack of sensors and actuators, which gives the AI insufficient data for training and incorrect bases for its forecasts. This study also found that AI, or other analysis tools, deployed on an insufficient data platform can be directly counterproductive and increase energy consumption instead. However, it appears that there is a positive connection between energy saving and degree of automation, because the savings lie in the property's ability to adapt to prevailing circumstances. Research question 2 refers to: What are the obstacles and driving forces for increased implementation of the degree of automation in indoor climate and ventilation control? The answers from the questionnaire show that there are mainly knowledge barriers and financial obstacles to further implementation of automation within the properties. Furthermore, it can be ascertained that facility managers and technicians engage more actively with energy management issues than the other professional groups in this survey. Above all, the answers show that automation contractors and consultants need to provide better information about the benefits of their automation solutions and how they could reduce energy waste, in a way the recipient understands and can relate to, in order to justify initial price differences in the building process and thereby try to bridge the energy paradox, where cost-effective and energy-efficient solutions are overlooked.
406

The project is completed! What now?

Legowski, Aris 20 April 2016 (has links) (PDF)
The Book of the Dead Project Bonn started in the early 1990s. Prof. Ursula Rößler-Köhler, who had previously laid the foundation for modern Book of the Dead studies with her work on BD chapter 17 applying the method of textual criticism, secured a 10-year funding from the German Research Foundation (DFG). In 2004 the project was granted another 9-year funding by the Academy of Sciences and Arts of North Rhine-Westphalia. One aim of the project was to gather all available evidence of Book of the Dead manuscripts spread across collections around the world. Today, the archive comprises approximately 3000 records of BD sources. In 2012 the corresponding database, after undergoing a transfer from FileMaker to XML format in collaboration with the department of e-Humanities at the University of Cologne, was launched and made publicly available online. The data sets include various kinds of information about the objects and the sets of BD spells and vignettes found on them. These are now easily accessible for statistical analyses, such as evaluations of neighbouring spells and sequences, or occurrences in specific locations or time periods. Furthermore, the database includes various metadata such as bibliographical information, translations of spells and a motif index. It is cross-connected with other Egyptological databases such as Trismegistos and the Thesaurus Linguae Aegyptiae. Since the project was completed at the end of 2012, the online database has been operating for a considerable amount of time, with scholars using it and trying out the several opportunities it provides. Now is the time for a first evaluation to see which functions of the database work well, which might have been ignored by users, and what information the database could provide scholars with for their actual research. Naturally, there is a need for continuous maintenance and updates on new findings and the latest research.
Furthermore, it is important to understand which possibly missing functions or information users wish to be included, and whether this is actually realisable. On the other hand, there might be opportunities for analyses that have not been fully understood and therefore have not been made use of. This presentation aims to address some of these issues concerning the BD online database and to gather ideas and possible collaborators for future BD project plans.
407

Next Generation Feature Roadmap for IP-Based Range Architectures

Kovach, Bob 10 1900 (has links)
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV / The initial efforts that resulted in the migration of range application traffic to an IP infrastructure largely focused on the challenge of obtaining reliable transport for range application streams including telemetry and digital video via IP packet-based network technology. With the emergence of architectural elements that support robust Quality of Service, multicast routing, and redundant operation, these problems have largely been resolved, and a large number of ranges are now successfully utilizing IP-based network topology to implement their backbone transport infrastructure. The attention now turns to the need to provide supplemental features that provide enhanced functionality in addition to raw stream transport. These features include:
* Stream monitoring and native test capability, usually called Service Assurance
* Extended support for Ancillary Data / Metadata
* Archive and Media Asset Management integration into the workflow
* Temporal alignment of application streams
This paper will describe a number of methods to implement these features utilizing an approach that leverages the features offered by IP-based technology, emphasizes the use of standards-based COTS implementations, and supports interworking between features.
408

Rule-Based Constraints for Metadata Validation and Verification in a Multi-Vendor Environment

Hamilton, John, Darr, Timothy, Fernandes, Ronald, Jones, Dave, Morgan, Jon 10 1900 (has links)
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV / This paper describes a method in which users realize the benefits of a standards-based method for capturing and evaluating verification and validation (V&V) rules within and across metadata instance documents. The method uses a natural language based syntax for the T&E metadata V&V rule set in order to abstract the highly technical rule languages to a domain-specific syntax. As a result, the domain expert can easily specify, validate and manage the specification and validation of the rules themselves. Our approach is very flexible in that under the hood, the method automatically translates rules to a host of target rule languages. We validated our method in a multi-vendor scenario involving Metadata Description Language (MDL) and Instrumentation Hardware Abstraction Language (IHAL) instance documents, user constraints, and domain constraints. The rules are captured in natural language, and used to perform V&V within a single metadata instance document and across multiple metadata instance documents.
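The cross-document validation idea described in this abstract can be illustrated with a toy sketch. The XML fragments, element names, and rule below are invented stand-ins, not real MDL or IHAL content, and the natural-language rule syntax of the paper is only paraphrased in a comment; this shows the general pattern of evaluating one domain rule across two metadata instance documents.

```python
import xml.etree.ElementTree as ET

# Two tiny stand-in "instance documents" (invented, not actual MDL/IHAL).
mdl_doc = ET.fromstring(
    "<MDL><Measurement name='temp1'/><Measurement name='press1'/></MDL>")
ihal_doc = ET.fromstring(
    "<IHAL><Channel measures='temp1'/><Channel measures='vib9'/></IHAL>")

# Domain rule, stated in natural language: "every Channel's 'measures'
# attribute must name a Measurement defined in the MDL document".
def check_channels_measure_defined_measurements(mdl, ihal):
    """Return a list of violation messages (empty list means the rule holds)."""
    defined = {m.get("name") for m in mdl.iter("Measurement")}
    return [f"Channel measures undefined '{c.get('measures')}'"
            for c in ihal.iter("Channel") if c.get("measures") not in defined]

violations = check_channels_measure_defined_measurements(mdl_doc, ihal_doc)
```

In the paper's approach the rule would be written once in a domain-specific natural-language syntax and translated automatically to a target rule language; the hand-written function here simply plays the role of that translated rule.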
409

BlobSeer: Towards efficient data storage management for large-scale, distributed systems

Nicolae, Bogdan 30 November 2010 (has links) (PDF)
With data volumes increasing at a high rate and the emergence of highly scalable infrastructures (cloud computing, petascale computing), distributed management of data becomes a crucial issue that faces many challenges. This thesis brings several contributions to address such challenges. First, it proposes a set of principles for designing highly scalable distributed storage systems that are optimized for heavy data access concurrency. In particular, it highlights the potentially large benefits of using versioning in this context. Second, based on these principles, it introduces a series of distributed data and metadata management algorithms that enable a high throughput under concurrency. Third, it shows how to efficiently implement these algorithms in practice, dealing with key issues such as high-performance parallel transfers, efficient maintenance of distributed data structures, fault tolerance, etc. These results are used to build BlobSeer, an experimental prototype that is used to demonstrate both the theoretical benefits of the approach in synthetic benchmarks, and the practical benefits in real-life application scenarios: as a storage backend for MapReduce applications, as a storage backend for deployment and snapshotting of virtual machine images in clouds, and as a quality-of-service enabled data storage service for cloud applications. Extensive experimentation on the Grid'5000 testbed shows that BlobSeer remains scalable and sustains a high throughput even under heavy access concurrency, outperforming several state-of-the-art approaches by a large margin.
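The versioning principle the abstract highlights can be shown with a minimal toy sketch. This is an assumption-laden illustration of the general idea, not BlobSeer's actual design, API, or code: each write publishes a new immutable version of the blob, so readers always see a stable snapshot and never block behind in-progress writes.

```python
import threading

class VersionedBlob:
    """Toy append-new-version blob store illustrating versioning for concurrency."""

    def __init__(self):
        self._versions = [b""]          # version 0: the empty blob
        self._lock = threading.Lock()   # serialises only the publish step

    def write(self, offset, data):
        """Apply a write to the latest snapshot and publish it as a new version."""
        with self._lock:
            base = bytearray(self._versions[-1])
            end = offset + len(data)
            if end > len(base):
                base.extend(b"\x00" * (end - len(base)))
            base[offset:end] = data
            self._versions.append(bytes(base))
            return len(self._versions) - 1  # new version number

    def read(self, version=None):
        """Readers get an immutable snapshot; no lock is needed to use it."""
        return self._versions[-1 if version is None else version]

blob = VersionedBlob()
v1 = blob.write(0, b"hello")    # publishes version 1: b"hello"
v2 = blob.write(5, b" world")   # publishes version 2: b"hello world"
```

Because published versions are never mutated, a reader holding version 1 keeps a consistent view even while version 2 is being produced, which is the property that lets such systems sustain high read throughput under write concurrency.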
410

Contextual image browsing in connection with music listening - matching music with specific images

Saha, Jonas January 2007 (has links)
This thesis discusses the possibility of combining music and images through the use of metadata. Test subjects from different usability tests say they are interested in seeing images of the band or artist they are listening to. Lyrics matching the current song are also something they would like to see. As a result, an application for cellphones was created with Flash Lite, showing that it is possible to listen to music, automatically retrieve images from Flickr and lyrics from Lyrictracker that match the music, and display them on a cellphone.
