About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
181

Engineering Magnetism in Rare Earth Garnet and Metallic Thin Film Heterostructures

Lee, Aidan Jarreau January 2020 (has links)
No description available.
182

Vägen mot öppen vetenskap : Tillkomsten och utformningen av forskningsdatastödet vid sex svenska universitetsbibliotek / The road to open science : The origin and design of the research data support at six Swedish university libraries

Bornsäter, Barbro January 2022 (has links)
Introduction: Research data management is a highly topical subject, and university libraries around the world are working hard to establish well-functioning support as well as technical solutions for sharing and storing research data. Swedish university libraries are no exception, and this study aims to give a clearer view of how these support functions came to take the form they have today, what they look like now, and what the plans for them are.  Method: Eight people working with research data support at six Swedish university libraries were interviewed about the work of the support groups they are part of. The interviews were recorded, transcribed, and then analysed thematically by colour-coding themes in the text.  Analysis and results: The findings show that the people working with data management support at these six university libraries agree on many points about how the support needs to be developed to meet students' and researchers' needs. One of the most important tasks for the research data management groups is to offer more information sessions, workshops, and teaching, both to share their knowledge and to make people more aware of their existence and competences. Another is making sure technical solutions are in place to store data throughout the stages of the research data lifecycle. The training and development of the professionals working in the support groups is also very important, as this is a changing field and its rules and regulations do change. Lastly, the interviewees underline the importance of solid motivating factors for researchers to share their data: the support groups can make data sharing simpler and smoother for the researcher, but if it is unclear why it should be done, it will not happen.  Conclusions: The development of the research data support has been, and remains, slow and ongoing.
There is still a fair amount of work to be done, especially when it comes to the technical solutions that will enable safe and FAIR data sharing and storage. But this work cannot come only from university libraries and other university support functions; it must also come from publishers, funders, and other organisations with the power to change the norms of data sharing. One of the main obstacles to open data today is researchers' lack of motivation to share their data, and to reach the goal of open data by 2026 the entire merit system for researchers needs to change to include data sharing as an important component.  This is a two-year master's thesis in Library and Information Science.
183

Investigating New Guaiazulenes and Diketopyrrolopyrroles for Photonic Applications

Ghazvini Zadeh, Ebrahim 01 January 2015 (has links)
π-Conjugated systems have been the focus of study in recent years in order to understand their charge-transport and optical properties for use in organic electronic devices, fluorescence bioimaging, sensors, and 3D optical data storage (ODS), among others. As a result, several molecular building blocks have been designed, allowing new frontiers to be realized. While various successful building blocks have been fine-tuned at both the electronic and molecular structure level to provide advanced photophysical and optoelectronic characteristics, the azulene framework has been under-appreciated despite its unique electronic and optical properties. Among several attributes, azulenes are vibrant blue naturally occurring hydrocarbons that exhibit large dipolar character, coupled with stimuli-responsive behavior in acidic environments. Additionally, the non-toxic nature and accompanying eco-friendly features of some azulenes, namely guaiazulene, may set the stage for exploring a "greener" route towards photonic and conductive materials. The first part of this dissertation focuses on exploiting guaiazulene as a natural building block for the synthesis of chromophores with varying stimuli-responsiveness. Results described in Chapter 1 show that extending the conjugation of guaiazulene through its seven-membered-ring methyl group with aromatic substituents dramatically impacts the optical properties of the guaiazulenium carbocation. Study of these π-stabilized tropylium ions enabled establishing photophysical structure-property trends for guaiazulene-terminated π-conjugated analogs under acidic conditions, including absorption, emission, quantum yield, and optical band gap patterns. These results were exploited in the design of a photosensitive polymeric system with potential application in the field of three-dimensional (3D) optical data storage (ODS).
Chapter 2 describes the use of guaiazulene's reactive sites (C-3 and the C-4 methyl group) to generate a series of cyclopenta[ef]heptalenes that exhibit strong stimuli-responsive behavior. The approach presents a versatile route that allows various substrates to be incorporated into the resulting cyclopenta[ef]heptalenes, especially after optimization led to a one-pot reaction toward such tricyclic systems. Examining the UV-vis absorption profiles in neutral and acidic media showed that extending the conjugation at C(4) of the cyclopenta[ef]heptalene skeleton results in longer absorption maxima and smaller optical energy gaps. Additionally, it was concluded that these systems act as sensitizers of a UV-activated (< 300 nm) photoacid generator (PAG) via intermolecular photoinduced electron transfer (PeT), upon which the PAG undergoes photodecomposition, resulting in the generation of acid. In a related study, the guaiazulene methyl group at C-4 was employed to study the linear and nonlinear optical properties of 4-styrylguaiazulenes having the same π-donor but varying π-spacers. It was found that the conjugation length correlates with the extent of the bathochromic shift of the protonated species. On the other hand, a trend of decreasing quantum yield was established for this set of 4-styrylguaiazulenes, which can be explained by their increasingly higher degree of flexibility. The second part of this dissertation presents a comprehensive investigation of the linear photophysical, photochemical, and nonlinear optical properties of diketopyrrolopyrrole (DPP)-based derivatives, including two-photon absorption (2PA), femtosecond transient absorption, stimulated emission spectroscopy, and superfluorescence phenomena.
The synthetic feasibility, ease of modification, outstanding robustness, and attractive spectroscopic properties of DPPs have motivated their study for fluorescence microscopy applications, concluding that the prepared DPPs are potentially suitable chromophores for high-resolution stimulated emission depletion (STED) microscopy.
184

A Method for Monitoring Operating Equipment Effectiveness with the Internet of Things and Big Data

Hays, Carl D, III 01 June 2021 (has links) (PDF)
The purpose of this paper was to take the Overall Equipment Effectiveness productivity formula used in plant manufacturing and adapt it to measuring productivity for forklifts. Productivity for a forklift was defined as being available and picking up and moving containers at port locations in Seattle and Alaska. This research takes performance measures from plant manufacturing and applies them to mobile equipment in order to establish the most effective means of analyzing reliability and productivity. Using the Internet of Things to collect data on fifteen forklift trucks in three different locations, the data was analyzed over a six-month period to rank the forklifts' productivity from 1 to 15 using the Operating Equipment Effectiveness (OPEE) formula. This ranking was compared to the industry-standard utilization metric to demonstrate how this approach yields a better performance analysis and gives operations managers a more accurate tool for managing their fleets of equipment than current methods. The analysis was shared with a fleet operations manager, whose feedback indicated there would be considerable value in analyzing his operations using this process. The results of this research identified key areas for improvement in equipment reliability, showed the need for additional operator training on the proper use of machines, and provided managers with insights into equipment operations at remote locations they had not visited or evaluated on-site.
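The OEE-style ranking described above can be sketched as follows. The three factors (availability, performance, quality) follow the standard OEE formula; the field names and figures are illustrative assumptions, not the author's actual fleet data:

```python
# Sketch of an OEE-style productivity score for a forklift fleet.
# OEE = availability x performance x quality; all inputs are illustrative.

def oee(planned_hours, run_hours, moves_done, moves_target, good_moves):
    """Return an OEE-style score in [0, 1] for one forklift."""
    availability = run_hours / planned_hours   # fraction of planned time running
    performance = moves_done / moves_target    # actual vs. target move rate
    quality = good_moves / moves_done          # fraction of error-free moves
    return availability * performance * quality

fleet = {
    "FL-01": oee(160, 120, 900, 1000, 880),
    "FL-02": oee(160, 150, 700, 1000, 690),
    "FL-03": oee(160, 100, 950, 1000, 940),
}

# Rank forklifts from most to least productive.
ranking = sorted(fleet, key=fleet.get, reverse=True)
print(ranking)  # ['FL-01', 'FL-02', 'FL-03']
```

Note that the composite score reorders the fleet relative to raw utilization alone: FL-02 has the highest availability but still ranks below FL-01 once performance and quality are factored in.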
185

Three Dimensional Data Storage in Polymeric Systems

Ryan, Christopher James 26 June 2012 (has links)
No description available.
186

Navigating the Risks of Dark Data : An Investigation into Personal Safety

Gautam, Anshu January 2023 (has links)
With the exponential proliferation of data, there has been a surge in data generation from diverse sources, including social media platforms, websites, mobile devices, and sensors. However, not all data is readily visible or accessible to the public, leading to the emergence of the concept known as "dark data." This type of data can exist in structured or unstructured formats and can be stored in various repositories, such as databases, log files, and backups. The reasons behind data being classified as "dark" can vary, encompassing factors such as limited awareness, insufficient resources or tools for data analysis, or a perception of irrelevance to current business operations. This research employs a qualitative methodology incorporating audio/video recordings and personal interviews to gather data, aiming to gain insights into individuals' understanding of the risks associated with dark data and their behaviours concerning the sharing of personal information online. Through thematic analysis of the collected data, patterns and trends in individuals' risk perceptions regarding dark data become evident. The findings of this study illuminate the multiple dimensions of individuals' risk perceptions and their influence on attitudes towards sharing personal information in online contexts. These insights provide a valuable understanding of the factors that shape individuals' decisions concerning data privacy and security in the digital era. By contributing to the existing body of knowledge, this research offers a deeper comprehension of the interplay between dark-data risks, individuals' perceptions, and their behaviours pertaining to online information sharing. The implications of this study can inform the development of strategies and interventions aimed at fostering informed decision-making and ensuring personal safety in an increasingly data-centric world.
187

Bluetooth-enheter i offentliga rummet och anonymisering av data

Nilsson, Mattias, Olsson, Sebastian January 2015 (has links)
The Internet of Things (IoT) provides great opportunities to collect data for different purposes, such as estimating the number of people present in order to control the heating of a room. Furthermore, IoT systems can automate tasks that help people. This study examines what kind of data can be of interest to collect in order to estimate the number of people in a public place, and how sensitive data can be anonymized when gathered. To do this, we investigated how MAC addresses from Bluetooth devices could be used to estimate the number of people. To collect MAC addresses, a proof-of-concept system was developed in which an Android application gathered MAC addresses and anonymized them before they were stored in a database.
The application anonymizes the unique MAC address at three levels of security. Field studies were conducted in which the number of people was counted visually and anonymized collections of MAC addresses were then made. The conclusion was that Bluetooth is difficult to use for estimating the number of people, because not everyone has Bluetooth switched on. The application developed demonstrates that data can be collected securely and thus without violating privacy.
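A minimal sketch of tiered MAC-address anonymization in the spirit described above; the three levels shown here (prefix-preserving hash, full salted hash, truncated digest) are illustrative assumptions, not necessarily the thesis's exact scheme:

```python
import hashlib

def anonymize_mac(mac: str, level: int, salt: str = "demo-salt") -> str:
    """Anonymize a Bluetooth MAC address at one of three illustrative levels.

    Level 1: keep the vendor prefix (OUI), hash only the device-specific half.
    Level 2: salted hash of the whole address (linkable only with the salt).
    Level 3: short truncated digest, discarding most of the hash (count-only use).
    """
    mac = mac.upper()
    if level == 1:
        oui, device = mac[:8], mac[9:]
        digest = hashlib.sha256((salt + device).encode()).hexdigest()
        return f"{oui}:{digest[:12]}"
    if level == 2:
        return hashlib.sha256((salt + mac).encode()).hexdigest()
    if level == 3:
        return hashlib.sha256((salt + mac).encode()).hexdigest()[:8]
    raise ValueError("level must be 1, 2, or 3")

# The same device always maps to the same token within a session, so the raw
# address never needs to be stored, yet distinct tokens can still be counted
# to estimate crowd size.
tokens = {anonymize_mac("AA:BB:CC:11:22:33", 2),
          anonymize_mac("AA:BB:CC:44:55:66", 2)}
print(len(tokens))  # two distinct devices -> 2
```

A per-session random salt (rather than the fixed placeholder above) would additionally prevent tokens from being linked across collection sessions.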
188

Improving Search Ranking Using a Composite Scoring Approach

Snedden, Larry D 01 January 2017 (has links)
In this thesis, the improvement of relevance in computerized search results is studied. Information search tools return ranked lists of documents ordered by the documents' relevance to the user-supplied query. Using a small number of words and phrases to represent complex ideas and concepts makes user search queries information-sparse. This sparsity challenges search tools to locate relevant documents for users. A review of the challenges of information search helps to identify the problems and offers suggestions for improving current search tools. Using the suggestions put forth by the Strategic Workshop on Information Retrieval in Lorne (SWIRL), a composite scoring approach (the Composite Scorer) is developed. The Composite Scorer considers various aspects of information needs to improve ranked search results, returning records relevant to the user's information need. The Florida Fusion Center (FFC), a local law enforcement agency, needs a more effective information search tool. The agency daily processes large amounts of police reports, typically written as text documents, and current search methods require inordinate amounts of time and skill to identify relevant reports from this large collection. An experiment conducted by FFC investigators contrasted the composite scoring approach against a common search scoring approach (TF-IDF). In the experiment, police investigators used a custom-built software interface to run several use-case scenarios, searching for documents related to various criminal investigations. These expert users then evaluated the top ten ranked documents returned by both scorers, rating the returned documents' relevance. The evaluations were collected, and the measurements were used to compare the performance of the two scorers.
A search that returns many irrelevant documents has a cost to users, in time and potentially in unsolved crimes; a cost function contrasted the difference in cost between the two scoring methods across the use cases. Mean Average Precision (MAP), a common method for evaluating ranked-list search results, was computed for both scoring methods to provide a numeric value representing each scorer's accuracy at returning relevant documents in the top ten of a ranked list. The purpose of this study is to determine whether a composite scoring approach to ranked lists, one that considers multiple aspects of a user's search, can improve the quality of search by returning greater numbers of relevant documents. This research contributes to the understanding of composite scoring methods for improving search results. Understanding the value of composite scoring methods allows researchers to evaluate, explore, and possibly extend the approach, incorporating other information aspects such as word and document meaning.
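The MAP measure used to compare the two scorers can be sketched as follows (this is the standard definition, cut off at the top ten as in the experiment; the document IDs and relevance sets are made up):

```python
def average_precision(ranked, relevant, k=10):
    """Average precision of one ranked list, cut off at rank k."""
    hits, precision_sum = 0, 0.0
    for i, doc in enumerate(ranked[:k], start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / i   # precision at each relevant hit
    return precision_sum / min(len(relevant), k) if relevant else 0.0

def mean_average_precision(queries):
    """MAP over (ranked_list, relevant_set) pairs, one pair per query."""
    return sum(average_precision(r, rel) for r, rel in queries) / len(queries)

queries = [
    (["d1", "d2", "d3", "d4"], {"d1", "d3"}),  # AP = (1/1 + 2/3) / 2
    (["d5", "d6", "d7"],       {"d7"}),        # AP = (1/3) / 1
]
print(round(mean_average_precision(queries), 3))  # 0.583
```

A scorer that pushes relevant documents toward the top of each list raises the per-query average precision, and hence the MAP, which is what makes it a natural single number for contrasting the Composite Scorer against TF-IDF.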
189

ESTIMATION ON GIBBS ENTROPY FOR AN ENSEMBLE

Sake, Lekhya Sai 01 December 2015 (has links)
In this world of growing technology, any small improvement on the present state of the art can create a revolution. One popular revolution in the field of computer science is parallel computing. A single parallel execution is not sufficient to reveal its non-deterministic features, since the same execution with the same data at a different time can follow a different path. Seeing how far the non-determinism of a parallel execution can extend creates the need for an ensemble of executions. This project implements a program to estimate the Gibbs entropy of an ensemble of parallel executions. The goal is to develop tools for studying the non-deterministic behaviour of parallel code based on execution entropy, and to use these tools for current and future research.
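The entropy estimate described above can be sketched by treating the observed frequency of each distinct execution path as its probability; the path encoding below is a made-up toy, not the project's actual instrumentation:

```python
import math
from collections import Counter

def gibbs_entropy(executions):
    """Estimate Gibbs (Shannon) entropy H = -sum(p_i * ln p_i) over an
    ensemble, where p_i is the observed frequency of each distinct path."""
    counts = Counter(executions)
    total = len(executions)
    return sum(-(c / total) * math.log(c / total) for c in counts.values())

# Toy ensemble: each string stands for the thread interleaving seen in one run.
deterministic = ["ABAB"] * 8                       # one path only
nondeterministic = ["ABAB", "ABBA", "BABA", "ABAB"]

print(gibbs_entropy(deterministic))                 # 0.0 (no non-determinism)
print(round(gibbs_entropy(nondeterministic), 3))    # 1.04
```

Zero entropy means every run took the same path; higher values indicate that the parallel code explored more, and more evenly distributed, interleavings across the ensemble.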
190

Network Coding in Distributed, Dynamic, and Wireless Environments: Algorithms and Applications

Chaudhry, Mohammad December 2011 (has links)
Network coding is a new paradigm that has been shown to improve throughput, fault tolerance, and other quality-of-service parameters in communication networks. The basic idea of network coding techniques is to exploit the "mixing" nature of information flows, i.e., many algebraic operations (e.g., addition, subtraction, etc.) can be performed over the data packets, whereas traditionally information flows are treated as physical commodities (e.g., cars) over which algebraic operations cannot be performed. In this dissertation we answer some of the important open questions related to network coding. Our work can be divided into four major parts. Firstly, we focus on network code design for dynamic networks, i.e., networks with frequently changing topologies and frequently changing sets of users. Examples of such dynamic networks are content distribution networks, peer-to-peer networks, and mobile wireless networks. A change in the network might make a previously feasible network code infeasible, i.e., not all users may be able to receive their demands. The central problem in the design of a feasible network code is to assign local encoding coefficients for each pair of links in a way that allows every user to decode the required packets. We analyze the problem of maintaining the feasibility of a network code, and provide bounds on the number of modifications required under dynamic settings. We also present distributed algorithms for network code design, and propose a new path-based assignment of encoding coefficients to construct a feasible network code. Secondly, we investigate network coding problems in wireless networks. It has been shown that network coding techniques can significantly increase the overall throughput of wireless networks by taking advantage of their broadcast nature.
In wireless networks each packet transmitted by a device is broadcast within a certain area and can be overheard by neighboring devices. When a device needs to transmit packets, it employs Index Coding, which uses the knowledge of what the device's neighbors have overheard in order to reduce the number of transmissions. With Index Coding, each transmitted packet can be a linear combination of the original packets. The Index Coding problem has been proven to be NP-hard, and NP-hard to approximate. We propose an efficient exact solution and several heuristic solutions for the Index Coding problem. Noting that the Index Coding problem is NP-hard to approximate, we look at it from a novel perspective and define the Complementary Index Coding problem, where the objective is to maximize the number of transmissions saved by employing coding compared to a solution that does not involve coding. We prove that the Complementary Index Coding problem can be approximated in several cases of practical importance. We investigate the computational complexity of both the multiple-unicast and multiple-multicast scenarios of the Complementary Index Coding problem, and provide polynomial-time approximation algorithms. Thirdly, we consider the problem of accessing large data files stored at multiple locations across a content distribution, peer-to-peer, or massive storage network. Parts of the data can be stored in either original or encoded form at multiple network locations. Clients access parts of the data through simultaneous downloads from several servers across the network. For each link used, the client has to pay some cost. A client might not be able to access a subset of servers simultaneously due to network restrictions, e.g., congestion. Furthermore, a subset of the servers might contain correlated data, and accessing such a subset might not increase the amount of information at the client.
We present a novel efficient polynomial-time solution for this problem that leverages matroid theory. Fourthly, we explore applications of network coding for congestion mitigation and overflow avoidance in the global routing stage of Very Large Scale Integration (VLSI) physical design. Smaller and smarter devices have resulted in a significant increase in the density of on-chip components, which has made congestion and overflow critical issues in on-chip networks. We present novel techniques and algorithms for reducing congestion and minimizing overflows.
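The Index Coding idea described above, coding packets so that neighbors can decode using what they have already overheard, can be illustrated with the classic two-client XOR example (a toy scenario, not one of the dissertation's algorithms):

```python
# Toy Index Coding example: two clients each want the packet the other one
# has already overheard, so a single XOR-coded broadcast replaces two
# separate transmissions.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"packet-1", b"packet-2"

# Client A holds p2 and wants p1; client B holds p1 and wants p2.
coded = xor_bytes(p1, p2)             # one broadcast of p1 XOR p2

decoded_at_a = xor_bytes(coded, p2)   # A cancels p2 to recover p1
decoded_at_b = xor_bytes(coded, p1)   # B cancels p1 to recover p2

print(decoded_at_a, decoded_at_b)     # b'packet-1' b'packet-2'
```

Here coding saves one of two transmissions; in the Complementary Index Coding formulation, it is exactly this number of saved transmissions that the objective maximizes.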
