About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Data blending in health care : Evaluation of data blending

Chen, Qian January 2016
This report is aimed at those who are interested in data analysis and data blending. Decision making is crucial for an organization to succeed in today's market. Data analysis is an important support activity in decision making and is applied in many industries, for example healthcare. For many years data analysts have worked on structured data in small volumes, with traditional methods such as spreadsheets. As new data sources such as social media emerged, data came to be generated in higher volume, velocity and variety [1]. The traditional methods data analysts apply are no longer capable of handling this situation. Hence scientists and engineers have developed a new technology called data blending. Data blending is the process of merging, sorting, joining and combining all the useful data into a functional dataset [2]. Some of the well-known data blending platforms include Datawatch, Microsoft Power Query for Excel, IBM DataWorks and Alteryx [3]. Synergus AB is a consulting company engaged in health economics, market access and Health Technology Assessment (HTA) [4]. The company does analysis for its clients, but the way it works is not efficient, so new tools and methods need to be applied. The company has decided to apply data blending in its daily work. My task in this project was to build datasets for analysis and create workflows for future use with a data blending platform. Out of my own interest, I also did research on data blending to understand how this new technology works. During the project I worked with four data sources: a Microsoft Excel worksheet, a CSV file, an MS Access database and a JSON file. I built the datasets the company needs. I also carried out a case study on the data blending process, focusing on the three steps of data handling, namely input, process and output. After the project, I reached the conclusion that data blending offers better performance and functionality, and that it is easy to learn and use.
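A minimal sketch of the kind of blending workflow the abstract describes, reading the four source types (Excel worksheet, CSV, Access database, JSON) and joining them into one functional dataset, written in Python with pandas. The file names, the join key and the Access ODBC connection are assumptions for illustration, not details from the thesis.

```python
# A minimal data-blending sketch in the spirit of the thesis: read several
# heterogeneous sources, harmonize a shared key, and join them into one
# functional dataset. File names, column names and the join key are
# hypothetical; the Access step assumes an ODBC driver is installed.
import pandas as pd
import pyodbc

excel_df = pd.read_excel("clients.xlsx")            # Microsoft Excel worksheet
csv_df = pd.read_csv("treatments.csv")              # CSV file
json_df = pd.read_json("outcomes.json")             # JSON file

conn = pyodbc.connect(                              # MS Access database
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=costs.accdb"
)
access_df = pd.read_sql("SELECT * FROM costs", conn)

# Process: normalize the shared key, then merge step by step (input -> process).
for df in (excel_df, csv_df, json_df, access_df):
    df["patient_id"] = df["patient_id"].astype(str).str.strip()

blended = (
    excel_df
    .merge(csv_df, on="patient_id", how="left")
    .merge(json_df, on="patient_id", how="left")
    .merge(access_df, on="patient_id", how="left")
    .sort_values("patient_id")
)

blended.to_csv("blended_dataset.csv", index=False)  # output
```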
122

Key Components in a Networked Data Acquisition System

Corry, Diarmuid 10 1900
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California / With the growing interest in networked data acquisition there has been a lot of focus on networked data acquisition systems. However, the requirements of a flight test instrumentation system go beyond networked DAUs. For example, an FTI network fabric has particular requirements for switches, time grandmasters, recorders, data servers and network terminals to the ground. This paper discusses these components, how they inter-operate in a single, fully networked system, and some FTI-oriented requirements for each. Where relevant, we present the results of some experiments with network latencies, packet loss, etc., and discuss some enhancements that can contribute to improved efficiency for flight test programs.
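A minimal sketch of the sort of latency and packet-loss experiment mentioned above, assuming a UDP echo endpoint is listening at the given address; the host, port, packet count and timeout are invented for the example, and a real FTI test would use instrumentation-grade time sources rather than time.time().

```python
# Send timestamped UDP probes to an echo endpoint (assumed to exist) and
# tally round-trip times and losses. Purely illustrative of the measurement
# idea, not the paper's actual test setup.
import socket
import struct
import time

def probe(host: str = "127.0.0.1", port: int = 9999, count: int = 100):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.5)                       # treat slower replies as lost
    rtts, lost = [], 0
    for seq in range(count):
        sock.sendto(struct.pack(">Id", seq, time.time()), (host, port))
        try:
            data, _ = sock.recvfrom(64)
            _, sent_at = struct.unpack(">Id", data[:12])
            rtts.append(time.time() - sent_at)
        except socket.timeout:
            lost += 1
    if rtts:
        print(f"min/avg/max RTT: {min(rtts):.6f}/{sum(rtts)/len(rtts):.6f}/"
              f"{max(rtts):.6f} s, lost {lost}/{count}")
    return rtts, lost
```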
123

THE USE OF IEEE P1451.3 SMART SENSORS IN A DATA ACQUISITION NETWORK

Eccles, Lee H. 10 1900
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / This paper describes the use of an IEEE P1451.3 smart sensor bus as part of a network-centric data acquisition system. IEEE P1451.3 provides for synchronized data acquisition from a number of transducers on a bus. The standard provides for Transducer Electronic Data Sheets (TEDS) that the manufacturer can use to describe the function and capabilities of the sensor module, as well as TEDS where the user can store information relevant to a particular application. The information in these TEDS can be used to generate much of the information required to process the data during or after a test. The use of this information to configure and operate a network-based data acquisition system is described.
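A simplified model of the two kinds of TEDS the abstract mentions, a manufacturer sheet describing the transducer and a user sheet for application-specific information; the field names are illustrative stand-ins, not the binary fields the IEEE P1451.3 standard actually defines.

```python
# Illustrative (not standard-accurate) TEDS model: a manufacturer-written
# sheet describing the transducer, and a user-writable sheet for
# application-specific notes.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ManufacturerTEDS:
    manufacturer: str
    model: str
    serial_number: str
    measurand: str          # e.g. "acceleration"
    min_value: float        # lower bound of the measurement range
    max_value: float        # upper bound of the measurement range
    units: str              # e.g. "m/s^2"

@dataclass
class UserTEDS:
    location: str           # where the sensor sits on the test article
    calibration_notes: str = ""
    extra: dict = field(default_factory=dict)

def describe(mfr: ManufacturerTEDS, user: UserTEDS) -> str:
    """Combine both sheets into the kind of channel description a data
    acquisition system could generate automatically, as the paper suggests."""
    return (f"{mfr.measurand} [{mfr.units}] {mfr.min_value}..{mfr.max_value}, "
            f"{mfr.manufacturer} {mfr.model} s/n {mfr.serial_number} "
            f"at {user.location}")

print(describe(
    ManufacturerTEDS("Acme", "AX-100", "0042", "acceleration", -50.0, 50.0, "m/s^2"),
    UserTEDS(location="left wing spar, station 12"),
))
```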
124

Kan e-tjänster förenklas och bli mer motiverande med gamification och öppna data? : En kvalitativ studie / Is it possible to simplify e-services and become more motivating with gamification and open data? : A qualitative study

Cederberg, Petter January 2016
Ahead of summer 2015, 40% of all building permit applications submitted to Karlstads kommun (Karlstad municipality) were either incorrect or incomplete, which leads to long delays between the submission of an application and a decision by the municipality's case officers. This bachelor's thesis aims to investigate whether, and in what way, e-services can be simplified with the help of gamification and open data, using the building permit application as an example of a municipal e-service at Karlstads kommun. Gamification means applying game mechanics or game-like experiences in non-game contexts to increase users' motivation and engagement. Open data is data that was previously unavailable being made available for the public to use, reuse and redistribute with no restriction other than attribution. The thesis was carried out with a literature review as its base and qualitative interviews with people who work with gamification, open data or municipal e-services. The conclusions of the study show that e-services may need to become more self-explanatory and simpler, so that more citizens are able to use them. By applying simple game mechanics that suit the context of an e-service, together with the information that can be obtained through open data, it is possible to simplify e-services and make them more motivating for the end user. Gamification can make e-services simpler and more motivating with, for example:
• giving the user more feedback
• a save function, so that the user can save their progress
• a progress indicator, so that the user can follow how far along in the process they are
Open data can make e-services simpler and more motivating with, for example:
• geographic data
• cross-linking of different open data sets
125

AN INTEGRATED APPROACH TO DATA ACQUISITION AND DATA PROCESSING FOR INDUSTRIAL APPLICATIONS

Schmalz, Axel 10 1900
International Telemetering Conference Proceedings / October 22-25, 1984 / Riviera Hotel, Las Vegas, Nevada / The requirements for data acquisition systems grow rapidly with the progress of technology, and increasingly complex test instruments are becoming available. Integrating instruments and computers into an operational measurement system, however, becomes more difficult and expensive as requirements increase. A family of instruments was developed that can perform complex measurement tasks without a large integration effort, since it provides a large number of compatible hardware and software modules for conditioning and converting signals into digital form, and for data storage, data transmission, and data pre-processing.
126

IN FLIGHT DATA REDUCTION FOR REDUCED BANDWIDTH TRANSMISSION

Hicks, William T. 11 1900
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada / The desire to acquire large quantities of high-speed vibration and acoustic data during aircraft testing is usually satisfied through on-board high-speed recording methods. However, there is often a need to have some of this data instantly available at the ground station for flight safety and other purposes. A Data Processor (DP) has been developed that allows an airborne data acquisition system to acquire large amounts of wideband analog data, process the data in real time, and derive reduced-bandwidth information from high-bandwidth channels. The reduced data can be inserted into a Pulse Code Modulation (PCM) stream and telemetered via a Radio Frequency (RF) link, with a potential 2000:1 reduction in bandwidth. This on-board processing capability also lends itself to additional tasks such as the generation of a reduced-bandwidth marker channel which can flag critical time periods of data activity. This flagging technique can be used to facilitate ground station analysis of specific segments of data, resulting in significant cost and time savings.
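A sketch of one way such a 2000:1 reduction could work, collapsing each block of 2000 wideband samples into a single RMS word for the PCM stream, with a marker flag for high-activity periods; the sample rate, block size and threshold are assumptions for the example, not the DP's actual design.

```python
# Collapse a high-rate vibration channel into low-rate summary words that fit
# a PCM stream. The 2000:1 figure is met here by reducing each block of 2000
# samples to one RMS value; all rates and the threshold are invented.
import numpy as np

SAMPLE_RATE = 100_000          # wideband input, samples/s (assumed)
REDUCTION = 2000               # samples per output word -> 50 words/s out
ALARM_RMS = 5.0                # threshold for the "marker channel" (assumed)

def reduce_channel(samples: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return (rms_words, marker_flags) for a 1-D array of analog samples."""
    n_blocks = len(samples) // REDUCTION
    blocks = samples[: n_blocks * REDUCTION].reshape(n_blocks, REDUCTION)
    rms = np.sqrt(np.mean(blocks.astype(np.float64) ** 2, axis=1))
    marker = rms > ALARM_RMS   # flags time periods of high data activity
    return rms, marker

# Example: one second of synthetic vibration data with a burst in the middle.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
signal = np.sin(2 * np.pi * 1000 * t)
signal[40_000:60_000] *= 20    # simulated high-activity event
rms_words, marker_flags = reduce_channel(signal)
print(f"{len(signal)} samples -> {len(rms_words)} PCM words; "
      f"{int(marker_flags.sum())} flagged blocks")
```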
127

Improving data quality : data consistency, deduplication, currency and accuracy

Yu, Wenyuan January 2013
Data quality is one of the key problems in data management. An unprecedented amount of data has been accumulated and has become a valuable asset of organizations, and the value of that data relies greatly on its quality. However, data is often dirty in real life: it may be inconsistent, duplicated, stale, inaccurate or incomplete, which reduces its usability and increases the cost of business. Consequently the need for improving data quality arises, which comprises five central issues: data consistency, data deduplication, data currency, data accuracy and information completeness. This thesis presents the results of our work on the first four issues: data consistency, deduplication, currency and accuracy.

The first part of the thesis investigates incremental verification of data consistency in distributed data. Given a distributed database D, a set S of conditional functional dependencies (CFDs), the set V of violations of the CFDs in D, and updates ΔD to D, the problem is to find, with minimum data shipment, the changes ΔV to V in response to ΔD. Although the problems are intractable, we show that they are bounded: there exist algorithms to detect errors such that their computational cost and data shipment are both linear in the size of ΔD and ΔV, independent of the size of the database D. Such incremental algorithms are provided for both vertically and horizontally partitioned data, and we show that the algorithms are optimal.

The second part of the thesis studies the interaction between record matching and data repairing. Record matching, the main technique underlying data deduplication, aims to identify tuples that refer to the same real-world object; repairing makes a database consistent by fixing errors in the data using constraints. These are treated as separate processes in most data cleaning systems, based on heuristic solutions. However, our studies show that repairing can effectively help us identify matches, and vice versa. To capture this interaction, we propose a uniform framework that seamlessly unifies repairing and matching operations to clean a database based on integrity constraints, matching rules and master data.

The third part of the thesis presents our study of finding certain fixes that are absolutely correct for data repairing. Data repairing methods based on integrity constraints are normally heuristic: they may not find certain fixes, and worse still, they may even introduce new errors when attempting to repair the data. This is problematic for critical data such as medical records, in which a seemingly minor error can have disastrous consequences. We propose a framework and an algorithm to find certain fixes, based on master data, a class of editing rules and user interactions. A prototype system is also developed.

The fourth part of the thesis introduces the inference of data currency and consistency for conflict resolution, where data currency aims to identify the current values of entities, and conflict resolution combines tuples that pertain to the same real-world entity into a single tuple while resolving conflicts, which is also an important issue for data deduplication. We show that data currency and consistency help each other in resolving conflicts. We study a number of associated fundamental problems, and develop an approach to conflict resolution by inferring data currency and consistency.

The last part of the thesis reports our study of data accuracy, on the longstanding relative accuracy problem: given tuples t1 and t2 that refer to the same entity e, determine whether t1[A] is more accurate than t2[A], i.e., whether t1[A] is closer to the true value of the A attribute of e than t2[A]. We introduce a class of accuracy rules and an inference system with a chase procedure to deduce relative accuracy, and study the related fundamental problems. We also propose a framework and algorithms for inferring accurate values with user interaction.
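For readers unfamiliar with CFDs, a small sketch of what a violation check looks like: a CFD is a functional dependency that only has to hold on tuples matching a pattern (here, zip determines city, but only for UK records). The rule and rows are invented; the thesis's contribution is maintaining the violation set incrementally under updates, rather than rescanning as this sketch does.

```python
# Naive (non-incremental) detection of CFD violations, for illustration only.
from collections import defaultdict

# CFD: for tuples with country == "UK", zip determines city.
PATTERN = {"country": "UK"}
LHS, RHS = "zip", "city"

def violations(tuples: list[dict]) -> list[tuple[dict, dict]]:
    """Return pairs of tuples that match the pattern and agree on LHS
    but disagree on RHS -- i.e., CFD violations."""
    groups: dict[str, list[dict]] = defaultdict(list)
    for t in tuples:
        if all(t.get(k) == v for k, v in PATTERN.items()):
            groups[t[LHS]].append(t)
    bad = []
    for same_lhs in groups.values():
        for i, t1 in enumerate(same_lhs):
            for t2 in same_lhs[i + 1:]:
                if t1[RHS] != t2[RHS]:
                    bad.append((t1, t2))
    return bad

rows = [
    {"country": "UK", "zip": "EH8 9AB", "city": "Edinburgh"},
    {"country": "UK", "zip": "EH8 9AB", "city": "London"},     # violation
    {"country": "US", "zip": "EH8 9AB", "city": "Portland"},   # pattern not matched
]
print(violations(rows))
```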
128

ACQUISITION AND TRANSMISSION OF SEISMIC DATA OVER PACKET RADIO

Hinterseer, Martin, Wegscheider, Christoph 10 1900
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada / Research on earthquakes and volcanoes is of particular importance in seismically unstable regions. To improve opportunities for research and civil defence, continuous coverage of the seismic activity of a large area at a given time is necessary. This paper deals with the issues involved in collecting environmental data from a number of autonomous field stations and transmitting it to a central station through a cost-effective, low-bandwidth packet radio network, covering the acquisition, preprocessing and transmission of seismic data. A prototype system, being developed at the University of Salzburg, Austria, is discussed.
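A sketch of one common tactic for fitting seismic samples through a low-bandwidth packet link, delta-encoding each packet so the payload consists of small, compressible differences; the packet layout is invented for the example and may differ from the Salzburg prototype.

```python
# Delta-encode 16-bit seismic samples into radio packets with a small header.
# For slowly varying signals the deltas are small and compress well under a
# subsequent entropy coder (not shown).
import struct

def encode_packet(station_id: int, seq: int, samples: list[int]) -> bytes:
    """Pack first differences of the samples behind a station/sequence header."""
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    header = struct.pack(">HHH", station_id, seq, len(deltas))
    return header + struct.pack(f">{len(deltas)}h", *deltas)

def decode_packet(packet: bytes) -> tuple[int, int, list[int]]:
    """Invert encode_packet by accumulating the deltas."""
    station_id, seq, n = struct.unpack(">HHH", packet[:6])
    deltas = struct.unpack(f">{n}h", packet[6:6 + 2 * n])
    samples, acc = [], 0
    for d in deltas:
        acc += d
        samples.append(acc)
    return station_id, seq, samples

original = [100, 102, 105, 104, 104, 99]
sid, seq, decoded = decode_packet(encode_packet(station_id=7, seq=1, samples=original))
assert decoded == original
```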
129

The Implications for Network Recorder Design in a Networked Flight Test Instrumentation Data Acquisition System

Cranley, Nikki 10 1900
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / The higher bandwidth capacities available with the adoption of Ethernet technology for networked FTI data acquisition systems enable more data to be acquired. However, this puts increased demands on the network recorder, which must be able to support such data rates. During any given flight, the network recorder may log hundreds of gigabytes of data, which must be processed and analyzed in real time or post-flight. This paper describes several approaches that may be adopted to facilitate data-on-demand data mining and data reduction operations, in particular filtering and indexing techniques that can be used to address this challenge.
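A sketch of the indexing idea under stated assumptions: while (or after) logging, record the byte offset of each record keyed by channel and coarse timestamp, so that post-flight tools can seek straight to the segments of interest instead of scanning the whole recording. The record layout here is invented for the illustration.

```python
# Build a (channel, time-bucket) -> offsets index over a packet log, then
# fetch only the payloads needed -- the "data-on-demand" pattern.
import struct
from collections import defaultdict

RECORD_HEADER = struct.Struct(">IdI")   # channel id, timestamp (s), payload len

def build_index(log_path: str, bucket_seconds: float = 1.0):
    """Map (channel, time bucket) -> list of byte offsets into the log."""
    index = defaultdict(list)
    with open(log_path, "rb") as f:
        while True:
            offset = f.tell()
            header = f.read(RECORD_HEADER.size)
            if len(header) < RECORD_HEADER.size:
                break
            channel, timestamp, length = RECORD_HEADER.unpack(header)
            index[(channel, int(timestamp // bucket_seconds))].append(offset)
            f.seek(length, 1)           # skip the payload without reading it
    return index

def fetch(log_path: str, index, channel: int, second: int) -> list[bytes]:
    """Read only the payloads for one channel during one second."""
    payloads = []
    with open(log_path, "rb") as f:
        for offset in index.get((channel, second), []):
            f.seek(offset)
            _, _, length = RECORD_HEADER.unpack(f.read(RECORD_HEADER.size))
            payloads.append(f.read(length))
    return payloads
```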
130

The development and application of informatics-based systems for the analysis of the human transcriptome.

Kelso, Janet January 2003
<p>Despite the fact that the sequence of the human genome is now complete it has become clear that the elucidation of the transcriptome is more complicated than previously expected. There is mounting evidence for unexpected and previously underestimated phenomena such as alternative splicing in the transcriptome. As a result, the identification of novel transcripts arising from the genome continues. Furthermore, as the volume of transcript data grows it is becoming increasingly difficult to integrate expression information which is from different sources, is stored in disparate locations, and is described using differing terminologies. Determining the function of translated transcripts also remains a complex task. Information about the expression profile &ndash / the location and timing of transcript expression &ndash / provides evidence that can be used in understanding the role of the expressed transcript in the organ or tissue under study, or in developmental pathways or disease phenotype observed.<br /> <br /> In this dissertation I present novel computational approaches with direct biological applications to two distinct but increasingly important areas of research in gene expression research. The first addresses detection and characterisation of alternatively spliced transcripts. The second is the construction of an hierarchical controlled vocabulary for gene expression data and the annotation of expression libraries with controlled terms from the hierarchies. In the final chapter the biological questions that can be approached, and the discoveries that can be made using these systems are illustrated with a view to demonstrating how the application of informatics can both enable and accelerate biological insight into the human transcriptome.</p>
