SPIRIT III Data Verification Processing / Garlick, Dean; Wada, Glen; Krull, Pete. October 1996
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California / This paper will discuss the functions performed by the Spatial Infrared Imaging Telescope
(SPIRIT) III Data Processing Center (DPC) at Utah State University (USU). The SPIRIT
III sensor is the primary instrument on the Midcourse Space Experiment (MSX) satellite, and as builder of this sensor system, USU is responsible for developing and operating the
associated DPC. The SPIRIT III sensor consists of a six-color long-wave infrared (LWIR)
radiometer system, an LWIR spectrographic interferometer, contamination sensors, and
housekeeping monitoring systems. The MSX spacecraft recorders can capture more than 8 gigabytes of data per day from this sensor, and the DPC is required to verify and qualify these data within a 24-hour turnaround by applying a complex set of sensor and data verification and quality checks. This paper addresses the computing
architecture, distributed processing software, and automated data verification processes
implemented to meet these requirements.
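The automated verification the abstract describes could, in outline, look like the following sketch. This is not the actual DPC software; the check names, telemetry fields, and limits are hypothetical, chosen only to illustrate batch quality checks over recorded sensor data:

```python
# Illustrative sketch of automated data verification over a batch of
# telemetry frames: completeness and range checks, reported per check.
# Field names and limits below are invented for illustration.

def check_frame_count(frames, expected):
    """Verify that no telemetry frames were dropped from the batch."""
    return len(frames) == expected

def check_value_ranges(frames, limits):
    """Verify each monitored value falls within its allowed range."""
    for frame in frames:
        for key, (lo, hi) in limits.items():
            if not lo <= frame[key] <= hi:
                return False
    return True

def verify_batch(frames, expected_count, limits):
    """Run all checks and return a dict mapping check name to pass/fail."""
    return {
        "frame_count": check_frame_count(frames, expected_count),
        "value_ranges": check_value_ranges(frames, limits),
    }

frames = [{"detector_temp_k": 11.5}, {"detector_temp_k": 11.7}]
limits = {"detector_temp_k": (10.0, 12.0)}
report = verify_batch(frames, expected_count=2, limits=limits)
```

A production pipeline would run many such checks in parallel across the day's data; the point here is only the shape of check-and-report verification.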
|
Examining Methods and Practices of Source Data Verification in Canadian Critical Care Randomized Controlled Trials / Ward, Roxanne E. 21 March 2013
Statement of the Problem: Source data verification (SDV) is the process of comparing data collected at the source to data recorded on a Case Report Form, whether paper or electronic (1), to ensure that the data are complete, accurate, and verifiable. Good Clinical Practice (GCP) guidelines are vague about the degree of SDV required, and there is little evidence as to whether SDV affects study outcomes.
Methods of Investigation: We performed systematic reviews to establish the published evidence base for methods of SDV and to examine the effect of SDV on study outcomes. We then conducted a national survey of Canadian Critical Care investigators and research coordinators regarding their attitudes and beliefs about SDV. This was followed by an audit of the completed and in-progress Randomized Controlled Trials (RCTs) of the Canadian Critical Care Trials Group (CCCTG).
Results: Systematic Review of Methods of SDV: The most commonly reported or recommended approach to source data verification (10/14; 71%) was either to base it on level of risk or to conduct it early (i.e., after the first patient is enrolled). The amount of SDV recommended or reported varied from 5% to 100%. Systematic Review of Impact of SDV on Study Outcomes: One trial reported no difference in study outcomes; the effect could not be assessed in the other. National Survey of Critical Care Investigators and Research Coordinators: 95.8% (115/120) of respondents believed that SDV is an important part of quality assurance; 73.3% (88/120) felt that academic studies should do more SDV; and 62.5% (75/120) felt that insufficient funding is available for SDV. Audit of Source Data Verification Practices in CCCTG RCTs: Of the in-progress and completed CCCTG RCTs audited, 9/15 (60%) included a plan for SDV and 8/15 (53%) actually conducted it. Of the 9 completed published trials, 4/9 (44%) conducted SDV.
Conclusion: There is little published evidence on the methods of SDV or on its effect on study outcomes. Based on the results of the systematic reviews, survey, and audit, more research is needed to build this evidence base.
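The core comparison underlying SDV can be sketched as follows. This is not the audit's actual tooling; the record fields and values are invented for illustration of matching CRF entries against source records and computing a discrepancy rate:

```python
# Hypothetical sketch of source data verification: compare Case Report
# Form (CRF) entries to source records field by field, then report the
# fraction of verified fields that disagree. Field names are invented.

def verify_source_data(source, crf):
    """Return the list of CRF fields that disagree with the source."""
    return [field for field in crf if crf[field] != source.get(field)]

def discrepancy_rate(source_records, crf_records):
    """Fraction of all verified fields that disagree with the source."""
    checked = 0
    mismatched = 0
    for src, crf in zip(source_records, crf_records):
        checked += len(crf)
        mismatched += len(verify_source_data(src, crf))
    return mismatched / checked if checked else 0.0

source = [{"age": 54, "apache_ii": 22}, {"age": 61, "apache_ii": 18}]
crf = [{"age": 54, "apache_ii": 22}, {"age": 61, "apache_ii": 19}]
rate = discrepancy_rate(source, crf)  # 1 mismatch out of 4 fields
```

Risk-based SDV, as described in the review, would apply this comparison only to a sampled subset of records or fields rather than to 100% of the data.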
|
Implementing a distributed approach for speech resource and system development / Molapo, Nkadimeng Raymond. January 2014
The range of applications for high-quality automatic speech recognition (ASR) systems has grown
dramatically with the advent of smart phones, in which speech recognition can greatly enhance the
user experience. Currently, the languages with extensive ASR support on these devices are languages
that have thousands of hours of transcribed speech corpora already collected. Developing a speech
system for such a language is simpler because extensive resources already exist. However, for languages that are less prominent, the process is more difficult. Obstacles such as reliability
and cost have hampered progress in this regard, and various separate tools for every stage of the
development process have been developed to overcome these difficulties.
Developing a system that combines these partial solutions involves customising existing tools and developing new ones to interface the overall end-to-end process. This work documents
the integration of several tools to enable the end-to-end development of an Automatic Speech
Recognition system in a typical under-resourced language. Google App Engine is employed as the
core environment for data verification, storage and distribution, and used in conjunction with existing
tools for gathering text data and for speech data recording. We analyse the data acquired by each of
the tools and develop an ASR system in Shona, an important under-resourced language of Southern
Africa. Although unexpected logistical problems complicated the process, we were able to collect
a usable Shona speech corpus and develop the first Automatic Speech Recognition system in that
language. / MIng (Computer and Electronic Engineering), North-West University, Potchefstroom Campus, 2014
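A minimal sketch of the kind of server-side data verification such a distributed collection pipeline might apply before accepting an uploaded recording; the metadata keys and duration limits below are assumptions for illustration, not the project's actual schema:

```python
# Hedged sketch: validate an uploaded recording's metadata before it is
# stored and distributed. Required keys and limits are hypothetical.

REQUIRED_KEYS = {"speaker_id", "prompt_text", "sample_rate_hz", "duration_s"}

def verify_recording(meta, min_s=0.5, max_s=30.0):
    """Return (ok, reasons) for a single recording's metadata."""
    reasons = []
    missing = REQUIRED_KEYS - meta.keys()
    if missing:
        reasons.append(f"missing fields: {sorted(missing)}")
    dur = meta.get("duration_s")
    if dur is not None and not (min_s <= dur <= max_s):
        reasons.append(f"duration {dur}s out of range")
    return (not reasons, reasons)

meta = {"speaker_id": "sn-012", "prompt_text": "mhoro",
        "sample_rate_hz": 16000, "duration_s": 2.4}
ok, reasons = verify_recording(meta)
```

Rejecting bad uploads at ingestion time keeps the corpus clean without requiring manual review of every recording, which matters when contributors are distributed and connectivity is unreliable.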
|
Data Verifications for Online Social Networks / Rahman, Mahmudur. 10 November 2015
Social networks are popular platforms that simplify user interaction and encourage collaboration. They collect large amounts of media from their users, often reported from mobile devices. The value and impact of social media, however, make it an attractive attack target. In this thesis, we focus on the following social media vulnerabilities. First, review-centered social networks such as Yelp and Google Play have been shown to be the targets of significant search-rank and malware-proliferation attacks. Detecting fraudulent behaviors is thus paramount, not only to prevent public opinion bias but also to curb the distribution of malware. Second, the increasing use of mobile visual data in news networks, authentication, and banking applications raises questions about its integrity and credibility. Third, through proof-of-concept implementations, we show that data reported from wearable personal trackers is vulnerable to a wide range of security and privacy attacks, while off-the-shelf security solutions do not port gracefully to the constraints introduced by trackers. In this thesis we propose novel solutions to address these problems. First, we introduce Marco, a system that leverages the wealth of spatial, temporal, and network information gleaned from Yelp to detect venues whose ratings are impacted by fraudulent reviews. Second, we propose FairPlay, a system that correlates review activities with linguistic and behavioral signals gleaned from longitudinal app data to identify not only search-rank fraud but also malware in Google Play, the most popular Android app market. Third, we describe Movee, a motion-sensor-based video liveness verification system that analyzes the consistency between the motion inferred from simultaneously and independently captured camera and inertial sensor streams.  Finally, we devise SensCrypt, an efficient and secure data storage and communication protocol for affordable and lightweight personal trackers.
We demonstrate the correctness and efficacy of our solutions through detailed theoretical and experimental analysis.
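A Movee-style motion-consistency check can be sketched as a correlation test between the motion trace inferred from the video and the trace measured by the inertial sensor. The signals and acceptance threshold below are hypothetical, not the thesis's actual algorithm:

```python
# Illustrative sketch of a video liveness check: accept a video only if
# camera-inferred motion agrees with the inertial sensor's motion trace.
# Pearson correlation stands in for the real consistency analysis.

import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    norm_a = math.sqrt(sum((x - ma) ** 2 for x in a))
    norm_b = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (norm_a * norm_b)

def looks_live(video_motion, imu_motion, threshold=0.8):
    """Accept the video only if the two motion traces agree."""
    return pearson(video_motion, imu_motion) >= threshold

video = [0.0, 0.2, 0.5, 0.4, 0.1]
imu = [0.0, 0.21, 0.48, 0.41, 0.09]  # closely tracks the camera motion
live = looks_live(video, imu)
```

A replayed or fabricated video would produce camera motion uncorrelated with the device's actual movement, so the correlation would fall below the threshold and the check would fail.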
|