31

Data Handling System for IRS

Rajyalakshmi, P. S., Rajangam, R. K. 10 1900 (has links)
International Telemetering Conference Proceedings / October 26-29, 1987 / Town and Country Hotel, San Diego, California / The three-axis stabilized Indian Remote Sensing Satellite will image the earth from a 904 km polar sun-synchronous orbit. The payload is a set of CCD cameras which collect data in four bands in the visible and near-infrared region. The payload data from two cameras, each at 10.4 megabits per second, is transmitted as balanced QPSK in X-band. Before transmission, the payload data is formatted by adopting major and minor frame synchronizing codes. The two formatted data streams are differentially encoded to resolve the four-phase ambiguity introduced by QPSK transmission. This paper describes the design and development aspects related to such a Data Handling System. It also highlights the environmental qualification tests that were carried out to meet the requirement of a three-year operational life for the satellite.
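As a side note for readers unfamiliar with differential encoding: the sketch below (purely illustrative, not the satellite's actual formatter logic) shows how encoding each dibit as a phase step rather than an absolute phase lets a receiver recover the data even when its carrier recovery locks onto any of the four possible QPSK phases.

```python
# Minimal sketch of differential encoding/decoding for QPSK dibits.
# Hypothetical illustration only; the symbol numbering and the choice of
# reference symbol are assumptions, not the flight formatter's design.

def diff_encode(dibits, ref=0):
    """Map each data dibit (0-3) to a cumulative phase index, starting from a
    known reference symbol that is transmitted first."""
    out = [ref]
    for d in dibits:
        out.append((out[-1] + d) % 4)
    return out

def diff_decode(symbols):
    """Recover dibits as successive phase differences; a constant phase offset
    introduced by the receiver's carrier lock cancels out."""
    return [(b - a) % 4 for a, b in zip(symbols, symbols[1:])]

data = [1, 3, 0, 2, 2]
tx = diff_encode(data)
rx = [(s + 3) % 4 for s in tx]   # receiver locked to the "wrong" quadrant
assert diff_decode(rx) == data   # data still recovered despite the ambiguity
```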
32

THE ART OF INTERNATIONAL SPACE STATION TELEMETRY BANDWIDTH MANAGEMENT

Cerna, Peter J., Klein, Pamela R., Mullett, Joy 10 1900 (has links)
International Telemetering Conference Proceedings / October 22-25, 2001 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The technicalities of sharing telemetry bandwidth have been addressed in the design and specifications for the builders of the International Space Station. But success in sharing bandwidth comes from building relationships, documenting guidelines, negotiating, understanding human nature, peer review, and a willingness to participate in an evolving process. The station, 240 miles above Earth, moves through space at 17,000 mph, grows in mass as humans and machines add to it, regularly docks with visiting spacecraft, has year-round residents, and communicates with space agencies around the globe. Each new module -- with associated computers, multiplexers, and communications buses -- creates additional telemetry demands.
33

HIGH SPEED ASYNCHRONOUS DATA MULTIPLEXER/DEMULTIPLEXER FOR HIGH DENSITY DIGITAL RECORDERS

Berdugo, Albert, Small, Martin 10 1900 (has links)
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California / Modern High Density Digital Recorders (HDDR) are ideal devices for the storage of large amounts of digital and/or wideband analog data. Ruggedized versions of these recorders are currently available and are supporting many military and commercial flight test applications. However, in certain cases the storage format becomes critical, e.g., when a large number of data types are involved, when channel-to-channel correlation is critical, or when the original data source must be accurately recreated during post-mission analysis. A properly designed storage format will not only preserve data quality, but will also yield the maximum storage capacity and record time for any given recorder family or data type. This paper describes a multiplex/demultiplex technique that formats multiple high-speed data sources into a single, common format for recording. The method is compatible with many popular commercial recorder standards such as DCRsi, VLDS, and DLT. Typical input data types include PCM, wideband analog data, video, aircraft data buses, avionics, voice, time code, and many others. The technique preserves tight data correlation with minimal data overhead and supports full reconstruction of the original input signals during playback; output data correlation across channels is preserved for all types of data inputs. Simultaneous real-time data recording and reconstruction are also supported.
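To illustrate the general idea of such a multiplex/demultiplex scheme, the following sketch tags each input block with a channel ID, a timestamp, and a length before writing it to a single stream, so channel-to-channel correlation survives recording and the original sources can be reconstructed. The header layout here is a hypothetical example, not the paper's format or any recorder standard.

```python
import struct

# Minimal multiplex/demultiplex sketch: each input block is prefixed with a
# small header carrying a channel ID, a timestamp, and a payload length, then
# written to one common stream.  The header layout is an assumption made for
# illustration, not the format used by DCRsi, VLDS, or DLT recorders.

HEADER = struct.Struct(">BdI")   # channel id, timestamp (seconds), payload length

def mux(blocks):
    """blocks: iterable of (channel_id, timestamp, payload_bytes)."""
    stream = bytearray()
    for chan, ts, payload in blocks:
        stream += HEADER.pack(chan, ts, len(payload))
        stream += payload
    return bytes(stream)

def demux(stream):
    """Reconstruct (channel_id, timestamp, payload) tuples from the stream."""
    out, offset = [], 0
    while offset < len(stream):
        chan, ts, length = HEADER.unpack_from(stream, offset)
        offset += HEADER.size
        out.append((chan, ts, stream[offset:offset + length]))
        offset += length
    return out

blocks = [(1, 0.000, b"PCM frame"), (2, 0.001, b"1553 bus msg"), (1, 0.002, b"PCM frame")]
assert demux(mux(blocks)) == blocks   # correlation and content preserved
```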
34

Configuration of Flight Test Telemetry Frame Formats

Samaan, Mouna M., Cook, Stephen C. 11 1900 (has links)
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada / The production of flight test plans has received attention from many research workers due to the increasing complexity of testing facilities, the complex demands of customers, and the large volume of data required from test flights. The paper opens with a review of work by other authors who have contributed to improving the preparation of flight test plans and the processing of the resulting data. This is followed by a description of a specific problem area: efficiently configuring the flight test telemetry frame format (defined by the relevant standards) while meeting user requirements for sampling rate and PCM word length. Following a description of a current semi-automated system, the authors propose an enhanced approach and demonstrate its efficiency through two case studies.
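A toy version of the configuration problem the authors address: assign each measured parameter enough evenly spaced word slots in a PCM minor frame to meet its required sampling rate. The greedy allocation below is a hypothetical illustration of that kind of search, not the semi-automated system or the authors' enhanced approach.

```python
# Toy sketch of PCM frame-format configuration: given parameters with required
# sample rates, give each one enough roughly evenly spaced word slots in a
# minor frame.  A hypothetical illustration, not the authors' algorithm.

def allocate(params, words_per_minor_frame, minor_frames_per_second):
    """params: list of (name, samples_per_second).  Returns slot assignments."""
    free = list(range(words_per_minor_frame))
    layout = {}
    # Place the fastest parameters first so their slots can be spread evenly.
    for name, rate in sorted(params, key=lambda p: -p[1]):
        slots_needed = -(-rate // minor_frames_per_second)   # ceiling division
        if slots_needed > len(free):
            raise ValueError(f"format overflow while placing {name}")
        stride = max(1, len(free) // slots_needed)
        chosen = [free[i * stride] for i in range(slots_needed)]
        for s in chosen:
            free.remove(s)
        layout[name] = chosen
    return layout

# Example: a 64-word minor frame at 100 minor frames per second.
print(allocate([("accel_x", 800), ("strain_1", 200), ("temp_1", 100)], 64, 100))
```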
35

Kompresia biologických sekvencií / Compression of biological sequences

Šurín, Tomáš January 2012 (has links)
The volume of data produced by next-generation sequencing platforms is growing faster than the available capacity of storage media. Sequencers mainly produce short reads of DNA; however, the output of the sequencing machines also contains other information, for example about read reliability/quality. This data must be archived even after a successful complete genome assembly. The standard file format for this type of data is SAM (Sequence Alignment/Map format) and its compressed binary version, BAM. In this thesis we describe the construction of a better lossless compression scheme for files in the SAM/BAM format. This compression scheme provides better compression ratios than the BAM format while retaining random access to data in the compressed file. The implementation of this compression scheme is platform independent, allows simple configuration of the compression process, and offers easy extensibility. Thanks to this, we will be able to respond to changes in current sequencing platforms as well as to changes in the SAM format.
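The key idea of retaining random access in a compressed archive can be illustrated with a block-compression sketch similar in spirit to BAM's blocked layout: records are compressed in fixed-size groups and an index records where each block starts, so a block can be read without decompressing the whole file. This toy is an illustration only, not the compression scheme developed in the thesis.

```python
import zlib

# Minimal sketch of block compression with random access: records are
# compressed in fixed-size groups and an index maps block number to a byte
# offset, so one block can be decompressed on its own.  Illustrative toy,
# not the thesis's scheme.

def compress_blocks(records, records_per_block=1000):
    blob, index = bytearray(), []
    for i in range(0, len(records), records_per_block):
        chunk = "\n".join(records[i:i + records_per_block]).encode()
        comp = zlib.compress(chunk)
        index.append((len(blob), len(comp)))     # (offset, compressed size)
        blob += comp
    return bytes(blob), index

def read_block(blob, index, block_no):
    offset, size = index[block_no]
    return zlib.decompress(blob[offset:offset + size]).decode().split("\n")

sam_records = [f"read{i}\t0\tchr1\t{100 + i}\t60\t50M\t*\t0\t0\tACGT\tIIII" for i in range(2500)]
blob, index = compress_blocks(sam_records)
assert read_block(blob, index, 2)[0] == sam_records[2000]   # random access to block 2
```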
36

Measuring Multidimensional Science Learning: Item Design, Scoring, and Psychometric Considerations

Castle, Courtney January 2018 (has links)
Thesis advisor: Henry Braun / The Next Generation Science Standards propose a multidimensional model of science learning, comprised of Disciplinary Core Ideas, Science and Engineering Practices, and Crosscutting Concepts (NGSS Lead States, 2013). Accordingly, there is a need for student assessment aligned with the new standards. Creating assessments that validly and reliably measure multidimensional science ability is a challenge for the measurement community (Pellegrino et al., 2014). Multidimensional assessment tasks may need to go beyond typical item designs of standalone multiple-choice and short-answer items. Furthermore, the scoring and modeling of student performance should account for the multidimensionality of the construct. This research contributes to knowledge about best practices for multidimensional science assessment by exploring three areas of interest: 1) item design, 2) scoring rubrics, and 3) measurement models. The study investigated multidimensional scaffolding and response format by comparing alternative item designs on an elementary assessment of matter. Item variations had a different number of item prompts and/or response formats. Observations about student cognition and performance were collected during cognitive interviews and a pilot test. Items were scored using a holistic rubric and a multidimensional rubric, and interrater agreement was examined. Assessment data were scaled with multidimensional scores and holistic scores, using unidimensional and multidimensional Rasch models, and model-data fit was compared. Results showed that scaffolding is associated with more thorough responses, especially among low-ability students. Students tended to use different cognitive processes to respond to selected-response and constructed-response items, and were more likely to respond to selected-response arguments. Interrater agreement was highest when the structure of the item aligned with the structure of the scoring rubric. Holistic scores provided reliability and precision similar to multidimensional scores, but item and person fit was poorer. Multidimensional subscales had lower reliability and less precise student estimates than the unidimensional model, and interdimensional correlations were high. However, the multidimensional rubric and model provide more nuanced information about student performance and better fit to the response data. Recommendations about optimal combinations of scaffolding, rubrics, and measurement models are made for teachers, policymakers, and researchers. / Thesis (PhD) — Boston College, 2018. / Submitted to: Boston College. Lynch School of Education. / Discipline: Educational Research, Measurement and Evaluation.
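For readers unfamiliar with the models mentioned, the dichotomous Rasch model expresses the probability of a correct response as a function of person ability and item difficulty. The sketch below states only that basic form; the polytomous and multidimensional extensions actually fitted in the study are not shown.

```python
import math

# The dichotomous Rasch model: probability that a person of ability theta
# answers an item of difficulty b correctly.  A minimal sketch for context;
# the study fit unidimensional and multidimensional extensions of this model
# with dedicated psychometric software.

def rasch_p(theta, b):
    """P(correct | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(rasch_p(0.5, -0.2))   # an above-average student on an easyish item, ~0.67
```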
37

Searching textual and model-based process descriptions based on a unified data format

Leopold, Henrik, van der Aa, Han, Pittke, Fabian, Raffel, Manuel, Mendling, Jan, Reijers, Hajo A. January 2019 (has links) (PDF)
Documenting business processes using process models is common practice in many organizations. However, not all process information is best captured in process models. Hence, many organizations complement these models with textual descriptions that specify additional details. The problem with this supplementary use of textual descriptions is that existing techniques for automatically searching process repositories are limited to process models. They are not capable of taking the information from textual descriptions into account and, therefore, provide incomplete search results. In this paper, we address this problem and propose a technique that is capable of searching textual as well as model-based process descriptions. It automatically extracts activity-related and behavioral information from both description types and stores it in a unified data format. An evaluation with a large Austrian bank demonstrates that the additional consideration of textual descriptions allows us to identify more relevant processes from a repository.
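As an illustration of what a unified, activity-level representation might look like, the toy sketch below stores activities and ordering information extracted from either a process model or a textual description, and runs a naive keyword search over them. The field names and the matching rule are assumptions made for illustration, not the schema or search technique proposed in the paper.

```python
from dataclasses import dataclass, field

# Toy sketch of a unified data format that can be populated from either a
# process model or a textual description, plus a naive keyword search over it.
# Field names and matching logic are illustrative assumptions only.

@dataclass
class ProcessRecord:
    process_id: str
    source: str                                       # "model" or "text"
    activities: list = field(default_factory=list)    # activity labels
    follows: list = field(default_factory=list)       # behavioral info: (a, b) ordering pairs

def search(repository, query_terms):
    """Return processes in which every query term occurs in some activity label."""
    hits = []
    for rec in repository:
        labels = " ".join(rec.activities).lower()
        if all(term.lower() in labels for term in query_terms):
            hits.append(rec.process_id)
    return hits

repo = [
    ProcessRecord("P1", "model", ["receive invoice", "check invoice", "pay invoice"],
                  [("receive invoice", "check invoice")]),
    ProcessRecord("P2", "text", ["register claim", "assess damage", "approve payout"],
                  [("register claim", "assess damage")]),
]
print(search(repo, ["invoice", "check"]))   # -> ['P1']
```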
38

The impact of equating method and format representation of common items on the adequacy of mixed-format test equating using nonequivalent groups

Hagge, Sarah Lynn 01 July 2010 (has links)
Mixed-format tests containing both multiple-choice and constructed-response items are widely used in educational testing. Such tests combine the broad content coverage and efficient scoring of multiple-choice items with the assessment of higher-order thinking skills thought to be provided by constructed-response items. However, the combination of both item formats on a single test complicates the use of psychometric procedures. The purpose of this dissertation was to examine how characteristics of mixed-format tests and the composition of the common-item set impact the accuracy of equating results in the common-item nonequivalent groups design. Operational examinee item responses for two classes of data were considered in this dissertation: (1) operational test forms and (2) pseudo-test forms assembled from portions of operational test forms. Analyses were conducted on three mixed-format tests from the Advanced Placement Examination program: English Language, Spanish Language, and Chemistry. For the operational test form analyses, two factors of investigation were considered: (1) the difference in proficiency between old and new form groups of examinees and (2) the relative difficulty of multiple-choice and constructed-response items. For the pseudo-test form analyses, two additional factors of investigation were considered: (1) format representativeness of the common-item set and (2) statistical representativeness of the common-item set. For each study condition, two traditional equating methods, frequency estimation and chained equipercentile equating, and two item response theory (IRT) equating methods, IRT true score and IRT observed score, were considered. There were five main findings from the operational and pseudo-test form analyses. (1) As the difference in proficiency between old and new form groups of examinees increased, bias also tended to increase. (2) Relative to the criterion equating relationship for a given equating method, increases in bias were typically largest for frequency estimation and smallest for the IRT equating methods; however, it is important to note that the criterion equating relationship was different for each equating method, and only one smoothing value was analyzed for the traditional equating methods. (3) Standard errors of equating tended to be smallest for IRT observed score equating and largest for chained equipercentile equating. (4) Results for the operational and pseudo-test analyses were similar when the pseudo-tests were constructed to be similar to the operational test forms. (5) Results were mixed regarding which common-item set composition resulted in the least bias.
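For readers unfamiliar with the traditional methods compared, the sketch below shows plain equipercentile equating under a single-group design: a new-form score is mapped to the old-form score with the same percentile rank. The frequency estimation and chained equipercentile methods studied here add synthetic-population weighting and common-item adjustments, and smoothing is omitted, so this is only an illustration of the core idea.

```python
import numpy as np

# Minimal equipercentile equating sketch (single-group design, no smoothing,
# no common-item adjustment): each new-form score maps to the old-form score
# with the same percentile rank.  Illustrative only; not the frequency
# estimation or chained equipercentile procedures studied in the dissertation.

def percentile_ranks(scores, max_score):
    counts = np.bincount(scores, minlength=max_score + 1)
    cum = np.cumsum(counts)
    # Mid-percentile-rank convention: proportion below plus half the proportion at.
    return (cum - counts / 2.0) / cum[-1]

def equipercentile(new_scores, old_scores, max_score):
    pr_new = percentile_ranks(np.asarray(new_scores), max_score)
    pr_old = percentile_ranks(np.asarray(old_scores), max_score)
    old_points = np.arange(max_score + 1)
    # For each new-form score, find the old-form score with matching rank.
    return np.interp(pr_new, pr_old, old_points)

rng = np.random.default_rng(0)
new = rng.binomial(40, 0.55, size=2000)   # new form slightly harder
old = rng.binomial(40, 0.60, size=2000)
table = equipercentile(new, old, 40)
print(round(table[20], 2))                # old-form equivalent of a new-form score of 20
```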
39

HADOOP-EDF: LARGE-SCALE DISTRIBUTED PROCESSING OF ELECTROPHYSIOLOGICAL SIGNAL DATA IN HADOOP MAPREDUCE

Wu, Yuanyuan 01 January 2019 (has links)
The volume of electrophysiological signal data generated for clinical research in neurological disorders is growing rapidly. European Data Format (EDF) is a standard format for storing electrophysiological signals. However, the bottleneck of existing signal analysis tools in handling large-scale datasets is the sequential way large EDF files are loaded before an analysis is performed. To overcome this, we developed Hadoop-EDF, a distributed signal processing tool that loads EDF data in parallel using Hadoop MapReduce. Hadoop-EDF uses a robust data partitioning algorithm that makes EDF data processable in parallel. We evaluate Hadoop-EDF's scalability and performance by leveraging two datasets from the National Sleep Research Resource and running experiments on Amazon Web Services clusters. On a 20-node cluster, Hadoop-EDF performs 27 times and 47 times faster than sequential processing for 200 small files and 200 large files, respectively. The results demonstrate that Hadoop-EDF is more suitable and effective than sequential processing for large EDF files.
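The core idea behind the partitioning can be sketched without Hadoop: because an EDF file is a header followed by fixed-size data records, record boundaries can be computed up front and disjoint ranges of records processed independently. The toy below uses Python multiprocessing instead of Hadoop MapReduce, assumes example header and record sizes, and is not the Hadoop-EDF implementation.

```python
from multiprocessing import Pool

# Sketch of partitioning an EDF-like file into independently processable
# chunks.  The header and record sizes below are assumed example values (in a
# real EDF file they are read from the header), and the per-record "analysis"
# is a stand-in for real signal processing.

HEADER_BYTES = 768          # assumed: 256-byte static header + 2 signals x 256 bytes
RECORD_BYTES = 4000         # assumed: bytes per fixed-size EDF data record

def record_offsets(file_size):
    """Byte offset of every data record, derived from the fixed record size."""
    n_records = (file_size - HEADER_BYTES) // RECORD_BYTES
    return [HEADER_BYTES + i * RECORD_BYTES for i in range(n_records)]

def process_partition(args):
    """Map step: summarize one contiguous range of data records."""
    path, offsets = args
    total = 0
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            total += sum(f.read(RECORD_BYTES))   # stand-in for real analysis
    return total

def run(path, file_size, n_workers=4):
    offsets = record_offsets(file_size)
    chunk = max(1, len(offsets) // n_workers)
    parts = [(path, offsets[i:i + chunk]) for i in range(0, len(offsets), chunk)]
    with Pool(n_workers) as pool:
        return sum(pool.map(process_partition, parts))   # reduce step
```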
40

Forensics as a Delay in Stories of Sherlock Holmes : "Although the Series is More Extendedly Delayed by Forensic Elements, the Difference is Not as Significant as Expected"

Junker, Frida January 2019 (has links)
The relationship between the development of real-life forensics and fiction's use of it is a close one, and it offers excitement and pleasure to follow investigations and unravel mysteries, both in real life and in fiction. Sir Arthur Conan Doyle's fictional detective Sherlock Holmes has famously used advanced deductive methods to solve crimes since his first appearance in A Study in Scarlet. The recent explosion of forensic elements within fiction has not passed unnoticed, raising the question of whether forensic delays are used more extensively in recent adaptations of the Sherlock Holmes stories, given the modern range of methods and techniques available. In this essay I compare Doyle's original works about the character Sherlock Holmes with one of today's television series, BBC's Sherlock, and show that the recent adaptation is interrupted more frequently by forensic investigations, including modern forensic techniques and helpful equipment, which keep the story from moving forward for longer periods of time, making them a delay. Furthermore, the comparison draws on adaptation theory and shows that the format in which the story is presented is decisive for the result. I conclude that forensic delays are used more extensively in the contemporary television series Sherlock, due to the more generous range of methods available, but that measuring the extent of forensic delays generally favors the text format.   Keywords: Delay, Sherlock Holmes, Forensics, Development, Format
