101

Applications of Transit Signal Priority Technology for Transit Service

Consoli, Frank Anthony 01 January 2014 (has links)
This research demonstrated the effectiveness of Transit Signal Priority (TSP) in improving bus corridor travel time in a simulated environment using real-world data. TSP is a technology that provides preferential treatment to buses at signalized intersections. By considering scenarios that activate signal priority when a bus is 3 or 5 minutes behind schedule, it was demonstrated that bus travel times improved significantly with little effect on crossing-street delays. Providing signal priority unconditionally, by contrast, produced significant crossing-street delays at some signalized intersections with only minor improvement in bus travel time over either Conditional priority scenario. Evaluation was conducted using micro-simulation and statistical analysis to compare Unconditional and Conditional TSP against a No-TSP baseline, examining performance metrics (for buses and for all vehicles) including average speed profiles, average travel times, average number of stops, and crossing-street delay. The results also showed that TSP reduced environmental emissions in the I-Drive corridor. Furthermore, field data were used to calculate the actual passenger travel time savings and the benefit-cost ratio (7.92) resulting from implementing Conditional TSP. Conditional TSP activated at 3 minutes behind schedule was determined to be the most beneficial and practical scenario for real-world implementation at both the corridor and regional levels.
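The conditional activation rule compared in these scenarios reduces, at its core, to a lateness check at each signal approach. A minimal sketch of that logic follows (function and variable names are hypothetical, not the thesis's simulation code):

```python
from datetime import timedelta

# Sketch of conditional TSP activation: a bus requests priority at an
# approaching signal only if it is behind schedule by at least the
# configured threshold (3 or 5 minutes in the scenarios compared above).
LATENESS_THRESHOLD = timedelta(minutes=3)

def should_request_priority(scheduled_arrival, estimated_arrival,
                            conditional=True):
    """Return True if the bus should request signal priority.

    scheduled_arrival, estimated_arrival: datetimes for the next timepoint.
    With conditional=False this degenerates to Unconditional TSP.
    """
    if not conditional:
        return True  # Unconditional TSP: every bus requests priority
    lateness = estimated_arrival - scheduled_arrival
    return lateness >= LATENESS_THRESHOLD
```

Under such a rule an on-time or early bus issues no priority calls at all, which is consistent with the finding that the Conditional scenarios leave crossing-street delay nearly untouched.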
102

The Use of Short-Interval GPS Data for Construction Operations Analysis

Hildreth, John C. 05 March 2003 (has links)
The Global Positioning System (GPS) uses extremely accurate time measurements to determine position. The times required for electronic signals to travel at the speed of light from at least four orbiting satellites to a receiver on earth are measured precisely and used to calculate the distances from the satellites to the receiver. The calculated distances are then used to determine the position of the receiver through trilateration. This research takes the opposite approach, focusing on the use of position to determine the times at which events occur. Specifically, this work addresses the question: Can the position and speed information contained in a GPS record be used to autonomously identify the times at which critical events occur within a production cycle? The research question was answered by determining the hardware needs for collecting the desired data in a usable format and developing a unique data collection tool to meet those needs. The tool was field evaluated, and the data collected were used to determine the software needs for automated reduction of the data to the times at which key events occurred. The software tools were developed in the form of Time Identification Modules (TIMs). The TIMs were used to reduce data collected from a load-and-haul earthmoving operation to duration measures for the load, haul, dump, and return activities. The value of the developed system was demonstrated by investigating correlations between performance times in construction operations and by using field data to verify the results obtained from productivity estimating tools. Use of the system was shown to improve knowledge and provide additional insight in operations analysis studies. / Ph. D.
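The reduction the TIMs perform can be pictured as thresholding a speed-versus-time record: sustained near-zero speed marks loading and dumping, and transitions between stopped and moving bound the haul and return activities. A toy illustration under those assumptions follows (field names and thresholds are hypothetical; the actual TIMs are described in the thesis, not here):

```python
# Hypothetical sketch of reducing a short-interval GPS record (time, speed)
# to event times: a haul cycle's load/dump activities show up as sustained
# near-zero speed, so transitions across a speed threshold mark event times.
STOP_SPEED = 0.5   # m/s; below this the machine is treated as stationary
MIN_DWELL = 30.0   # s; ignore stops shorter than this (queueing, spotting)

def stop_intervals(record):
    """record: list of (timestamp_s, speed_m_s) tuples sorted by time.
    Returns [(start, end)] intervals of sustained stops (load/dump events);
    a stop still in progress at the end of the record is discarded."""
    intervals, start = [], None
    for t, v in record:
        if v < STOP_SPEED and start is None:
            start = t                      # entering a stop
        elif v >= STOP_SPEED and start is not None:
            if t - start >= MIN_DWELL:     # long enough to be an activity
                intervals.append((start, t))
            start = None                   # moving again
    return intervals
```

The gaps between consecutive stop intervals then yield the haul and return durations directly.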
103

Police Use of Force Databases: Sources of Bias in Lethal Force Data Collection

Walkup, Christian Andrew 28 May 2021 (has links)
Understanding police use of lethal force requires the collection of reliable data. Because of the bias present in police-use-of-lethal-force databases, researchers typically triangulate across multiple data sources to compensate; triangulation is limited, however, when the bias present in each database is unknown. This study investigates three government-funded and three independent police-use-of-lethal-force databases to identify methodological sources of bias present in the major U.S. data-collection systems. Bias was coded into nine categories: misclassification bias, broad conceptualization, narrow conceptualization, overlap bias, coverage bias, voluntary response bias, observer bias, gatekeeping bias, and self-report response bias. Findings suggest that all six databases exhibit at least three different types of methodological bias. Generally, public, government-sponsored databases exhibit bias through law enforcement self-reporting and through varying methods of determining victim race. Private databases exhibit bias through media-based reporting and through triangulating data from multiple sources, which is further complicated by a lack of transparency in the databases' design and administrative procedures. Each of the six databases also occupies a unique position relative to the State, which should further inform researchers' data selection. I argue that selecting data sources that complement each other based on these identified biases will produce a more complete picture of police use of lethal force and enhance the accuracy of findings in future research. / Master of Science / Understanding incidents in which a civilian dies due to the actions of police officers requires the collection of reliable data. Because of bias (flaws in data collection methods or presentation that lead to differing results across databases), researchers typically combine multiple data sources to compensate; this strategy is limited when the bias present in each database is unknown. This study investigates three government-funded and three independent police-use-of-lethal-force databases to identify sources of bias present in the major U.S. data-collection systems. Findings suggest that all six databases exhibit at least three different types of flaws. Generally, public, government-sponsored databases exhibit bias through police self-reporting of lethal force: an officer's own department reports the officer's actions, with no individual or group outside the police reporting these incidents. There are also flaws in how the race of a victim of police lethal force is recorded; procedures vary, with race recorded either from the officer's judgment or from the victim's own earlier self-report on a government record such as a Department of Motor Vehicles file, and this variation affects the race data included in public databases. Private databases exhibit bias through collecting incident data from news reports and through simultaneously drawing on multiple sources such as law enforcement reports, medical examiner reports, and media reports; this is further complicated by a lack of transparency in the databases' design and administrative procedures, with no documents detailing the steps the databases take in collecting and presenting data. Each of the six databases also occupies a unique position relative to the U.S. government: some are funded by the government, while others were motivated by recent high-profile police killings. This positioning should affect researchers' data selection; ideally, the databases used should hold multiple perspectives on, or positions relative to, the government in order to provide a more complete picture of lethal force. I argue that selecting data sources that complement each other based on these identified biases will produce a more complete picture of police use of lethal force and enhance the accuracy of findings in future research.
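The selection principle argued for here can be pictured concretely: annotate each database with the bias categories identified in it, then prefer combinations whose bias sets overlap least. A minimal sketch under that assumption (the codings below are illustrative placeholders, not the study's actual findings):

```python
# Hypothetical sketch of complementary source selection: represent each
# database by the set of methodological biases coded for it, then pick the
# pair whose bias sets overlap least, so one source's blind spots are
# covered by the other.
from itertools import combinations

biases = {  # illustrative entries only, not the study's actual codings
    "gov_db_A":   {"self_report", "misclassification", "coverage"},
    "gov_db_B":   {"self_report", "voluntary_response", "gatekeeping"},
    "indep_db_C": {"observer", "coverage", "broad_conceptualization"},
}

def most_complementary(dbs):
    """Return the pair of database names with the smallest bias overlap."""
    return min(combinations(dbs, 2),
               key=lambda pair: len(dbs[pair[0]] & dbs[pair[1]]))

print(most_complementary(biases))  # e.g. ('gov_db_B', 'indep_db_C')
```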
104

Learning Assessment Data Collection from Educational Game Applications

Songar, Poonam 27 November 2012 (has links)
No description available.
105

Event Driven GPS Data Collection System for Studying Ionospheric Scintillation

Praveen, Vikram 15 December 2011 (has links)
No description available.
106

Design and implementation of an airborne data collection system with application to precision landing systems (ADCS)

Thomas, Robert J., Jr. January 1993 (has links)
No description available.
107

A microprocessor-based highway surface roughness data collection system

Bensonhaver, Samuel D. January 1980 (has links)
No description available.
108

A data collection system for the study of RF interference from industrial, scientific, and medical equipment

Drury, William B. January 1986 (has links)
No description available.
109

iLORE: A Data Schema for Aggregating Disparate Sources of Computer System and Benchmark Information

Hardy, Nicolas Randell 08 June 2021 (has links)
The era of modern computing has been the stage for numerous innovations that have led to cutting-edge applications and systems. The characteristics of these systems and applications have been described and quantified by many; however, such information is fragmented between various repositories of system and component information. In an effort to collate these disparate collections of information, we propose iLORE, an extensible data framework for representing computer systems and their components. We describe the iLORE framework and the pipeline used to aggregate, clean, and insert system and component information into a database that uses iLORE's framework. Additionally, we demonstrate how the database can be used to analyze trends in computing, both by validating the collected data against previous works and by showcasing new analyses created with the data. Analyses and visualizations created via iLORE are available at csgenome.org. / Master of Science / The era of modern computing has been the stage for numerous innovations that have led to cutting-edge applications and computer systems. The characteristics of these systems and applications have been described and quantified by many; however, such information is fragmented among different websites and databases. We propose iLORE, an extensible data framework for representing computer systems and their components. We describe the iLORE framework and the steps taken to create an iLORE database: aggregation, standardization, and insertion. Additionally, we demonstrate how the database can be used to analyze trends in computing, both by validating the collected data against previous works and by showcasing new analyses created with the data. Analyses and visualizations created via iLORE are available at csgenome.org.
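As a rough picture of what an extensible system-and-component schema can look like, the sketch below normalizes systems, components, and benchmark records into linked tables (table and column names are assumptions for illustration, not the published iLORE schema):

```python
# Hypothetical sketch: a system row links to component rows, so records
# aggregated from different repositories normalize to shared entities.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE processor (
    id         INTEGER PRIMARY KEY,
    name       TEXT UNIQUE,   -- canonicalized model string
    clock_mhz  REAL,
    core_count INTEGER
);
CREATE TABLE system (
    id           INTEGER PRIMARY KEY,
    name         TEXT,
    release_year INTEGER,
    processor_id INTEGER REFERENCES processor(id)
);
CREATE TABLE benchmark_result (
    system_id INTEGER REFERENCES system(id),
    benchmark TEXT,            -- e.g. a benchmark suite name
    score     REAL,
    source    TEXT             -- which repository the record came from
);
""")
```

Keeping the originating repository on each record, as in the hypothetical `source` column, is one way aggregated data can later be validated against the collections it came from.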
110

The iLog methodology for fostering valid and reliable Big Thick Data

Busso, Matteo 29 April 2024 (has links)
Nowadays, the apparent promise of Big Data is to make people's everyday behavior understandable in real time. However, as big as these data are, many useful variables describing a person's context (e.g., where she is, whom she is with, what she is doing, and her feelings and emotions) remain unavailable, so people are, at best, thinly described. A previously proposed solution is to collect Big Thick Data via blending techniques, combining sensor data sources with high-quality ethnographic data to generate a dense representation of the person's context. As attractive as this proposal is, the approach is difficult to integrate into research paradigms dealing with Big Data, given the high cost of data collection and integration and the expertise needed to manage such data. Starting from a quantified approach to Big Thick Data based on the notion of situational context, this thesis proposes a methodology to design, collect, and prepare reliable and valid quantified Big Thick Data for reuse. The methodology is supported by a set of services that foster its replicability. It has been applied in four case studies involving many domain experts and 10,000+ participants from 10 countries. The diverse applications of the methodology and the reuse of the data across multiple applications demonstrate its validity and reliability.
