
AUTOMATED SYSTEM FOR IDENTIFYING USABLE SENSORS IN A LARGE SCALE SENSOR NETWORK FOR COMPUTER VISION

Aniesh Chawla (6630980), 11 June 2019
Numerous organizations around the world deploy sensor networks, especially visual sensor networks, for applications such as monitoring traffic, security, and emergencies. Advances in computer vision have expanded the potential applications of these sensor networks, which has led to increased demand for large-scale deployments.

Sensors in a large network differ in location, position, hardware, and other respects. These differences make the sensors unequally useful, since they provide information of varying quality. As an example, consider the cameras deployed by the Department of Transportation (DOT): we want to know whether the same traffic cameras could also be used to monitor the damage caused by a hurricane.

Presently, significant manual effort is required to identify useful sensors for different applications; no automated system exists that determines the usefulness of sensors for a given application. Previous work on visual sensor networks focuses on the dependability of sensors with respect to infrastructural and system issues such as network congestion, battery failures, and hardware failures. These methods do not consider the quality of information from the sensor network. In this paper, we present an automated system that identifies the most useful sensors in a network for a given application. We evaluate our system on 2,500 real-time live sensors from four cities for traffic-monitoring and people-counting applications, and we compare the results of our automated system with a manual score for each camera.

The results suggest that the proposed system reliably finds useful sensors and that its output matches the manual scoring system. They also show that a camera network deployed for one application can be useful for another.
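The core idea — scoring sensors by how well automated analysis agrees with a manual reference — can be illustrated with a minimal sketch. The scoring rule, function names, and example counts below are illustrative assumptions, not the thesis's actual method.

```python
# Hypothetical sketch: rank cameras by agreement between automated counts
# (e.g., produced by a vision model) and manual reference counts.
# The agreement metric and all data here are illustrative assumptions.

def usefulness_score(auto_counts, manual_counts):
    """Mean relative agreement between automated and manual counts (0..1)."""
    scores = []
    for a, m in zip(auto_counts, manual_counts):
        denom = max(a, m, 1)  # avoid division by zero when both counts are 0
        scores.append(1.0 - abs(a - m) / denom)
    return sum(scores) / len(scores)

def rank_sensors(observations):
    """observations: {sensor_id: (auto_counts, manual_counts)} -> ranked list."""
    return sorted(
        ((sid, round(usefulness_score(a, m), 3))
         for sid, (a, m) in observations.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

cams = {
    "cam_A": ([10, 12, 8], [11, 12, 9]),  # close agreement -> useful
    "cam_B": ([0, 1, 0], [9, 14, 11]),    # poor agreement -> not useful
}
result = rank_sensors(cams)
print(result)  # cam_A ranks first
```

A real system would replace the manual counts with application-specific ground truth collected for a small calibration sample, then apply the learned ranking to the full network.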

Statistical and machine learning methods to analyze large-scale mass spectrometry data

The, Matthew, January 2016
As in many other fields, biology is faced with enormous amounts of data containing valuable information that is yet to be extracted. The field of proteomics, the study of proteins, has the luxury of large repositories of data from tandem mass-spectrometry experiments, readily accessible to everyone who is interested. At the same time, there is still much to discover about proteins as the main actors in cell processes and cell signaling. In this thesis, we explore several methods to extract more information from the available data using methods from statistics and machine learning. In particular, we introduce MaRaCluster, a new method for clustering mass spectra on large-scale datasets. This method uses statistical methods to assess similarity between mass spectra, followed by the conservative complete-linkage clustering algorithm. The combination of these two resulted in up to 40% more peptide identifications on its consensus spectra compared to the state-of-the-art method. Second, we attempt to clarify and promote protein-level false discovery rates (FDRs). Studies frequently fail to report protein-level FDRs even though the proteins are actually the entities of interest. We provided a framework in which to discuss protein-level FDRs in a systematic manner, to open up the discussion and take away potential hesitance. We also benchmarked some scalable protein inference methods and included the best one in the Percolator package. Furthermore, we added functionality to the Percolator package to accommodate the analysis of studies in which many runs are aggregated. This reduced the run time for a recent study regarding a draft human proteome from almost a full day to just 10 minutes on a commodity computer, resulting in a list of proteins together with their corresponding protein-level FDRs.
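The "conservative" character of complete linkage, mentioned in the abstract, can be sketched briefly: two clusters merge only if every cross-pair of members is within the distance threshold, so no cluster ever contains a pair of dissimilar spectra. The distance function, threshold, and one-dimensional toy data below are placeholder assumptions, not MaRaCluster's actual spectrum-similarity scoring.

```python
# Illustrative sketch of complete-linkage agglomerative clustering.
# Merging requires ALL cross-pairs to be within the threshold, which is
# what makes the linkage conservative. Data and threshold are toy values.

def complete_linkage(points, dist, threshold):
    clusters = [[p] for p in points]  # start with singleton clusters
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # complete linkage: the farthest cross-pair must still pass
                if all(dist(a, b) <= threshold
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break  # restart scanning after any merge
    return clusters

# Toy "spectra" as scalars; real spectra would need a proper similarity score.
spectra = [0.0, 0.1, 0.2, 5.0, 5.1]
clusters = complete_linkage(spectra, lambda a, b: abs(a - b), 0.3)
print(sorted(sorted(c) for c in clusters))  # two clusters: near 0 and near 5
```

At the scale described in the thesis (many millions of spectra), this quadratic all-pairs loop would be infeasible; the point of the sketch is only the merge criterion, not the data structures needed to make it scale.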
