521. Contributions to Distributed Detection and Estimation over Sensor Networks. Whipps, Gene Thomas, January 2017 (has links)
No description available.
522. Diagnostic Tools for Forecast Ensembles. Baffoe, Nana Ama Appiaa, 31 May 2018 (has links)
No description available.
523. Validating Fiscal Impact Analysis Methods for a Small Ohio City: Comparing the Outcomes of Two Average Cost Methods. Jiang, JunSong, 01 November 2010 (has links)
No description available.
524. Analysis of Optimal Strategies to Minimize Message Delay in Mobile Opportunistic Sensor Networks. Jun, Jung Hyun, 23 September 2011 (has links)
No description available.
525. ALMOST SURE CENTRAL LIMIT THEOREMS. Gonchigdanzan, Khurelbaatar, 11 October 2001 (has links)
No description available.
526. SORORITY REJECTION: AN EMPIRICAL STUDY OF ATTRACTIVENESS, PERSONALITY, GRADE POINT AVERAGE, ACT SCORE, INVOLVEMENT, AND CLOSE FRIENDSHIPS AS PREDICTORS OF REJECTION FROM SORORITIES AND ITS RELATIONSHIP TO STUDENT DEPARTURE. Kane, Laura Rae, 16 May 2016 (has links)
No description available.
527. An Examination of Early Retirement Incentives: A Study of Retirement Rates and Average Retirement Age of Full-time Higher Education Faculty in Postsecondary Institutions. Goodhart, Gregory S., 05 August 2009 (has links)
No description available.
528. Hierarchical Ensemble Representations: Forming Ensemble Representations across Multiple Spatial Scales. Pandey, Sandarsh, 01 September 2020 (has links)
An ensemble representation refers to a statistical summary representation of a group of similar objects. Recent work has shown that we can form multiple ensemble representations: ensemble representations for a single feature dimension across multiple stimulus groups, ensemble representations for multiple feature dimensions in the same stimulus group, and ensemble representations across multiple sensory domains. In our study, we use hierarchical stimuli based on the Navon figures (Navon 1977) to study properties of ensemble representations across multiple spatial scales. In Experiments 1 and 3, we study properties of ensemble representations for the orientation and size feature dimensions, respectively. In Experiment 2, we study properties of individual representations for the orientation feature dimension. Results indicate that it is possible to form ensemble representations across multiple spatial scales. Experiment 1 shows that the global ensemble representation may be extracted automatically (without intent) whereas the local ensemble representation is only extracted in response to task demands (with intent). Finally, in both Experiment 1 and Experiment 3, participants were more accurate at reporting the global ensemble representation than the local ensemble representation, whereas in Experiment 2 performance did not differ across the levels. These results point towards global precedence in the formation of ensemble representations.
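As a rough illustration of what such a statistical summary involves (a sketch with hypothetical stimulus values, not the stimuli or analysis code used in the study), the snippet below computes a mean orientation and mean size for each local group and for the pooled global group of a hierarchical display:

```python
import numpy as np

def circular_mean_deg(angles_deg):
    """Mean of angles in degrees via unit vectors (treats orientation as a full-circle
    direction for simplicity; strict orientation averaging would use doubled angles)."""
    radians = np.deg2rad(angles_deg)
    return float(np.rad2deg(np.arctan2(np.sin(radians).mean(), np.cos(radians).mean())) % 360)

def ensemble_summary(items):
    """Summarize a group of items (dicts with 'orientation' in degrees and 'size') by their means."""
    return {
        "mean_orientation": circular_mean_deg([it["orientation"] for it in items]),
        "mean_size": float(np.mean([it["size"] for it in items])),
    }

# Hypothetical hierarchical display: each global element is itself a group of local elements.
global_elements = [
    [{"orientation": 10, "size": 1.2}, {"orientation": 20, "size": 1.0}],
    [{"orientation": 80, "size": 0.8}, {"orientation": 95, "size": 0.9}],
]

local_ensembles = [ensemble_summary(group) for group in global_elements]               # one summary per group
global_ensemble = ensemble_summary([it for group in global_elements for it in group])  # pooled summary
print(local_ensembles)
print(global_ensemble)
```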
529. Bottleneck Identification using Data Analytics to Increase Production Capacity. Ganss, Thorsten Peter, January 2021 (has links)
The thesis develops an automated, data-driven bottleneck detection procedure based on real-world data. Following a seven-step process, it is possible to determine the average as well as the shifting bottleneck by automatically applying the active period method. A detailed explanation of how to pre-process the extracted data is presented, which serves as a guideline for other analysts who want to customize the available code to their needs. The obtained results show a deviation between the expected bottleneck and the bottleneck calculated from production data collected during one week of full production. The expected bottleneck is currently determined by the case company by measuring cycle times physically at the machine, but this procedure does not capture the whole picture of the production line and is therefore recommended to be replaced by the developed automated analysis. Based on the analysis results, different optimization potentials are elaborated to improve both the data quality and the overall production capacity of the investigated production line. In particular, the installed gantry systems need further analysis to decrease their impact on overall capacity. Regarding data quality, the improvement of the machine data itself and the standardization of timestamps should be the focus in order to enable better analysis in the future. Finally, the recommendations for future work mainly suggest running the analysis several times with new data sets to validate the results and improve the overall understanding of the production line's behavior.
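As a rough, hedged sketch of the idea behind the active period method (not the code developed in the thesis), the snippet below takes per-machine active intervals, treats the machine with the longest ongoing active period at each sampled instant as the momentary bottleneck, and reports the machine that holds that role for the largest share of time as the average bottleneck. The machine names, intervals, and sampling step are illustrative assumptions:

```python
from collections import defaultdict

def active_period_at(intervals, t):
    """Return the duration of the active period containing time t, or 0 if the machine is idle."""
    for start, end in intervals:
        if start <= t < end:
            return end - start
    return 0.0

def bottleneck_shares(active_periods, t_start, t_end, step=1.0):
    """Simplified active period method: at each sampled instant the machine with the
    longest ongoing active period is the momentary bottleneck; the returned shares
    indicate how long each machine held that role."""
    share = defaultdict(float)
    t = t_start
    while t < t_end:
        durations = {m: active_period_at(iv, t) for m, iv in active_periods.items()}
        longest = max(durations.values())
        if longest > 0:
            # Ties (overlapping longest active periods) hint at a shifting bottleneck;
            # here the time slice is simply split evenly among the tied machines.
            tied = [m for m, d in durations.items() if d == longest]
            for m in tied:
                share[m] += step / len(tied)
        t += step
    return dict(share)

# Hypothetical one-shift log: active intervals (minutes) per machine.
log = {"OP10": [(0, 50), (60, 110)], "OP20": [(0, 80), (90, 120)], "OP30": [(10, 40)]}
shares = bottleneck_shares(log, 0, 120)
print(shares, "average bottleneck:", max(shares, key=shares.get))
```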
530. Latency Tradeoffs in Distributed Storage Access. Ray, Madhurima, January 2019 (has links)
The performance of storage systems is central to handling the huge amount of data being generated from a variety of sources, including scientific experiments, social media, crowdsourcing, and an increasing variety of cyber-physical systems. Emerging high-speed storage technologies enable the ingestion of and access to such large volumes of data efficiently. However, the combination of the high data-volume requirements of new applications, which largely generate unstructured and semi-structured streams of data, with these emerging high-speed storage technologies poses a number of new challenges, including low-latency handling of such data and ensuring that the network providing access to the data does not become the bottleneck. The traditional relational model is not well suited for efficiently storing and retrieving unstructured and semi-structured data. An alternate mechanism, popularly known as a Key-Value Store (KVS), has been investigated over the last decade to handle such data. A KVS only needs a 'key' to uniquely identify a data record, which may be of variable length and may or may not have further structure in the form of predefined fields. Most KVSs in existence were designed for hard-disk-based storage (before SSDs gained popularity), where avoiding random accesses is crucial for good performance. Unfortunately, as modern solid-state drives become the norm for data center storage, HDD-oriented KV structures result in high read, write, and space amplification, which is detrimental to both the SSD's performance and its endurance. Note also that regardless of how storage systems are deployed, access to large amounts of storage by many nodes must necessarily go over the network. Emerging storage technologies such as flash, 3D XPoint, and phase-change memory (PCM), coupled with highly efficient access protocols such as NVMe, are capable of ingesting and reading data at rates that challenge even leading-edge networking technologies such as 100 Gb/s Ethernet. At the same time, some of the higher-end storage technologies (e.g., Intel Optane storage based on 3D XPoint technology, PCM, etc.) coupled with lean protocols like NVMe can provide storage access latencies in the 10-20 µs range, which means that the additional latency due to network congestion can become significant. The purpose of this thesis is to address some of the aforementioned issues. We propose a new hash-based and SSD-friendly key-value store architecture called FlashKey, which is especially designed for SSDs to provide low access latencies, low read and write amplification, and the ability to easily trade off latency for sequential access, for example, range queries. Through detailed experimental evaluation of FlashKey against the two most popular KVSs, namely RocksDB and LevelDB, we demonstrate that even as an initial implementation we are able to achieve substantially better write amplification and better average and tail latency at similar or better space amplification. Next, we deal with network congestion by dynamically replicating the data items that are heavily used. The tradeoff here is between latency and the replication or migration overhead.
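As a rough illustration of the key-value abstraction and the range-query tradeoff described above (a toy in-memory sketch, not the FlashKey design or its on-SSD layout), consider:

```python
class ToyKVStore:
    """Toy in-memory key-value store illustrating the KVS abstraction: records are
    identified only by a key and values are opaque, variable-length byte strings."""

    def __init__(self):
        self._table = {}  # Python's dict is itself a hash table.

    def put(self, key, value):
        self._table[key] = value

    def get(self, key):
        return self._table.get(key)

    def delete(self, key):
        self._table.pop(key, None)

    def range(self, start, end):
        """Range query: a hash-based store pays extra here, since keys must be sorted
        on demand rather than kept ordered as in an LSM-tree (RocksDB/LevelDB)."""
        for key in sorted(self._table):
            if start <= key < end:
                yield key, self._table[key]

store = ToyKVStore()
store.put(b"user:42", b'{"name": "alice"}')  # value is an opaque, variable-length record
print(store.get(b"user:42"))
print(list(store.range(b"user:", b"user;")))
```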
It is important to reverse the replication or migration as the congestion fades, since our observations show that placing data and the applications that access it together in a consolidated fashion significantly reduces propagation delay and increases network energy-saving opportunities, which matters because today's data center networks are equipped with high-speed, power-hungry network infrastructure. Finally, we design a tradeoff between network consolidation and congestion. Here, we trade off latency to save power: during quiet hours we consolidate the traffic onto fewer links and use different sleep modes for the unused links, and as traffic increases we reactively start to spread traffic out again to avoid congestion from the upcoming surge. There are numerous studies in the area of network energy management that use similar approaches; however, most of them manage energy at a coarser time granularity (e.g., 24 hours or beyond). In contrast, our mechanism tries to exploit all the small-to-medium time gaps in traffic and invoke network energy management without causing a significant increase in latency. / Computer and Information Science
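As a hedged sketch of the quiet-hour consolidation idea (illustrative link capacities, thresholds, and load samples, not the controller built in the thesis), a simple policy might keep only enough links awake to serve the offered load with headroom, waking links again as traffic grows:

```python
import math

def links_to_keep_awake(offered_load_gbps, link_capacity_gbps=100.0,
                        target_utilization=0.6, min_links=1, max_links=8):
    """Keep only enough links awake to serve the offered load at a target utilization,
    putting the rest into a sleep mode; the utilization headroom means links are woken
    up again before congestion builds as load rises. All numbers are illustrative."""
    needed = math.ceil(offered_load_gbps / (link_capacity_gbps * target_utilization))
    return max(min_links, min(max_links, needed))

# Illustrative day: offered load (Gbps) sampled at a few hours.
for hour, load in [(2, 40), (4, 55), (9, 310), (13, 450), (23, 80)]:
    print(f"hour {hour:02d}: load {load:4d} Gbps -> keep {links_to_keep_awake(load)} link(s) awake")
```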