  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

Rethinking Routing and Peering in the era of Vertical Integration of Network Functions

Dey, Prasun Kanti 01 January 2019 (has links)
Content providers typically control digital content consumption services and capture most of the revenue through an "all-you-can-eat" model built on subscriptions or hyper-targeted advertisements. A recent trend that is revamping the Internet's architecture and design is vertical integration, in which a content provider and an access ISP operate as a single body, a "sugarcane" ISP. As this vertical integration trend emerges in the ISP market, it is questionable whether the existing routing architecture will suffice in terms of sustainable economics, peering, and scalability. Current routing is expected to need careful modifications and smart innovations to ensure effective and reliable end-to-end packet delivery; this involves developing new features for handling traffic with reduced latency, tackling routing scalability in a more secure way, and offering new services at lower cost. Given that DRAM and TCAM prices in legacy routers are not decreasing at the desired pace, cloud computing can be a compelling way to manage the increasing computation and memory complexity of routing functions in a centralized manner with optimized expenses. Focusing on the attributes associated with existing routing cost models and exploring a hybrid approach to SDN, we compare recent trends in cloud pricing (for both storage and service) to evaluate whether integrating cloud services with legacy routing would be economically beneficial. In terms of peering, using the US as a case study, we show the overlaps between access ISPs and content providers to explore the viability of peering between the emerging content-dominated sugarcane ISPs and the health of Internet economics. To this end, we introduce meta-peering, a term that encompasses automation efforts related to peering – from identifying ISPs likely to peer, to injecting control-plane rules, to continuously monitoring and flagging any violation – one of the many outgrowths of vertical integration that could be offered to ISPs as a standalone service.
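To make the meta-peering idea concrete, the sketch below shows one way the first step, shortlisting ISPs likely to peer, could be approximated from facility footprints and traffic estimates. It is an illustrative heuristic under assumed data, not the dissertation's algorithm; the ISP names, IXP sets, traffic figures, and thresholds are hypothetical.

```python
# Illustrative sketch (not the thesis's method): rank candidate peers by footprint
# overlap and traffic exchanged. ISP names, footprints, and thresholds are
# hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class ISP:
    name: str
    ixp_presence: set = field(default_factory=set)   # IXPs/metros where the ISP has a PoP
    traffic_to: dict = field(default_factory=dict)    # estimated traffic (Gbps) toward other ISPs

def peer_candidates(isp, others, min_common_ixps=2, min_traffic_gbps=10):
    """Return ISPs worth approaching for settlement-free peering."""
    candidates = []
    for other in others:
        common = isp.ixp_presence & other.ixp_presence
        traffic = isp.traffic_to.get(other.name, 0)
        if len(common) >= min_common_ixps and traffic >= min_traffic_gbps:
            candidates.append((other.name, sorted(common), traffic))
    # Prefer candidates with more shared facilities and more traffic to offload.
    return sorted(candidates, key=lambda c: (len(c[1]), c[2]), reverse=True)

# Example usage with made-up data:
a = ISP("AccessNet", {"IX-A", "IX-B", "IX-C"}, {"StreamCo": 40, "TinyISP": 1})
others = [ISP("StreamCo", {"IX-B", "IX-C", "IX-D"}), ISP("TinyISP", {"IX-A"})]
print(peer_candidates(a, others))   # [('StreamCo', ['IX-B', 'IX-C'], 40)]
```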
232

Data-Driven Modeling and Optimization of Building Energy Consumption

Grover, Divas 01 January 2019 (has links)
Sustainability and reduced energy consumption are primary targets for building operations. The installation of smart sensors and Building Automation Systems (BAS) makes it possible to study facility operations under different circumstances, and these technologies generate large amounts of data that can be scraped and used for analysis. In this thesis, we focus on the full data-driven modeling and decision-making process, from scraping the data to simulating the building and optimizing its operation. The City of Orlando shares these goals of sustainability and reduced energy consumption, so it provided us access to its BAS to collect data and study the operation of its facilities. The data scraped from the City's BAS servers can be used to develop statistical and machine learning methods for decision making. We selected a mid-size pilot building to apply these techniques. The process begins with the collection of data from the BAS: an Application Programming Interface (API) is developed to log in to the servers, scrape all data points, and store them on a local machine. The data are then cleaned for analysis and modeling. The dataset contains data points ranging from indoor and outdoor temperatures to the speed of the fans inside the Air Handling Unit (AHU), which are driven by Variable Frequency Drives (VFDs). The whole dataset is a time series and is handled accordingly. The cleaned dataset is analyzed to find patterns and investigate relations between data points; this analysis guides the choice of parameters for the models developed in the next step. Different statistical models are developed to simulate building and equipment behavior. Finally, the models, together with the data, are used to optimize building operation under equipment constraints, leading to reduced energy consumption while maintaining the temperature and pressure inside the building.
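A minimal sketch of the scraping and cleaning steps is shown below, assuming a generic REST-style trend endpoint; the URL, credentials, point names, and response format are hypothetical placeholders rather than the City's actual BAS interface.

```python
# Illustrative sketch (not the thesis's actual API client): pull trend data for a
# few BAS points and resample the time series. The endpoint, credentials, and
# point names are hypothetical placeholders.
import requests
import pandas as pd

BASE_URL = "https://bas.example.gov/api"          # hypothetical BAS server
POINTS = ["AHU1/SupplyAirTemp", "AHU1/FanSpeed", "OutdoorAirTemp"]

def fetch_point(session, point, start, end):
    """Fetch one point's trend log as a pandas Series indexed by timestamp."""
    resp = session.get(f"{BASE_URL}/trend", params={"point": point, "start": start, "end": end})
    resp.raise_for_status()
    records = resp.json()["samples"]              # assumed format: [{"ts": ..., "value": ...}, ...]
    s = pd.Series({pd.Timestamp(r["ts"]): r["value"] for r in records}, name=point)
    return s.sort_index()

def build_dataset(start="2019-01-01", end="2019-12-31"):
    with requests.Session() as session:
        session.auth = ("user", "password")       # placeholder credentials
        df = pd.concat([fetch_point(session, p, start, end) for p in POINTS], axis=1)
    # Clean: align to a 15-minute grid, interpolate short gaps, drop long outages.
    df = df.resample("15min").mean().interpolate(limit=4)
    return df.dropna()
```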
233

Modeling and optimization of emerging on-chip cooling technologies via machine learning

Yuan, Zihao 30 August 2022 (has links)
Over the last few decades, processor performance has continued to grow due to the down-scaling of transistor dimensions. This performance boost has translated into high power densities and localized hot spots, which decrease the lifetime of processors and increase transistor delays and leakage power. Conventional on-chip cooling solutions are often insufficient to efficiently mitigate such high-power-density hot spots. Emerging cooling technologies such as liquid cooling via microchannels, thermoelectric coolers (TECs), two-phase vapor chambers (VCs), and hybrid cooling options (e.g., combining liquid cooling via microchannels with TECs) have the potential to provide better cooling performance than conventional solutions. However, the cooling performance and cooling power of these candidates vary significantly with their design and operational parameters (such as liquid flow velocity, evaporator design, and TEC current) and with the chip specifications. In addition, the cooling models of such emerging technologies may require additional Computational Fluid Dynamics (CFD) simulations (e.g., for two-phase cooling), which are time-consuming and have large memory requirements. Given the vast solution space of possible cooling solutions (including hybrids) and cooling subsystem parameters, searching for the optimal solution is also prohibitively slow. To minimize the cooling power overhead while satisfying chip thermal constraints, there is a need for an optimization flow that enables rapid and accurate thermal simulation and selection of the best cooling solution and the associated cooling parameters for a given chip design and workload profile. This thesis claims that combining a compact thermal modeling methodology with machine learning (ML) models enables rapid, accurate thermal simulation and prediction of the optimal cooling solution and its cooling parameters for arbitrary chip designs. The thesis aims to realize this optimization flow on three fronts. First, it proposes a parallel compact thermal simulator, PACT, that enables fast and accurate standard-cell-level to architecture-level thermal analysis of processors. PACT is highly extensible and applicable, and it models and evaluates the thermal behavior of emerging integration technologies (e.g., monolithic 3D) and cooling technologies (e.g., two-phase VCs). Second, it proposes an ML-based temperature-dependent simulation framework designed for two-phase cooling methods to enable fast and accurate thermal simulations; this framework can also be applied to other emerging cooling technologies. Third, this thesis proposes a systematic way to create novel deep learning (DL) models that predict the optimal cooling methods and cooling parameters for a given chip design. Through experiments based on real-world high-power-density chips and their floorplans, this thesis aims to demonstrate that using ML models substantially reduces the simulation time of emerging cooling technologies (e.g., by up to 21x) and improves the optimization time of emerging cooling solutions (e.g., by up to 140x) while achieving the same optimization accuracy as brute-force methods. / 2023-02-28T00:00:00Z
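The sketch below illustrates the general shape of such an optimization flow: an ML surrogate trained on simulation results predicts peak temperature from cooling parameters, and a simple grid search picks the cheapest configuration that meets the thermal constraint. It is not PACT or the thesis's DL models; the features, ranges, and the synthetic temperature and cooling-power formulas are assumptions for illustration.

```python
# Minimal sketch of the general idea (not PACT or the thesis's actual flow):
# train an ML surrogate mapping cooling-design parameters to peak chip
# temperature, then search the parameter space for the cheapest cooling
# configuration that meets the thermal constraint.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Training data would come from detailed (e.g., CFD/compact-model) simulations.
# Here: columns = [liquid flow velocity (m/s), TEC current (A), chip power (W)].
X = rng.uniform([0.1, 0.0, 50], [2.0, 6.0, 300], size=(500, 3))
y = 45 + 0.3 * X[:, 2] - 8.0 * X[:, 0] - 2.5 * X[:, 1] + rng.normal(0, 1, 500)  # synthetic peak temp (C)

surrogate = GradientBoostingRegressor().fit(X, y)   # stands in for slow simulations

def cooling_power(flow, current):
    """Hypothetical pumping + TEC power cost (W)."""
    return 5.0 * flow**3 + 1.8 * current**2

def best_config(chip_power, t_max=85.0):
    """Grid-search the cheapest (flow, current) pair predicted to stay under t_max."""
    flows = np.linspace(0.1, 2.0, 40)
    currents = np.linspace(0.0, 6.0, 40)
    grid = [(f, c) for f in flows for c in currents]
    temps = surrogate.predict([[f, c, chip_power] for f, c in grid])
    feasible = [(cooling_power(f, c), f, c) for (f, c), t in zip(grid, temps) if t <= t_max]
    return min(feasible) if feasible else None       # (power, flow, current)

print(best_config(chip_power=200))
```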
234

Machine learning for magnetic resonance spectroscopy: modeling in the preclinical development process

Sahaya Louis, Marcia 30 August 2022 (has links)
Magnetic Resonance Spectroscopy (MRS) is a specialized non-invasive technique associated with magnetic resonance imaging (MRI) that quantifies metabolic activity and the biochemical composition of cellular metabolism in real time. In the last few years, research has shown that many of these metabolites can serve as indicators of disease risk and as biochemical markers for the prognosis of various diseases. Furthermore, as our understanding of the biochemical pathways that generate these compounds grows, it is likely that they will be incorporated into new diagnostic and therapeutic protocols in the future. MRS is a promising tool for studying neurological disorders, as it can provide valuable insights into the brain's metabolic activity. However, it has limitations that need to be considered, such as poor spectral resolution, residual water resonance, and inter-scanner variability. To address these limitations, we explore machine learning methods to improve the spectral quality of MRS data and propose an interpretable model to identify metabolite spectral patterns. We begin with single-voxel non-water-suppressed MRS data, as it has the potential to provide an internal reference for inter- and intra-subject comparisons. We develop an autoencoder model to reconstruct metabolite spectra and learn a latent vector representation of non-water-suppressed MRS data. The reconstructed metabolite spectra can be quantified using standard software. We extend this approach to support data from multiple echo times and multiple voxels while preserving the diagnostic value of MRS. We evaluate the data representation of the autoencoder model using two case studies. The first case study is the diagnosis of low-grade gliomas by detecting 2-hydroxyglutarate (2HG), a biomarker for isocitrate dehydrogenase mutations. We quantitatively compare the autoencoder-reconstructed metabolite spectra with those acquired with water suppression. The Pearson correlation (R2) between metabolites from the two approaches ranges from 0.40 to 0.91. These results suggest that our autoencoder-based metabolite spectrum reconstruction provides a good representation of metabolite spectra from non-water-suppressed MRS data and can be used for diagnostic purposes. In the second case study, we use the latent vector representation generated by the autoencoder model to understand long-term neurological difficulties after repetitive brain trauma experienced by individuals in contact sports. Athletes with multiple concussions have the potential to develop Chronic Traumatic Encephalopathy (CTE), a neurodegenerative disease that is currently diagnosed only postmortem by tau protein deposition in the brain. We map the latent vector representation of MRS data to neuropsychological evaluation using a support vector machine model. The support vector machine model has a cross-validated score of 0.72 (0.052), which is higher than the previous prediction model's cross-validated score of 0.65 (0.026) for CTE diagnosis. The results suggest that the latent vector representation of MRS data can be used to identify individuals at risk of developing CTE after repetitive brain trauma. To promote broader clinical use, we propose an interpretable machine learning pipeline that identifies metabolic spectral patterns to predict outcomes after cardiac arrest.
Targeted Temperature Management (TTM) has improved outcomes in patients resuscitated after cardiac arrest, but 45-70% of these patients still die or have a poor neurological outcome at hospital discharge, and 50% of survivors have long-term neurocognitive deficits. MRS has been shown to be highly sensitive to changes in the brain after TTM following cardiac arrest, namely significant reductions in N-acetylaspartate (NAA), a neuronal marker, and lactate (Lac), a marker of hypoxia. Initial findings show that a lactate/creatine ratio above 0.23 prognosticates poor outcome with good sensitivity and specificity; however, much greater accuracy could be achieved if all metabolites were utilized. The proposed pipeline uses a machine learning algorithm to predict the outcome for these individuals based on their metabolic patterns with 80% accuracy. This would allow better TTM interventions for these individuals and could improve their long-term neurological outcomes.
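As a rough illustration of the autoencoder approach described above, the sketch below compresses a spectrum into a latent vector and reconstructs a metabolite spectrum from it; the architecture, layer sizes, and training data are hypothetical and much simpler than the models in the thesis.

```python
# Illustrative sketch (architecture and sizes are hypothetical, not the thesis's
# model): a small fully connected autoencoder that compresses an MRS spectrum
# into a latent vector and reconstructs the metabolite spectrum from it.
import torch
import torch.nn as nn

class SpectrumAutoencoder(nn.Module):
    def __init__(self, n_points=1024, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_points, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_points),
        )

    def forward(self, x):
        z = self.encoder(x)          # latent representation (usable downstream, e.g., by an SVM)
        return self.decoder(z), z

# Training-loop sketch: inputs are non-water-suppressed spectra, targets are the
# corresponding water-suppressed (metabolite) spectra.
model = SpectrumAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(non_ws_batch, metab_batch):
    optimizer.zero_grad()
    recon, _ = model(non_ws_batch)
    loss = loss_fn(recon, metab_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with random stand-in tensors (shape: batch x spectral points).
x = torch.randn(8, 1024)
print(train_step(x, x))
```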
235

Discovering user mobility and activity in smart lighting environments

Zhang, Yuting 17 February 2016 (has links)
"Smart lighting" environments seek to improve energy efficiency, human productivity and health by combining sensors, controls, and Internet-enabled lights with emerging “Internet-of-Things” technology. Interesting and potentially impactful applications involve adaptive lighting that responds to individual occupants' location, mobility and activity. In this dissertation, we focus on the recognition of user mobility and activity using sensing modalities and analytical techniques. This dissertation encompasses prior work using body-worn inertial sensors in one study, followed by smart-lighting inspired infrastructure sensors deployed with lights. The first approach employs wearable inertial sensors and body area networks that monitor human activities with a user's smart devices. Real-time algorithms are developed to (1) estimate angles of excess forward lean to prevent risk of falls, (2) identify functional activities, including postures, locomotion, and transitions, and (3) capture gait parameters. Two human activity datasets are collected from 10 healthy young adults and 297 elder subjects, respectively, for laboratory validation and real-world evaluation. Results show that these algorithms can identify all functional activities accurately with a sensitivity of 98.96% on the 10-subject dataset, and can detect walking activities and gait parameters consistently with high test-retest reliability (p-value < 0.001) on the 297-subject dataset. The second approach leverages pervasive "smart lighting" infrastructure to track human location and predict activities. A use case oriented design methodology is considered to guide the design of sensor operation parameters for localization performance metrics from a system perspective. Integrating a network of low-resolution time-of-flight sensors in ceiling fixtures, a recursive 3D location estimation formulation is established that links a physical indoor space to an analytical simulation framework. Based on indoor location information, a label-free clustering-based method is developed to learn user behaviors and activity patterns. Location datasets are collected when users are performing unconstrained and uninstructed activities in the smart lighting testbed under different layout configurations. Results show that the activity recognition performance measured in terms of CCR ranges from approximately 90% to 100% throughout a wide range of spatio-temporal resolutions on these location datasets, insensitive to the reconfiguration of environment layout and the presence of multiple users. / 2017-02-17T00:00:00Z
236

Securing web applications through vulnerability detection and runtime defenses

Jahanshahi, Rasoul 05 September 2023 (has links)
Social networks, eCommerce, and online news attract billions of daily users. The PHP interpreter powers a host of web applications, including messaging, development environments, news, and video games. The abundance of personal, financial, and other sensitive information held by these applications makes them prime targets for cyber attacks. Given the significance of safeguarding online platforms against cyber attacks, researchers have investigated different approaches to protect web applications. However, despite the community's achievements in improving the security of web applications, new vulnerabilities and cyber attacks occur on a daily basis (CISA, 2021; Bekerman and Yerushalmi, 2020). In general, cyber security threat mitigation techniques are divided into two categories: prevention and detection. In this thesis, I tackle challenges in both prevention and detection scenarios and propose novel contributions to improve the security of PHP applications. Specifically, I propose methods for holistic analyses of both the web applications and the PHP interpreter to prevent cyber attacks and detect security vulnerabilities in PHP web applications. On the prevention side, I propose three approaches: Saphire, SQLBlock, and Minimalist. I first present Saphire, an integrated analysis of both the PHP interpreter and web applications that defends against remote code execution (RCE) attacks by creating a system call sandbox. The evaluation of Saphire shows that, unlike prior work, Saphire protects web applications against the RCE attacks in our dataset. Next, I present SQLBlock, which generates SQL profiles for PHP web applications through a hybrid static-dynamic analysis to prevent SQL injection attacks. My third contribution is Minimalist, which removes unnecessary code from PHP web applications based on prior user interaction. My results demonstrate that, on average, Minimalist debloats 17.78% of the source code in PHP web applications while removing up to 38% of security vulnerabilities. Finally, as a contribution to vulnerability detection, I present Argus, a hybrid static-dynamic analysis of the PHP interpreter that identifies a comprehensive set of PHP built-in functions an attacker can use to inject malicious input into web applications (i.e., injection-sink APIs). Using Argus, I discovered more than 300 injection-sink APIs in PHP 7.2, an order of magnitude more than the most exhaustive list used in prior work. Furthermore, I integrated Argus' results with existing program analysis tools, which identified 13 previously unknown XSS and insecure deserialization vulnerabilities in PHP web applications. In summary, I improve the security of PHP web applications through a holistic analysis of both the PHP interpreter and the web applications, applying hybrid static-dynamic analysis techniques to both in order to prevent cyber attacks and detect previously unknown security vulnerabilities. These achievements are only possible due to the holistic analysis of the web stack put forth in my research.
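To illustrate the idea behind SQL profiling (hybrid static-dynamic approaches such as SQLBlock build far richer profiles), the sketch below learns per-endpoint query skeletons from benign runs and flags structurally new queries at runtime; the normalization rules and profile format are simplified assumptions, not SQLBlock's implementation.

```python
# Illustrative sketch of SQL profiling (not SQLBlock's actual implementation):
# learn the query "skeletons" each endpoint issues during benign runs, then flag
# queries at runtime whose skeleton was never observed.
import re
from collections import defaultdict

def skeleton(query):
    """Strip literals so structurally identical queries map to one skeleton."""
    q = re.sub(r"'[^']*'", "?", query)        # string literals -> ?
    q = re.sub(r"\b\d+\b", "?", q)            # numeric literals -> ?
    return re.sub(r"\s+", " ", q).strip().lower()

class SQLProfile:
    def __init__(self):
        self.allowed = defaultdict(set)        # endpoint -> set of allowed skeletons

    def learn(self, endpoint, query):          # training phase (benign traffic)
        self.allowed[endpoint].add(skeleton(query))

    def check(self, endpoint, query):          # enforcement phase
        return skeleton(query) in self.allowed[endpoint]

profile = SQLProfile()
profile.learn("login.php", "SELECT id FROM users WHERE name = 'alice' AND pw = 'x'")

# A classic injection changes the query's structure, so its skeleton differs:
print(profile.check("login.php", "SELECT id FROM users WHERE name = 'a' AND pw = '' OR '1'='1'"))  # False
print(profile.check("login.php", "SELECT id FROM users WHERE name = 'bob' AND pw = 'y'"))          # True
```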
237

An Implementation of a Real-Time Non-Contact Strain Measurement Device Using Digital Image Correlation

Seifert, Nicholas 04 December 2022 (has links)
No description available.
238

Probabilistic-Based Computing Transformation with Reconfigurable Logic Fabrics

Alawad, Mohammed 01 January 2016 (has links)
Effectively tackling the upcoming "zettabytes" data explosion requires a huge quantum leap in our computing power and energy efficiency. However, with Moore's law dwindling quickly, the physical limits of CMOS technology make it almost intractable to achieve high energy efficiency as long as the traditional "deterministic and precise" computing model dominates. Worse, the upcoming data explosion mostly comprises statistics gleaned from uncertain, imperfect real-world environments. As such, the traditional computing approaches of first-principle modeling or explicit statistical modeling will very likely be ineffective at achieving flexibility, autonomy, and human interaction. The bottom line is clear: given where we are headed, the fundamental principle of modern computing, that deterministic logic circuits can flawlessly emulate propositional logic deduction governed by Boolean algebra, has to be reexamined, and transformative changes in the foundation of modern computing must be made. This dissertation presents a novel stochastic-based computing methodology. It efficiently realizes algorithmic computing through the proposed concept of Probabilistic Domain Transform (PDT). The essence of the PDT approach is to encode the input signal as a probability density function, perform stochastic computing operations on the signal in the probabilistic domain, and decode the output signal by estimating the probability density function of the resulting random samples. The proposed methodology possesses many notable advantages. Specifically, it uses much simpler circuit units to conduct complex operations, which leads to highly area- and energy-efficient designs suitable for parallel processing. Moreover, it is highly fault-tolerant because the information to be processed is encoded in a large ensemble of random samples, so local perturbations of computing accuracy are dissipated globally and become inconsequential to the final overall results. Finally, the proposed probabilistic-based computing can facilitate building scalable-precision systems, which provide an elegant way to trade off computing accuracy against computing performance and hardware efficiency for many real-world applications. To validate the effectiveness of the proposed PDT methodology, two important signal processing applications, discrete convolution and 2-D FIR filtering, are first implemented and benchmarked against other deterministic-based circuit implementations. Furthermore, a large-scale Convolutional Neural Network (CNN), a fundamental algorithmic building block in many computer vision and artificial intelligence applications that follow the deep learning principle, is also implemented on FPGA based on a novel stochastic-based, scalable hardware architecture and circuit design. The key idea is to implement all key components of a deep learning CNN, including multi-dimensional convolution, activation, and pooling layers, completely in the probabilistic computing domain. The proposed architecture not only achieves the advantages of stochastic-based computation but can also address several challenges in conventional CNN implementations, such as complexity, parallelism, and memory storage. Overall, being highly scalable and energy efficient, the proposed PDT-based architecture is well-suited for a modular vision engine whose goal is to perform real-time detection, recognition, and segmentation of mega-pixel images, especially those perception-based computing tasks that are inherently fault-tolerant.
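The flavor of computing in a probabilistic domain can be seen in the classic stochastic-computing example below, where values are encoded as the probability of a 1 in a random bitstream and multiplication reduces to a bitwise AND; this is a textbook illustration of the underlying principle, not the dissertation's PDT encoding or hardware design.

```python
# Textbook stochastic-computing illustration (not the dissertation's PDT design):
# encode values in [0, 1] as the probability of a 1 in a random bitstream, so
# multiplication reduces to a bitwise AND of two streams, and accuracy grows
# with stream length.
import numpy as np

rng = np.random.default_rng(42)

def encode(value, n_bits):
    """Unipolar encoding: each bit is 1 with probability `value`."""
    return rng.random(n_bits) < value

def decode(stream):
    """Estimate the encoded value as the fraction of 1s."""
    return stream.mean()

def stochastic_multiply(a, b, n_bits=4096):
    """Multiply two values in [0, 1] with a single AND per bit pair."""
    return decode(encode(a, n_bits) & encode(b, n_bits))

for n in (256, 4096, 65536):
    approx = stochastic_multiply(0.8, 0.4, n)
    print(f"n={n:6d}  approx={approx:.4f}  exact=0.3200")
```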
239

Towards High-Efficiency Data Management In the Next-Generation Persistent Memory System

Chen, Xunchao 01 January 2017 (has links)
For the sake of higher cell density while achieving near-zero standby power, recent research on Magnetic Tunneling Junction (MTJ) devices has leveraged Multi-Level Cell (MLC) configurations of Spin-Transfer Torque Random Access Memory (STT-RAM). However, to mitigate write disturbance in an MLC configuration, the data stored in the soft bit must be restored immediately after the hard-bit switching completes. Furthermore, as MTJ feature sizes scale down, the soft bit can be disturbed by the read sensing current, again requiring an immediate restore operation to ensure data reliability. In this work, we design and analyze a novel Adaptive Restore Scheme for Write Disturbance (ARS-WD) and Read Disturbance (ARS-RD), respectively. ARS-WD alleviates restoration overhead by intentionally overwriting soft-bit lines that are less likely to be read. ARS-RD, on the other hand, aggregates the potential writes and restores the soft-bit line at the time of its eviction from the higher-level cache. Both schemes are based on a lightweight forecasting approach for the future read behavior of the cache block. Our experimental results show a substantial reduction in soft-bit line restore operations. Moreover, ARS promotes the advantages of MLC to provide a preferable L2 design alternative in terms of the energy, area, and latency product compared to SLC STT-RAM alternatives. Whereas the popular Cell Split Mapping (CSM) for MLC STT-RAM leverages the nonuniform access frequency across blocks, intra-block data access features remain untapped in MLC design. Aiming to minimize energy-hungry write requests to the Hard-Bit Line (HBL) and maximize the dynamic range in the advantageous Soft-Bit Line (SBL), a hybrid mapping strategy for MLC STT-RAM caches (Double-S) is advocated in this work. Double-S couples the contemporary Cell Split Mapping with the novel Word Split Mapping (WSM). A sparse-cache-block detector and a read-depth-based data allocation/migration policy are proposed to unlock the full potential of Double-S.
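The sketch below gives a toy version of the kind of lightweight read-behavior forecasting such a scheme could rely on, using per-set saturating counters to decide whether to restore a soft-bit line immediately or defer the restore; the counter sizes and thresholds are assumptions, not the thesis's actual ARS predictor.

```python
# Toy sketch of lightweight read-behavior forecasting for an adaptive restore
# scheme (not the thesis's actual ARS predictor): a small saturating counter per
# cache set predicts whether a block's soft-bit line is likely to be read again
# soon; if not, the restore can be deferred until eviction.
class ReadReusePredictor:
    def __init__(self, n_sets=64, threshold=2, max_count=3):
        self.counters = [0] * n_sets           # one 2-bit saturating counter per set
        self.threshold = threshold
        self.max_count = max_count

    def on_read(self, set_index):
        """A read to the set strengthens the 'will be read again' prediction."""
        c = self.counters[set_index]
        self.counters[set_index] = min(c + 1, self.max_count)

    def on_eviction(self, set_index):
        """An eviction without reuse weakens the prediction."""
        c = self.counters[set_index]
        self.counters[set_index] = max(c - 1, 0)

    def restore_now(self, set_index):
        """Restore immediately only when a near-future read is predicted."""
        return self.counters[set_index] >= self.threshold

pred = ReadReusePredictor()
for _ in range(3):
    pred.on_read(5)
print(pred.restore_now(5), pred.restore_now(7))   # True False
```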
240

Context-Centric Affect Recognition From Paralinguistic Features of Speech

Marpaung, Andreas 01 January 2019 (has links)
As the field of affect recognition has progressed, many researchers have shifted from unimodal approaches to multimodal ones. In particular, the trend in the paralinguistic speech affect recognition domain has been to integrate other modalities such as facial expression, body posture, gait, and linguistic speech. Our work focuses on integrating contextual knowledge into paralinguistic speech affect recognition. We hypothesize that a framework that recognizes affect through paralinguistic features of speech can improve its performance by integrating relevant contextual knowledge. This dissertation describes our research on integrating contextual knowledge into the paralinguistic affect recognition process from acoustic features of speech. We conceived, built, and tested a two-phase system called the Context-Based Paralinguistic Affect Recognition System (CxBPARS). The first phase of this system is context-free and uses an AdaBoost classifier applied to acoustic pitch, jitter, shimmer, Harmonics-to-Noise Ratio (HNR), and Noise-to-Harmonics Ratio (NHR) features to make an initial judgment about the emotion most likely exhibited by the human elicitor. The second phase then adds context modeling to improve upon the context-free classifications from phase I. CxBPARS was inspired by a human-subject study performed as part of this work, in which test subjects were asked to classify an elicitor's emotion strictly from paralinguistic sounds and were then provided with contextual information to improve their selections. CxBPARS was rigorously tested and found to improve the success rate from the state of the art's 42% to 53%, even in the worst case.
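A minimal sketch of the context-free phase is shown below: an AdaBoost classifier over the listed paralinguistic features. The feature extraction, label set, and data are hypothetical stand-ins, and the context-modeling phase is only indicated in a comment.

```python
# Illustrative sketch of the context-free phase (not CxBPARS itself): an AdaBoost
# classifier over a handful of paralinguistic features. Feature extraction, label
# set, and data are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

FEATURES = ["pitch_mean", "jitter", "shimmer", "hnr", "nhr"]
EMOTIONS = ["anger", "joy", "sadness", "neutral"]

rng = np.random.default_rng(7)
X = rng.normal(size=(200, len(FEATURES)))          # stand-in for per-utterance features
y = rng.integers(0, len(EMOTIONS), size=200)       # stand-in emotion labels

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"context-free accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# Phase II (context modeling) would then re-rank these context-free predictions
# using situational knowledge, e.g., priors conditioned on the elicitor's context.
```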
