241

Machine learning for magnetic resonance spectroscopy: modeling in the preclinical development process

Sahaya Louis, Marcia 30 August 2022 (has links)
Magnetic Resonance Spectroscopy (MRS) is a specialized non-invasive technique associated with magnetic resonance imaging (MRI) that quantifies the metabolic activity and biochemical composition of cellular metabolism in real time. In the last few years, research has shown that many of these metabolites can serve as indicators of disease risk and as biochemical markers for the prognosis of various diseases. Furthermore, as our understanding of the biochemical pathways that generate these compounds grows, they are likely to be incorporated into new diagnostic and therapeutic protocols. MRS is a promising tool for studying neurological disorders, as it provides valuable insight into the brain's metabolic activity. However, some limitations need to be considered, such as poor spectral resolution, residual water resonance, and inter-scanner variability. To address these limitations, we explore machine learning methods to improve the spectral quality of MRS data and propose an interpretable model to identify metabolite spectral patterns. We begin with single-voxel non-water-suppressed MRS data, as it has the potential to provide an internal reference for inter- and intra-subject comparisons. We develop an autoencoder model to reconstruct metabolite spectra and learn a latent vector representation of non-water-suppressed MRS data (a minimal sketch follows this abstract). The reconstructed metabolite spectra can be quantified using standard software. We extend this approach to support data from multiple echo times and multiple voxels while preserving the diagnostic value of MRS. We evaluate the data representation of the autoencoder model using two case studies. The first case study is the diagnosis of low-grade gliomas by detecting 2-hydroxyglutarate (2HG), a biomarker for isocitrate dehydrogenase mutations. We quantitatively compare the autoencoder-reconstructed metabolite spectra with those acquired with water suppression. The Pearson correlation (R²) between metabolite estimates from the two approaches ranges from 0.40 to 0.91. These results suggest that our autoencoder-based metabolite spectrum reconstruction provides a good representation of metabolite spectra from non-water-suppressed MRS data and can be used for diagnostic purposes. In the second case study, we use the latent vector representation generated by the autoencoder model to understand long-term neurological difficulties after the repetitive brain trauma experienced by individuals in contact sports. Athletes with multiple concussions have the potential to develop Chronic Traumatic Encephalopathy (CTE), a neurodegenerative disease that is currently diagnosed only postmortem by tau protein deposition in the brain. We map the latent vector representation of MRS data to neuropsychological evaluation using a support vector machine model. The support vector machine model has a cross-validated score of 0.72 (0.052), higher than the previous prediction model's cross-validated score of 0.65 (0.026) for CTE diagnosis. The results suggest that the latent vector representation of MRS data can be used to identify individuals at risk of developing CTE after repetitive brain trauma. To promote clinical adoption, we propose an interpretable machine learning pipeline that identifies the metabolite spectral pattern predicting outcomes after cardiac arrest.
Targeted Temperature Management (TTM) has improved outcomes in patients resuscitated after cardiac arrest, but 45-70% of these patients still die or have a poor neurological outcome at hospital discharge, and 50% of survivors have long-term neurocognitive deficits. MRS has been shown to be highly sensitive to changes in the brain after TTM following cardiac arrest, namely showing significant reductions in N-acetylaspartate (NAA), a neuronal marker, and lactate (Lac), a marker of hypoxia. Initial findings show that a lactate/creatine ratio above 0.23 prognosticates a poor outcome with good sensitivity and specificity; however, if all metabolites could be utilized, much greater accuracy could be achieved. The proposed pipeline uses a machine-learning algorithm to predict the outcome for these individuals based on their metabolic patterns with 80% accuracy. This would allow for better TTM interventions for these individuals and could improve their long-term neurological outcomes.
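As an illustration of the reconstruction approach described above, the sketch below shows a minimal 1-D autoencoder of the kind that could map non-water-suppressed spectra to metabolite spectra while exposing a latent vector for downstream classification. This is a conceptual PyTorch sketch, not the dissertation's architecture; the layer sizes, spectral length, and training target are assumptions.

```python
import torch
import torch.nn as nn

class SpectraAE(nn.Module):
    """Toy 1-D autoencoder for MRS spectra; sizes are illustrative assumptions."""
    def __init__(self, n_points: int = 1024, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_points, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_points),
        )

    def forward(self, x):
        z = self.encoder(x)        # latent vector, reusable for a downstream SVM
        return self.decoder(z), z  # reconstructed metabolite spectrum

model = SpectraAE()
batch = torch.randn(8, 1024)       # stand-in for non-water-suppressed spectra
reconstructed, latent = model(batch)
# Training would compare `reconstructed` against water-suppressed (metabolite)
# reference spectra, e.g. with an MSE objective:
loss = nn.functional.mse_loss(reconstructed, torch.randn(8, 1024))
```

In the pipeline the abstract describes, the latent vector would then feed the support vector machine that maps MRS data to neuropsychological evaluation.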
242

Discovering user mobility and activity in smart lighting environments

Zhang, Yuting 17 February 2016 (has links)
"Smart lighting" environments seek to improve energy efficiency, human productivity and health by combining sensors, controls, and Internet-enabled lights with emerging “Internet-of-Things” technology. Interesting and potentially impactful applications involve adaptive lighting that responds to individual occupants' location, mobility and activity. In this dissertation, we focus on the recognition of user mobility and activity using sensing modalities and analytical techniques. This dissertation encompasses prior work using body-worn inertial sensors in one study, followed by smart-lighting inspired infrastructure sensors deployed with lights. The first approach employs wearable inertial sensors and body area networks that monitor human activities with a user's smart devices. Real-time algorithms are developed to (1) estimate angles of excess forward lean to prevent risk of falls, (2) identify functional activities, including postures, locomotion, and transitions, and (3) capture gait parameters. Two human activity datasets are collected from 10 healthy young adults and 297 elder subjects, respectively, for laboratory validation and real-world evaluation. Results show that these algorithms can identify all functional activities accurately with a sensitivity of 98.96% on the 10-subject dataset, and can detect walking activities and gait parameters consistently with high test-retest reliability (p-value < 0.001) on the 297-subject dataset. The second approach leverages pervasive "smart lighting" infrastructure to track human location and predict activities. A use case oriented design methodology is considered to guide the design of sensor operation parameters for localization performance metrics from a system perspective. Integrating a network of low-resolution time-of-flight sensors in ceiling fixtures, a recursive 3D location estimation formulation is established that links a physical indoor space to an analytical simulation framework. Based on indoor location information, a label-free clustering-based method is developed to learn user behaviors and activity patterns. Location datasets are collected when users are performing unconstrained and uninstructed activities in the smart lighting testbed under different layout configurations. Results show that the activity recognition performance measured in terms of CCR ranges from approximately 90% to 100% throughout a wide range of spatio-temporal resolutions on these location datasets, insensitive to the reconfiguration of environment layout and the presence of multiple users. / 2017-02-17T00:00:00Z
243

Securing web applications through vulnerability detection and runtime defenses

Jahanshahi, Rasoul 05 September 2023 (has links)
Social networks, eCommerce, and online news attract billions of daily users. The PHP interpreter powers a host of web applications, including messaging, development environments, news, and video games. The abundance of personal, financial, and other sensitive information held by these applications makes them prime targets for cyber attacks. Considering the significance of safeguarding online platforms against cyber attacks, researchers have investigated different approaches to protect web applications. However, despite the community's achievements in improving the security of web applications, new vulnerabilities and cyber attacks occur on a daily basis (CISA, 2021; Bekerman and Yerushalmi, 2020). In general, cyber security threat mitigation techniques are divided into two categories: prevention and detection. In this thesis, I focus on tackling challenges in both prevention and detection scenarios and propose novel contributions to improve the security of PHP applications. Specifically, I propose methods for holistic analyses of both the web applications and the PHP interpreter to prevent cyber attacks and detect security vulnerabilities in PHP web applications. For prevention techniques, I propose three approaches called Saphire, SQLBlock, and Minimalist. I first present Saphire, an integrated analysis of both the PHP interpreter and web applications that defends against remote code execution (RCE) attacks by creating a system-call sandbox. The evaluation of Saphire shows that, unlike prior work, Saphire protects web applications against the RCE attacks in our dataset. Next, I present SQLBlock, which generates SQL profiles for PHP web applications through a hybrid static-dynamic analysis to prevent SQL injection attacks. My third contribution is Minimalist, which removes unnecessary code from PHP web applications according to prior user interaction. My results demonstrate that, on average, Minimalist debloats 17.78% of the source code in PHP web applications while removing up to 38% of security vulnerabilities. Finally, as a contribution to vulnerability detection, I present Argus, a hybrid static-dynamic analysis over the PHP interpreter that identifies a comprehensive set of PHP built-in functions an attacker can use to inject malicious input into web applications (i.e., injection-sink APIs). Using Argus, I discovered more than 300 injection-sink APIs in PHP 7.2, an order of magnitude more than the most exhaustive list used in prior work. Furthermore, I integrated Argus' results with existing program analysis tools, which identified 13 previously unknown XSS and insecure deserialization vulnerabilities in PHP web applications. In summary, I improve the security of PHP web applications through a holistic analysis of both the PHP interpreter and the web applications. I further apply hybrid static-dynamic analysis techniques to the PHP interpreter as well as PHP web applications to provide prevention mechanisms against cyber attacks and detect previously unknown security vulnerabilities. These achievements are only possible due to the holistic analysis of the web stack put forth in my research.
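To illustrate the deny-by-default idea behind a per-script system-call sandbox like Saphire, here is a minimal conceptual sketch. The profile contents, script name, and checking interface are invented for illustration and are not Saphire's actual implementation; a real sandbox derives each profile from analysis of the interpreter code paths a script can reach and enforces it with OS mechanisms.

```python
# Conceptual sketch of per-script system-call allowlisting (the general idea
# behind an RCE sandbox). The profile below is a made-up example; real profiles
# come from analyzing which system calls a given PHP script can legitimately reach.
ALLOWLIST = {
    "upload.php": {"read", "write", "open", "close", "stat", "lseek"},
}

def syscall_permitted(script: str, syscall: str) -> bool:
    """Deny-by-default: anything outside the script's profile is blocked."""
    return syscall in ALLOWLIST.get(script, set())

# A remote-code-execution payload that spawns a shell needs execve, which a
# benign profile for this script never contains, so the attack is stopped:
assert syscall_permitted("upload.php", "read")
assert not syscall_permitted("upload.php", "execve")
```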
244

An Implementation of a Real-Time Non-Contact Strain Measurement Device Using Digital Image Correlation

Seifert, Nicholas 04 December 2022 (has links)
No description available.
245

Probabilistic-Based Computing Transformation with Reconfigurable Logic Fabrics

Alawad, Mohammed 01 January 2016 (has links)
Effectively tackling the upcoming "zettabytes" data explosion requires a huge quantum leap in our computing power and energy efficiency. However, with Moore's law dwindling quickly, the physical limits of CMOS technology make it almost intractable to achieve high energy efficiency if the traditional "deterministic and precise" computing model still dominates. Worse, the upcoming data explosion mostly comprises statistics gleaned from uncertain, imperfect real-world environments. As such, the traditional computing means of first-principle modeling or explicit statistical modeling will very likely be ineffective for achieving flexibility, autonomy, and human interaction. The bottom line is clear: given where we are headed, the fundamental principle of modern computing, that deterministic logic circuits can flawlessly emulate propositional logic deduction governed by Boolean algebra, has to be reexamined, and transformative changes in the foundation of modern computing must be made. This dissertation presents a novel stochastic-based computing methodology. It efficiently realizes algorithmic computing through the proposed concept of Probabilistic Domain Transform (PDT). The essence of the PDT approach is to encode the input signal as a probability density function, perform stochastic computing operations on the signal in the probabilistic domain, and decode the output signal by estimating the probability density function of the resulting random samples. The proposed methodology possesses many notable advantages. Specifically, it uses much simpler circuit units to conduct complex operations, which leads to highly area- and energy-efficient designs suitable for parallel processing. Moreover, it is highly fault-tolerant because the information to be processed is encoded in a large ensemble of random samples. As such, local perturbations of its computing accuracy are dissipated globally and become inconsequential to the final overall results. Finally, the proposed probabilistic-based computing can facilitate building scalable-precision systems, which provides an elegant way to trade off computing accuracy against computing performance/hardware efficiency for many real-world applications. To validate the effectiveness of the proposed PDT methodology, two important signal processing applications, discrete convolution and 2-D FIR filtering, are first implemented and benchmarked against other deterministic-based circuit implementations. Furthermore, a large-scale Convolutional Neural Network (CNN), a fundamental algorithmic building block in many computer vision and artificial intelligence applications that follow the deep learning principle, is also implemented on an FPGA based on a novel stochastic-based and scalable hardware architecture and circuit design. The key idea is to implement all key components of a deep learning CNN, including the multi-dimensional convolution, activation, and pooling layers, completely in the probabilistic computing domain. The proposed architecture not only achieves the advantages of stochastic-based computation, but also addresses several challenges in conventional CNN implementations, such as complexity, parallelism, and memory storage. Overall, being highly scalable and energy efficient, the proposed PDT-based architecture is well suited for a modular vision engine with the goal of performing real-time detection, recognition, and segmentation of mega-pixel images, especially for those perception-based computing tasks that are inherently fault-tolerant.
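To ground the general flavor of probabilistic-domain computation, the sketch below shows classic unipolar stochastic computing, where a value in [0, 1] is encoded as the probability of a 1 in a random bitstream and a single AND gate multiplies two values. This is the textbook scheme rather than the dissertation's PDT encoding specifically; the stream length and operand values are illustrative.

```python
import numpy as np

def to_bitstream(p: float, n: int, rng) -> np.ndarray:
    """Unipolar stochastic encoding: each bit is 1 with probability p."""
    return rng.random(n) < p

rng = np.random.default_rng(42)
n = 10_000                 # longer streams trade performance for precision
a, b = 0.6, 0.3

sa = to_bitstream(a, n, rng)
sb = to_bitstream(b, n, rng)

# A bitwise AND of independent streams estimates the product a*b, replacing a
# full hardware multiplier with a single gate. Flipping a few bits (a fault)
# perturbs the estimate only slightly, illustrating the fault tolerance.
product = np.mean(sa & sb)
print(product)             # ~0.18
```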
246

Towards High-Efficiency Data Management In the Next-Generation Persistent Memory System

Chen, Xunchao 01 January 2017 (has links)
For the sake of higher cell density while achieving near-zero standby power, recent research progress in Magnetic Tunneling Junction (MTJ) devices has leveraged Multi-Level Cell (MLC) configurations of Spin-Transfer Torque Random Access Memory (STT-RAM). However, in order to mitigate write disturbance in an MLC strategy, data stored in the soft bit must be restored immediately after the hard bit switching is completed. Furthermore, as a result of MTJ feature size scaling, the soft bit can be expected to become disturbed by the read sensing current, thus requiring an immediate restore operation to ensure data reliability. In this dissertation, we design and analyze novel Adaptive Restore Schemes for Write Disturbance (ARS-WD) and Read Disturbance (ARS-RD). ARS-WD alleviates restoration overhead by intentionally overwriting soft bit lines that are less likely to be read. ARS-RD, on the other hand, aggregates the potential writes and restores the soft bit line at the time of its eviction from the higher-level cache. Both schemes are based on a lightweight forecasting approach for the future read behavior of the cache block (a conceptual sketch follows below). Our experimental results show a substantial reduction in soft bit line restore operations. Moreover, ARS promotes the advantages of MLC, providing a preferable L2 design alternative in terms of the energy, area, and latency product compared to SLC STT-RAM alternatives. Whereas the popular Cell Split Mapping (CSM) for MLC STT-RAM leverages the inter-block nonuniform access frequency, the intra-block data access features remain untapped in MLC design. Aiming to minimize energy-hungry write requests to the Hard-Bit Line (HBL) and maximize the dynamic range in the advantageous Soft-Bit Line (SBL), a hybrid mapping strategy for MLC STT-RAM caches (Double-S) is advocated. Double-S couples the contemporary Cell-Split-Mapping with the novel Word-Split-Mapping (WSM). A sparse cache block detector and a read-depth-based data allocation/migration policy are proposed to release the full potential of Double-S.
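The abstract does not detail the forecasting mechanism, so the following is a speculative sketch of how a lightweight read predictor could drive the restore-or-skip decision. The 2-bit saturating counters and the decision rule are assumptions for illustration, not the dissertation's actual policy.

```python
# Speculative sketch of adaptive restore driven by a lightweight read forecast.
# A 2-bit saturating counter per cache block is a common low-cost predictor;
# the thresholds and the policy itself are illustrative assumptions.

class ReadPredictor:
    def __init__(self):
        self.counters = {}                     # block id -> 2-bit counter

    def likely_read(self, block: int) -> bool:
        return self.counters.get(block, 1) >= 2

    def record(self, block: int, was_read: bool) -> None:
        c = self.counters.get(block, 1)
        self.counters[block] = min(3, c + 1) if was_read else max(0, c - 1)

def on_hard_bit_write(block: int, pred: ReadPredictor) -> str:
    # ARS-WD idea: skip the costly restore when a future read is unlikely,
    # intentionally letting the soft bit line be overwritten.
    return "restore_soft_bit" if pred.likely_read(block) else "skip_restore"

pred = ReadPredictor()
pred.record(7, was_read=False)      # block 7 shows no read activity
print(on_hard_bit_write(7, pred))   # -> skip_restore
```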
247

Context-Centric Affect Recognition From Paralinguistic Features of Speech

Marpaung, Andreas 01 January 2019 (has links)
As the field of affect recognition has progressed, many researchers have shifted from unimodal approaches to multimodal ones. In particular, the trend in the paralinguistic speech affect recognition domain has been to integrate other modalities such as facial expression, body posture, gait, and linguistic speech. Our work focuses on integrating contextual knowledge into paralinguistic speech affect recognition. We hypothesize that a framework that recognizes affect through paralinguistic features of speech can improve its performance by integrating relevant contextual knowledge. This dissertation describes our research on integrating contextual knowledge into the paralinguistic affect recognition process from acoustic features of speech. We conceived, built, and tested a two-phased system called the Context-Based Paralinguistic Affect Recognition System (CxBPARS). The first phase of this system is context-free and uses an AdaBoost classifier applied to acoustic pitch, jitter, shimmer, Harmonics-to-Noise Ratio (HNR), and Noise-to-Harmonics Ratio (NHR) features to make an initial judgment about the emotion most likely exhibited by the human elicitor. The second phase then adds context modeling to improve upon the context-free classifications from phase I. CxBPARS was inspired by a human subject study performed as part of this work, in which test subjects were asked to classify an elicitor's emotion strictly from paralinguistic sounds and were subsequently provided with contextual information to improve their selections. CxBPARS was rigorously tested and found, in the worst case, to improve the success rate from the state-of-the-art's 42% to 53%.
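For concreteness, the phase I classifier stage could look like the sketch below, assuming scikit-learn's AdaBoost and a five-dimensional feature vector (pitch, jitter, shimmer, HNR, NHR). Real feature values would be extracted from speech with an acoustic toolkit such as Praat; the synthetic data, class count, and hyperparameters here are placeholders, not CxBPARS's configuration.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Columns: pitch, jitter, shimmer, HNR, NHR. Synthetic stand-ins; real values
# would come from acoustic analysis of the speech recordings.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 4, size=200)   # four hypothetical emotion classes

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())  # phase II context modeling would refine these outputs
```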
248

Solving Scaling Issues on a Single GPU

Xia, Yang January 2022 (has links)
No description available.
249

Modeling Site Specific Urban Propagation Using A Variable Terrain Radiowave Parabolic Equation - Vertical Plane Launch (VTRPE-VPL) Hybrid Technique

Cadette, Pierre 01 January 2020 (has links) (PDF)
The development of efficient algorithms for calculating propagation loss in site-specific urban environments has been an active area of research for many years. This dissertation demonstrates that, for particular scenarios, a hybrid approach that combines the Variable Terrain Radiowave Parabolic Equation (VTRPE) and Vertical Plane Launch (VPL) models can produce accurate results for a downrange region of interest. The hybrid approach leverages the 2-D parabolic equation method in the initial propagation region, where backscatter and out-of-plane energy can be neglected, then transitions to the more computationally intensive 3-D ray-launching method for the domain closer to the receiver of interest. The geometry of concern is a transmitter fixed high above the average building height, with receivers located downrange near ground level. The scenario for this study includes several building structures with flat walls and roofs, superimposed on flat ground. This research investigates the performance of the VTRPE model in an urban landscape. The study also assesses the impact of the definition of the dielectric properties of the lower boundary of the calculation domain, which includes the ground and city buildings. Finally, this dissertation demonstrates the viability of the proposed VTRPE-VPL hybrid technique via model simulations of radio-frequency propagation over an urban topography.
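The 2-D parabolic equation stage can be made concrete with a minimal split-step Fourier solver for the narrow-angle, free-space parabolic equation, the numerical core of PE models such as VTRPE. Terrain, buildings, boundary treatment, and the VPL hand-off are all omitted here, and the frequency, grid, and source parameters are illustrative assumptions.

```python
import numpy as np

f = 1e9                                  # 1 GHz carrier (illustrative)
c0 = 3e8
k0 = 2 * np.pi * f / c0                  # free-space wavenumber
n_z, dz = 2048, 0.5                      # vertical grid: 2048 points, 0.5 m
dx, n_steps = 10.0, 500                  # 10 m range step, 5 km total range

z = np.arange(n_z) * dz
kz = 2 * np.pi * np.fft.fftfreq(n_z, dz)

# Gaussian aperture at 100 m height stands in for the elevated transmitter.
u = np.exp(-(((z - 100.0) / 10.0) ** 2)).astype(complex)

# Split-step Fourier propagator for du/dx = (i/(2*k0)) d2u/dz2:
diffraction = np.exp(-1j * kz**2 * dx / (2 * k0))
for _ in range(n_steps):
    u = np.fft.ifft(diffraction * np.fft.fft(u))
    # A refraction/obstacle phase screen exp(1j*k0*(n-1)*dx) would be applied
    # here each step; buildings enter through the lower-boundary treatment.

relative_loss_db = -20 * np.log10(np.abs(u) + 1e-12)  # relative field magnitude
```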
250

Synchronization of data in heterogeneous decentralized systems

Boškov, Novak 30 May 2023 (has links)
Data synchronization is the problem of reconciling the differences between large data stores that differ in a small number of records. It is a common thread among disparate distributed systems, ranging from fleets of Internet of Things (IoT) devices to clusters of distributed databases in the cloud. Most recently, data synchronization has arisen in globally distributed public blockchains that form the basis for the envisioned decentralized Internet of the future. Moreover, the parallel development of edge computing has significantly increased the heterogeneity of networks and computing devices. The merger of highly heterogeneous system resources and the decentralized nature of future Internet applications calls for a new approach to data synchronization. In this dissertation, we look at the problem of data synchronization through the prism of set reconciliation and introduce novel tools and protocols that improve the performance of data synchronization in heterogeneous decentralized systems. First, we compare the analytical properties of state-of-the-art set reconciliation protocols and investigate the impact of theoretical assumptions and implementation decisions on synchronization performance. Second, we introduce GenSync, the first unified set reconciliation middleware. Using GenSync's distinctive benchmarking layer, we find that the best protocol choice is highly sensitive to system conditions and that a bad protocol choice causes a severe hit in performance. We showcase the evaluative power of GenSync in one of the world's largest wireless network emulators, demonstrating the choice of the best GenSync protocol under high and low user mobility in an emulated cellular network. Finally, we introduce SREP (Set Reconciliation-Enhanced Propagation), a novel blockchain transaction pool synchronization protocol with quantifiable guarantees. Through simulations, we show that SREP incurs significantly smaller bandwidth overhead than a similar approach from the literature, especially in networks of realistic size (tens of thousands of participants).
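As background for the set reconciliation framing, the sketch below implements a small Invertible Bloom Lookup Table (IBLT), one classic family of set reconciliation structures whose communication cost scales with the symmetric difference rather than the set size. The abstract does not say which protocols GenSync bundles, so this is illustrative rather than a GenSync component; the cell count, hash scheme, and integer keys are assumptions.

```python
import hashlib

NUM_HASHES = 3

def cells_for(key: int, m: int):
    # Derive NUM_HASHES cell indices for a key.
    return [int.from_bytes(hashlib.sha256(f"{i}:{key}".encode()).digest()[:8],
                           "big") % m for i in range(NUM_HASHES)]

def checksum(key: int) -> int:
    return int.from_bytes(hashlib.sha256(f"chk:{key}".encode()).digest()[:8], "big")

class IBLT:
    def __init__(self, m: int):
        self.m = m
        self.count = [0] * m
        self.key_sum = [0] * m
        self.chk_sum = [0] * m

    def insert(self, key: int, sign: int = 1):
        for i in cells_for(key, self.m):
            self.count[i] += sign
            self.key_sum[i] ^= key
            self.chk_sum[i] ^= checksum(key)

    def subtract(self, other: "IBLT") -> "IBLT":
        d = IBLT(self.m)
        d.count = [a - b for a, b in zip(self.count, other.count)]
        d.key_sum = [a ^ b for a, b in zip(self.key_sum, other.key_sum)]
        d.chk_sum = [a ^ b for a, b in zip(self.chk_sum, other.chk_sum)]
        return d

    def decode(self):
        # Peel "pure" cells (exactly one surviving key) until none remain.
        mine, theirs = set(), set()
        changed = True
        while changed:
            changed = False
            for i in range(self.m):
                if abs(self.count[i]) == 1 and \
                        self.chk_sum[i] == checksum(self.key_sum[i]):
                    key, sign = self.key_sum[i], self.count[i]
                    (mine if sign == 1 else theirs).add(key)
                    self.insert(key, -sign)
                    changed = True
        return mine, theirs

# Only a compact IBLT (sized to the difference, not the sets) crosses the wire.
alice, bob = {1, 2, 3, 4, 5}, {3, 4, 5, 6}
ia, ib = IBLT(30), IBLT(30)
for k in alice:
    ia.insert(k)
for k in bob:
    ib.insert(k)
print(ia.subtract(ib).decode())   # expected: ({1, 2}, {6})
```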
