1 |
Storage fragmentation in the context of a methodology for optimising reorganisation policies
Longe, H. O. Dele, January 1983
No description available.
|
2 |
Magneto-optical properties of multilayered systems and enhancement of the polar Kerr effect
Pahirathan, Selvarajah, January 1996
No description available.
|
3 |
Reward Scheduling for QoS in Cloud Applications
Elnably, Ahmed, 06 September 2012
The growing popularity of multi-tenant, cloud-based computing platforms is increasing interest in resource allocation models that permit flexible sharing of the underlying infrastructure. This thesis introduces a novel IO resource allocation model that better captures the requirements of paying tenants sharing a physical infrastructure. The model addresses a major concern regarding application performance stability when clients migrate from a dedicated to a shared platform. Specifically, while clients would like their applications to behave similarly in both situations, traditional models of fairness, like proportional share allocation, do not exhibit this behavior in the context of modern multi-tiered storage architectures.
We also present a scheduling algorithm, the Reward Scheduler, that implements the new allocation policy by rewarding clients with better runtime characteristics, resulting in benefits to both the clients and the service provider. Moreover, the Reward Scheduler also supports weight-based capacity allocation subject to a minimum reservation and a maximum limit on the IO allocation for each task. Experimental results indicate that the proposed algorithm allocates system capacity in proportion to the clients' entitlements.
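A minimal sketch of the kind of allocation step described above, assuming a simple model in which each task carries a weight, a minimum reservation, and a maximum limit; the names and the water-filling loop are illustrative assumptions, not the thesis's actual algorithm.

```python
def allocate_capacity(tasks, total_iops):
    """Share total_iops among tasks in proportion to their weights,
    while honoring each task's minimum reservation and maximum limit.

    tasks: list of dicts with keys 'name', 'weight', 'min', 'max'.
    Returns a dict mapping task name -> allocated IOPS.
    """
    alloc = {t['name']: t['min'] for t in tasks}     # reservations come first
    remaining = total_iops - sum(alloc.values())     # capacity left to share
    active = [t for t in tasks if alloc[t['name']] < t['max']]

    # Water-filling: repeatedly hand out the remainder by weight,
    # capping any task that reaches its maximum limit.
    while remaining > 1e-9 and active:
        total_weight = sum(t['weight'] for t in active)
        for t in active:
            share = remaining * t['weight'] / total_weight
            headroom = t['max'] - alloc[t['name']]
            alloc[t['name']] += min(share, headroom)
        remaining = total_iops - sum(alloc.values())
        active = [t for t in tasks if alloc[t['name']] < t['max'] - 1e-9]
    return alloc


if __name__ == "__main__":
    tasks = [
        {'name': 'A', 'weight': 3, 'min': 100, 'max': 1000},
        {'name': 'B', 'weight': 1, 'min': 100, 'max': 300},
    ]
    print(allocate_capacity(tasks, 1000))   # {'A': 700.0, 'B': 300.0}
```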
|
4 |
Context-aware data caching for mobile computing environments
Drakatos, Stylianos, 03 November 2006
The deployment of wireless communications coupled with the popularity of portable devices has led to significant research in the area of mobile data caching. Prior research has focused on the development of solutions that allow applications to run in wireless environments using proxy-based techniques. Most of these approaches are semantic-based and do not provide adequate support for representing the context of a user (i.e., the interpreted human intention). Although the context may be treated implicitly, it is still crucial to data management. To address this challenge, this dissertation focuses on two characteristics: how to predict (i) the future location of the user and (ii) the locations for which the fetched data items remain valid answers to a query. Using this approach, more complete information about the dynamics of an application environment is maintained.
The contribution of this dissertation is a novel data caching mechanism for pervasive computing environments that can adapt dynamically to a mobile user's context. In this dissertation, we design and develop a conceptual model and context-aware protocols for wireless data caching management. Our replacement policy uses the validity of the data fetched from the server and the neighboring locations to decide which cache entries are least likely to be needed in the future and are therefore good candidates for eviction when cache space is needed. The context-aware prefetching algorithm exploits the query context to effectively guide the prefetching process. The query context is defined using a mobile user's movement pattern and the context of the requested information. Numerical results and simulations show that the proposed prefetching and replacement policies significantly outperform conventional ones. Anticipated applications of these solutions include biomedical engineering, telehealth, medical information systems, and business.
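A minimal sketch of a replacement decision in the spirit described above, assuming each cache entry carries a valid-scope region and the client's next location can be predicted; the entry layout, the distance-based validity test, and the recency tie-break are illustrative assumptions, not the dissertation's actual protocol.

```python
import math

def distance(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eviction_candidate(cache, predicted_location):
    """Pick the entry least likely to be useful at the predicted location.

    cache: dict mapping item_id -> {'center': (x, y), 'valid_radius': r,
                                    'last_access': t}
    Entries whose valid scope does not cover the predicted location score
    lower (more evictable); ties are broken by least-recent access.
    """
    def score(entry):
        still_valid = distance(entry['center'], predicted_location) <= entry['valid_radius']
        return (1 if still_valid else 0, entry['last_access'])

    return min(cache, key=lambda item_id: score(cache[item_id]))

# Example: the entry that is no longer valid at the predicted location is evicted.
cache = {
    'restaurants_zone_a': {'center': (0, 0), 'valid_radius': 2.0, 'last_access': 10},
    'traffic_zone_b':     {'center': (9, 9), 'valid_radius': 1.0, 'last_access': 12},
}
print(eviction_candidate(cache, predicted_location=(0.5, 0.5)))   # 'traffic_zone_b'
```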
|
5 |
Micro-Credentialing with Fuzzy Content Matching: An Educational Data-Mining Approach
Amoruso, Paul, 01 January 2023
There is a growing need to assess and issue micro-credentials within STEM curricula. Although one approach is to insert a free-standing academic activity into the student's learning and degree path, herein an alternative approach rooted in leveraging responses to digitized quiz-based assessments is developed. An online assessment and remediation protocol with an accompanying Python-based toolset was developed to engage undergraduate tutors who identify and fill knowledge gaps of at-risk learners. Digitized assessments, personalized tutoring, and automated micro-credentialing scripts for the Canvas LMS are used to issue skill-specific badges which motivate the learner incrementally while increasing self-efficacy. This consisted of building upon the available Canvas LMS application programming interface to design an algorithm that uses Canvas LMS data to automate the awarding of badges. In addition, a user-centric interface was prototyped and implemented to garner high user acceptance. The work also explores steps for efficiently migrating Classic Quizzes to the New Quizzes format and for providing personalized YouTube video recommendations to students based on assessment performance. Moreover, foundational research, operational objectives, and a prototype instructor-facing micro-credentialing interface were established through the work presented in this document. The approach developed is shown to provide a fine-grained analysis that credentials students' understanding of material from a semester-wide perspective using a scalable automation approach evaluated within the Canvas LMS.
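A minimal sketch of the kind of automation described above, assuming the Canvas REST quiz-submissions endpoint; the base URL, the IDs, the score threshold, and the `issue_badge` placeholder are assumptions for illustration and not the thesis's actual toolset.

```python
import requests

BASE_URL = "https://canvas.example.edu/api/v1"   # hypothetical Canvas instance
TOKEN = "YOUR_API_TOKEN"                          # placeholder access token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def quiz_scores(course_id, quiz_id):
    """Fetch quiz submissions from Canvas and return user_id -> score."""
    url = f"{BASE_URL}/courses/{course_id}/quizzes/{quiz_id}/submissions"
    resp = requests.get(url, headers=HEADERS)
    resp.raise_for_status()
    subs = resp.json()["quiz_submissions"]
    return {s["user_id"]: s["score"] for s in subs}

def issue_badge(student_id, badge_name):
    """Hypothetical placeholder for the badge-issuing step
    (e.g. via a badging service integrated with the LMS)."""
    print(f"Issuing badge '{badge_name}' to student {student_id}")

def credential_quiz(course_id, quiz_id, badge_name, threshold=0.8, max_score=10.0):
    """Award a skill-specific badge to every student at or above the threshold."""
    for student_id, score in quiz_scores(course_id, quiz_id).items():
        if score is not None and score / max_score >= threshold:
            issue_badge(student_id, badge_name)

# credential_quiz(course_id=1234, quiz_id=5678, badge_name="Loops and Iteration")
```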
|
6 |
Practical Deep Learning: Utilization of Selective Transfer Learning for Biomedical Applications
Salem, Milad, 01 January 2022
In recent years, deep learning has risen in popularity due to its capability to learn from data and extract features from it automatically during training. This automatic feature extraction can be a useful tool in domains that otherwise require subject-matter experts to manually or algorithmically extract features from the data, such as the biomedical domain. However, automatic feature extraction requires a large amount of data, which in turn makes deep learning models data-hungry. This is a challenge for the adoption of deep learning in these domains, which often have small amounts of training data. In this work, deep learning is applied to biomedical and expert-based domains in a practical manner. Through selective transfer learning, knowledge learned from other related or unrelated datasets and tasks is transferred to the target domain, alleviating the problem of low training data. Transfer learning is studied as pre-trained model transfer or off-the-shelf feature-extractor transfer in expert-based domains such as drug discovery, electrocardiogram arrhythmia detection, and biometric recognition. The results demonstrate that deep learning's automatic feature extraction outperforms traditional expert-made features. Moreover, transfer learning stabilizes training when little data is available and enables the transfer of useful knowledge and patterns to the target domain, which results in better feature extraction. Having better features or higher performance in these domains can translate into real-world changes, ranging from finding a suitable drug candidate in a timely manner to avoiding the misdiagnosis of an electrocardiogram arrhythmia.
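A minimal sketch of the off-the-shelf feature-extractor style of transfer described above, using a torchvision ResNet as an example backbone; the choice of model, the two-class head, and the dummy batch are illustrative assumptions, not the dissertation's actual pipelines.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pre-trained on ImageNet and freeze it so it acts as a
# fixed, off-the-shelf feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final classification layer with a small head trained on the
# (small) target-domain dataset, e.g. a two-class biomedical task.
num_features = backbone.fc.in_features
backbone.fc = nn.Linear(num_features, 2)

# Only the new head's parameters are optimized.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```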
|
7 |
Improving Performance and Flexibility of Fabric-Attached Memory Systems
Kommareddy, Vamsee Reddy, 01 January 2021
As demands for memory-intensive applications continue to grow, the memory capacity of each computing node is expected to grow at a similar pace. In high-performance computing (HPC) systems, the memory capacity per compute node is sized for the most demanding application likely to run on the system, and hence the average capacity per node in future HPC systems is expected to grow significantly. However, diverse applications run on HPC systems with different memory requirements, and memory utilization can fluctuate widely from one application to another. Since memory modules are private to their computing node, a large percentage of the overall memory capacity will likely be underutilized, especially when there are many jobs with small memory footprints. Thus, as HPC systems move towards the exascale era, better utilization of memory is strongly desired. Moreover, as new memory technologies come on the market, the flexibility of upgrading memory and the system becomes a major concern, since memory modules are tightly coupled with the computing nodes. To address these issues, vendors are exploring fabric-attached memory (FAM) systems. In this type of system, resources are decoupled and maintained independently. Such a design has driven technology providers to develop new protocols, such as cache-coherent interconnects and memory-semantic fabrics, to connect various discrete resources and help users leverage advances in memory technologies to satisfy growing memory and storage demands. Using these new protocols, FAM can be directly attached to a system interconnect and easily integrated with a variety of processing elements (PEs). Moreover, systems that support FAM can be smoothly upgraded and allow multiple PEs to share the FAM memory pools using well-defined protocols. The sharing of FAM between PEs allows efficient data sharing, improves memory utilization, reduces cost by allowing flexible integration of different PEs and memory modules from several vendors, and makes it easier to upgrade the system.
However, adopting FAM in HPC systems brings new challenges. Since memory is disaggregated and accessed through fabric networks, memory access latency (efficiency) is a crucial concern. In addition, quality of service, security from neighbor nodes, coherency, and the address translation overhead of accessing FAM are some of the problems that require rethinking for FAM systems. To this end, we study and discuss various challenges that need to be addressed in FAM systems. First, we developed a simulation environment to model and analyze FAM systems. We then showcase our work addressing these challenges to improve the performance and feasibility of such systems: enforcing quality of service, providing page migration support, and enhancing security against malicious neighbor nodes.
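A minimal sketch of a hot-page migration policy of the general kind a FAM system might use, in which pages whose remote-access count crosses a threshold are promoted from fabric-attached memory to node-local memory; the threshold, counters, and capacity limit are illustrative assumptions, not the dissertation's mechanism.

```python
from collections import Counter

class PageMigrationPolicy:
    """Track accesses to fabric-attached memory (FAM) pages and promote
    frequently accessed ("hot") pages into node-local memory."""

    def __init__(self, hot_threshold=64, local_capacity_pages=1024):
        self.hot_threshold = hot_threshold
        self.local_capacity = local_capacity_pages
        self.access_counts = Counter()   # page_id -> remote access count
        self.local_pages = set()         # pages already migrated locally

    def on_access(self, page_id):
        """Record one access; return True if the page should be migrated."""
        if page_id in self.local_pages:
            return False                 # already local, no migration needed
        self.access_counts[page_id] += 1
        if (self.access_counts[page_id] >= self.hot_threshold
                and len(self.local_pages) < self.local_capacity):
            self.local_pages.add(page_id)
            self.access_counts.pop(page_id)
            return True                  # caller performs the actual copy
        return False

# Example: the 64th remote access to page 42 triggers a migration.
policy = PageMigrationPolicy(hot_threshold=64)
migrated = [p for p in [42] * 64 if policy.on_access(p)]
print(migrated)   # [42]
```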
|
8 |
Robust Acceleration of Data-Centric Applications using Resistive Computing Systems
Zhang, Baogang, 01 January 2021
With accessible data reaching the zettabyte level, CMOS technology is reaching its limits for data-hungry applications, and recent studies suggest that Moore's law is nearing its end. At the same time, the von Neumann architecture is approaching a bottleneck due to data movement between the computing and memory units. With data movement and power budgets becoming the limiting factors of today's computing systems, in-memory computing using emerging non-volatile resistive devices has attracted an increasing amount of attention. A non-volatile resistive device may be realized using memristors, resistive random access memory (ReRAM), phase change memory (PCM), or spin-transfer torque magnetic random access memory (STT-MRAM). Resistive devices integrated into crossbar arrays simultaneously support both dense storage and energy-efficient analog computation, which is highly desirable for processing big data on both low-power mobile devices and high-performance computing (HPC) systems. However, analog computation is vulnerable and may suffer from robustness issues due to variations such as array parasitics, device defects, non-ideal device characteristics, and various sources of error. These non-ideal factors directly impact the computational accuracy of the in-memory computation and thereby the functional correctness at the application level. This dissertation focuses on improving the robustness and reliability of analog in-memory computing. Three directions are explored: data layout organization, software and hardware co-design, and hardware redundancy. Data layout organization improves robustness by arranging data on the hardware according to the behavior of defective devices. Software and hardware co-design mitigates the impact by modifying the data in neural network or image compression applications so that it becomes amenable to device defects and to the chosen data layouts. Hardware redundancy uses multiple resistive devices to represent each data value; each device can be programmed to a different level so that the value is realized accurately with low overhead.
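A minimal sketch of the analog matrix-vector multiplication a resistive crossbar performs, with Gaussian conductance variation and simple multi-device redundancy as one illustrative mitigation; the noise model and averaging scheme are assumptions for illustration, not the dissertation's techniques.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_mvm(weights, x, variation=0.05, copies=1):
    """Simulate y = W @ x on a resistive crossbar.

    Each weight is stored as a device conductance that deviates from its
    programmed value by multiplicative Gaussian noise. With copies > 1,
    each weight is stored on several devices and their contributions are
    averaged, reducing the effective variation.
    """
    noisy = np.stack([
        weights * (1 + variation * rng.standard_normal(weights.shape))
        for _ in range(copies)
    ])
    effective = noisy.mean(axis=0)      # redundancy: average over copies
    return effective @ x                # column currents sum the products

W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
ideal = W @ x

# More redundant copies shrink the worst-case error of the analog result.
for copies in (1, 4, 16):
    y = crossbar_mvm(W, x, variation=0.05, copies=copies)
    print(copies, np.abs(y - ideal).max())
```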
|
9 |
Performance Isolation in Cloud Storage Systems
Singh, Akshay K., 09 1900
Cloud computing enables data centres to provide resource sharing across multiple tenants. This sharing, however, usually comes at a cost in the form of reduced isolation between tenants, which can lead to inconsistent and unpredictable performance. This variability in performance becomes an impediment for clients whose services rely on consistent, responsive performance in cloud environments. The problem is exacerbated for applications that rely on cloud storage systems, as performance in these systems is affected by disk access times, which often dominate overall request service times for these types of data services.
In this thesis we introduce MicroFuge, a new distributed caching and scheduling middleware that provides performance isolation for cloud storage systems. To provide performance isolation, MicroFuge's cache eviction policy is tenant- and deadline-aware: it provides isolation to tenants and ensures that data for queries with more urgent deadlines, which are most likely to be affected by competing requests, are less likely to be evicted than data for other queries. MicroFuge also provides simplified, intelligent scheduling as well as request admission control that uses a performance model of the underlying storage system to reject requests whose deadlines are unlikely to be satisfied.
The middleware approach of MicroFuge makes it unique among systems that provide performance isolation in cloud storage systems. Rather than providing performance isolation for one particular cloud storage system, MicroFuge can be deployed on top of any already deployed storage system without modifying it. Given the wide spectrum of cloud storage systems available today, such an approach makes MicroFuge easy to adopt.
In this thesis, we show that MicroFuge can provide significantly better performance isolation between tenants with different latency requirements than Memcached, and, with admission control enabled, can ensure that more than a certain percentage of requests meet their deadlines.
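A minimal sketch of a deadline-aware eviction choice in the spirit described above, preferring to evict entries whose pending queries have the most slack; the slack metric and entry layout are illustrative assumptions, not MicroFuge's actual policy.

```python
import time

def choose_victim(cache_entries, now=None):
    """Return the key of the entry to evict.

    cache_entries: dict mapping key -> {'tenant': str, 'deadline': float}
    Entries whose associated query deadline is farthest in the future
    (largest slack) are evicted first, since requests with urgent
    deadlines benefit most from staying cached.
    """
    now = time.time() if now is None else now
    return max(cache_entries,
               key=lambda k: cache_entries[k]['deadline'] - now)

# Example: the entry with the most slack (tenant B) is chosen for eviction.
now = 1000.0
entries = {
    'row:17': {'tenant': 'A', 'deadline': now + 0.05},   # urgent query
    'row:99': {'tenant': 'B', 'deadline': now + 2.00},   # relaxed deadline
}
print(choose_victim(entries, now=now))   # 'row:99'
```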
|