111 |
Energy Efficient Spintronic Device for Neuromorphic Computation. Azam, Md Ali, 01 January 2019.
Future computing will require significant development of new computing device paradigms. This is motivated by CMOS devices reaching their technological limits, the need for non-von Neumann architectures, and the energy constraints of wearable technologies and embedded processors. The first device proposal, an energy-efficient voltage-controlled domain wall device for implementing an artificial neuron and synapse, is analyzed using micromagnetic modeling. By controlling the domain wall motion with spin-transfer or spin-orbit torques, in association with voltage-generated strain control of perpendicular magnetic anisotropy in the presence of the Dzyaloshinskii-Moriya interaction (DMI), different positions of the domain wall are realized in the free layer of a magnetic tunnel junction to program different synaptic weights. Additionally, an artificial neuron can be realized by combining this domain wall device with a CMOS buffer. The second neuromorphic device proposal is inspired by the brain. The membrane potentials of many neurons oscillate in a subthreshold damped fashion and fire when excited by an input frequency that nearly equals their eigenfrequency. We investigate a theoretical implementation of such “resonate-and-fire” neurons by utilizing the magnetization dynamics of a magnetic tunnel junction (MTJ) whose free layer hosts a fixed magnetic skyrmion. Voltage control of magnetic anisotropy or voltage-generated strain expands and shrinks the skyrmion core, mimicking the subthreshold oscillation. Finally, we show that such resonate-and-fire neurons have potential application in associative memory arrays based on coupled nanomagnetic oscillators.
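As an illustrative aside, the subthreshold dynamics described above can be caricatured by a damped harmonic oscillator that fires only when the drive frequency is close to its eigenfrequency. The sketch below is a generic toy model in normalized units, not the micromagnetic skyrmion simulation used in this work; the eigenfrequency, damping, drive amplitude, and threshold are all hypothetical.

```python
import numpy as np

def resonate_and_fire(drive_freq, f0=1.0, damping=0.5, amp=10.0,
                      threshold=1.0, t_end=20.0, dt=1e-3):
    """Toy resonate-and-fire neuron: a damped oscillator driven at drive_freq.

    The state crosses the firing threshold (and is reset) only when the
    drive is close to the eigenfrequency f0, mimicking subthreshold
    damped oscillation.  Units are normalized and purely illustrative.
    """
    w0, w = 2 * np.pi * f0, 2 * np.pi * drive_freq
    x, v, spikes = 0.0, 0.0, []
    for step in range(int(t_end / dt)):
        t = step * dt
        a = amp * np.sin(w * t) - 2 * damping * v - w0 ** 2 * x
        v += a * dt                      # semi-implicit Euler step
        x += v * dt
        if x > threshold:
            spikes.append(t)             # fire ...
            x, v = 0.0, 0.0              # ... and reset
    return spikes

print(len(resonate_and_fire(1.0)))   # drive at the eigenfrequency -> spikes
print(len(resonate_and_fire(0.3)))   # drive far off resonance -> no spikes
```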
|
112 |
Memory-Aware Scheduling for Fixed Priority Hard Real-Time Computing Systems. Chaparro-Baquero, Gustavo A., 21 March 2018.
As a major component of a computing system, memory has been a key performance and power-consumption bottleneck in computer system design. While processor speeds have kept rising dramatically, the overall performance improvement of the entire system is limited by how fast the memory can feed instructions and data to the processing units (the so-called memory wall problem). The increasing transistor density and surging access demands from a rapidly growing number of processing cores have also significantly elevated the power consumption of the memory system. In addition, interference among memory accesses from different applications and processing cores significantly degrades computation predictability, which is essential for meeting timing specifications in real-time system design. Recent IC technologies (such as 3D-IC technology) and emerging data-intensive real-time applications (such as virtual and augmented reality, artificial intelligence, and the Internet of Things) further amplify these challenges. We believe that adopting a joint CPU/memory resource management framework is not simply desirable but necessary to deal with these challenges.
In this dissertation, we focus on how to schedule fixed-priority hard real-time tasks with memory impacts taken into consideration. We target the fixed-priority scheduling scheme since it is one of the most commonly used strategies in practical real-time applications. Specifically, we first develop an approach that considers not only how execution times vary with cache allocation but also the relationships among task periods, and show that it significantly improves system feasibility. We then study how to guarantee timing constraints for hard real-time systems under CPU and memory thermal constraints. We first address an architecture model with a single core and its main memory individually packaged: we develop a thermal model that captures the thermal interaction between the processor and the memory, and incorporate the periodic resource server model into our scheduling framework to guarantee both the timing and thermal constraints. We further extend our research to multi-core architectures with processing cores and memory devices integrated into a single 3D platform. To the best of our knowledge, this is the first research that can guarantee hard deadline constraints for real-time tasks under temperature constraints for both processing cores and memory devices. Extensive simulation results demonstrate that our proposed scheduling significantly improves the feasibility of hard real-time systems under thermal constraints.
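For background, the classical feasibility test for fixed-priority preemptive tasks is response-time analysis, which iterates R_i = C_i + sum over higher-priority tasks of ceil(R_i / T_j) * C_j until it converges or exceeds the deadline. The sketch below shows only this baseline analysis; it ignores the cache, memory, and thermal effects that the dissertation incorporates, and the task parameters are hypothetical.

```python
import math

def response_time(i, tasks):
    """Worst-case response time of tasks[i] under fixed-priority preemptive
    scheduling.  tasks is sorted by decreasing priority; each entry is
    (C, T, D) = (worst-case execution time, period, relative deadline)."""
    C, T, D = tasks[i]
    R = C
    while True:
        interference = sum(math.ceil(R / tasks[j][1]) * tasks[j][0]
                           for j in range(i))          # higher-priority tasks
        R_next = C + interference
        if R_next == R:
            return R                                   # converged
        if R_next > D:
            return None                                # deadline miss
        R = R_next

# Hypothetical task set, highest priority first: (C, T, D).
tasks = [(1, 4, 4), (2, 6, 6), (3, 12, 12)]
for i, (_, _, D) in enumerate(tasks):
    print(f"task {i}: response time = {response_time(i, tasks)}, deadline = {D}")
```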
|
113 |
Cyber Profiling for Insider Threat Detection. Udoeyop, Akaninyene Walter, 01 August 2010.
Cyber attacks against companies and organizations can result in high-impact losses that include damaged credibility, exposed vulnerabilities, and financial losses. Until the 21st century, insiders were often overlooked as suspects in these attacks. The 2010 CERT Cyber Security Watch Survey attributes 26 percent of cyber crimes to insiders. Numerous real insider attack scenarios suggest that during, or directly before, the attack the insider begins to behave abnormally. We introduce a method to detect abnormal behavior by profiling users. We use the k-means and kernel density estimation algorithms to learn a user’s normal behavior and establish normal user profiles from behavioral data. We then compare user behavior against the normal profiles to identify abnormal patterns of behavior.
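A minimal sketch of this profiling idea is shown below, assuming each observation is already reduced to a numeric behavioral feature vector (the actual features, cluster count, bandwidth, and threshold used in the thesis are not reproduced here): k-means summarizes the user's normal behavior, kernel density estimation models how likely new behavior is under that profile, and low-likelihood observations are flagged as abnormal.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Hypothetical standardized behavioral features for one user
# (e.g., logins per day, bytes transferred); real data would come from logs.
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

# Step 1: k-means gives a compact summary (cluster centers) of normal behavior.
profile = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normal)
print("profile centers:\n", profile.cluster_centers_)

# Step 2: kernel density estimation scores how likely an observation is
# under the user's normal profile.
kde = KernelDensity(bandwidth=0.5).fit(normal)

def is_abnormal(x, log_density_threshold=-6.0):
    """Flag behavior whose log-likelihood under the normal profile is low.
    The threshold is illustrative; in practice it would be calibrated."""
    return kde.score_samples(np.atleast_2d(x))[0] < log_density_threshold

print(is_abnormal([0.2, -0.1]))   # typical behavior    -> False
print(is_abnormal([8.0, 9.0]))    # far from the profile -> True
```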
|
115 |
Bidirectional LAO* Algorithm (A Faster Approach to Solve Goal-Directed MDPs). Bhuma, Venkata Deepti Kiran, 01 January 2004.
Uncertainty is a feature of many AI applications. While there are polynomial-time algorithms for planning in stochastic systems, planning is still slow, in part because most algorithms plan for all eventualities. Algorithms such as LAO* are able to find good or optimal policies more quickly when the starting state of the system is known.
In this thesis we present BLAO*, an extension of the LAO* algorithm to bidirectional search. We show that BLAO* finds optimal or ε-optimal solutions for goal-directed MDPs without necessarily evaluating the entire state space. BLAO* converges much faster than LAO* or RTDP on our benchmarks.
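For context, the sketch below solves a tiny goal-directed MDP with plain value iteration, which evaluates every state. LAO* and BLAO* gain their speed precisely by avoiding this: they use a heuristic and the known start (and, for BLAO*, goal) state to expand only the relevant part of the state space. The MDP itself is hypothetical.

```python
def value_iteration(states, actions, P, cost, goal, eps=1e-6):
    """Plain value iteration for a goal-directed MDP (stochastic shortest path).
    P[s][a] is a list of (next_state, probability); cost[s][a] is the
    immediate cost; the goal state is absorbing with value 0."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s == goal:
                continue
            best = min(cost[s][a] + sum(p * V[s2] for s2, p in P[s][a])
                       for a in actions[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Tiny hypothetical chain MDP: "go" moves one step toward the goal with
# probability 0.8 and slips (stays put) with probability 0.2.
states, goal = [0, 1, 2, 3], 3
actions = {s: ["go"] for s in [0, 1, 2]}
P = {s: {"go": [(s + 1, 0.8), (s, 0.2)]} for s in [0, 1, 2]}
cost = {s: {"go": 1.0} for s in [0, 1, 2]}

print(value_iteration(states, actions, P, cost, goal))
# expected cost-to-goal is roughly {0: 3.75, 1: 2.5, 2: 1.25, 3: 0.0}
```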
|
116 |
Power-Efficient and Low-Latency Memory Access for CMP Systems with Heterogeneous Scratchpad On-Chip Memory. Chen, Zhi, 01 January 2013.
The gradually widening speed disparity between CPU and memory has become an overwhelming bottleneck in the development of Chip Multiprocessor (CMP) systems. In addition, increasing penalties caused by frequent on-chip memory accesses have raised critical challenges in delivering high memory access performance within tight power and latency budgets. To overcome the daunting memory wall and energy wall issues, this thesis proposes a new heterogeneous scratchpad memory architecture composed of SRAM, MRAM, and Z-RAM. Based on this architecture, we propose two data allocation algorithms, one based on dynamic programming and one on a genetic algorithm, that assign data to the different memory units and thereby reduce memory access cost in terms of power consumption and latency. Extensive experiments demonstrate the merits of the heterogeneous scratchpad architecture over the traditional pure memory system and the effectiveness of the proposed algorithms.
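As a simplified illustration of the dynamic-programming allocation idea (not the dissertation's actual formulation), the sketch below places data blocks into SRAM or MRAM under capacity limits, with main memory as the fallback, so as to minimize total access cost; Z-RAM is omitted for brevity and all sizes, access counts, and per-access costs are hypothetical.

```python
from functools import lru_cache

# Hypothetical data blocks: (size in KB, number of accesses).
blocks = [(4, 900), (8, 400), (2, 1200), (16, 150)]

# Hypothetical per-access cost (combined energy/latency score) of each memory.
COST = {"SRAM": 1.0, "MRAM": 2.5, "MAIN": 10.0}
CAPACITY = {"SRAM": 8, "MRAM": 16}            # KB; main memory is unbounded

@lru_cache(maxsize=None)
def best_cost(i, sram_left, mram_left):
    """Minimum total access cost for blocks[i:] given the remaining capacities."""
    if i == len(blocks):
        return 0.0
    size, accesses = blocks[i]
    # Option 1: leave the block in main memory (always possible).
    choices = [accesses * COST["MAIN"] + best_cost(i + 1, sram_left, mram_left)]
    # Options 2 and 3: place it in SRAM or MRAM if it still fits.
    if size <= sram_left:
        choices.append(accesses * COST["SRAM"]
                       + best_cost(i + 1, sram_left - size, mram_left))
    if size <= mram_left:
        choices.append(accesses * COST["MRAM"]
                       + best_cost(i + 1, sram_left, mram_left - size))
    return min(choices)

print(best_cost(0, CAPACITY["SRAM"], CAPACITY["MRAM"]))   # 4600.0 for this input
```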
|
117 |
MiSFIT: Mining Software Fault Information and Types. Kidwell, Billy R., 01 January 2015.
As software becomes more important to society, the number, age, and complexity of systems grow. Software organizations require continuous process improvement to maintain the reliability, security, and quality of these software systems. Software organizations can use data from manual fault classification to meet their process improvement needs, but many organizations lack the expertise or resources to implement such classification correctly.
This dissertation addresses the need to automate software fault classification. Validation results show that automated fault classification, as implemented in the MiSFIT tool, can group faults of a similar nature. The resulting classifications show good agreement for common software faults while requiring no manual effort.
To evaluate the method and tool, I develop and apply an extended change taxonomy to classify the source code changes that repaired software faults from an open source project. MiSFIT clusters the faults based on the changes. I manually inspect a random sample of faults from each cluster to validate the results. The automatically classified faults are used to analyze the evolution of a software application over seven major releases. The contributions of this dissertation are an extended change taxonomy for software fault analysis, a method to cluster faults by the syntax of the repair, empirical evidence that fault distribution varies according to the purpose of the module, and the identification of project-specific trends from the analysis of the changes.
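As a rough analogue of clustering faults by the syntax of their repairs, one could vectorize a textual summary of each repairing change and cluster the results. The thesis builds on an extended change taxonomy rather than the bag-of-tokens features shown below, so this sketch, including its change summaries and cluster count, is purely hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Hypothetical one-line summaries of fault-repairing changes.
changes = [
    "add null check before dereference",
    "add missing null pointer check",
    "fix off by one error in loop bound",
    "correct loop index off by one",
    "synchronize access to shared counter",
    "add lock around shared map update",
]

# Bag-of-tokens features over the change text, standing in for real
# syntactic features extracted from the repair diff.
X = TfidfVectorizer().fit_transform(changes).toarray()

labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
for label, text in sorted(zip(labels, changes)):
    print(label, text)
```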
|
118 |
Design and Development of Geographical Information System (GIS) Map for Nuclear Waste Streams. Appunni, Sandhya, 21 November 2014.
A nuclear waste stream is the complete flow of waste material from origin to treatment facility to final disposal. The objective of this study was to design and develop a Geographic Information System (GIS) module, using the Google Application Programming Interface (API), for better visualization of nuclear waste streams; the module identifies and displays various nuclear waste stream parameters. A proper display of these parameters would enable managers at Department of Energy waste sites to visualize the information needed for proper planning of waste transport. The study also developed an algorithm using quadratic Bézier curves to make the map more understandable and usable. Microsoft Visual Studio 2012 and Microsoft SQL Server 2012 were used to implement the project. The study has shown that the combination of several technologies can successfully provide dynamic mapping functionality. Future work should explore additional Google Maps API functionality to further enhance the visualization of nuclear waste streams.
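For reference, a quadratic Bézier curve between endpoints P0 and P2 with control point P1 is B(t) = (1-t)^2·P0 + 2(1-t)t·P1 + t^2·P2 for t in [0, 1]. The sketch below simply samples such a curve between two hypothetical coordinates; the study's actual implementation draws these curves through the Google Maps API rather than computing raw points like this.

```python
def quadratic_bezier(p0, p1, p2, samples=20):
    """Sample points on the quadratic Bezier curve
    B(t) = (1-t)^2 * p0 + 2(1-t)*t * p1 + t^2 * p2,  0 <= t <= 1."""
    points = []
    for i in range(samples + 1):
        t = i / samples
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        points.append((x, y))
    return points

# Hypothetical origin, control, and destination coordinates (lat, lng).
origin, control, destination = (33.9, -81.0), (35.5, -84.0), (36.0, -87.0)
curve = quadratic_bezier(origin, control, destination)
print(curve[0], curve[-1])   # endpoints coincide with origin and destination
```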
|
119 |
Scheduling Medical Application Workloads on Virtualized Computing Systems. Delgado, Javier, 30 March 2012.
This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of “cloud computing” services. In the context of provisioning resources for medical applications, these environments allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that abstract away infrastructure-related issues can be built on top of such environments. For example, a medical imaging service can allow medical professionals to process their data in the cloud, freeing them from the burden of deploying and managing these resources themselves.
In this work, we focus on issues related to scheduling scientific workloads on virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; a performance prediction methodology applicable to the target environment; and a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality-of-service guarantees.
In the process of addressing these pertinent issues for our target user base (i.e. medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia.
Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, tested with medical image processing workloads, is compared to two baseline scheduling solutions; it outperforms them by 20-30% in terms of both the number of jobs processed and resource utilization, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
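To illustrate how prediction accuracy of this kind can feed into scheduling decisions, the short sketch below computes the average relative prediction error over a history of jobs and then admits a new job only if its predicted completion time, padded by that error, still meets the deadline. This is a hypothetical illustration, not the scheduling algorithm evaluated in the dissertation, and all runtimes are made up.

```python
def average_relative_error(predicted, actual):
    """Mean of |predicted - actual| / actual (0.15 would mean 15% average error)."""
    return sum(abs(p - a) / a for p, a in zip(predicted, actual)) / len(actual)

def admit(predicted_runtime, queue_backlog, deadline, error_margin):
    """Admit a job only if its padded predicted completion time meets the deadline."""
    padded_runtime = predicted_runtime * (1.0 + error_margin)
    return queue_backlog + padded_runtime <= deadline

# Hypothetical history of predicted vs. actual runtimes (minutes).
predicted = [10.0, 22.0, 31.0, 8.0]
actual = [12.0, 20.0, 35.0, 7.0]
margin = average_relative_error(predicted, actual)
print(f"average relative error: {margin:.0%}")
print(admit(predicted_runtime=30.0, queue_backlog=15.0,
            deadline=60.0, error_margin=margin))
```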
|
120 |
Design and Development of a Comprehensive and Interactive Diabetic Parameter Monitoring System - BeticTrack. Chowdhury, Nusrat, 01 December 2019.
A novel, interactive Android app has been developed that monitors the health of type 2 diabetic patients in real time, providing patients and their physicians with continuous feedback on all relevant parameters of diabetes. The app includes modules for recording carbohydrate intake and blood glucose; for reminding patients to take medications on schedule; and for tracking physical activity, using movement data received via Bluetooth from a pair of wearable insole devices. Two machine learning models were developed to detect seven physical activities: sitting, standing, walking, running, stair ascent, stair descent, and use of elliptical trainers. The SVM and decision tree models achieved an average accuracy of 85% across these seven activities. The decision tree model is implemented in an app that classifies human activity in real time.
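A minimal sketch of the two-model comparison is shown below, using synthetic feature windows in place of the insole movement data; the feature construction, hyperparameters, and class separability here are placeholders rather than the app's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
ACTIVITIES = ["sit", "stand", "walk", "run", "stair_up", "stair_down", "elliptical"]

# Synthetic stand-in for per-window motion features (e.g., mean/variance of
# acceleration, step rate); each activity is given its own cluster of points.
X = np.vstack([rng.normal(loc=i, scale=0.4, size=(100, 4))
               for i in range(len(ACTIVITIES))])
y = np.repeat(np.arange(len(ACTIVITIES)), 100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf").fit(X_train, y_train)
tree = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_train, y_train)

print("SVM accuracy: ", svm.score(X_test, y_test))
print("tree accuracy:", tree.score(X_test, y_test))
print("first test window classified as:", ACTIVITIES[int(tree.predict(X_test[:1])[0])])
```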
|