11

EPICCONFIGURATOR COMPUTER CONFIGURATOR AND CMS PLATFORM

TANTAMANGO, IVO A 01 June 2018 (has links)
When shopping for new IT equipment in an online store, buyers frequently discover that certain parts of an order are incompatible with others, or that one part requires additional components. On the other side of the same process, a store owner who wants to manage the products in stock, assign prices, set ordering conditions, or maintain customer information must often pull that information from different systems, physical files, or other sources. EpicConfigurator addresses both of these problems. It makes it easy for User Customers to configure computer products by streamlining product selection: it actively gathers customer requirements and maps them to a set of product and service options, guiding users toward a solution that fits their needs. EpicConfigurator also lets User Customers keep track of and edit saved product configurations. The system includes an administrator perspective that lets Store Owners act as User Admins, helping them load and manage new products, set configuration rules for products, and manage all users. Built on open source technologies, EpicConfigurator is easy to enhance, expand, and integrate with newer technologies. It is a configurator tool and does not provide any purchasing feature; to purchase, the configuration results should be given to the local reseller or sales representative to obtain an official quote.
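A minimal sketch of the rule-based compatibility checking a configurator like this performs; the part names, rule structure, and function names below are illustrative assumptions, not taken from EpicConfigurator itself:

```python
# Hypothetical sketch of rule-based compatibility checking for a product
# configurator. Part names and rules are invented for illustration.
from dataclasses import dataclass


@dataclass
class Rule:
    kind: str    # "requires" or "conflicts"
    part: str    # part the rule applies to
    other: str   # related part or category


RULES = [
    Rule("requires", "cpu:i7-8700K", "cooler"),
    Rule("conflicts", "gpu:full-length", "case:mini-itx"),
]


def check_configuration(selected_parts, selected_categories):
    """Return a list of human-readable problems with the current selection."""
    problems = []
    for rule in RULES:
        if rule.part not in selected_parts:
            continue
        if rule.kind == "requires" and rule.other not in selected_categories:
            problems.append(f"{rule.part} requires an additional {rule.other}")
        if rule.kind == "conflicts" and rule.other in selected_parts:
            problems.append(f"{rule.part} is not compatible with {rule.other}")
    return problems


print(check_configuration({"cpu:i7-8700K"}, {"cpu"}))
# ['cpu:i7-8700K requires an additional cooler']
```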
12

Analyzing Spark Performance on Spot Instances

Tian, Jiannan 27 October 2017 (has links)
Amazon Spot Instances provide inexpensive capacity for high-performance computing. By bidding on spare Amazon Elastic Compute Cloud (Amazon EC2) instances, it is possible to obtain discounts of up to 90%. In exchange for the low cost, spot instances reduce the reliability of the computing environment: an instance can be revoked abruptly by the provider in response to supply and demand, with higher-priority customers served first. To achieve high performance on instances with compromised reliability, Spark is used to run the jobs. In this thesis, a wide set of Spark experiments are conducted to study its performance on spot instances. Without stateful replication, Spark suffers from cascading rollback and is forced to regenerate lost state repeatedly. This downside leads to a discussion of the trade-off between comparatively slow checkpointing and regeneration on rollback, and motivates applying multiple fault tolerance schemes. Spark is shown to finish a job only under a suitable revocation rate. To validate and evaluate this work, a prototype and a simulator were designed and implemented, and based on real historical price records we studied how various checkpoint write frequencies and bid levels affect performance. In a case study, experiments show that the presented techniques can lead to ~20% shorter completion time and ~25% lower costs than cases without such techniques. Compared with running jobs on full-price instances, the absolute cost saving can be ~70%.
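The checkpointing trade-off can be illustrated with a toy simulation; the revocation rate, checkpoint cost, and job size below are invented numbers, and this is not the thesis simulator, only a sketch of the idea:

```python
# Illustrative simulation: a revocation rolls the job back to the last
# durable state; checkpoints bound the rollback but add write cost.
import random


def run_job(total_work, revoke_prob_per_step, checkpoint_interval, checkpoint_cost):
    """Return simulated wall-clock steps to finish `total_work` units of work."""
    done, saved, elapsed = 0, 0, 0
    while done < total_work:
        elapsed += 1
        if random.random() < revoke_prob_per_step:
            done = saved                      # roll back to last checkpoint (or start)
            continue
        done += 1
        if checkpoint_interval and done % checkpoint_interval == 0:
            elapsed += checkpoint_cost        # pay the checkpoint write cost
            saved = done
    return elapsed


random.seed(0)
trials = 200
no_ckpt = sum(run_job(100, 0.02, None, 0) for _ in range(trials)) / trials
ckpt = sum(run_job(100, 0.02, 20, 2) for _ in range(trials)) / trials
print(f"avg steps without checkpoints: {no_ckpt:.1f}, with checkpoints: {ckpt:.1f}")
```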
13

Detecting Netflix Service Outages Through Analysis of Twitter Posts

Cushing, Cailin 01 June 2012 (has links) (PDF)
Every week there are over a billion new posts to Twitter services and many of those messages contain feedback to companies about their services. One company that has recognized this unused source of information is Netflix. That is why Netflix initiated the development of a system that will let them respond to the millions of Twitter and Netflix users that are acting as sensors and reporting all types of user visible outages. This system will enhance the feedback loop between Netflix and its customers by increasing the amount of customer feedback that is being received by Netflix and reducing the time it takes for Netflix to receive the reports and respond to them. The goal of the SPOONS (Swift Perceptions of Online Negative Situations) system is to use Twitter posts to determine when Netflix users are reporting a problem with any of the Netflix services. This work covers a subset of the methods implemented in the SPOONS system. The volume methods detect outages through time series analysis of the volume of a subset of the tweets that contain the word “netflix”. The sentiment methods first process the tweets and extract a sentiment rating which is then used to create a time series. Both time series are monitored for significant increases in volume or negative sentiment, which indicate that there is currently an outage in a Netflix service. This work contributes: the implementation and evaluation of 8 outage detection methods; 256 sentiment estimation procedures and an evaluation of each; and evaluations and discussions of the real time applicability of the system. It also provides explanations for each aspect of the implementation, evaluations, and conclusions so future companies and researchers will be able to more quickly create detection systems that are applicable to their specific needs.
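A minimal sketch of the volume-based detection idea, flagging time buckets whose tweet volume rises far above a trailing baseline; the window size, threshold, and data are illustrative assumptions, not the SPOONS implementation:

```python
# Flag minutes whose tweet volume is anomalously high relative to the
# preceding window. Window and z-threshold are illustrative defaults.
from statistics import mean, stdev


def detect_spikes(counts_per_minute, window=30, z_threshold=3.0):
    """Yield indices of minutes whose tweet volume looks anomalously high."""
    for i in range(window, len(counts_per_minute)):
        history = counts_per_minute[i - window:i]
        mu, sigma = mean(history), stdev(history)
        # Floor sigma at 1 tweet so a perfectly flat history still allows detection.
        if counts_per_minute[i] > mu + z_threshold * max(sigma, 1.0):
            yield i


# Synthetic example: steady chatter, then a burst of "netflix is down" tweets.
volume = [20] * 60 + [95, 110, 130] + [22] * 10
print(list(detect_spikes(volume)))   # -> [60, 61, 62]
```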
14

A Neuroimaging Web Interface for Data Acquisition, Processing and Visualization of Multimodal Brain Images

Lizarraga, Gabriel M 12 October 2018 (has links)
Structural and functional brain images are generated as essential modalities for medical experts to learn about the different functions of the brain. These images are typically visually inspected by experts. Many software packages are available to process medical images, but they are complex and difficult to use. The software packages are also hardware intensive. As a consequence, this dissertation proposes a novel Neuroimaging Web Services Interface (NWSI) as a series of processing pipelines for a common platform to store, process, visualize and share data. The NWSI system is made up of password-protected interconnected servers accessible through a web interface. The web interface driving the NWSI is based on Drupal, a popular open source content management system. Drupal provides a user-based platform in which the core code for the security and design tools is updated and patched frequently. New features can be added via modules, while keeping the core software secure and intact. The webserver architecture allows for the visualization of results and the downloading of tabulated data. Several forms are available to capture clinical data. The processing pipeline starts with a FreeSurfer (FS) reconstruction of T1-weighted MRI images. Subsequently, PET, DTI, and fMRI images can be uploaded. The webserver captures uploaded images and performs essential functionalities, while processing occurs in supporting servers. The computational platform is responsive and scalable. The current pipeline for PET processing calculates all regional Standardized Uptake Value ratios (SUVRs). The FS and SUVR calculations have been validated using Alzheimer's Disease Neuroimaging Initiative (ADNI) results posted at the Laboratory of Neuro Imaging (LONI). The NWSI system provides access to a calibration process through the centiloid scale, consolidating Florbetapir and Florbetaben tracers in amyloid PET images. The interface also offers onsite access to machine learning algorithms, and introduces new heat maps that augment expert visual rating of PET images. NWSI has been piloted using data and expertise from Mount Sinai Medical Center, the 1Florida Alzheimer’s Disease Research Center (ADRC), Baptist Health South Florida, Nicklaus Children's Hospital, and the University of Miami. All results were obtained using our processing servers in order to maintain data validity, consistency, and minimal processing bias.
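For readers unfamiliar with the SUVR step, the quantity is simply the ratio of mean tracer uptake in each target region to mean uptake in a reference region (often cerebellar grey for amyloid PET). The sketch below uses invented region names and values; the NWSI pipeline's actual atlas and reference choice may differ:

```python
# Regional SUVR: mean uptake per region divided by uptake in a reference region.
def regional_suvr(mean_uptake_by_region, reference_region="cerebellum"):
    """Return {region: SUVR} for every region except the reference."""
    reference = mean_uptake_by_region[reference_region]
    return {
        region: uptake / reference
        for region, uptake in mean_uptake_by_region.items()
        if region != reference_region
    }


uptake = {"cerebellum": 1.10, "precuneus": 1.65, "frontal": 1.43}
print(regional_suvr(uptake))   # {'precuneus': 1.5, 'frontal': 1.3}
```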
15

A Scalable Framework for Monte Carlo Simulation Using FPGA-based Hardware Accelerators with Application to SPECT Imaging

Kinsman, Phillip J. 04 1900 (has links)
As the number of transistors that are integrated onto a silicon die continues to increase, compute power is becoming a commodity. This has enabled a whole host of new applications that rely on high-throughput computations. Recently, the need for faster and cost-effective applications in form-factor constrained environments has driven an interest in on-chip acceleration of algorithms based on Monte Carlo simulations. Though Field Programmable Gate Arrays (FPGAs), with hundreds of on-chip arithmetic units, show significant promise for accelerating these embarrassingly parallel simulations, a challenge exists in sharing access to simulation data amongst many concurrent experiments. This thesis presents a compute architecture for accelerating Monte Carlo simulations based on the Network-on-Chip (NoC) paradigm for on-chip communication. We demonstrate, through the complete implementation of a Monte Carlo-based image reconstruction algorithm for Single-Photon Emission Computed Tomography (SPECT) imaging, that this complex problem can be accelerated by two orders of magnitude on even a modestly-sized FPGA over a 2GHz Intel Core 2 Duo processor. Furthermore, we have created a framework for further increasing parallelism by scaling our architecture across multiple compute devices and by extending our original design to a multi-FPGA system, achieving a nearly linear increase in acceleration with logic resources. / Master of Applied Science (MASc)
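The "embarrassingly parallel" structure being accelerated can be pictured in software as many independent experiments that all read the same shared simulation data. The sketch below reduces photon transport to a toy random walk with placeholder numbers; it illustrates the parallel structure only, not the thesis's SPECT model or hardware design:

```python
# Many independent Monte Carlo experiments sharing read-only data,
# run in parallel across worker processes.
import random
from multiprocessing import Pool

ATTENUATION = 0.05   # shared, read-only "simulation data" (toy constant)


def one_experiment(seed, steps=1000):
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(steps):
        if rng.random() < ATTENUATION:   # toy "photon absorbed" event
            absorbed += 1
    return absorbed / steps


if __name__ == "__main__":
    with Pool(processes=4) as pool:
        estimates = pool.map(one_experiment, range(64))
    print(f"mean absorption estimate: {sum(estimates) / len(estimates):.4f}")
```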
16

A REUSED DISTANCE BASED ANALYSIS AND OPTIMIZATION FOR GPU CACHE

Wang, Dongwei 01 January 2016 (has links)
As a throughput-oriented device, the Graphics Processing Unit (GPU) now integrates caches similar to those in CPU cores. However, applications in GPGPU computing exhibit distinct memory access patterns. The cache in GPU cores commonly suffers from thread contention and resource over-utilization, yet few detailed studies have examined the root of this phenomenon. In this work, we thoroughly analyze the memory accesses of twenty benchmarks based on reuse distance theory and quantify their patterns. Additionally, we discuss optimization suggestions and implement a Bypassing Aware (BA) cache that intelligently bypasses thrashing-prone candidates. The BA cache is a cost-efficient design with two extra bits in each line, which serve as flags to make the bypassing decision and to find the victim cache line. Experimental results show that the BA cache can improve system performance by around 20% and reduce the cache miss rate by around 11% compared with a traditional design.
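As a reminder of the underlying measure, the reuse distance of an access is the number of distinct addresses touched since the previous access to the same address (infinite for first-time accesses). A minimal, deliberately unoptimized sketch:

```python
# Reuse distance per access: distinct addresses seen since the previous
# access to the same address. O(N*M) for clarity, not efficiency.
def reuse_distances(trace):
    distances = []
    for i, addr in enumerate(trace):
        try:
            last = max(j for j in range(i) if trace[j] == addr)
        except ValueError:
            distances.append(float("inf"))        # first reference
            continue
        distances.append(len(set(trace[last + 1:i])))
    return distances


trace = ["a", "b", "c", "a", "b", "b"]
print(reuse_distances(trace))   # [inf, inf, inf, 2, 2, 0]
```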
17

Building Data Visualization Applications to Facilitate Vehicular Networking Research

Carter, Noah 01 May 2018 (has links)
A web app was developed which allows any internet-connected device to remotely monitor a roadway intersection’s state over HTTP. A mapping simulation was enhanced to allow researchers to retroactively track the location and the ad-hoc connectivity of vehicle clusters. A performance analysis was conducted on the network partitioning algorithm used. This work was completed within, and for the use of, ETSU’s Vehicular Networking Lab. It can serve as a basis for further development in the field of wireless automobile connectivity.
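The clustering idea behind the partitioning analysis can be sketched as grouping vehicles within radio range into connected components. The range, positions, and union-find approach below are illustrative assumptions, not the lab's actual algorithm:

```python
# Group vehicles into clusters of mutually reachable (multi-hop) nodes
# using a simple union-find over pairs within radio range.
from math import dist


def vehicle_clusters(positions, radio_range=100.0):
    ids = list(positions)
    parent = {v: v for v in ids}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if dist(positions[a], positions[b]) <= radio_range:
                parent[find(a)] = find(b)

    clusters = {}
    for v in ids:
        clusters.setdefault(find(v), []).append(v)
    return list(clusters.values())


cars = {"c1": (0, 0), "c2": (80, 0), "c3": (400, 0), "c4": (450, 30)}
print(vehicle_clusters(cars))   # [['c1', 'c2'], ['c3', 'c4']]
```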
18

AUTOMATIC GENERATION OF WEB APPLICATIONS AND MANAGEMENT SYSTEM

Zhou, Yu 01 March 2017 (has links)
One of the major difficulties in web application design is the tedium of constructing new web pages from scratch. In traditional web application projects, designers usually design and implement each project step by step, in detail. My project, “automatic generation of web applications and management system,” is a web application generator that can produce generic and customized web applications based on software engineering principles. A flow-driven methodology, expressed in Business Process Model and Notation (BPMN), drives the project. The modules of the project are: database, web server, HTML page, functionality, financial analysis model, customer, and BPMN. The BPMN module is the most important part of the project, because most of the work and data flow depends on its flow engine. There are two ways to use the system: one is to open the main page, choose a web app template, and click the generate button; the other is for customers to submit special orders, for which the project suggests suitable software development methodologies to follow. After a software development life cycle, customers receive their required product.
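A toy sketch of the template-driven generation idea, where a page description is filled into an HTML template and written out as a ready-to-serve file; the template, field names, and output layout are assumptions for illustration, not the project's actual generator:

```python
# Generate a simple HTML form page from a small page specification.
from string import Template

PAGE_TEMPLATE = Template("""<!DOCTYPE html>
<html>
  <head><title>$title</title></head>
  <body>
    <h1>$title</h1>
    <form action="$endpoint" method="post">
      $fields
      <button type="submit">Submit</button>
    </form>
  </body>
</html>""")


def generate_page(spec, out_path):
    fields = "\n      ".join(
        f'<label>{name}: <input name="{name}" type="{ftype}"></label>'
        for name, ftype in spec["fields"]
    )
    html = PAGE_TEMPLATE.substitute(
        title=spec["title"], endpoint=spec["endpoint"], fields=fields
    )
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(html)


generate_page(
    {"title": "Customer Order", "endpoint": "/orders",
     "fields": [("name", "text"), ("quantity", "number")]},
    "customer_order.html",
)
```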
19

PIPPIN MACHINE

Pamulaparthy, Kiran Reddy 01 March 2017 (has links)
The PIPPIN machine project comprises two simulators intended to help students understand how a simple computer compiles and executes programs. The first simulator takes a simple mathematical expression as input and translates it into assembly language. The second simulator executes an assembly language program.
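A minimal sketch of the first simulator's job, translating a one-operator expression into assembly-like instructions; the mnemonics follow the usual PIPPIN teaching convention, but the parser below is a simplified illustration, not the project's implementation:

```python
# Compile "dest = left op right" into three accumulator-style instructions.
OPCODES = {"+": "ADD", "-": "SUB", "*": "MUL", "/": "DIV"}


def translate(expression):
    dest, rhs = (part.strip() for part in expression.split("="))
    left, op, right = rhs.split()
    return [f"LOD {left}", f"{OPCODES[op]} {right}", f"STO {dest}"]


for line in translate("x = a + b"):
    print(line)
# LOD a
# ADD b
# STO x
```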
20

CHILDREN’S SOCIAL NETWORK: KIDS CLUB

Alrashoud, Eiman 01 June 2017 (has links)
Young children often have profound interests that, if nurtured, can develop into strong social cues and skills, improving their social lives. The internet also makes it convenient for parents to share information and jointly keep track of their children's participation in public activities. To facilitate shared activities among children, an interactive website is essential. The aim of my project is to develop a website that serves as an interactive platform for selecting from a variety of events. The website helps parents create, discover, and reach organized local events that fit their child's interests in description and age. A variety of events are available on the website for finding friends, sharing and learning new activities, and simply having fun. The website is implemented using Microsoft Visual Studio 2012 Professional, the C# programming language, and SQL Server Management Studio 2012 to handle the data.
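Although the project itself is built in C# with SQL Server, the event-matching idea it describes can be sketched in a language-neutral way; the field names and sample data below are invented for illustration:

```python
# Recommend local events whose age range and topic fit a child's profile.
def matching_events(events, child_age, child_interests):
    return [
        event for event in events
        if event["min_age"] <= child_age <= event["max_age"]
        and event["topic"] in child_interests
    ]


events = [
    {"title": "Lego Robotics Club", "topic": "robotics", "min_age": 7, "max_age": 12},
    {"title": "Watercolor Workshop", "topic": "art", "min_age": 5, "max_age": 10},
    {"title": "Teen Coding Camp", "topic": "coding", "min_age": 13, "max_age": 17},
]
print([e["title"] for e in matching_events(events, 8, {"robotics", "art"})])
# ['Lego Robotics Club', 'Watercolor Workshop']
```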
