
Distributed control system for demand response by servers

Hall, Joseph Edward 01 December 2015
Within the broad topical designation of “smart grid,” research in demand response, or demand-side management, investigates how electrically powered devices can adapt their power consumption patterns to better match the availability of intermittent renewable energy sources, especially wind. Devices such as battery chargers, heating and cooling systems, and computers can be controlled to change the time, duration, and magnitude of their power consumption while still meeting workload constraints such as deadlines and rate of throughput. This thesis presents a system by which a computer server, or multiple servers in a data center, can estimate the power imbalance on the electrical grid and use that information to dynamically change their power consumption as a service to the grid. Implementation on a testbed demonstrates the system with a hypothetical but realistic use case: an online video streaming service with deadline-constrained (high-priority) and deadline-free (low-priority) workloads. The testbed is implemented with real servers, estimates the power imbalance from the grid frequency using real-time measurements at a live outlet, and uses a distributed, real-time algorithm to dynamically adjust the power consumption of the servers based on the frequency estimate and the throughput of video-transcoder workloads. Analysis of the system explains and justifies multiple design choices, situates the system relative to similar work in the literature, and explores its potential impact.
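
The grid-frequency signal the testbed measures is a standard proxy for system-wide power imbalance: frequency sags below its nominal value when load exceeds generation and rises when generation exceeds load. A minimal sketch of a frequency-droop mapping from measured frequency to a server power target follows; the constants and names are illustrative assumptions, not the thesis's controller.

```python
# Hypothetical frequency-droop controller: maps measured grid frequency to a
# server power target. All constants are illustrative assumptions.
F_NOMINAL = 60.0              # Hz, nominal North American grid frequency
DROOP_GAIN = 200.0            # W of load change per Hz of deviation (assumed)
P_MIN, P_MAX = 150.0, 350.0   # feasible server power envelope in watts (assumed)

def power_target(f_measured: float, p_base: float = 250.0) -> float:
    """Raise consumption when frequency is high (surplus generation),
    shed load when frequency is low (generation shortfall)."""
    delta_f = f_measured - F_NOMINAL
    return max(P_MIN, min(P_MAX, p_base + DROOP_GAIN * delta_f))

print(power_target(59.95))  # under-frequency -> shed load: 240.0 W
print(power_target(60.03))  # over-frequency -> absorb surplus: 256.0 W
```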

Pushing the boundaries: feature extraction from the lung improves pulmonary nodule classification

Dilger, Samantha Kirsten Nowik 01 May 2016
Lung cancer is the leading cause of cancer death in the United States. While low-dose computed tomography (CT) screening reduces lung cancer mortality by 20%, 97% of suspicious lesions are found to be benign upon further investigation. Computer-aided diagnosis (CAD) tools can improve the accuracy of CT screening; however, current CAD tools, which focus on imaging characteristics of the nodule alone, are challenged by the limited data captured in small, early-identified nodules. We hypothesize that a CAD tool incorporating quantitative CT features from the surrounding lung parenchyma will better determine the malignancy of a pulmonary nodule than a CAD tool that relies solely on nodule features. Using a higher-resolution research cohort and a retrospective clinical cohort, two CAD tools were developed with different intentions. The research-driven CAD tool incorporated nodule, surrounding-parenchyma, and global lung measurements; including the parenchymal and global features improved accuracy to 95.6%, compared to 90.2% when only nodule features were used. The clinically oriented CAD tool incorporated nodule and parenchyma features and clinical risk factors, and identified several features robust to CT variability, resulting in an accuracy of 71%. This study supports our hypothesis that including parenchymal features in the developed CAD tools improves performance compared to a CAD tool constructed solely with nodule features. Additionally, we identified the optimal amount of lung parenchyma for feature extraction and explored the potential of the CAD tools in a clinical setting.
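
The core comparison, a classifier trained on nodule features alone versus nodule plus parenchymal features, can be sketched as follows with scikit-learn. The features and labels here are synthetic placeholders, not the study's data, so the printed accuracies only illustrate the workflow.

```python
# Minimal sketch of the feature-combination comparison using scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120
X_nodule = rng.normal(size=(n, 3))       # e.g., size, mean attenuation, sphericity (assumed)
X_parenchyma = rng.normal(size=(n, 4))   # e.g., surrounding-texture and density measures (assumed)
y = rng.integers(0, 2, size=n)           # 0 = benign, 1 = malignant (synthetic)

clf = LogisticRegression(max_iter=1000)
acc_nodule = cross_val_score(clf, X_nodule, y, cv=5).mean()
acc_combined = cross_val_score(clf, np.hstack([X_nodule, X_parenchyma]), y, cv=5).mean()
print(acc_nodule, acc_combined)  # the thesis reports 90.2% vs. 95.6% with its real features
```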

Oral health knowledge and dental utilization among Hispanic adults in Iowa

Patino, Daisy 01 December 2015
Objectives: To determine oral health literacy levels among Hispanic adults living in Iowa, and to assess the relationship between oral health literacy and dental utilization. Methods: This cross-sectional study included a convenience sample of self-identifying Hispanic/Latino adults. Participants were recruited via mass email, word of mouth, and faith-based organizations that provided church services in Spanish, from urban and rural communities in Central and Eastern Iowa. Participants completed a questionnaire, in either English or Spanish, containing questions on oral health literacy, dental utilization, acculturation, language proficiency, demographic information, country of origin, number of years living in the United States, and preferences regarding the characteristics of their dental providers (e.g., the importance of the dentist being able to speak Spanish). Oral health literacy was assessed using the Comprehensive Measure of Oral Health Knowledge (Macek and colleagues). Oral health knowledge scores were categorized as low (0-14) or high (15-23). Dental utilization was defined as visiting a dental provider within the past 12 months or more than 12 months ago. Bivariate analyses were conducted using the chi-square test, with oral health knowledge and dental utilization as the two main outcome variables. Multiple logistic regression models were created to identify the variables related to low oral health knowledge and irregular dental utilization. Statistical significance was set at p < 0.05. IRB approval was obtained prior to conducting the study. Results: Three hundred thirty-eight participants completed the questionnaire; 67% (n = 228) completed it in Spanish. The mean oral health knowledge score was 14 (low knowledge = 51% vs. high knowledge = 49%). Thirty-five percent reported visiting the dentist <12 months ago. Bivariate analyses revealed that the following characteristics were associated with low oral health knowledge (p < 0.05): being older (i.e., 55-71 years of age), being male, self-reporting low health literacy, having less than a high school education, earning ≤$25,000, not having dental insurance, having low acculturation, being born outside the United States, preferring a dental provider who speaks Spanish, perceiving one’s oral health to be fair/poor or not knowing the status of one’s oral health, seeking dental care someplace other than a private dental office, and being more likely to seek care for a problem-related visit rather than routine care. Low oral health knowledge was statistically significantly associated with visiting a dentist >12 months ago. Many other variables were also associated (p < 0.05) with infrequent dental utilization: low health literacy, being male, having less than a 12th-grade education or only a high school diploma, earning ≤$25,000, not having dental insurance, having low acculturation, reporting fewer years living in the United States, preferring a dental provider who speaks Spanish, perceiving one’s oral health to be fair/poor or not knowing the status of one’s oral health, and seeking dental care someplace other than a private dental office. Final logistic regression analyses indicated that having less than a 12th-grade education, lacking dental insurance, and preferring to receive care from a Spanish-speaking dental provider were associated with low oral health literacy. Furthermore, final logistic regression results predicting irregular dental utilization showed that the following variables were statistically significant: being male, earning ≤$25,000 per year, not having dental insurance, and having a history of tooth decay. Conclusion: Dental utilization and oral health knowledge appear to be associated. Patients with low oral health literacy may be less likely to utilize dental care, thus decreasing the opportunity to increase dental knowledge. Dental teams should recognize which patients are more likely to have low oral health literacy and provide dental education in patients’ preferred language.
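
A sketch of the two analysis steps described above (bivariate chi-square tests, then multivariable logistic regression), using invented stand-ins for the questionnaire variables:

```python
# Synthetic sketch of the analysis pipeline; the variables are placeholders.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 338  # matches the study's sample size; the values are random
df = pd.DataFrame({
    "low_knowledge": rng.integers(0, 2, n),   # CMOHK score 0-14 vs. 15-23
    "no_insurance": rng.integers(0, 2, n),
    "lt_12th_grade": rng.integers(0, 2, n),
})

# Bivariate step: chi-square test of knowledge vs. insurance status
table = pd.crosstab(df["low_knowledge"], df["no_insurance"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# Multivariable step: logistic regression on the candidate predictors
X = sm.add_constant(df[["no_insurance", "lt_12th_grade"]])
fit = sm.Logit(df["low_knowledge"], X).fit(disp=0)
print(np.exp(fit.params))  # odds ratios
```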

Intersecting literacy beliefs and practices with heritage and non-heritage learners' instruction: a case study of a novice Korean language instructor

Choi, Ho Jung 01 May 2016
Many researchers have explored teachers’ literacy beliefs and found that those beliefs affect instructional practices in foreign language (FL) and second language (SL) classrooms, and that teachers’ literacy beliefs and instructional practices are generally consistent. There have been many studies of teachers’ literacy beliefs and classroom instruction in FL/SL contexts, and more recent studies reflect an increasing interest in heritage languages (HL) such as Spanish and Chinese. However, less is known about Korean language teachers’ literacy beliefs and practices in mixed classrooms of heritage and non-heritage learners. The present study had two main purposes. First, it examined and described the literacy beliefs and instructional practices of a novice Korean language instructor who struggled primarily with heritage learners in his teaching career. Second, it sought an in-depth view of a novice teacher’s literacy beliefs and practices toward two different student subgroups, heritage and non-heritage learners, in the same classroom. In addition, this study investigated incongruences between literacy beliefs and practices toward heritage and non-heritage learners. In order to examine a novice Korean instructor’s literacy beliefs and practices toward Korean heritage language learners (KHLLs) and Korean foreign language learners (KFLLs), this research employed a qualitative case study and collected data through a combination of a survey, semi-structured interviews, and videotaped classroom observations. The Literacy Orientation Survey (LOS) and the Taxonomy of Techniques were adopted for the survey and classroom observation, respectively. The results indicated that the novice teacher of Korean holds general literacy beliefs compatible with a constructivist orientation, a whole-language approach that promotes transformative learning. For most of his classroom literacy instruction, his literacy beliefs appeared congruent with his practices toward KHLLs. The novice teacher promoted differentiated literacy instruction, giving instruction that was separate, more challenging, or more closely connected to everyday life in an effort to meet each individual learner’s literacy needs. Acknowledging heritage learners as mediators and community builders who could potentially promote literacy skills, the participant presented a broader understanding of literacy and multiliteracies, such as cultural and digital literacy, beyond traditional skill-focused reading and writing. However, his overall literacy beliefs were incongruent with his instructional practices toward KFLLs because of frequent accommodations for less proficient learners through more traditional or eclectic activities. This incongruence, and the distinct literacy instruction toward the two learner subgroups, was explained by several factors: university policy on teaching and learning, his educational background and teaching experiences, and the low proficiency of the Korean language learners. This study of a novice teacher’s literacy beliefs toward different learner groups suggests that embracing comprehensive and constructivist approaches to literacy instruction and curriculum is only possible when pre- and in-service teachers are aware of their own premises and propositions about literacy beliefs and instruction. 
The findings can serve as a good starting point for guiding FL/HL teachers toward professional growth and for expanding the field of HL literacy studies.

New neural network for real-time human dynamic motion prediction

Bataineh, Mohammad Hindi 01 May 2015
Artificial neural networks (ANNs) have been used successfully in various practical problems. Although extensive improvements have been made to different types of ANNs, each design still has its own limitations. Existing digital human models are mature enough to provide accurate and useful results for different tasks and scenarios under various conditions. There is, however, a critical need for these models to run in real time, especially for large-scale problems like motion prediction, which can be computationally demanding: for even small changes to the task conditions, a motion simulation needs to run for a relatively long time (minutes to tens of minutes). Thus, the number of training cases can be limited by the computational time and cost of collecting training data. In addition, the motion problem is relatively large with respect to the number of outputs, with hundreds of outputs (500-700) to predict for a single problem. These necessities of motion problems motivate the use of tools like the ANN in this work. This work introduces new algorithms for the design of the radial-basis network (RBN) for problems with minimal available training data. The new RBN design incorporates new training stages with approaches that facilitate proper setting of the necessary network parameters. The use of training algorithms with minimal heuristics allows the new RBN design to produce results of a quality that none of the competing methods has achieved. The new RBN design, called Opt_RBN, is tested on experimental and practical problems, and its results outperform those produced by standard regression and ANN models. In general, the Opt_RBN shows stable and robust performance for a given set of training cases. When the Opt_RBN is applied to the large-scale motion prediction application, the network experiences a CPU memory issue when performing the optimization step in the training process. Therefore, new algorithms are introduced to modify some steps of the Opt_RBN training process to address the memory issue; the modified steps should only be used for large-scale applications similar to the motion problem. The new RBN design provides an ANN capable of improved learning without needing more training data. Although the new design is driven by its use with motion prediction problems, the resulting ANN design can be used with a broad range of large-scale problems in various engineering and industrial fields that experience delays when running computational tools requiring a massive number of procedures and a great deal of CPU memory. The results of evaluating the modified Opt_RBN design on two motion problems are promising, with relatively small errors obtained when predicting approximately 500-700 outputs. In addition, new methods for constraint implementation within the new RBN design are introduced. Moreover, the new RBN design and its associated parameters are used as a tool for simulated task analysis. This work initiates the idea that the output weights (W) can be used to determine the most critical basis functions, i.e., those that cause the greatest reduction in the network test error; the critical basis functions then specify the most significant training cases responsible for the network’s proper performance. The inputs with the most change in value can be extracted from the basis-function centers (U) to determine the dominant inputs, and the outputs with the most change in value, along with their corresponding key body degrees of freedom for a motion task, can be specified using the training cases that create the network’s basis functions.
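
The quantities the abstract refers to, basis-function centers (U) and output weights (W), are the two parameter sets of a radial-basis network. A minimal RBN sketch follows, with centers drawn from the training cases and W solved in closed form by least squares; the Opt_RBN training stages themselves are not reproduced here.

```python
# Minimal radial-basis network sketch: Gaussian bases with centers U and
# output weights W fit by least squares. Data and sizes are invented.
import numpy as np

def rbf_design(X, U, sigma):
    """Design matrix: one Gaussian basis column per center in U."""
    d2 = ((X[:, None, :] - U[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 2))                    # few training cases, as in the thesis setting
Y = np.column_stack([np.sin(X[:, 0]), X.prod(axis=1)])  # toy multi-output target
U = X[rng.choice(len(X), size=10, replace=False)]       # basis-function centers
H = rbf_design(X, U, sigma=0.5)
W, *_ = np.linalg.lstsq(H, Y, rcond=None)               # output weights

Y_hat = rbf_design(X, U, 0.5) @ W
print(np.abs(Y - Y_hat).max())  # training-set fit error
```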

Using PCSWMM to simulate first flush and assess performance of extended dry detention ponds as structural stormwater BMPs in a large polluted urban watershed

Kabbani, Muhieddine Saadeddine 01 May 2015
Urbanization and the increase of impervious areas affect stormwater runoff and can pollute receiving waters. Total suspended solids (TSS) are of particular concern, as they can act as a transport agent for other pollutants. Moreover, the first flush phenomenon (FF), whereby the first stage of storm runoff is the most concentrated, can also have profound ecological effects on receiving waters. Understanding the various pollutants in watershed stormwater, their correlation with rainfall parameters (precipitation depth and previous dry days) and with TSS, and the existence of FF is crucial to designing the most suitable structural best management practice (BMP) to mitigate their harm. The Personal Computer Storm Water Management Model (PCSWMM) is a well-known computer model that can simulate urban runoff quantity and quality and model BMPs. The use of PCSWMM to simulate the first flush phenomenon and to evaluate the effectiveness of structural BMPs had not previously been investigated for a large urban watershed with seriously polluted stormwater runoff. This research studies a framework for designing structural BMPs for stormwater management in a large watershed, based on comprehensive analysis of the pollutants of concern, the rainfall parameters of influence, and the existence of FF. The framework was examined using the PCSWMM computer model in the St. Anthony Park watershed, an urban watershed in St. Paul, Minnesota, with a large drainage area of 3,418 acres that discharges directly into the Mississippi River via a storm tunnel. A comprehensive study was undertaken to characterize the overall St. Anthony Park watershed stormwater quality trends for the period of record 2005-2013 for heavy metals, nutrients (ammonia and total phosphorus), sediment (TSS), and bacteria (E. coli). Stormwater was found to be highly contaminated, as measured by exceedance of Minnesota Pollution Control Agency (MPCA) water quality standards and by comparison with data from the National Stormwater Quality Database (NSQD). None of the examined parameters correlated significantly with precipitation depth. Concentrations of most heavy metals, total phosphorus, and TSS correlated positively with previous dry days, and most pollutants correlated positively with TSS, which provided a strong rationale for using TSS as a representative pollutant in PCSWMM and in examining BMP efficiency; moreover, BMPs that target the particulate fraction in stormwater would be the most efficient at reducing stormwater pollution. A PCSWMM model was built from the watershed's existing drainage system, which consists of inlet structures, manholes, pipes, and deep manholes that connect the network pipes to a deep drainage tunnel discharging directly into the Mississippi River. The model was calibrated and validated using recorded storm and runoff data. FF was numerically investigated by simulating pollutant generation and washoff. Using three different numerical definitions of FF, the existence of FF could be simulated, and FF was subsequently reduced by simulating extended dry detention ponds (EDDPs) in the watershed. EDDPs are basins whose outlets are designed to detain stormwater runoff for a calculated time that allows particles and associated pollutants to settle. 
EDDPs are a potential BMP option that can efficiently control both water quantity (by diverting initial volumes of stormwater, thus addressing FF) and quality (by reducing suspended pollutants, thus addressing TSS and co-contaminants); moreover, they are the least expensive stormwater treatment practice on a cost-per-treated-unit-area basis. Two location-based designs were examined: an EDDP at the main outfall (OFmain), and a set of seven smaller EDDPs in the vicinity of the deeper manholes of the deep tunnel (distributed EDDPs). Distributed EDDPs were similar to the OFmain EDDP at reducing peak stormwater flow (52-61%) but superior at reducing TSS loads (20-25% for small particles and 43-45% for larger particles, based on the particle sedimentation rate removal constant k) and at reducing peak TSS loads (67-75%). These efficiencies were obtained using the dynamic and kinematic wave routing methods, indicating that the two could be used interchangeably for this watershed; the steady-state routing method produced unrealistic results and was excluded from FF analysis. Finally, distributed EDDPs were superior to the OFmain EDDP at eliminating FF per the stringent fifth definition (Δ > 0.2). This was true for small values of k. However, larger values of k and other FF tests (above the 45° no-flush line and FF coefficient b < 1) showed that BMP implementation overall failed to completely eliminate FF, suggesting that the extended detention time EDDPs require to efficiently remove pollutants via settling compromises their ability to completely eliminate FF. In conclusion, a comprehensive framework was applied to better design the most efficient BMPs by characterizing the overall St. Anthony Park watershed stormwater pollutants, their correlations with rainfall parameters and with TSS, and the magnitude of FF. A cost-effective, rapid, and accurate method to simulate FF and study optimal BMP design was thus implemented for a large urban watershed through the PCSWMM model.
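
One common numerical FF test mentioned above fits the coefficient b in the normalized cumulative mass-volume relationship M′ = (V′)^b; b < 1 places the curve above the 45° no-flush line and indicates first flush. A short sketch on synthetic storm data:

```python
# First-flush coefficient b from the normalized cumulative mass-volume
# curve M' = (V')**b. The storm data here are synthetic.
import numpy as np

V = np.linspace(0.05, 1.0, 20)  # normalized cumulative runoff volume
M = V**0.6                      # synthetic event with a pronounced first flush

# Least-squares fit of log M' = b log V' (line through the origin)
b = np.sum(np.log(V) * np.log(M)) / np.sum(np.log(V) ** 2)
print(f"b = {b:.2f}")  # ~0.60, so b < 1: first flush present
```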

In vitro evaluation of carbon-nanotube-reinforced bioprintable vascular conduits

Dolati, Farzaneh 01 December 2014
Vascularization of thick engineered tissue and organ constructs like the heart, liver, pancreas, or kidney remains a major challenge in tissue engineering. Vascularization is needed to supply oxygen and nutrients and remove waste in living tissues and organs through a network that should possess high perfusion ability and significant mechanical strength and elasticity. This thesis introduces a fabrication process for printing vascular conduits directly, where the conduits are reinforced with carbon nanotubes (CNTs) to enhance their mechanical properties and bioprintability. Vascular conduits made from a natural polymer hydrogel such as alginate need improved mechanical properties in order to biomimic the natural vascular system, and CNTs are among the best candidates for this purpose because of their exceptional strength and simple structure. In this work, multi-walled carbon nanotubes (MWCNTs) were dispersed homogeneously in the hydrogel, and conduits were fabricated through an extrusion-based system. In vitro evaluation of printed conduits encapsulating human coronary artery smooth muscle cells was performed to characterize the effects of CNT reinforcement on the mechanical, perfusion, and biological performance of the conduits. Perfusion and permeability, cell viability, extracellular matrix formation, and tissue histology were assessed and discussed. It was concluded that MWCNTs significantly affect mechanical properties, the vascular conduit swelling ratio, and short- and long-term cellular viability, and that CNT-reinforced vascular conduits provide a foundation for mechanically appealing constructs in which CNTs could be replaced with natural protein nanofibers for further integration of these conduits into large-scale tissue fabrication.

Self-collision avoidance through keyframe interpolation and optimization-based posture prediction

Degenhardt, Richard Kennedy, III 01 January 2014
Simulating realistic human behavior on a virtual avatar is a difficult task. Because the simulated environment does not adhere to the same physical principles that govern the real world, the avatar is capable of achieving infeasible postures. In an attempt to obtain realistic human simulation, real-world constraints are imposed onto the non-sentient being. One such constraint, and the topic of this thesis, is self-collision avoidance. For the purposes of this topic, a posture is defined solely as the collection of angles formed by each joint of the avatar. The goal of self-collision avoidance is to eliminate any posture in which multiple body parts attempt to occupy the same space. My work extends this definition to also include collision avoidance with objects attached to the body, such as a backpack or armor. To prevent these collisions from occurring, I have implemented an effort-based approach for correcting afflicted postures. This technique specifically pertains to postures that are sequenced together to animate the avatar; as such, the animation’s coherence and defining characteristics must be preserved. My approach is unique in that it strategically blends keyframe interpolation with an optimization-based strategy for posture prediction. Although considerable work has been done on methods for keyframe interpolation, there has been minimal progress toward integrating a realistic collision-response strategy. Additionally, I test this optimization-based approach through the use of a complex kinematic human model and investigate using the results as input to an existing dynamic motion prediction system.
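
A bare-bones sketch of the keyframe-interpolation backbone follows, with a placeholder collision test marking where the thesis's effort-based posture correction would intervene. The posture format and the 8-DOF avatar are invented for illustration.

```python
# Keyframe interpolation over joint angles with a collision-check hook.
import numpy as np

def interpolate(key_a, key_b, t):
    """Linear blend of two postures (vectors of joint angles), t in [0, 1]."""
    return (1.0 - t) * key_a + t * key_b

def self_collides(posture):
    # Placeholder: a real test would check body segments (and attached
    # objects such as a backpack) for geometric overlap at this posture.
    return False

key_a = np.zeros(8)  # 8-DOF toy avatar, all joints at neutral
key_b = np.radians([10, -25, 40, 0, 15, -5, 30, 20])

frames = []
for t in np.linspace(0.0, 1.0, 30):
    posture = interpolate(key_a, key_b, t)
    if self_collides(posture):
        # The thesis's approach would correct the posture here via
        # optimization while preserving the animation's character.
        pass
    frames.append(posture)
print(len(frames), np.degrees(frames[-1][:3]))
```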

Effects of an iPad-based early reading intervention with students with complex needs

Lucas, Kristin Goodwin 01 December 2015
Early reading literacy is foundational to all other academic learning, and it is imperative that elementary students with and without disabilities be provided with evidence-based reading instruction. Elementary students with developmental disabilities (DD) and complex communication needs (CCN) benefit from evidence-based reading instruction that incorporates individualized, explicit instruction and appropriate assistive technology. Research identifying evidence-based practices for students with DD and CCN is necessary to help teachers close the overall achievement gap for this group of learners. The purpose of this study was to determine the efficacy of the early reading program Go Talk Phonics (Ahlgrim-Delzell, Browder, & Wood, 2014), which incorporates evidence-based systematic instruction delivered through assistive technology, for teaching reading to elementary students (n = 2) with DD and CCN. The two participants in this single-case design study did not make adequate progress toward the objectives of Lesson One of the intervention to continue on to Lessons Two and Three. Although the participants were less successful with the lesson objectives than participants in the Ahlgrim-Delzell et al. (2014) study, there were differences in the participants, assistive technology, and design of the experiment. The study revealed important considerations for selecting academic interventions for students with CCN and DD: assessment of broader aspects of students’ skills and literacy experience, as well as differential reinforcement procedures specific to instructional demands, may be necessary to see gains from instruction.

Efficient optimization for labeling problems with prior information: applications to natural and medical images

Bai, Junjie 01 May 2016
The labeling problem, due to its versatile modeling ability, is widely used in various image analysis tasks. In practice, certain prior information is often available to embed in the model to increase accuracy and robustness. However, it is not always straightforward to formulate the problem so that the prior information is correctly incorporated, and it is even more challenging to ensure that the proposed model admits efficient algorithms for finding a globally optimal solution. In this dissertation, a series of natural and medical image segmentation tasks are modeled as labeling problems, each incorporating different useful prior information: ordering constraints between certain labels, soft enforcement of user input, multi-scale context between over-segmented regions and original voxels, multi-modality context priors, location context between multiple modalities, a star-shape prior, and a gradient vector flow shape prior. With judicious exploitation of each problem’s intricate structure, efficient and exact algorithms are designed for all proposed models. The efficient computation allows the proposed models to be applied to large natural and medical image datasets with a small memory footprint and reasonable running time, and the global optimality guarantee makes the methods robust to local noise and easy to debug. The proposed models and algorithms are validated in multiple experiments on both natural and medical images, showing promising and competitive results compared to the state of the art.
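
Exactness guarantees of this kind typically come from reducing the labeling model to a minimum s-t cut. As a toy, self-contained illustration (not any of the dissertation's actual models), the following sketch solves a binary labeling with a smoothness prior to global optimality using a small Edmonds-Karp max-flow; the 1-D "image" and all weights are invented.

```python
# Exact binary labeling via max-flow/min-cut: the standard graph
# construction for submodular binary energies, on a tiny 1-D "image".
from collections import deque
import numpy as np

def min_cut_source_side(cap, s, t):
    """Edmonds-Karp max-flow; returns the source side of a minimum s-t cut."""
    n = len(cap)
    flow = np.zeros_like(cap)
    while True:
        parent = [-1] * n          # BFS for an augmenting path
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u, v] - flow[u, v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break                  # no augmenting path: flow is maximum
        bottleneck, v = float("inf"), t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v], v] - flow[parent[v], v])
            v = parent[v]
        v = t
        while v != s:
            flow[parent[v], v] += bottleneck
            flow[v, parent[v]] -= bottleneck
            v = parent[v]
    seen, q = {s}, deque([s])      # nodes reachable in the residual graph
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in seen and cap[u, v] - flow[u, v] > 0:
                seen.add(v)
                q.append(v)
    return seen

img = np.array([20, 30, 200, 40, 220, 210])  # intensities; label 0 = dark, 1 = bright
n, lam = img.size, 60                        # smoothness weight
s, t = n, n + 1
cap = np.zeros((n + 2, n + 2), dtype=int)
for i in range(n):
    cap[s, i] = abs(int(img[i]) - 255)       # unary cost paid if pixel i takes label 1
    cap[i, t] = int(img[i])                  # unary cost paid if pixel i takes label 0
for i in range(n - 1):                       # pairwise smoothness between neighbors
    cap[i, i + 1] = cap[i + 1, i] = lam

source_side = min_cut_source_side(cap, s, t)
labels = [0 if i in source_side else 1 for i in range(n)]
print(labels)  # expected: [0, 0, 1, 0, 1, 1]
```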
