271

The Simulation & Evaluation of Surge Hazard Using a Response Surface Method in the New York Bight

Bredesen, Michael H 01 January 2015
Atmospheric features such as tropical cyclones act as a driving mechanism for many of the major hazards affecting coastal areas around the world. Accurate and efficient quantification of tropical cyclone surge hazard is essential to the development of resilient coastal communities, particularly given continued concerns over sea-level trends. Recent major tropical cyclones that have impacted the northeastern United States caused devastating flooding in New York City, the most densely populated city in the US. As part of a national effort to re-evaluate coastal inundation hazards, the Federal Emergency Management Agency used the Joint Probability Method to re-evaluate surge hazard probabilities for Flood Insurance Rate Maps in the New York – New Jersey coastal area, also termed the New York Bight. As originally developed, this method required many combinations of storm parameters to statistically characterize the local climatology for numerical model simulation. Even though high-performance computing efficiency has vastly improved in recent years, researchers have used various "Optimal Sampling" techniques to reduce the number of storm simulations needed in the traditional Joint Probability Method. This manuscript presents results from the simulation of over 350 synthetic tropical cyclones designed to produce significant surge in the New York Bight using the Advanced Circulation (ADCIRC) hydrodynamic numerical model, bypassing the need for Optimal Sampling schemes. This data set allowed for a careful assessment of the joint probability distributions used for this area and of the impacts of current assumptions in deriving new flood-risk maps for the New York City area.
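As a rough illustration of the Joint Probability Method idea described above, the Python sketch below weights discretized storm-parameter combinations by assumed probability masses and accumulates an annual surge-exceedance rate. The parameter grids, probabilities, storm rate, and toy response function are all illustrative assumptions standing in for actual ADCIRC simulations, not values from the study.

```python
import numpy as np

# Illustrative sketch of the Joint Probability Method (JPM): discretize the
# storm parameters, weight each combination by its joint probability mass,
# and accumulate the annual rate at which simulated surge exceeds a level.
# All values below and the toy response function are assumptions.

pressure_deficits = np.array([30.0, 50.0, 70.0])  # central pressure deficit, hPa
forward_speeds = np.array([5.0, 10.0, 15.0])      # translation speed, m/s
headings = np.array([-20.0, 0.0, 20.0])           # degrees from shore-normal

# Assumed marginal probability masses; real JPM studies also model the
# correlations between parameters rather than assuming independence.
p_dp = np.array([0.50, 0.35, 0.15])
p_vf = np.array([0.30, 0.50, 0.20])
p_th = np.array([0.25, 0.50, 0.25])

storm_rate = 0.3  # assumed annual rate of surge-producing storms at the site

def surge_response(dp, vf, th):
    """Stand-in for a hydrodynamic (e.g., ADCIRC) run: peak surge in meters."""
    return 0.04 * dp + 0.05 * vf + 0.002 * (20.0 - abs(th))

threshold = 3.0  # surge level of interest, meters
exceedance_rate = 0.0
for i, dp in enumerate(pressure_deficits):
    for j, vf in enumerate(forward_speeds):
        for k, th in enumerate(headings):
            if surge_response(dp, vf, th) >= threshold:
                exceedance_rate += storm_rate * p_dp[i] * p_vf[j] * p_th[k]

print(f"Annual rate of surge >= {threshold} m: {exceedance_rate:.4f}")
```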
272

The Effects of the Use of Technology in Mathematics Instruction on Student Achievement

Myers, Ron Y 30 March 2009
The purpose of this study was to examine the effects of the use of technology on students' mathematics achievement, particularly their results on the Florida Comprehensive Assessment Test (FCAT) in mathematics. Eleven schools within the Miami-Dade County Public School System participated in a pilot program on the use of Geometer's Sketchpad (GSP). Three of these schools were randomly selected for this study. Each school sent a teacher to a summer in-service training program on how to use GSP to teach geometry. In each school, the GSP class and a traditional geometry class taught by the same teacher were the study participants. Students' FCAT mathematics results were examined to determine whether GSP produced any effects. Students' scores were compared based on assignment to the control or experimental group, as well as on gender and socioeconomic status (SES). SES was measured by whether students qualified for free lunch. The findings revealed a significant difference in the FCAT mathematics scores of students who were taught geometry using GSP compared with those taught by the traditional method. No significant differences existed in FCAT mathematics scores based on SES, and likewise none based on gender. In conclusion, the use of technology (particularly GSP) is likely to boost students' FCAT mathematics test scores. The findings also suggest that the use of GSP may help close known gender- and SES-related achievement gaps. The results of this study support policy changes in the way geometry is taught to 10th-grade students in Florida's public schools.
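The group comparisons described above amount to two-sample significance tests on FCAT scale scores. A minimal sketch of such a test follows; the score arrays are synthetic stand-ins drawn from normal distributions with arbitrary means, not the study's data.

```python
import numpy as np
from scipy import stats

# Independent-samples t-test on hypothetical FCAT mathematics scale scores.
rng = np.random.default_rng(42)
gsp_scores = rng.normal(loc=325, scale=30, size=60)          # hypothetical GSP classes
traditional_scores = rng.normal(loc=310, scale=30, size=60)  # hypothetical control classes

t_stat, p_value = stats.ttest_ind(gsp_scores, traditional_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < .05 suggests a group difference
```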
273

Segmentation of X-ray radiographic images based on entropic fusion, and biplanar 3D reconstruction of bones based on nonlinear statistical modeling

Nguyen, Dac Cong Tai 08 1900
In this thesis, we present a method for segmenting X-ray images of the lower limbs into regions of interest (ROIs), a three-dimensional (3D) / two-dimensional (2D) rigid registration method for aligning knee implant components to calibrated biplanar X-ray images, and a 3D reconstruction method for the lower limbs from calibrated biplanar X-ray images. The first paper presents a superpixel- and multi-atlas-based method for segmenting the patella, talus, and pelvis into regions of interest. This method learns from a training dataset of pre-segmented and co-registered X-ray images of these bones to estimate a collection of superpixels that captures the local, nonlinear variability present in the dataset, then applies entropy-based label propagation to refine the segmentation map into internal regions and produce the final result. The second paper presents a 3D / 2D rigid registration method for aligning the tibial and femoral components of knee implants to calibrated biplanar X-ray images. This method uses a hybrid edge- and region-based similarity measure together with a stochastic optimization algorithm to estimate the component positions. The region-based similarity is stable and robust to noise but imprecise, because boundary pixels are far outnumbered by pixels inside the region. Conversely, the edge-based similarity is accurate but more sensitive to noise and other image artifacts. Combining the two therefore yields a registration method that is both robust and precise. The third paper presents a statistical biplanar 3D reconstruction method for the patella, talus, and pelvis. This method uses a dimensionality-reduction algorithm to define a parametric deformable model that captures all admissible statistical deformations learned from a dataset of bone structures. A stochastic optimization algorithm is then used to minimize the difference between the projected contours / regions of the bone surface models and the contours / regions segmented in the two X-ray images.
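To make the hybrid similarity idea concrete, here is a minimal Python sketch combining a region-overlap term with an edge-agreement term. The Dice formulation, the crude edge extraction, and the alpha weight are illustrative assumptions, not the authors' exact measure.

```python
import numpy as np

# Hybrid edge- and region-based similarity between a projected model
# silhouette and a segmented image region, both given as boolean masks.

def region_similarity(proj_mask, seg_mask):
    # Dice overlap: stable and robust to noise, but dominated by interior pixels.
    inter = np.logical_and(proj_mask, seg_mask).sum()
    return 2.0 * inter / (proj_mask.sum() + seg_mask.sum() + 1e-9)

def mask_edges(mask):
    # Crude boundary extraction: pixels whose vertical neighbor differs.
    return mask ^ np.roll(mask, 1, axis=0)

def edge_similarity(proj_edges, seg_edges):
    # Fraction of projected edge pixels that coincide with segmented edges:
    # precise near boundaries, but sensitive to noise.
    return np.logical_and(proj_edges, seg_edges).sum() / (proj_edges.sum() + 1e-9)

def hybrid_similarity(proj_mask, seg_mask, alpha=0.5):
    # Weighted combination: the region term gives robustness, the edge term
    # precision, as argued in the abstract above.
    return (alpha * region_similarity(proj_mask, seg_mask)
            + (1.0 - alpha) * edge_similarity(mask_edges(proj_mask),
                                              mask_edges(seg_mask)))

# Toy usage with two slightly offset square silhouettes.
proj = np.zeros((64, 64), dtype=bool); proj[20:40, 20:40] = True
seg = np.zeros((64, 64), dtype=bool); seg[22:42, 22:42] = True
print(hybrid_similarity(proj, seg))
```

In a registration loop, a stochastic optimizer would perturb the pose parameters, re-project the model to obtain `proj_mask`, and keep the pose that maximizes this score.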
274

Software Internationalization: A Framework Validated Against Industry Requirements for Computer Science and Software Engineering Programs

Vũ, John Huân 01 March 2010
View John Huân Vũ's thesis presentation at http://youtu.be/y3bzNmkTr-c. In 2001, the ACM and IEEE Computing Curriculum stated that it was necessary to address "the need to develop implementation models that are international in scope and could be practiced in universities around the world." With increasing connectivity through the internet, the move toward a global economy, and the growing use of technology, software internationalization has become a more important concern for developers. However, there has been a "clear shortage in terms of numbers of trained persons applying for entry-level positions" in this area. Eric Brechner, Director of Microsoft Development Training, suggested five new courses to add to the computer science curriculum due to the growing "gap between what college graduates in any field are taught and what they need to know to work in industry." He concludes that "globalization and accessibility should be part of any course of introductory programming," stating: "A course on globalization and accessibility is long overdue on college campuses. It is embarrassing to take graduates from a college with a diverse student population and have to teach them how to write software for a diverse set of customers. This should be part of introductory software development. Anything less is insulting to students, their family, and the peoples of the world." There is very little research into how software internationalization should be taught to meet the major requirements of industry. The research question of the thesis is thus: "Is there a framework for software internationalization that has been validated against industry requirements?" The answer is no. Such a framework "would promote communication between academia and industry ... that could serve as a common reference point in discussions." Since no such framework currently exists, one is developed here. The contributions of this thesis include a provisional framework to prepare graduates to internationalize software and a validation of that framework against industry requirements. The framework aims to provide a portable, standardized set of requirements that computer science and software engineering programs can use to teach future graduates.
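As a small illustration of the kind of practice such a framework would cover, the sketch below externalizes user-facing strings with GNU gettext and applies locale-aware number formatting; the "myapp" translation domain and the local `locale/` catalog directory are hypothetical.

```python
import gettext
import locale

# Externalize user-facing strings rather than hard-coding them: a core
# internationalization practice. Assumes compiled message catalogs exist
# under ./locale/<lang>/LC_MESSAGES/myapp.mo; falls back to the source
# strings if none are found.

locale.setlocale(locale.LC_ALL, "")  # honor the user's locale settings
trans = gettext.translation("myapp", localedir="locale", fallback=True)
_ = trans.gettext

print(_("Welcome!"))                         # looked up in the active catalog
print(locale.format_string("%.2f", 1234.5))  # locale-aware number formatting
```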
