321

Full scale instrumented testing and analysis of matting systems for airfield parking ramps and taxiways

Gartrell, Chad A 15 December 2007 (has links)
The U.S. military requires the ability to rapidly deploy troops, equipment, and materials anywhere in the world. Recent operations have brought attention to the need to utilize austere, unsurfaced, and sometimes sub-standard airfields within a theater of interest. These airfields may require additional taxiways and aprons, and one option for their rapid construction is airfield matting systems. This thesis focuses on commercially available airfield matting systems capable of supporting large military transport aircraft, such as the C-17. Several test sections with soils of differing strengths were built, and the selected mats were tested in an elimination format using a load cart that simulates contingency loading of one main gear of the C-17. Matting systems were evaluated based on logistical and assembly requirements and on the deformation and damage sustained during traffic. A modeling effort was performed to investigate the potential of a simple model to predict the response of these matting systems under full-scale testing.
322

Characterization of Pedestrian Electromagnetic Scattering at 76-77 GHz

Chen, Ming January 2013 (has links)
No description available.
323

On Bodies Whose Shadows Are Related Via Rigid Motions

Cordier, Michelle Renee 23 April 2015 (has links)
No description available.
324

Structural Behavior and Design of Two Custom Aluminum Extruded Shapes in Custom Unitized Curtain Wall Systems

Wang, Yongbing 21 July 2006 (has links)
No description available.
325

Uncertainty Analysis In Lattice Reactor Physics Calculations

Ball, Matthew R. 04 1900 (has links)
Comprehensive sensitivity and uncertainty analysis has been performed for light-water reactor and heavy-water reactor lattices using three techniques: adjoint-based sensitivity analysis, Monte Carlo sampling, and direct numerical perturbation. The adjoint analysis was performed using a widely accepted, commercially available code, whereas the Monte Carlo sampling and direct numerical perturbation were performed using new codes developed as part of this work. Uncertainties associated with fundamental nuclear data accompany evaluated nuclear data libraries in the form of covariance matrices. As nuclear data are important parameters in reactor physics calculations, any associated uncertainty causes a loss of confidence in the calculation results. The quantification of output uncertainties is necessary to adequately establish safety margins of nuclear facilities. In this work, the propagation of uncertainties associated with both physics parameters (e.g. microscopic cross-sections) and lattice model parameters (e.g. material temperature) has been investigated, and the uncertainty of all relevant lattice calculation outputs, including the neutron multiplication constant and few-group homogenized cross-sections, has been quantified. Sensitivity and uncertainty effects arising from the resonance self-shielding of microscopic cross-sections were addressed using a novel set of resonance integral corrections derived from perturbations in their infinite-dilution counterparts. The covariance of the U-238 radiative capture cross-section was found to be the dominant contributor to the uncertainties of lattice properties. The uncertainty associated with the prediction of isotope concentrations during burnup is also significant, even when uncertainties of fission yields and decay rates are neglected; such burnup-related uncertainties arise solely from the uncertainty of fission and radiative capture rates caused by physics parameter covariance. The quantified uncertainties of lattice calculation outputs described in this work are suitable for use as input uncertainties to subsequent reactor physics calculations, including reactor core analysis employing neutron diffusion theory. / Doctor of Philosophy (PhD)
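As a hedged illustration of the sampling-based propagation described in this abstract (a toy one-group model with made-up covariance values, not the thesis's lattice codes or nuclear data), the sketch below compares Monte Carlo sampling of an input covariance matrix against first-order "sandwich-rule" propagation built from direct numerical perturbations:

```python
# A minimal sketch of nuclear-data uncertainty propagation, not the thesis's codes.
# A toy one-group model k_inf = nu*Sigma_f / Sigma_a is used so that Monte Carlo
# sampling can be compared against first-order (sandwich-rule) propagation.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-group macroscopic data (cm^-1) and an illustrative covariance
mean = np.array([2.43 * 0.10, 0.12])          # [nu*Sigma_f, Sigma_a]
rel_cov = np.array([[4e-4, 1e-4],
                    [1e-4, 9e-4]])            # relative covariances (made up)
cov = rel_cov * np.outer(mean, mean)

def k_inf(p):
    nu_sig_f, sig_a = p
    return nu_sig_f / sig_a

# --- Monte Carlo sampling of the input covariance ---------------------------
samples = rng.multivariate_normal(mean, cov, size=100_000)
k_samples = samples[:, 0] / samples[:, 1]
print(f"MC:        k_inf = {k_samples.mean():.5f} +/- {k_samples.std():.5f}")

# --- First-order (sandwich-rule) propagation from direct perturbations ------
eps = 1e-6
S = np.empty(2)                                # sensitivities dk/dp
for i in range(2):
    dp = np.zeros(2)
    dp[i] = eps * mean[i]
    S[i] = (k_inf(mean + dp) - k_inf(mean - dp)) / (2 * dp[i])
var_k = S @ cov @ S
print(f"Sandwich:  k_inf = {k_inf(mean):.5f} +/- {np.sqrt(var_k):.5f}")
```

For this nearly linear toy model the two uncertainty estimates should agree closely, which is the same cross-check logic as comparing sampling-based and perturbation-based results.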
326

Nuclear fragmentation in particle therapy and space radiation protection: from the standard approach to the FOOT experiment

Colombi, Sofia 23 February 2021 (has links)
Today, the application of particle beams in cancer therapy is a well-established strategy, and its combination with surgery and chemotherapy is becoming an increasingly reliable approach for several clinical cases (e.g. skull base tumors). Currently, protons and 12C ions are used for patient treatment, owing to their characteristic depth-dose deposition profile featuring a pronounced peak (the Bragg peak) at the end of their range. Clinical energies typically span between 60 and 250 MeV for protons and up to 400 MeV/u for 12C ions, in order to deliver treatments to various disease sites. Interactions between the primary beam and the patient's body always occur during treatment, changing the composition, energy and direction of the primary radiation and thus affecting its depth-dose and lateral profiles. In carbon therapy, both projectile and target fragments can be generated during a treatment: the former are characterized by a kinetic energy spectrum peaked at the same energy as the primary beam and are mostly emitted in the forward direction; the latter are emitted with much lower energy, because they are produced from the target, which is at rest before the collision, and they are generated isotropically in the target frame. Moreover, the interaction of carbon ions with the patient's body is currently modeled in treatment planning on the basis of experimental data measured in water. For all other biological materials, the contribution of nuclear interactions is taken into account by rescaling the values measured in water with a density factor. This approximation neglects the influence of the elemental composition, which may become relevant where the material encountered by the beam differs significantly from water (e.g. bone or lung tissue), and can result in a non-uniform and incorrect dose profile. Experimental data with targets other than water are therefore clearly needed in order to correctly evaluate the contribution of all biological elements inside the human body. Treatments with protons can only generate target fragments, leading to the production of low-energy and therefore short-range fragments. Heavy secondary fragments have a higher biological effectiveness than protons, thus affecting the proton Relative Biological Effectiveness (RBE, i.e. the ratio of the photon dose to the charged-particle dose necessary to achieve the same biological effect), which is currently assumed to be a constant value (RBE = 1.1) in clinical practice. Another aspect related to nuclear interactions is the overlap between radiotherapy and space radiation protection. The particle species either currently available in radiotherapy or considered promising alternative candidates (i.e. helium, lithium and oxygen) are among the most abundant in the space radiation environment. Moreover, the proton energy range used in radiotherapy is similar to that of Solar Particle Events (SPEs) and Van Allen trapped protons. The radiation environment in space can lead to serious health risks for astronauts, especially on long-duration missions far from Earth (such as human exploration of Mars). Protection against space radiation is of paramount importance for preserving the astronauts' lives. Today, the only practical countermeasure is passive shielding. Nuclear fragmentation processes can occur inside the spaceship hull, producing lighter and highly penetrating radiation that must be considered when shielding is designed.
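A compact restatement of the RBE definition quoted above, added as an editorial aside (not text from the thesis):

\[
\mathrm{RBE} = \left.\frac{D_{\text{photon}}}{D_{\text{ion}}}\right|_{\text{iso-effect}}
\]

so the clinical assumption RBE = 1.1 implies that a proton dose roughly 10% lower than the reference photon dose is expected to produce the same biological effect.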
Therefore, experimental data for beam and target combinations relevant to space radiation applications must be collected in order to characterize the generated mixed radiation field and assess the radiation-induced health risk. Despite the many fundamental open issues in the particle therapy and space radiation protection fields, such as the ones mentioned above, the current lack of experimental fragmentation cross section data in their energy range of interest is undeniable. Thus, accurate measurements for different ion species with energies up to 1000 MeV/u would be of great importance in order to further optimize particle treatments and improve spacecraft shielding design. Moreover, additional experimental data would be of great value for benchmarking Monte Carlo codes, which are extensively used by the scientific communities in both research fields. In fact, the available transport codes suffer from many uncertainties and need to be verified against reliable experimental data. Owing to the high energy and long range of projectile fragments, the standard approach for their identification is to collect data from several detector types, usually two plastic scintillators coupled with a Barium Fluoride or LYSO crystal, placed upstream and downstream of the target, providing information about the charge, energy loss, residual kinetic energy and time of flight of the emitted fragments. This experimental setup allows the identification of particle species in terms of charge, isotope, emission angle and kinetic energy, and it has been widely exploited to perform several fragmentation measurements in both the particle therapy and space application fields. An example is the ROSSINI (RadiatiOn Shielding by ISRU and/or INnovative materIals for EVA, Vehicle and Habitat) project, financed by the European Space Agency (ESA) to select innovative shielding materials and provide recommendations on space radioprotection for different mission scenarios. However, such a standard approach is not suitable for the characterization of target fragments: because of their low energy and short range, a much more complex setup and finer experimental strategies are required for their detection. The FOOT (FragmentatiOn Of Target) experiment has been designed to measure fragment production cross sections with ~5% uncertainty. Target fragmentation induced by 50-250 MeV proton beams will be studied by taking advantage of an inverse kinematic approach. Specifically, O, C and He beams impinging on different targets (e.g., C, C2H4) will be employed, thus boosting the fragment energies and making their detection possible. The fragmentation cross sections of hydrogen will then be obtained by subtraction. The same configuration also provides a measurement of projectile fragments with the direct kinematics approach. The FOOT experimental setup consists of two different apparatuses: a dedicated "table-top" electronic setup, based on a magnetic spectrometer, conceived for the detection of heavier fragments (Z≥3), and an emulsion spectrometer, designed to measure the production of low-Z fragments (Z≤3) that would not cross the whole magnetic spectrometer. The purpose of the work presented in this doctoral thesis is the experimental characterization of particles originating in nuclear fragmentation processes for targets and beams of interest for particle therapy and space radiation protection, providing inputs to improve the accuracy of the Monte Carlo transport codes presently used.
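A hedged sketch of the target-subtraction logic mentioned above (an editorial illustration, not the FOOT collaboration's analysis code; the function name and normalization convention are assumptions): a polyethylene (C2H4) molecule contains two carbon and four hydrogen atoms, so a per-atom hydrogen cross section can be isolated by subtracting a pure-carbon measurement.

```python
# Hedged illustration of the C2H4/C subtraction described above; not FOOT code.
# Cross sections are assumed to be given per molecule for C2H4 and per atom for
# carbon, in the same units (e.g. mb).
def hydrogen_cross_section(sigma_c2h4_per_molecule, sigma_c_per_atom):
    """Per-atom hydrogen cross section: sigma_H = (sigma_C2H4 - 2*sigma_C) / 4."""
    return (sigma_c2h4_per_molecule - 2.0 * sigma_c_per_atom) / 4.0
```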
Data collected in experimental campaigns using the standard setup to study the interaction of a 400 MeV/u 12C ion beam with bone-like materials and of a 1000 MeV/u 58Ni ion beam with targets relevant for space applications have been analyzed. The presented fragment characterization comprises the fraction of primary particles surviving the target and the yield and kinetic energy spectra of charged particles emitted at several angles with respect to the primary beam direction. The 58Ni beam data were collected within the framework of the ROSSINI experiment and focused on characterizing secondary neutron production. Moreover, the analysis of the performance and fragment reconstruction capabilities of the FOOT electronic setup has been accomplished with Monte Carlo simulations. A dedicated analysis software has been developed in order to reconstruct fragment charge and mass, energy yields and production cross sections. A preliminary analysis of experimental data collected with a partial FOOT electronic setup is presented as well.
327

Capacity Modeling of Freeway Weaving Sections

Zhang, Yihua 27 June 2005 (has links)
The dissertation develops analytical models that estimate the capacity of freeway weaving sections. The analytical models are developed using simulated data compiled with the INTEGRATION software. Consequently, the first step of the research effort is to validate the INTEGRATION lane-changing modeling procedures and the capacity estimates derived from the model against field observations. The INTEGRATION software is validated against field data gathered by the University of California at Berkeley by comparing the lateral and longitudinal distributions of simulated and field-observed traffic volumes, categorized by O-D pair, on nine weaving sections in the Los Angeles area. The results demonstrate a high degree of consistency between simulated and field-observed traffic volumes within the various weaving sections. Subsequently, the second validation effort compares the capacity estimates of the INTEGRATION software to field observations from four weaving sections operating at capacity on the Queen Elizabeth Way (QEW) in Toronto, Canada. Again, the results demonstrate that the capacity estimates of the INTEGRATION software are consistent with the field observations, both in terms of absolute values and temporal variability across different days. The error between simulated and field-observed capacities was found to be in the range of 10%. Prior to developing the analytical models, the dissertation presents a systematic analysis of the factors that impact the capacity of freeway weaving sections, which were found to include the length of the weaving section, the weaving ratio (a new parameter developed as part of this research effort), the percentage of heavy vehicles, and the speed limit differential between the freeway and the on- and off-ramps. The study demonstrates that the weaving ratio, currently defined in the 2000 Highway Capacity Manual as the ratio of the lowest weaving volume to the total weaving volume, has a significant impact on the capacity of weaving sections. The study also demonstrates that the weaving ratio is an asymmetric function and thus should reflect the source of the weaving volume. Consequently, a new definition of the weaving ratio is introduced that explicitly identifies the source of the weaving volume. In addition, the study demonstrates that the length of the weaving section has a larger impact on the capacity of weaving sections for short lengths and high traffic demands. Furthermore, the study demonstrates that there is not enough evidence to conclude that the speed limit differential between the mainline freeway and the on- and off-ramps has a significant impact on weaving section capacities. Finally, the study demonstrates that the HCM procedures model heavy-duty vehicle impacts reasonably well. This dissertation presents the development of new capacity models for freeway weaving sections. In these models, a new definition of the weaving ratio that explicitly accounts for the source of the weaving volume is introduced. The proposed analytical models estimate the capacity of weaving sections to within 12% of the simulated data, while the HCM procedures exhibit errors in the range of 114%. Among the newly developed models, the Artificial Neural Network (ANN) models perform slightly better than the statistical models in terms of model prediction errors. However, the sensitivity analysis results demonstrate unrealistic behavior of the ANN models under certain conditions.
Consequently, the use of a statistical model is recommended because it provides a high level of accuracy while responding realistically to changes in the model input parameters (good response to the gradient of the input parameters). / Ph. D.
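A hedged sketch of the weaving-ratio quantities discussed in this abstract; the HCM 2000-style ratio follows the wording above, while the "source-specific" variant is a hypothetical illustration of a source-aware definition, not the exact form developed in the dissertation.

```python
# Minimal sketch of weaving-ratio definitions; the source-specific variant is
# a hypothetical illustration, not the dissertation's exact formulation.
def hcm2000_weaving_ratio(v_ramp_to_freeway, v_freeway_to_ramp):
    """HCM 2000-style ratio: lowest weaving volume over total weaving volume."""
    total = v_ramp_to_freeway + v_freeway_to_ramp
    return min(v_ramp_to_freeway, v_freeway_to_ramp) / total

def source_specific_weaving_ratio(v_ramp_to_freeway, v_freeway_to_ramp):
    """Hypothetical source-aware variant: tracks which movement supplies the
    weaving volume (here the ramp-to-freeway share), so the ratio is no longer
    symmetric in the two weaving movements."""
    total = v_ramp_to_freeway + v_freeway_to_ramp
    return v_ramp_to_freeway / total

# The HCM ratio cannot distinguish these two demand patterns; a source-aware
# ratio can, which is the asymmetry argument made in the abstract.
print(hcm2000_weaving_ratio(400, 1200), source_specific_weaving_ratio(400, 1200))
print(hcm2000_weaving_ratio(1200, 400), source_specific_weaving_ratio(1200, 400))
```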
328

Radar cross-section data encoding based on parametric spectral estimation techniques

Williams, Mary Moulton 16 June 2009 (has links)
Parametric modeling has many applications, including data estimation and interpolation, modern spectral estimation, and data encoding. This research applies parametric modeling to radar cross section data in an attempt to encode it while preserving its spectrum. Traditionally, radar data has been processed with Fourier spectral estimation techniques. These methods not only require large amounts of data for good spectral estimates, but also assume that the unobserved data values are zero, which leads to spectral smearing. Modern spectral estimation methods alleviate these problems by basing the spectral estimate on a parametric model fit to the data set. The spectral estimate is then derived from the parameters of the model. For models which give a good fit to the data, a good spectral estimate can be made. The most common parametric models are the autoregressive moving-average (ARMA), the moving-average (MA), and the autoregressive (AR) models. These models represent filters which, when excited by a white Gaussian noise sequence, produce an output sequence. If the parameters of the models and the noise sequence are selected properly, a desired output data sequence can be modeled. The variance of the white noise is often small compared to the variance of the data sequence. This means that the model parameters plus the noise can be stored with fewer bits than the original data sequence while maintaining the same amount of accuracy in the data. The model parameters and noise sequence can be used to reproduce the original data sequence. Further, if only the spectrum of the data is of interest, only the noise variance and the parameters need to be stored, which could lead to an even greater amount of data reduction. Most high resolution radar data applications require only that the spectrum of the data be preserved, which makes modern spectral estimation appealing. This research project applies parametric modeling and modern spectral estimation to high resolution radar data as a means of encoding it. Several different parametric modeling techniques are evaluated to determine which would be most useful in radar data encoding. The Burg AR parametric model was chosen due to its computational efficiency and its good spectral estimates. The Burg method applied to two radar range profile data sets gave a reduction in data storage by a factor of four. Further encoding was achieved by fitting the Burg AR parameters to a set of basis functions. This produced an additional data reduction by a factor of 36, for a total compression factor of 144. The latter led to some distortion of the high resolution range profiles, yet targets were still sufficiently characterized. / Master of Science
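A hedged sketch of Burg AR fitting and the resulting parametric power spectral density, assuming the standard Burg recursion (this is not the thesis's implementation, and the model order and test signal are illustrative):

```python
# Minimal sketch of Burg AR model fitting and the resulting parametric PSD,
# not the thesis's code; model order and toy signal are illustrative.
import numpy as np

def burg_ar(x, order):
    """Fit AR coefficients a[1..p] (so that x[n] + sum_k a[k] x[n-k] ~ white noise)
    with the Burg recursion; returns (a, noise_variance)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    a = np.zeros(order)
    err = x @ x / n                      # prediction-error power
    f, b = x.copy(), x.copy()            # forward / backward prediction errors
    for m in range(order):
        fseg, bseg = f[m + 1:], b[m:-1]
        k = -2.0 * (fseg @ bseg) / (fseg @ fseg + bseg @ bseg)   # reflection coeff.
        prev = a[:m].copy()
        a[m] = k
        a[:m] = prev + k * prev[::-1]    # Levinson-style coefficient update
        f[m + 1:], b[m + 1:] = fseg + k * bseg, bseg + k * fseg
        err *= 1.0 - k * k
    return a, err

def ar_psd(a, noise_var, n_freq=512):
    """Parametric PSD: sigma^2 / |1 + sum_k a[k] exp(-j 2 pi f k)|^2."""
    freqs = np.linspace(0.0, 0.5, n_freq)
    denom = np.abs(np.polyval(np.r_[a[::-1], 1.0],
                              np.exp(-2j * np.pi * freqs))) ** 2
    return freqs, noise_var / denom

# Toy range-profile-like signal: two real sinusoids in noise.
rng = np.random.default_rng(1)
t = np.arange(256)
x = (np.cos(2 * np.pi * 0.10 * t) + 0.5 * np.cos(2 * np.pi * 0.23 * t)
     + 0.1 * rng.standard_normal(256))
a, s2 = burg_ar(x, order=8)
freqs, psd = ar_psd(a, s2)
print("AR coefficients:", np.round(a, 3), "noise variance:", round(s2, 4))
```

The encoding idea described in the abstract then amounts to storing only the AR coefficients and noise variance (or residuals) instead of the full data record, and reconstructing the spectrum from them.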
329

Field-based study of thrust faults in the Appalachian Valley and Ridge Province, Newport, Virginia

Overby, Kyle Eugene 24 March 2016 (has links)
This study focuses on a series of thrust sheets exposed in the Blacksburg-Pembroke area of the Appalachian Valley and Ridge Province in southwest Virginia. Structures in the hanging wall of the Saltville thrust (Saltville thrust sheet) and the footwall of the Saltville thrust (Narrows thrust sheet) are examined. The first part of this study involves the construction of a series of thrust-transport-parallel 1:24,000 scale geologic cross sections to constrain the subsurface geometry of fault and fold structures within the Saltville and Narrows thrust sheets. The second part of the study involves an outcrop-scale study of geologic structures exposed along a series of road cuts in the footwall of the Saltville thrust, and of the geometric and relative-timing relationships between folding, cleavage formation and thrust faulting. The cross sections show a series of interconnected splay faults branching off of the Saltville thrust and cutting both its hanging wall and footwall. The angle of dip and the magnitude of dip-slip displacement on thrust and splay faults progressively decrease from hinterland to foreland within this fault system, which is referred to as the Spruce Run Mountain-Newport (SRMN) fault system. Bedding within this fault system essentially forms a structural transition zone between the Saltville and Narrows thrust sheets, defining a km-scale fractured synform-antiform fold structure that has many structural attributes usually associated with fault propagation folding. In the road cut outcrops, early meter-scale faults are folded by later foreland-(NW) vergent folds. Although cleavage defines convergent cleavage fans about these folds, subtle obliquities between folds and cleavage indicate that folding post-dates early layer-parallel shortening and associated foreland-vergent thrusting. / Master of Science
330

"Inte i våra kanaler" : Journalistisk innehållsmoderering av kommentarsfält på sociala medier / "Not on our channels" : Journalistic content moderation of comment sections on social media

Lund Hanefjord, Malva January 2024 (has links)
The purpose of this study is to investigate the role and practices of journalism in moderating comment sections on social media. The study addresses the following questions: How do journalists determine which comments to delete and which to keep in the comment section? Why does journalism engage in moderation? What problems and solutions exist? And how do approaches differ between private news media and public service in regulating comment sections? The study is conducted using qualitative interviews as a method, based on six interviews with journalists from both private news media and public service. We have thematically analyzed the empirical material using analytical tools from discourse psychology, dividing it into three prominent interpretive repertoires: the journalist's democratic dilemma, the journalist's role as a content moderator, and the journalist as a protector. The analysis is supported by theory and previous research on the journalist's role in society, the changing role of journalism, the journalist's role as a content moderator, and journalism and participation. The results of the study show that all participating journalists had an editorial policy to rely on. Although the journalists reflected on the democratic factor and the public's right to freedom of speech, they felt they could moderate the comment sections as long as it was supported by their policy. All interviewees believe that moderating the comment sections is necessary to strike a balance between allowing freedom of speech and democratic debate to flow and preventing the comment sections from being overwhelmed by hate and threats, something they found necessary to maintain their legitimacy. Additionally, they wanted to protect their news subjects so that the public would dare to participate in the news without fear of facing hateful comments. Furthermore, it emerged that the journalists we interviewed who work for private news media were more relaxed about moderating the comment sections and removing comments, while those who worked for public service were more cautious and wanted full support from the policy before removing comments.
