601

Development of a Computer Program for the Verification and Validation of Numerical Simulations in Roadside Safety

Mongiardini, Mario 06 May 2010
Roadside safety hardware has traditionally been approved on the basis of full-scale crash tests. In recent years, nonlinear dynamic Finite Element (FE) programs like LS-DYNA, PAM-Crash or ABAQUS Explicit have been widely used to evaluate new or improved designs of roadside hardware. Although a powerful tool, numerical models must be properly verified and validated in order to provide reliable results. Typically, the verification and validation (V&V) process involves a visual comparison of two curves and is based on purely subjective judgment. This research investigated the use of comparison metrics, which are mathematical measures that quantify the level of agreement between two curves, for comparing simulation and experimental outcomes in an objective manner. A computer program was developed in Matlab® to automatically evaluate most of the comparison metrics available in the literature. The software can be used to preprocess and compare either single or multiple channels, guiding the user through friendly graphical interfaces. Acceptance criteria suitable for representing the typical scatter of experimental tests in roadside safety were determined by comparing ten essentially identical full-scale vehicle crash tests. The robustness and reliability of the implemented method were tested by comparing the qualitative scores of the computed metrics for a set of velocity waveforms against the corresponding subjective judgments of experts. Moreover, the implemented method was applied to two real validation cases, involving a numerical model in roadside safety and a model in biomechanics, respectively. Ultimately, the program proved to be an effective tool for assessing the similarities and differences between two curves and, hence, for assisting engineers and analysts in performing verification and validation activities objectively.
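Among the comparison metrics discussed in this literature, the Sprague & Geers magnitude and phase metrics are a common choice for quantifying the agreement between a test curve and a simulation curve. Below is a minimal Python sketch of that metric pair; the function name and equal-length sampling assumption are mine, not taken from the thesis software.

```python
import math

def sprague_geers(reference, candidate):
    """Sprague & Geers magnitude (M), phase (P) and combined (C) metrics
    for two equal-length sampled curves; perfect agreement gives 0, 0, 0."""
    sxx = sum(r * r for r in reference)
    syy = sum(c * c for c in candidate)
    sxy = sum(r * c for r, c in zip(reference, candidate))
    magnitude = math.sqrt(syy / sxx) - 1.0
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_arg = max(-1.0, min(1.0, sxy / math.sqrt(sxx * syy)))
    phase = math.acos(cos_arg) / math.pi
    return magnitude, phase, math.sqrt(magnitude ** 2 + phase ** 2)
```

For identical curves all three values are zero; a pure amplitude scaling moves only M while a pure time shift moves only P, which is why this pair is used to separate magnitude error from shape error.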
602

A MATLAB Program to implement the band-pass method for discovering relevant scales in surface roughness measurement

Agunwamba, Chukwunomso 14 January 2010
This project explores how to use band-pass filtering, with a variety of filters, to filter both two- and three-dimensional surface data. The software developed collects these filtering methods and makes them available to support a larger project; it is used to automate the filtering procedure. This paper goes through the workflow of the program, explaining how each filter was implemented. It then demonstrates how the filters work by applying them to surface data used to test the correlation between friction and roughness [Berglund and Rosen, 2009]. It also explains the mathematical development of the filtering procedures as obtained from the literature.
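One standard way to realize a band-pass on a measured profile is to subtract two smoothed copies with different cutoff wavelengths, so only wavelengths between the two cutoffs survive. The thesis software's actual filters are not reproduced here; the sketch below uses a generic Gaussian profile filter of the kind common in surface metrology, and the cutoffs, spacing and edge handling are illustrative assumptions.

```python
import math

def gaussian_smooth(profile, cutoff, spacing):
    """Zero-order Gaussian profile filter: a weighted moving average whose
    50%-transmission wavelength is `cutoff` (edges handled by clamping)."""
    alpha = math.sqrt(math.log(2.0) / math.pi)
    half = int(3 * cutoff / spacing)
    weights = [math.exp(-math.pi * (k * spacing / (alpha * cutoff)) ** 2)
               for k in range(-half, half + 1)]
    total = sum(weights)
    weights = [w / total for w in weights]
    n = len(profile)
    smoothed = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(weights):
            idx = min(max(i + j - half, 0), n - 1)  # clamp at the edges
            acc += w * profile[idx]
        smoothed.append(acc)
    return smoothed

def band_pass(profile, short_cutoff, long_cutoff, spacing):
    """Keep only wavelengths between the two cutoffs by subtracting two
    Gaussian-smoothed copies of the profile."""
    low = gaussian_smooth(profile, long_cutoff, spacing)    # long waves only
    mid = gaussian_smooth(profile, short_cutoff, spacing)   # mid + long waves
    return [m - l for m, l in zip(mid, low)]
```

A flat profile passes through unchanged by both smoothers, so its band-pass output is identically zero, which is a quick sanity check on the normalization.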
603

An investigation into the use of Cross Correlation Velocimetry

Rockwell, Scott R 12 January 2010
This study analyses the applicability of cross-correlating the signals from two thermocouples to obtain simultaneous measurements of velocity, integral turbulent length scales, and temperature in fire-induced turbulent flows. The sensor is based on the classical Taylor hypothesis, which states that turbulent structures retain their shape and identity over a small period of time. If the sampling rate is fast enough that both thermocouples are sampled within this time duration, a turbulent eddy can be used as a tracer to measure flow velocity and its fluctuation. Experiments performed in two laboratory-scale devices, a heated turbulent jet and a variable-diameter natural gas burner, show that sampling rate, sampling time, and angular orientation with respect to the bulk flow are the most sensitive parameters in velocity measurements. Flows with Reynolds numbers between 300 (u = 0.1 m/s) and 6000 (u = 2.0 m/s) were tested.
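The core of the technique is simple: if the same eddy passes both junctions, the downstream signal is a time-shifted copy of the upstream one, and the lag that maximizes their cross-correlation gives the transit time. A pure-Python sketch under those assumptions (the sensor spacing, sampling interval and lag search range are illustrative, not values from the experiments):

```python
import math

def cross_correlation_velocity(sig_a, sig_b, dt, separation):
    """Estimate convection velocity from two sensor signals a known
    distance apart: find the lag that maximizes their cross-correlation
    and divide the separation by the corresponding transit time."""
    n = len(sig_a)
    mean_a = sum(sig_a) / n
    mean_b = sum(sig_b) / n
    a = [x - mean_a for x in sig_a]
    b = [x - mean_b for x in sig_b]
    best_lag, best_corr = 1, float("-inf")
    for lag in range(1, n // 2):  # downstream sensor sees the eddy later
        corr = sum(a[i] * b[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return separation / (best_lag * dt)
```

In practice the abstract's sensitivity findings map directly onto this sketch: `dt` (sampling rate) sets the velocity resolution, the signal length sets the correlation quality, and misalignment with the bulk flow changes the effective `separation`.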
604

Smoke Movement Analysis (Smoke Transport Within a Corridor)

Cummings, W. Mark 18 November 2004
A series of full-scale fire tests was performed, using a fire compartment and an adjoining long (30+ m) corridor, as part of an effort to quantify the dynamics associated with smoke transport within a corridor. The tests were performed at the U.S. Coast Guard Research and Development Center's Fire and Safety Test Detachment in Mobile, Alabama, on board the Test Vessel Mayo Lykes. The resulting empirical data were analyzed in an effort to develop a method that could be used to estimate the movement of smoke within a corridor. The objective is to eventually incorporate this method into a smoke movement analysis "tool" that could, in turn, be used in conjunction with a fire safety analysis methodology previously developed by the U.S. Coast Guard, the Ship Fire Safety Engineering Methodology (SFSEM). The goal is to develop a smoke movement analysis "module" that can be utilized alongside the SFSEM when conducting an overall fire safety analysis of a ship. Of particular interest is the speed at which smoke propagates along the length of the corridor. The focus of a smoke movement module would be life safety: a conservative assumption is made that if smoke is present in sufficient quantity to fill a corridor, the corridor is considered untenable and unavailable as a means of egress. No attempt is made to address toxicity or density issues associated with smoke. This analysis developed correlations for the corridor smoke velocity, both as a function of the heat release rate of the associated fire and as a function of the upper-layer temperatures within the fire compartment. Problems with the data collection and the narrow range of fire sizes used had a detrimental impact on confidence in the correlation based on heat release rate. The data do appear to confirm the results of previous efforts, which indicated a weak relationship between heat release rate and smoke velocity, on the order of the one-third to one-fourth power.
The temperature data were less problematic. This correlation shows promise for potential use with both the SFSEM and other existing computer models and routines. However, unlike previous studies of this relationship, these data suggest that the velocity-temperature relationship is linear rather than a square-root function. The test data were compared to predictive results from the CORRIDOR routine within FPETOOL. In general, the CORRIDOR results correlated reasonably well with the test data. Both the wave depth and the temperature loss within the wave, as functions of distance, were consistently over-predicted. The velocity results were mixed, but were generally within 20 percent of the test data. The results of this study show promise with respect to developing a correlation that can be used as a method for predicting smoke movement in a corridor. However, given the questionable nature of some of the data estimates, the limited number of tests, and the narrow range of fire sizes used, additional test data will be required to validate and refine the correlation(s) suggested by this work.
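A relationship like "smoke velocity scales with the one-third to one-fourth power of heat release rate" is typically recovered by a least-squares line fit in log-log space, where the slope is the exponent. A small sketch of that step; the data values and variable names below are illustrative, not measurements from these tests.

```python
import math

def power_law_fit(q_values, v_values):
    """Ordinary least squares in log-log space for v = c * q**n.
    Returns (c, n); n is the slope of log v against log q."""
    xs = [math.log(q) for q in q_values]
    ys = [math.log(v) for v in v_values]
    k = len(xs)
    mx = sum(xs) / k
    my = sum(ys) / k
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), slope
```

The narrow range of fire sizes noted in the abstract matters precisely here: with little spread in log q, the slope estimate (the exponent) has a wide confidence interval.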
605

Poder de mercado, escala e a produtividade da indústria brasileira entre 1994 e 2007 / Market power, scale and the productivity of Brazilian industry between 1994 and 2007

Clezar, Rômulo Viana 12 March 2010
This work studies the time path of productivity in the Brazilian manufacturing sector and its behavior in the face of structural and cyclical shocks over the period 1994 to 2007. It presents a model capable of estimating market power and the scale of production, which makes it possible to measure total factor productivity under imperfect competition. The analysis for industry as a whole indicates a high degree of market power up to 1999 and a significant increase after the change of exchange-rate regime. Also for industry as a whole, decreasing returns to scale are identified between 1996 and 1998, increasing returns from 1999 to 2002, and again decreasing returns between 2003 and 2007. However, it was not possible to identify the sectors in which the structural breaks were significant. Productivity showed high growth in 1999 and negative rates after 2003.
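Measuring TFP under imperfect competition typically amounts to a Hall-style correction of the Solow residual: input growth is weighted by markup-scaled revenue shares and by a returns-to-scale parameter. The sketch below is that standard accounting identity, not necessarily the exact specification estimated in the thesis.

```python
def tfp_growth(dy, dl, dk, labor_share, markup=1.0, scale=1.0):
    """Hall-style TFP growth under imperfect competition.

    dy, dl, dk are log growth rates of output, labor and capital;
    labor_share is labor's revenue share. With markup = scale = 1 this
    collapses to the ordinary Solow residual."""
    return dy - markup * labor_share * dl - (scale - markup * labor_share) * dk
```

The point of the correction is visible in the defaults: treating a firm with market power (markup > 1) as if markup = 1 misattributes part of the input contribution to measured productivity.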
606

Microscopic forces and flows due to temperature gradients

Ganti, Raman S. January 2018 (has links)
Nano-scale fluid flow is unlike transport on the macro-scale. Pressure gradients typically dominate effects on a large scale while thermal gradients contribute negligibly to the motion of fluid. The situation entirely reverses on the nano-scale. At a microscopic level, flows induced by thermal gradients are caused by forces that act on atoms or molecules near an interface. These thermo-osmotic forces cannot, at present, be derived analytically or measured experimentally. Clearly, it would be useful to calculate these forces via molecular simulations, but direct approaches fail because in the steady-state, the average force per particle vanishes, as the thermo-osmotic force is balanced by a gradient in shear stress. In our journey to indirectly calculate the osmotic force, we met another unknown in the field of molecular theory at interfaces: the microscopic pressure tensor. The latter is an open problem since the microscopic pressure near an interface is not uniquely defined. Using local thermodynamics theories, we relate the thermo-osmotic force to the gradient of the microscopic pressure tensor. Yet, because the pressure is not uniquely defined, we arrive at multiple answers for the thermo-osmotic force, where at most one can be correct. To resolve the latter puzzle, we develop a direct, non-equilibrium simulation protocol to measure the thermo-osmotic force, whereby a thermal gradient is imposed and the osmotic force is measured by eliminating the shear force. Surprisingly, we find that the osmotic force cannot be derived from the gradient of well-known microscopic pressure expressions. We, therefore, derive a thermodynamic expression that gets close. In this work, we report the first, direct calculation of the thermo-osmotic force while simultaneously showing that standard microscopic pressure expressions fail to predict pressure gradients.
607

Querying big data with bounded data access

Cao, Yang January 2016 (has links)
Query answering over big data is cost-prohibitive. A linear scan of a dataset D may take days with a solid-state device if D is of PB size, and years if D is of EB size. In other words, polynomial-time (PTIME) algorithms for query evaluation are already infeasible on big data. To tackle this, we propose querying big data with bounded data access, such that the cost of query evaluation is independent of the scale of D. First, we propose a class of boundedly evaluable queries. A query Q is boundedly evaluable under a set A of access constraints if for any dataset D that satisfies the constraints in A, there exists a subset DQ ⊆ D such that (a) Q(DQ) = Q(D), and (b) the time for identifying DQ from D, and hence the size |DQ| of DQ, are independent of |D|. That is, we can compute Q(D) by accessing a bounded amount of data no matter how big D grows. We study the problem of deciding whether a query is boundedly evaluable under A. It is known that the problem is undecidable for FO without access constraints. We show that, in the presence of access constraints, it is decidable in 2EXPSPACE for positive fragments of FO queries, but is already EXPSPACE-hard even for CQ. To handle the undecidability and high complexity of the analysis, we develop an effective syntax for boundedly evaluable queries under A, referred to as queries covered by A, such that (a) any boundedly evaluable query under A is equivalent to a query covered by A, (b) each covered query is boundedly evaluable, and (c) it is efficient to decide whether Q is covered by A. On top of a DBMS, we develop practical algorithms for checking whether queries are covered by A and for generating bounded plans when they are. For queries that are not boundedly evaluable, we extend bounded evaluability to resource-bounded approximation and bounded query rewriting using views.
(1) Resource-bounded approximation is parameterized with a resource ratio a ∈ (0,1], such that for any query Q and dataset D, it computes approximate answers with an accuracy bound h by accessing at most a|D| tuples. It is based on extended access constraints and a new accuracy measure. (2) Bounded query rewriting tackles the problem by incorporating bounded evaluability with views, such that the queries can be exactly answered by accessing cached views and a bounded amount of data in D. We study the problem of deciding whether a query has a bounded rewriting, establish its complexity bounds, and develop effective syntax for FO queries with a bounded rewriting. Finally, we extend bounded evaluability to graph pattern queries, by extending access constraints to graph data. We characterize bounded evaluability for subgraph and simulation patterns and develop practical algorithms for associated problems.
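As a toy illustration of the idea (not the thesis's algorithms): an access constraint of the form "each key matches at most N tuples, retrievable via an index" lets a selection over k keys be answered by reading at most N·k tuples, independent of how large the dataset grows.

```python
from collections import defaultdict

class IndexedRelation:
    """Toy relation readable only through a bounded index: each key maps
    to at most `bound` tuples and full scans are never performed."""

    def __init__(self, tuples, key, bound):
        self.bound = bound
        self.index = defaultdict(list)
        for t in tuples:
            self.index[t[key]].append(t)
        if any(len(ts) > bound for ts in self.index.values()):
            raise ValueError("access constraint violated")

    def fetch(self, key_value):
        """Index lookup: returns at most `bound` tuples."""
        return self.index.get(key_value, [])

def bounded_selection(relation, key_values):
    """Answer a selection over the given keys via index lookups only,
    touching at most bound * len(key_values) tuples, independent of |D|."""
    answer = []
    for kv in key_values:
        answer.extend(relation.fetch(kv))
    return answer
```

The cost of `bounded_selection` depends only on the number of keys queried and the bound, which is the essence of a bounded query plan.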
608

Superstatistics and symbolic dynamics of share price returns on different time scales

Xu, Dan January 2017 (has links)
Share price returns on different time scales can be well modeled by superstatistical dynamics. We investigate which type of superstatistics is most suitable to describe share price dynamics on various time scales. It is shown that while chi-square superstatistics works well on a time scale of days, on the much smaller time scale of minutes the price changes are better described by lognormal superstatistics. The system dynamics thus exhibits a transition from lognormal to chi-square superstatistics as a function of time scale. We discuss a more general model that interpolates between the two statistics and fits the observed data very well. We also present results on correlation functions of the extracted superstatistical volatility parameter, which exhibit exponential decay for returns on large time scales, whereas for returns on small time scales there are long-range correlations and power-law decays. We also apply the symbolic dynamics technique from dynamical systems theory to analyse the coarse-grained evolution of share price returns. A nontrivial spectrum of Rényi entropies is found. We study how the spectrum depends on the time scale of the returns, the sector of stocks considered, and the number of symbols used in the symbolic description. Overall, our analysis confirms that in symbol space the transition probabilities of observed share price returns depend on the entire history of previous symbols, emphasizing the need for a model of share price evolution based on non-Markovian stochastic processes. Our method allows quantitative comparisons of entirely different complex systems; for example, the statistics of coarse-grained share price returns using 4 symbols can be compared with that of other complex systems.
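In the superstatistical picture, returns are locally Gaussian with a slowly varying inverse variance β, and the analysis starts by extracting β window by window and then asking which distribution (chi-square or lognormal) fits its histogram. A minimal sketch of the extraction step; the window length and the simple variance estimator are assumptions, not the thesis's exact procedure.

```python
def local_beta(returns, window):
    """Extract the superstatistical parameter beta (inverse variance)
    from non-overlapping windows of a return series."""
    betas = []
    for start in range(0, len(returns) - window + 1, window):
        chunk = returns[start:start + window]
        mean = sum(chunk) / window
        var = sum((x - mean) ** 2 for x in chunk) / window
        if var > 0:
            betas.append(1.0 / var)
    return betas
```

The window must be long enough to estimate a variance but short compared with the time scale on which volatility changes; the distribution of the resulting β values is then compared against the chi-square and lognormal candidates.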
609

Large scale pattern detection in videos and images from the wild

Henderson, Craig Darren Mark January 2017 (has links)
Pattern detection is a well-studied area of computer vision, but current methods remain unstable on images of poor quality. This thesis describes improvements over contemporary methods in the fast detection of unseen patterns in a large corpus of videos that vary tremendously in colour and texture definition, captured "in the wild" by mobile devices and surveillance cameras. We focus on three key areas of this broad subject. First, we identify consistency weaknesses in existing techniques when processing an image and its horizontally reflected (mirror) image. This is important in police investigations, where subjects change their appearance to try to avoid recognition, and we propose that invariance to horizontal reflection should be more widely considered in image description and recognition tasks. We observe the behaviour of online deep learning systems in this respect and provide a comprehensive assessment of 10 popular low-level feature detectors. Second, we develop simple and fast algorithms that combine to provide memory- and processing-efficient feature matching. These involve static-scene elimination in the presence of noise and on-screen time indicators, a blur-sensitive feature detection that finds a greater number of corresponding features in images of varying sharpness, and a combinatorial texture-and-colour feature matching algorithm that matches features when either attribute may be poorly defined. A comprehensive evaluation is given, showing improvements over existing feature correspondence methods. Finally, we study random decision forests for pattern detection. A new method of indexing patterns in video sequences is devised and evaluated. We automatically label positive and negative image training data, reducing a task of unsupervised learning to one of supervised learning, and devise a node split function that is invariant to mirror reflection and rotation through 90-degree angles.
A high dimensional vote accumulator encodes the hypothesis support, yielding implicit back-projection for pattern detection.
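A split test invariant to mirroring and 90-degree rotations can be obtained by canonicalizing each patch over the eight symmetries of the square before computing the test. The thesis's actual split function is not reproduced here; the sketch below shows only the invariance idea, using lexicographically smallest variant as the canonical form.

```python
def dihedral_canonical(patch):
    """Canonicalize a square patch over the 8 symmetries of the square
    (90-degree rotations and mirror reflections) by picking the
    lexicographically smallest variant; any feature computed on the
    canonical form is then invariant to those transforms."""
    def rot(p):  # rotate 90 degrees clockwise
        return [list(row) for row in zip(*p[::-1])]

    def mirror(p):  # horizontal reflection
        return [row[::-1] for row in p]

    variants = []
    current = [list(row) for row in patch]
    for _ in range(4):
        variants.append(current)
        variants.append(mirror(current))
        current = rot(current)
    return min(variants)
```

Because the eight variants of any transformed patch form the same set, every member of the orbit maps to the same canonical patch, so a split function evaluated on it cannot distinguish a patch from its mirror or rotations.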
610

Extensão, gravidade e fatores associados à hipersensibilidade dentinária : estudo transversal / Extent, severity and factors associated with dentin hypersensitivity : a cross-sectional study

Silveira, Carina Folgearini January 2016 (has links)
Dentin hypersensitivity (DH) is described in the literature as an acute, short-duration pain caused by thermal, tactile, osmotic, chemical or evaporative stimuli in a region of dentin exposure, and not attributable to pain caused by a defect or disease of dental origin. As this condition is increasingly reported by patients, the aim of this study was to evaluate the extent and severity of DH, as well as its association with periodontal indicators [plaque index (PI), gingival index (GI) and gingival recession (GR)], in a sample of 132 individuals with DH diagnosed by a thermal/evaporative (air blast) stimulus combined with the Schiff scale. Descriptive analyses were performed, and the mean percentages of PI and GI were estimated considering six sites per tooth and also three buccal sites. In addition, the data were analyzed with univariate and multivariate linear regression models. The mean age was 30.66 ± 10.33 years, with females more affected by DH (83.3%). DH was associated with gingival recession: the prevalence of teeth with gingival recession was 17.17%, while the prevalence of recession among teeth with DH was 77.1%, and the mean buccal recession was 1.58 ± 0.87 mm. When considering teeth with 1 mm or more of recession, a mean of 4.48 teeth per patient presented this condition. A higher mean number of teeth with recession and lower mean PI scores at the buccal sites were significantly associated (p < 0.001) with the mean number of teeth with DH. The severity of DH was significantly influenced by a higher mean gingival recession and was greater in female patients. Teeth with greater mean recession, higher mean PI scores and lower GI scores at the buccal sites presented significantly higher values on the Schiff scale (p < 0.05). In view of these findings, DH patients have a large number of teeth affected by this condition, and these teeth present gingival recession. Moreover, in teeth with DH, severity is associated with the presence of more plaque and better gingival condition, in addition to the extent of the recession.
