  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Fracture behavior of wood (hinoki) under combined stress states (effects of loading method and loading path)

山崎, 真理子 (YAMASAKI, Mariko); 佐々木, 康寿 (SASAKI, Yasutoshi) 08 1900 (has links)
No description available.
152

Proposal of a crack stability/instability criterion based on crack energy density, and the physical positioning of conventional criteria

渡辺, 勝彦 (Watanabe, Katsuhiko); 畔上, 秀幸 (Azegami, Hideyuki) 04 1900 (has links)
No description available.
153

Parametric classification and variable selection by the minimum integrated squared error criterion

January 2012 (has links)
This thesis presents a robust solution to the classification and variable selection problem when the dimension of the data, or number of predictor variables, may greatly exceed the number of observations. When faced with the problem of classifying objects given many measured attributes, the goal is to build a model that makes the most accurate predictions using only the most meaningful subset of the available measurements. The introduction of ℓ1-regularized model fitting has inspired many approaches that perform model fitting and variable selection simultaneously. If parametric models are employed, the standard approach is some form of regularized maximum likelihood estimation. While this is an asymptotically efficient procedure under very general conditions, it is not robust: outliers can negatively impact both estimation and variable selection, and they can be very difficult to identify as the number of predictor variables becomes large. Minimizing the integrated squared error, or L2 error, while less efficient, has been shown to generate parametric estimators that are robust to a fair amount of contamination in several contexts. In this thesis, we present a novel robust parametric regression model for the binary classification problem based on the L2 distance: the logistic L2 estimator (L2E). To perform simultaneous model fitting and variable selection among correlated predictors in the high-dimensional setting, an elastic net penalty is introduced. A fast computational algorithm for minimizing the elastic-net-penalized logistic L2E loss is derived, and results on the algorithm's global convergence properties are given. Through simulations we demonstrate the utility of the penalized logistic L2E at robustly recovering sparse models from high-dimensional data in the presence of outliers and inliers. Results on real genomic data are also presented.
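The robustness claim for minimum-distance estimation can be illustrated outside the thesis's own setting. The sketch below is an illustration, not the author's logistic L2E: it compares the plain L2E location estimate for a one-dimensional Gaussian model against the maximum-likelihood estimate (the sample mean) on contaminated data. All names and the contamination setup are invented for the example.

```python
import numpy as np

def l2e_criterion(mu, x, sigma=1.0):
    """L2E objective for a N(mu, sigma^2) location model (sigma fixed):
    the integral of f_mu^2 minus (2/n) * sum of f_mu(x_i); for a Gaussian
    density the integral term equals 1 / (2 * sigma * sqrt(pi))."""
    phi = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return 1.0 / (2 * sigma * np.sqrt(np.pi)) - 2.0 * phi.mean()

rng = np.random.default_rng(0)
# 50 clean points near 0 plus 10 gross outliers at 50
x = np.concatenate([rng.normal(0.0, 1.0, 50), np.full(10, 50.0)])

grid = np.linspace(-5.0, 60.0, 6501)
mu_l2e = grid[np.argmin([l2e_criterion(m, x) for m in grid])]  # stays near 0
mu_mle = x.mean()  # Gaussian MLE: dragged toward the outliers
```

With one sixth of the sample contaminated, the sample mean lands far from the bulk of the data while the L2E minimizer stays near it, which is the robustness property the abstract appeals to.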
154

Constitutive Modelling of High Strength Steel

Larsson, Rikard January 2007 (has links)
This report is a review of aspects of constitutive modelling of high strength steels. The aspects presented include basic crystallography of steel, martensite transformation, and thermodynamics and plasticity from a phenomenological point of view. The phenomenon known as mechanical twinning is reviewed, and the properties of a new material type called TWIP steel are briefly presented. The focus is on phenomenological models and methods, but an overview of multiscale methods is also given.
155

Two-Dimensional Anisotropic Cartesian Mesh Adaptation for the Compressible Euler Equations

Keats, William A. January 2004 (has links)
Simulating transient compressible flows involving shock waves presents challenges to the CFD practitioner in terms of the mesh quality required to resolve discontinuities and prevent smearing. This document discusses a novel two-dimensional Cartesian anisotropic mesh adaptation technique implemented for transient compressible flow. This technique, originally developed for laminar incompressible flow, is efficient because it refines and coarsens cells using criteria that consider the solution in each of the cardinal directions separately. In this document the method will be applied to compressible flow. The procedure shows promise in its ability to deliver good quality solutions while achieving computational savings. Transient shock wave diffraction over a backward step and shock reflection over a forward step are considered as test cases because they demonstrate that the quality of the solution can be maintained as the mesh is refined and coarsened in time. The data structure is explained in relation to the computational mesh, and the object-oriented design and implementation of the code is presented. Refinement and coarsening algorithms are outlined. Computational savings over uniform and isotropic mesh approaches are shown to be significant.
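The directional refinement idea described above can be caricatured in a few lines. The sketch below is a hypothetical, greatly simplified stand-in for the thesis's criterion: it flags each cell for refinement in x and y independently using undivided solution differences, which is the general flavour of anisotropic Cartesian adaptation but not the author's actual method.

```python
import numpy as np

def refine_flags(u, tol):
    """Flag cells for refinement in x and y independently, based on the
    undivided difference of the cell-average solution u in each cardinal
    direction. Cells on both sides of a large jump are flagged."""
    dux = np.abs(np.diff(u, axis=1))  # variation between x-neighbours
    duy = np.abs(np.diff(u, axis=0))  # variation between y-neighbours
    refine_x = np.zeros(u.shape, dtype=bool)
    refine_y = np.zeros(u.shape, dtype=bool)
    refine_x[:, :-1] |= dux > tol
    refine_x[:, 1:] |= dux > tol
    refine_y[:-1, :] |= duy > tol
    refine_y[1:, :] |= duy > tol
    return refine_x, refine_y

# A discontinuity aligned with the y-axis triggers x-refinement only,
# which is exactly the saving that anisotropic adaptation exploits.
u = np.zeros((4, 6))
u[:, 3:] = 1.0
rx, ry = refine_flags(u, 0.5)
```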
156

Determining equation of state binary interaction parameters using K- and L-points

Mushrif, Samir Hemant 01 November 2004 (has links)
The knowledge of the phase behaviour of heavy oils and bitumen is important in order to understand the phenomenon of coke formation. Computation of their phase behaviour, using an equation of state, faces problems due to their complex composition. Hence n-alkane binaries of polyaromatic hydrocarbons are used to approximate the phase behaviour of heavy oils and bitumen. Appropriate values of binary interaction parameters are required for an equation of state to predict the correct phase behaviour of these model binary fluids. This thesis deals with fitting of the binary interaction parameter for the Peng-Robinson equation of state using landmarks in the binary phase space such as K- and L-points. A K- or an L-point is a point in the phase space where two phases become critical in the presence of another phase in equilibrium. An algorithm to calculate K- and L-points using an equation of state was developed. The variation of calculated K- and L-points with respect to the binary interaction parameter was studied and the results were compared with the experimental data in the literature. The interaction parameter was then fitted using the best match of experimental results with the computed ones. The binary interaction parameter fitted using a K- or an L-point was then used to predict the P-T projection of the binary system in phase space. Also, the qualitative effect of the binary interaction parameter on the P-T projection was studied. A numerical and thermodynamic study of the algorithm was done. Numerical issues like the initial guesses, convergence criterion and numerical techniques were studied and the thermodynamic constraints in the generalization of the algorithm are discussed. It was observed that the binary interaction parameter not only affects the location of K- and L-points in the phase space but also affects the calculation procedure of K- and L-points. 
Along with the propane binaries of polyaromatic hydrocarbons, K- and L-points were also calculated for systems such as methane binaries of higher n-alkanes and the ethane + ethanol binary. In the case of the ethane + ethanol system, K- and L-points matching the experimental results were calculated with different values of the binary interaction parameter, but the Peng-Robinson equation of state was unable to predict the correct type of phase behaviour using any value of the binary interaction parameter. For methane + n-alkane systems, the Peng-Robinson equation of state was able to predict the correct type of phase behaviour with the binary interaction parameter fitted using K- and/or L-points; the systems studied were the methane binaries of n-pentane, n-hexane and n-heptane. For the propane binaries of polyaromatic hydrocarbons, no value of the binary interaction parameter was able to predict the K-point with good accuracy, and the value that gave the best possible K-point results failed to predict the correct type of phase behaviour. The binary interaction parameter fitted using the P-T projection enabled the Peng-Robinson equation of state to give a qualitative match for the high-pressure complex phase behaviour of these systems. Solid phase equilibria were not taken into consideration.
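To make the role of the binary interaction parameter concrete, the sketch below implements the standard Peng-Robinson equation of state with van der Waals one-fluid mixing rules, where k_ij enters the cross term of the mixture energy parameter. The critical constants are textbook values for methane and n-pentane; the k_ij used in the demo is an assumed illustrative value, and the K-/L-point algorithm itself is not reproduced here.

```python
import numpy as np

R = 8.314  # J/(mol K)

def pr_a_b(Tc, Pc, omega, T):
    """Pure-component Peng-Robinson parameters a(T) and b."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1 + kappa * (1 - np.sqrt(T / Tc))) ** 2
    a = 0.45724 * R ** 2 * Tc ** 2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    return a, b

def mixture_a_b(x, a, b, kij):
    """Van der Waals one-fluid mixing rules; kij is where the binary
    interaction parameter enters the model."""
    n = len(x)
    a_mix = sum(x[i] * x[j] * np.sqrt(a[i] * a[j]) * (1 - kij[i][j])
                for i in range(n) for j in range(n))
    b_mix = sum(x[i] * b[i] for i in range(n))
    return a_mix, b_mix

def pr_Z(T, P, a_mix, b_mix):
    """Compressibility factors: physically meaningful real roots (Z > B)
    of the Peng-Robinson cubic in Z."""
    A = a_mix * P / (R * T) ** 2
    B = b_mix * P / (R * T)
    coeffs = [1.0, -(1 - B), A - 3 * B ** 2 - 2 * B, -(A * B - B ** 2 - B ** 3)]
    roots = np.roots(coeffs)
    return sorted(z.real for z in roots if abs(z.imag) < 1e-9 and z.real > B)

# illustrative: methane + n-pentane at 300 K and 10 bar, k12 = 0.02 (assumed)
a1, b1 = pr_a_b(190.6, 45.99e5, 0.011, 300.0)
a2, b2 = pr_a_b(469.7, 33.70e5, 0.252, 300.0)
am, bm = mixture_a_b([0.5, 0.5], [a1, a2], [b1, b2], [[0.0, 0.02], [0.02, 0.0]])
zs = pr_Z(300.0, 10e5, am, bm)
```

Fitting k_ij, as in the thesis, amounts to adjusting the off-diagonal entries of `kij` until computed landmarks such as K- and L-points match experiment.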
157

Neural Correlates of Speed-Accuracy Tradeoff: An Electrophysiological Analysis

Heitz, Richard Philip 29 March 2007 (has links)
Recent computational models and physiological studies suggest that simple, two-alternative forced-choice decision making can be conceptualized as the gradual accumulation of sensory evidence. Accordingly, information is sampled over time from a sensory stimulus, giving rise to an activation function. A response is emitted when this function reaches a criterion level of activity. Critically, the phenomenon known as speed-accuracy tradeoff (SAT) is modeled as a shift in the response boundaries (criterion). As speed stress increases and criterion is lowered, the information function travels less distance before reaching threshold. This leads to faster overall responses, but also an increase in error rate, given that less information is accumulated. Psychophysiological data using EEG and single-unit recordings from monkey cortex suggest that these accumulator models are biologically plausible. The present work is an effort to strengthen this position. Specifically, it seeks to demonstrate a neural correlate of criterion and its relationship to behavior. To do so, subjects performed a letter discrimination paradigm under three levels of speed stress. At the same time, the electroencephalogram (EEG) was used to derive a measure known as the lateralized readiness potential (LRP), which is known to reflect ongoing motor preparation in motor cortex. In Experiment 1, the amplitude of the LRP was related to speed stress: as subjects were forced to respond more quickly, less information was accumulated before a response was made. In other words, the criterion was lowered. These data are complicated by Experiment 2, which found that there are boundary conditions for this effect to obtain.
158

Comparison and Oscillation Theorems for Second Order Half-Linear Differential Equations

Hsiao, Wan-ling 07 June 2012 (has links)
This thesis is a short survey of comparison theorems and oscillation theorems for the second-order half-linear equation [c(x)u'^{(p-1)}]' + a(x)u^{(p-1)} = 0, where u^{(p-1)} = |u|^{p-2}u. Some examples are also given. The above equation is said to be oscillatory (O) if there exists a nontrivial solution having an infinite number of zeros in (0,∞), and non-oscillatory (NO) otherwise. Oscillation theorems help to determine whether an equation is (O) or (NO). The comparison and oscillation theorems give information on the number and position of zeros in (0,∞) of a nontrivial solution of the above equation. Materials in this thesis originate from the papers of Li-Yeh and the monograph of Dosly and Rehak, but the Reid-type comparison theorem is new.
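For orientation, the special case p = 2 recovers the classical linear theory. This remark is standard background (e.g. in the Dosly-Rehak monograph), not a result of the thesis:

```latex
% With the notation \varphi(u) := u^{(p-1)} = |u|^{p-2}u, taking p = 2
% gives \varphi(u) = u, so the half-linear equation
\[ [c(x)\,\varphi(u')]' + a(x)\,\varphi(u) = 0 \]
% reduces to the linear Sturm--Liouville equation
\[ [c(x)\,u']' + a(x)\,u = 0, \]
% for which the classical Sturm comparison and oscillation theorems hold.
% A standard oscillatory half-linear example is c \equiv 1, a \equiv p-1:
\[ (\varphi(u'))' + (p-1)\,\varphi(u) = 0, \]
% whose solutions are generated by the generalized sine function \sin_p,
% which has infinitely many zeros, so this equation is of type (O).
```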
159

Evaluation of Sample Elementary Mathematics Textbooks for the Second Learning Stage of the Grade 1-9 Curriculum

Mai, Chang-jen 23 July 2004 (has links)
The purpose of this study is to investigate the relationships between the second-learning-stage competence indicators for mathematics and mathematics textbooks, to build a criterion for choosing and evaluating mathematics textbooks, and to collect teachers' opinions on the use of second-learning-stage mathematics textbooks. The study first analyzes mathematics textbooks based on the Education Department's definition of mathematics competence indicators. The sample textbooks analyzed are those published by "Kang Shaun", "Han Lin" and "Nan I"; these samples are used to understand the competence indicators and structures of the three publishers' mathematics textbooks. Furthermore, the researcher examines the connection of learning materials between the new and old curricula (the 87 and 92 published versions) and offers references to textbook publishers and teachers. Next, a standard evaluation table is built for teachers to use when choosing mathematics textbooks; the scale is based on a specialist questionnaire in order to improve its validity. Finally, the mathematics textbook evaluation scale is applied to the three sample textbooks. For each publisher, 50 teachers were randomly sampled from each of the fourth and fifth grades, for a total of 300 teachers participating in the questionnaire survey. All teachers' opinions were gathered and statistically analyzed to provide references for the mathematics textbook publishers. Conclusions: 1. All three samples accomplish 100 percent of the indicator goals. 2. In all three samples, the numeral-and-quantity indicators of [Quantity and Measure] appear most often. 3. The order of appearance frequency of competence indicators in the three samples is: numeral and quantity; graphic and space; algebra; statistics and probability. 4. A criterion for choosing and evaluating mathematics textbooks was built, providing teachers with references when they choose textbooks. 5. Satisfaction with the three samples differs between the fourth and fifth grades. 6. Teachers give the highest evaluations and agreement to the [physical characteristics] and the lowest to the [content characteristics]. 7. Publishers should develop articulation learning materials to bridge the differences between the new and old curricula. Based on the results of this study, the researcher offers suggestions for school administrators, teachers, publishers and future research.
160

A New Offline Path Search Algorithm For Computer Games That Considers Damage As A Feasibility Criterion

Bayili, Serhat 01 August 2008 (has links) (PDF)
Pathfinding algorithms used in today's computer games consider path length or a similar criterion as the only measure of optimality. However, these games usually involve opposing parties, whose agents can inflict damage on those of the others. Therefore, the shortest path in such games may not always be the safest one. Consequently, a new suboptimal offline path search algorithm that takes threat sources into consideration was developed, based on the A* algorithm. Given an upper bound on the tolerable amount of damage for an agent, this algorithm searches for the shortest path from a starting location to a destination that causes the agent no more damage than the specified maximum. Due to this behavior, the algorithm is called Limited-Damage A* (LDA*). The performance of LDA* was tested in randomly-generated and hand-crafted fully-observable maze-like square environments with 8-way grid abstractions against Multiobjective A* (MOA*), a complete and optimal algorithm. LDA* was found to perform much faster than MOA*, with allowable sub-optimality in path length.
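The damage-bounded search described above can be sketched as a constrained A* in which a search state pairs a grid cell with accumulated damage, infeasible states (damage above the bound) are pruned, and dominated states are discarded. This is a hypothetical reconstruction from the abstract, not the thesis's actual LDA* implementation.

```python
import heapq

def lda_star(damage, start, goal, max_damage):
    """Shortest 8-way grid path whose accumulated damage never exceeds
    max_damage. Each move costs 1; entering cell (r, c) adds damage[r][c]."""
    rows, cols = len(damage), len(damage[0])

    def h(cell):  # Chebyshev distance: admissible for unit-cost 8-way moves
        return max(abs(cell[0] - goal[0]), abs(cell[1] - goal[1]))

    start_dmg = damage[start[0]][start[1]]
    if start_dmg > max_damage:
        return None  # even the start cell exceeds the tolerable damage
    # frontier entries: (f, g, damage so far, cell, path)
    frontier = [(h(start), 0, start_dmg, start, [start])]
    nondom = {start: [(0, start_dmg)]}  # cell -> nondominated (g, damage) pairs
    while frontier:
        f, g, dmg, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path  # first goal pop: shortest feasible path
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                r, c = cell[0] + dr, cell[1] + dc
                if not (0 <= r < rows and 0 <= c < cols):
                    continue
                ndmg = dmg + damage[r][c]
                if ndmg > max_damage:
                    continue  # infeasible: damage bound exceeded
                ng = g + 1
                pairs = nondom.setdefault((r, c), [])
                if any(pg <= ng and pd <= ndmg for pg, pd in pairs):
                    continue  # dominated by an already-seen state
                pairs.append((ng, ndmg))
                heapq.heappush(frontier,
                               (ng + h((r, c)), ng, ndmg, (r, c), path + [(r, c)]))
    return None  # no path within the damage budget
```

Lowering `max_damage` forces longer but safer detours around threat cells, which is exactly the trade the abstract describes between path length and suffered damage.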
