  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Precision in the modelling of a steel arch bridge / Precision i modellering av bågbro i stål

Sörensen, Johanna, Wenne, Emma January 2018 (has links)
The infrastructure in Sweden is aging; more than 2,000 bridges in the country are 70 years or older. When assessing the condition of older bridges, several aspects should be taken into account, and FE modelling is a common tool in such assessments. For steel bridges, fatigue is generally what limits the service life. The aim of this work was to evaluate the utility of FE models of different precision with regard to the accuracy of their generated results and their cost. One steel bridge, the Old Lidingö Bridge, has been studied in detail, specifically two points on its arch span. Drawings of the bridge and measurement data from the research project "Smart Condition Assessment, Surveillance and Management of Critical Bridges" have provided the basis of this work. The measurement data consist of stress time histories recorded with strain gauges at the two studied points, which lie on the secondary load-bearing members of the arch span, during numerous train passages over the bridge. Four models with different levels of precision have been created in BRIGADE/Plus, and stress time histories have been generated for the studied points. These time histories have then been evaluated with the Palmgren-Miner cumulative damage model for fatigue. The expected economic utility of each model has been estimated from the cost of the analysis, the probability of fatigue failure, and the cost of a potential failure. Because of the structural behaviour of the bridge, the fatigue results of the simplified models were very similar to those of the high-precision model. The actual axle loads were smaller than the design loads used in the models; consequently, the calculated stresses in the steel were larger than the measured ones, although they showed similar behaviour. As a result, the fatigue capacity according to the models was only about one fifth of the capacity according to the measurements. The calculation of expected utility showed that it is not economically justified to use a high-precision model over one or several simplified models. The conclusion is that high precision in a theoretical model is not unambiguously better than a simplified model. The work has been carried out at the Department of Structural Engineering and Bridges at KTH in cooperation with Sweco Civil AB.
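The Palmgren-Miner evaluation step described above can be sketched in a few lines; the S-N curve constants and the stress spectrum below are hypothetical placeholders, not values from the bridge study:

```python
# Palmgren-Miner cumulative damage: D = sum over stress ranges of n_i / N_i,
# where n_i is the applied number of cycles at stress range S_i and N_i is
# the number of cycles to failure at that range. Failure is predicted at D >= 1.

def cycles_to_failure(stress_range_mpa, C=2.0e12, m=3.0):
    """Basic S-N relation N = C / S^m (C and m are assumed, not a real detail category)."""
    return C / stress_range_mpa ** m

def miner_damage(spectrum):
    """spectrum: (stress_range_mpa, n_cycles) pairs, e.g. obtained from
    rainflow counting of a stress time history."""
    return sum(n / cycles_to_failure(s) for s, n in spectrum)

# Hypothetical spectrum of stress ranges and applied cycle counts
spectrum = [(80.0, 2_000_000), (120.0, 500_000), (40.0, 10_000_000)]
damage = miner_damage(spectrum)  # D >= 1 would indicate predicted fatigue failure
```

In practice the cycle counts per stress range would come from rainflow counting of the measured or simulated stress time histories.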
102

Algorithms for Large Matrix Multiplications : Assessment of Strassen's Algorithm / Algoritmer för Stora Matrismultiplikationer : Bedömning av Strassens Algoritm

Johansson, Björn, Österberg, Emil January 2018 (has links)
Strassen's algorithm was one of the breakthroughs in matrix analysis in 1968. In this report, the theory of Volker Strassen's algorithm for matrix multiplication is presented together with theories about floating-point precision. The benefits of using this algorithm compared to naive matrix multiplication, its implications, and how its performance compares to the naive algorithm are displayed. 
Strassen's algorithm is also assessed on how its output differs across precisions as the matrix sizes grow larger, as well as how the theoretical complexity of the algorithm differs from the achieved complexity. The studies found that Strassen's algorithm outperformed naive matrix multiplication at matrix sizes 1024×1024 and above. The achieved complexity was slightly higher than Volker Strassen's theoretical one. The optimal precision in this case was double precision, Float64. How the algorithm is implemented in code matters for its performance. A number of techniques need to be considered in order to improve Strassen's algorithm: optimizing its termination criterion, the manner in which the matrices are padded to make them more usable for recursive application, and the way it is implemented, e.g. with parallel computing. Even though it could be proved that Strassen's algorithm outperforms the naive one after reaching a certain matrix size, it is still not the most efficient, as shown e.g. with Strassen-Winograd. One needs to be careful about how the sub-matrices are allocated, so as not to use unnecessary memory. For further reading, one can study cache-oblivious and cache-aware algorithms.
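For reference, the seven-product recursion the abstract discusses can be sketched as a minimal NumPy implementation for square matrices whose size is a power of two; the cutoff value below is an assumption, illustrating the termination-criterion tuning mentioned above:

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen multiply for square matrices with size a power of two.
    Below `cutoff`, fall back to the naive (BLAS) product."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products instead of eight
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
print(np.allclose(strassen(A, B), A @ B))  # True (within floating-point error)
```

Padding to a power of two and parallelizing the seven sub-products are the kinds of implementation choices the report identifies as performance-critical.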
103

System modelling and evaluation of main battle tank fire precision / Modellering och prestandavärdering av en stridsvagns precision under gång

Hallbeck, Viktor January 2021 (has links)
This master's thesis describes a study of main battle tank dynamics carried out to investigate fire precision when a tank is driving in terrain. A model has been developed in MATLAB and SIMULINK to simulate the dynamic interaction between the tank's hull and ground irregularities. Two different suspension models have been analysed: one linear model and one hydro-pneumatic model. Further, the cannon's recoil has been modelled to investigate its contribution to the dynamics of the vehicle. The models are developed at system level and are to be implemented in a larger model; they are therefore simplified, and the thesis investigates to what degree of simplification the models will still accurately predict the movement of the tank.
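As an illustration of the simpler of the two suspension types, a single-degree-of-freedom linear spring-damper excited by a sinusoidal ground profile can be integrated directly (all parameter values below are assumed for illustration, not taken from the thesis):

```python
import math

# Linear suspension sketch: hull mass on a spring-damper, base-excited by
# the ground profile z_g(t). Equation of motion:
#   m*z'' = -k*(z - z_g) - c*(z' - z_g')
# integrated with semi-implicit Euler.

def simulate(m=20_000.0, k=5.0e5, c=4.0e4, dt=1e-3, steps=5000):
    z, v = 0.0, 0.0                  # hull displacement (m) and velocity (m/s)
    history = []
    for i in range(steps):
        t = i * dt
        zg = 0.05 * math.sin(2 * math.pi * t)                # 1 Hz ground profile (m)
        vg = 0.05 * 2 * math.pi * math.cos(2 * math.pi * t)  # its time derivative
        a = (-k * (z - zg) - c * (v - vg)) / m
        v += a * dt                  # update velocity first (semi-implicit)
        z += v * dt
        history.append(z)
    return history

hull = simulate()
```

A hydro-pneumatic unit would replace the constant `k` and `c` with nonlinear gas-spring and orifice-damping laws, which is what distinguishes the two models compared in the thesis.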
104

Dynamic changes of RNA-sequencing expression for precision medicine: N-of-1-pathways Mahalanobis distance within pathways of single subjects predicts breast cancer survival

Schissler, Grant A., Li, Qike, Gardeux, Vincent, Achour, Ikbel, Li, Haiquan, Piegorsch, Walter W., Lussier, Yves A. 24 February 2016 (has links)
Poster exhibited at GPSC Student Showcase, February 24th, 2016, University of Arizona. / Motivation: The conventional approach to personalized medicine relies on molecular data analytics across multiple patients. The path to precision medicine lies with molecular data analytics that can discover interpretable single-subject signals (N-of-1). We developed a global framework, N-of-1-pathways, for a mechanism-anchored approach to single-subject gene expression data analysis. We previously employed a metric that could prioritize the statistical significance of a deregulated pathway in single subjects; however, it lacked quantitative interpretability (e.g. the equivalent of a gene expression fold-change). Results: In this study, we extend our previous approach with the application of the statistical Mahalanobis distance (MD) to quantify personal pathway-level deregulation. We demonstrate that this approach, N-of-1-pathways Paired Samples MD (N-OF-1-PATHWAYS-MD), detects deregulated pathways (empirical simulations) while not inflating the false-positive rate, using a study with biological replicates. Finally, we establish that N-OF-1-PATHWAYS-MD scores are biologically significant, clinically relevant, and predictive of breast cancer survival (P<0.05, n=80 invasive carcinoma; TCGA RNA sequences). Conclusion: N-of-1-pathways MD provides a practical approach towards precision medicine. The method generates the magnitude and the biological significance of personal deregulated pathways derived solely from the patient's transcriptome. These pathways offer opportunities for deriving clinically actionable decisions that have the potential to complement the clinical interpretability of personal polymorphisms obtained from acquired or inherited DNA polymorphisms and mutations. In addition, it offers applicability to diseases in which DNA changes may not be relevant, and thus expands the interpretable 'omics of single subjects (e.g. the personalome).
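The core quantity, a Mahalanobis distance over a pathway's paired expression differences, can be sketched as below; the gene-level differences and the reference mean and covariance are hypothetical placeholders, not the study's data:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of vector x from a reference distribution
    with the given mean and covariance matrix."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# Hypothetical pathway: per-gene paired expression differences (e.g.
# tumor minus normal log expression) for one patient; the reference
# model here is simply a standard normal null.
diffs = np.array([1.2, -0.4, 2.1])
md = mahalanobis(diffs, np.zeros(3), np.eye(3))
```

Unlike a per-gene fold-change, the distance aggregates all genes in a pathway into one interpretable deregulation magnitude for a single subject.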
105

Highly Precise and Fast Digital Image Stabilization Technique Based on the Control Grid Interpolation

Kim, Jin-Hyung, Nam, Ju-Hun, Seon, Jong-Nak, Han, Jeongwoo 10 1900 (has links)
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California / In this paper, we propose a highly precise and fast digital image stabilization (DIS) technique based on control grid interpolation. To obtain a more stable video sequence than those produced by existing DIS techniques, small instabilities should be removed with sub-pixel accuracy. Experimental results show that the proposed digital image stabilizer gives considerable improvement in computational complexity and stabilization performance compared to conventional DIS techniques.
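A sketch of the idea behind control grid interpolation (an assumed, simplified formulation, not the paper's exact method): motion vectors are estimated only at grid corner points, and the dense motion field inside each grid cell is obtained by bilinear interpolation, which naturally yields sub-pixel motion values:

```python
# Bilinearly interpolate a motion vector inside one grid cell from the
# vectors estimated at its four corners.

def bilinear_motion(corner_vectors, u, v):
    """corner_vectors: motion vectors (dx, dy) at the four cell corners
    [top-left, top-right, bottom-left, bottom-right];
    (u, v) in [0, 1]^2 is the normalized position inside the cell."""
    tl, tr, bl, br = corner_vectors

    def lerp(a, b, t):
        return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

    top = lerp(tl, tr, u)
    bottom = lerp(bl, br, u)
    return lerp(top, bottom, v)

# Example: motion at the centre of a cell
mv = bilinear_motion([(1.0, 0.0), (2.0, 0.0), (1.0, 1.0), (2.0, 1.0)], 0.5, 0.5)
```

The stabilizer would then warp each frame by the negated smoothed motion field to cancel unwanted camera jitter.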
106

Monitoring Bone Micro-architecture with a Special Focus on Bone Strength

2015 August 1900 (has links)
Introduction. Osteoporosis is a chronic disease characterized by the loss of bone mass and the deterioration of bone micro-architecture, leading to a subsequent increase in fracture risk. High-resolution peripheral quantitative computed tomography (HR-pQCT) provides non-invasive measures of bone micro-architecture and strength in live humans, but its ability to monitor small skeletal changes is as yet poorly understood. The objectives of this thesis were to 1) determine HR-pQCT precision for volumetric density, geometry, cortical and trabecular micro-architecture, as well as estimates of bone strength; 2) determine the monitoring time interval (MTI) and least significant change (LSC) metrics; and 3) characterize annual changes in bone area, density, and micro-architecture at the distal radius and tibia using HR-pQCT in postmenopausal women. Methods. To determine precision error as well as monitoring and change metrics of the distal radius and tibia, 34 postmenopausal women (mean age 74, SD±7 years) from the Saskatoon cohort of the Canadian Multicentre Osteoporosis Study (CaMos) were measured using HR-pQCT. To characterize the annual change in bone outcomes of this same cohort, 51 women (mean age±SD: 77±7 years) were measured at baseline and again 1 year later. Precision errors were calculated as coefficients of variation (CV% and CV%RMS). The LSC was determined from precision errors and then divided by the median annual percent changes to define MTIs for bone area, density, and micro-architecture. Repeated measures analysis of variance (ANOVA) with Bonferroni adjustment for multiple comparisons was used to characterize the mean annual change in total density, cortical perimeter, trabecular and cortical bone area, density, content, and micro-architecture. Significant changes were accepted at P<0.05. Results and Discussion. 
HR-pQCT precision errors were <10% for bone densitometric, geometric, and mechanical properties, while precision errors were <16% for cortical and trabecular micro-architectural outcomes. Further, the use of either automatic or modified contour methods for the dual-threshold technique for cortical micro-architectural analysis provided similar precision. Densitometric and geometric outcomes had longer monitoring times (>3 years), while micro-architecture had monitoring times of ~2 years. The observed annual changes were statistically significant for several outcomes; however, only cortical and trabecular area, as well as cortical density at the distal tibia, changed beyond the LSC. Overall, the thesis findings will assist the design and interpretation of prospective HR-pQCT studies in postmenopausal women.
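The precision and monitoring metrics referred to above follow standard densitometry definitions and can be sketched directly; the 2.77 factor corresponds to 95% confidence for the difference between two measurements, and the example values below are invented:

```python
import math

def rms_cv(subject_cvs):
    """Root-mean-square coefficient of variation across subjects (CV%RMS)."""
    return math.sqrt(sum(cv ** 2 for cv in subject_cvs) / len(subject_cvs))

def least_significant_change(precision_error):
    """Smallest change exceeding measurement error: LSC = 2.77 x precision error."""
    return 2.77 * precision_error

def monitoring_time_interval(lsc, median_annual_change):
    """Years of expected change needed before the change exceeds the LSC."""
    return lsc / median_annual_change

precision = rms_cv([1.0, 2.0, 2.0, 1.0])   # CV%RMS for four hypothetical subjects
lsc = least_significant_change(precision)   # in percent
mti = monitoring_time_interval(lsc, 2.0)    # assuming a 2 %/year median change
```

With these example numbers, an outcome with ~1.6% precision needs roughly two years between scans before a real change can be distinguished from measurement noise, matching the ~2-year MTIs reported for micro-architecture.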
107

Integrating Variable Rate Technologies for Soil-applied Herbicides in Arizona Vegetable Production

Nolte, Kurt, Siemens, Mark C., Andrade-Sanchez, Pedro 02 1900 (has links)
5 pp. / Precision herbicide application is an effective tool for placing soil-incorporated herbicides, which have a tendency to adhere to soil. While field implementation depends on prior knowledge of soil textural variability (soil tests and texture evaluations), site-specific technologies show promise for Arizona vegetable producers working non-uniform soils. Regardless of the method used for textural characterization, growers should keep in mind that textural differences do not change in the short to medium term, so the costs associated with defining texture-based management zones can be spread over many years.
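As a purely hypothetical illustration of texture-based management zones (the clay thresholds and rates below are invented, not from this bulletin), a zone's soil texture could map to a site-specific application rate:

```python
# Map a management zone's clay content to an assumed herbicide rate,
# reflecting the idea that soil adherence rises with finer texture.

def rate_for_clay(clay_percent):
    """Return an assumed application rate (L/ha) for a zone's clay content."""
    if clay_percent < 15:
        return 1.0   # coarse soil: low rate
    elif clay_percent < 30:
        return 1.5   # medium texture
    return 2.0       # fine, clayey soil: high rate

# Hypothetical zones delineated from a one-time soil texture survey
zones = {"zone A": 10.0, "zone B": 22.0, "zone C": 35.0}
rates = {name: rate_for_clay(clay) for name, clay in zones.items()}
```

Because texture is stable over years, the zone map (and a lookup like this) can be reused across seasons, which is the economic point the bulletin makes.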
108

Precision Planting--Planting to Final Stand

Cannon, M. D., Larsen, W. E. 02 1900 (has links)
This item was digitized as part of the Million Books Project led by Carnegie Mellon University and supported by grants from the National Science Foundation (NSF). Cornell University coordinated the participation of land-grant and agricultural libraries in providing historical agricultural information for the digitization project; the University of Arizona Libraries, the College of Agriculture and Life Sciences, and the Office of Arid Lands Studies collaborated in the selection and provision of material for the digitization project.
109

The Effects of Hearsee/Say and Hearsee/Write on Acquisition, Generalization and Retention.

Zanatta, Laraine Theresa 05 1900 (has links)
This study examines the effects of training in two yoked learning channels (hearsee/say and hearsee/write) on the acquisition, generalization, and retention of learning. Four fifth-grade participants were taught the lower-case letters of the Greek alphabet. Twelve letters were taught in the hearsee/say channel and twelve letters in the hearsee/write channel for equal amounts of time. The see/say channel reached higher frequencies at the end of training and showed higher acquisition celerations than the see/write channel. However, the see/write channel showed higher accuracy and retention than the see/say channel. The see/write channel also showed greater generalization across learning channels, including see/say, think/say, think/write, and see-name/draw-symbol.
110

Decision criteria for the use of cannon-fired precision munitions

La Rock, Harold L. 06 1900 (has links)
The U.S. Army and Marine Corps are developing guided munitions for cannon artillery. These munitions provide a significant increase in range and accuracy, but the tactics, techniques, and procedures used to employ them have yet to be developed. This study is intended to assist with that development by providing a method to determine when to use these munitions rather than conventional munitions in order to achieve a tactical-level commander's desired objectives. A combination of multi-attribute utility theory and simulation is used to determine the best ammunition (precision or conventional) to fire under given battlefield conditions. The simulation, developed by the U.S. Army Research Laboratory, provides results on the full range of artillery effects by varying the battlefield conditions that have the greatest effect on the accuracy of artillery. The results of simulated artillery fire missions are studied to determine the combination of battlefield conditions that produces the best results for each type of ammunition. A decision model is used to account for a commander's expected preferences based on tactical considerations. The results vary greatly depending on the battlefield conditions and the commander's preferences. One type of projectile does not clearly dominate the other.
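The decision model pairs multi-attribute utility theory with simulation outputs; a minimal weighted-sum sketch (the attributes, weights, and utility values below are invented for illustration, not the study's) looks like:

```python
# Weighted-sum multi-attribute utility: score each ammunition option on
# several attributes reflecting the commander's preferences, then pick
# the option with the highest expected utility.

def expected_utility(utilities, weights):
    """utilities: per-attribute utilities in [0, 1]; weights sum to 1."""
    return sum(w * u for w, u in zip(weights, utilities))

# Attributes: target effect, collateral-damage avoidance, cost economy
weights = [0.5, 0.3, 0.2]
options = {
    "precision":    [0.9, 0.9, 0.4],  # accurate, low collateral, expensive
    "conventional": [0.6, 0.5, 0.9],  # less accurate, cheaper
}
best = max(((name, expected_utility(u, weights)) for name, u in options.items()),
           key=lambda t: t[1])
```

Changing the weights, which is how a commander's tactical preferences enter the model, can flip the recommendation, consistent with the study's finding that neither projectile dominates.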
