61

An Evaluation of Female Arm Strength Predictions based on Hand Location, Arm Posture and Force Direction

La Delfa, Nicholas J. 10 1900 (has links)
The primary purpose of this thesis was to measure arm strengths in combinations of exertion directions, and to evaluate the importance of knowing the precise posture of the arm and specific joint locations in 3D space when predicting female arm strength. A stepwise multiple regression approach was used to predict female arm strength, with kinematic measures of hand location, arm posture and force direction as inputs, drawn from 17 subjects, 8 hand locations and 26 force directions. Including measures of arm posture improved the regression model, which explained 75.4% of the variance with an RMS error of 9.1 N, compared with 67.3% of the variance and an RMS error of 10.5 N without the postural variables. A comparison was also made between the empirical strength data from this thesis and the outputs of the University of Michigan Center for Ergonomics' 3-Dimensional Static Strength Prediction Program (3DSSPP). The poor correlation (R-square = 0.305) and high RMS error (39 N) indicate a definite need for further evaluation of the 3DSSPP package, as it is one of the most commonly used ergonomic tools in industry. / Master of Science (MSc)
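The with/without-posture comparison reported above can be illustrated with a toy regression on synthetic data (predictor names and effect sizes here are invented for illustration, not the thesis's data):

```python
import numpy as np

# Synthetic stand-in: four "hand location / force direction" predictors
# and two "arm posture" predictors; strength truly depends on one
# posture variable, so adding posture should raise explained variance.
rng = np.random.default_rng(0)
n = 500
X_hand = rng.normal(size=(n, 4))
X_post = rng.normal(size=(n, 2))
strength = (2.0 * X_hand[:, 0] - 1.5 * X_hand[:, 3]
            + 1.0 * X_post[:, 0] + rng.normal(scale=0.5, size=n))

def ols_r2(X, y):
    """Fit ordinary least squares and return R^2."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_without = ols_r2(X_hand, strength)
r2_with = ols_r2(np.column_stack([X_hand, X_post]), strength)
print(f"R^2 without posture: {r2_without:.3f}, with: {r2_with:.3f}")
```

As in the thesis's finding, the model that sees the postural variables explains more variance than the one that does not.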
62

Exploring the Complexity of Event Query Discovery

Kleest-Meißner, Sarah 15 November 2024 (has links)
Sequentielle Daten sind meist zeitlich geordnete (un)endliche Datenströme von Events über einem multi-dimensionalen Eventschema. Systeme über sequentiellen Daten nutzen Anfragen, um Zusammenhänge von besonderem Interesse in sequentiellen Daten zu beschreiben. Basierend auf historischen Daten müssen solche Anfragen zunächst definiert werden. Diese komplexe Aufgabe wird zumeist nicht automatisiert gelöst. In dieser Dissertation behandeln wir multi-dimensionale Teilfolge-Anfragen mit Platzhaltern und beschränkten Lücken als Anfragesprache für sequentielle Daten. Anfragen bestehen aus einer Zeichenkette s über einem Alphabet aus Symbolen und Variablen, einem globalen Fenster w und einem Tupel c aus lokalen Lückenbeschränkungen. Eine Anfrage passt zu einer Folge t über der Menge an Symbolen, falls die in s vorkommenden Variablen so durch einzelne Symbole ersetzt werden können, dass die daraus resultierende Zeichenkette s' als Teilfolge in t vorkommt. Die Gesamtlänge des Vorkommens darf dabei nicht mehr als w Events umfassen und die Distanz zwischen konsekutiven Positionen der Teilfolge muss c entsprechen. Wir untersuchen, wie zu einer Menge von Folgen S eine Anfrage gefunden werden kann, die S bestmöglich beschreibt (Suchproblem). Wir geben einen Algorithmus an, der dieses Problem löst, und analysieren dessen Komplexität. Zu entscheiden, ob eine Anfrage zu einer Folge passt (Matchingproblem), dominiert die Laufzeit des Algorithmus. Wir führen disjunktive multi-dimensionale Teilfolge-Anfragen mit Platzhaltern und beschränkten Lücken sowie multi-dimensionale Teilfolge-Anfragen mit Platzhaltern und verallgemeinerten beschränkten Lücken als Erweiterungen ein und passen den oben genannten Algorithmus an, um das Suchproblem für diese Anfragemodelle zu lösen. Die theoretischen Ergebnisse werden durch die Beschreibung der prototypischen Implementierung der genannten Algorithmen und der experimentellen Evaluation basierend auf synthetischen und realen Datensätzen ergänzt.
/ Sequence data are (usually temporally) ordered finite or infinite streams over events that are instances of a multi-dimensional schema. Systems which deal with sequence data usually use queries to detect situations of interest. However, finding such queries from historical sequence data is notoriously hard and is often assumed to be a non-automated task. In this dissertation, we propose multi-dimensional subsequence queries with wildcards and gap-size constraints (mswg-queries) as an expressive query model for sequence data. These queries consist of a query string s over an alphabet of variables and types, as well as a global window size w and a tuple c of local gap-size constraints. A query matches a trace t, i.e., a sequence of events, if the variables in s can be replaced by single types in such a way that the resulting string s' occurs as a subsequence in t that spans an area of at most w events, and the distance between consecutive positions in the subsequence conforms with c. We study the task of discovering an mswg-query that best describes a given sample, i.e., a finite set of traces. For that, we provide an algorithm solving this problem, and investigate its complexity. Our analysis identifies the subroutine for solving the matching problem (i.e., deciding whether a given query q matches a given trace t) as the only potential bottleneck. We propose extensions of mswg-queries for the one-dimensional setting, namely, subsequence queries with generalised gap-size constraints (swgg-queries) and disjunctive subsequence queries (dswg-queries), and discuss how the aforementioned algorithm can be adapted to compute swgg- and dswg-queries that best describe a sample. The formal results are complemented by a description of our prototypical implementation of query discovery and an experimental evaluation based on both synthetic and real-world data.
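The matching semantics described above (variables, a global window w, local gap bounds c) can be sketched as a brute-force matcher. The dissertation's contribution concerns the discovery problem and the complexity of matching; this hypothetical implementation only pins down the semantics, with the convention that variable names start with "x":

```python
def matches(query, trace, window, gaps):
    """Decide whether `query` matches `trace`.

    query  -- list of items; strings starting with 'x' are variables,
              anything else is a concrete type
    trace  -- list of types, one per event
    window -- maximum number of events spanned by an occurrence
    gaps   -- gaps[i] = (lo, hi): allowed number of events strictly
              between occurrence positions i and i + 1

    Exponential backtracking search, for illustration only.
    """
    k = len(query)

    def rec(i, prev, first, binding):
        if i == k:
            return True
        if i == 0:
            candidates = range(len(trace))
        else:
            lo, hi = gaps[i - 1]
            candidates = range(prev + 1 + lo,
                               min(prev + 1 + hi, len(trace) - 1) + 1)
        for p in candidates:
            sym, bound = query[i], binding
            if sym.startswith('x'):                 # variable: bind or check
                if sym in binding and binding[sym] != trace[p]:
                    continue
                bound = {**binding, sym: trace[p]}
            elif trace[p] != sym:                   # concrete type mismatch
                continue
            start = p if i == 0 else first
            if p - start + 1 > window:              # global window check
                continue
            if rec(i + 1, p, start, bound):
                return True
        return False

    return rec(0, -1, -1, {})
```

For example, the query ["a", "x1", "x1"] matches the trace "abcbb" within a window of 5 (binding x1 to "b"), but not "abcde", where no type repeats.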
63

Low-Complexity Multi-Dimensional Filters for Plenoptic Signal Processing

Edussooriya, Chamira Udaya Shantha 02 December 2015 (has links)
Five-dimensional (5-D) light field video (LFV) (also known as plenoptic video) is a more powerful form of representing information of dynamic scenes compared to conventional three-dimensional (3-D) video. In this dissertation, the spectra of moving objects in LFVs are analyzed, and it is shown that such moving objects can be enhanced based on their depth and velocity by employing 5-D digital filters, defined here as depth-velocity filters. In particular, the spectral region of support (ROS) of a Lambertian object moving with constant velocity and at constant depth is shown to be a skewed 3-D hyperfan in the 5-D frequency domain. Furthermore, it is shown that the spectral ROS of a Lambertian object moving at non-constant depth can be approximated as a sequence of ROSs, each of which is a skewed 3-D hyperfan, in the 5-D continuous frequency domain. Based on the spectral analysis, a novel 5-D finite-extent impulse response (FIR) depth-velocity filter and a novel ultra-low complexity 5-D infinite-extent impulse response (IIR) depth-velocity filter are proposed for enhancing objects moving with constant velocity and at constant depth in LFVs. Furthermore, a novel ultra-low complexity 5-D IIR adaptive depth-velocity filter is proposed for enhancing objects moving at non-constant depth in LFVs. Also, an ultra-low complexity 3-D linear-phase IIR velocity filter that can be incorporated into the design of 5-D IIR depth-velocity filters is proposed. To the best of the author's knowledge, the proposed 5-D FIR and IIR depth-velocity filters and the proposed 5-D IIR adaptive depth-velocity filter are the first such 5-D filters applied to enhancing moving objects in LFVs based on their depth and velocity. Numerically generated LFVs and LFVs of real scenes, generated by means of a commercially available Lytro light field (LF) camera, are used to test the effectiveness of the proposed 5-D depth-velocity filters.
Numerical simulation results indicate that the proposed 5-D depth-velocity filters outperform 3-D velocity filters and four-dimensional (4-D) depth filters in enhancing moving objects in LFVs. More importantly, the proposed 5-D depth-velocity filters are capable of exposing heavily occluded parts of a scene and of attenuating noise significantly. Considering their ultra-low complexity, the proposed 5-D IIR depth-velocity filter and the proposed 5-D IIR adaptive depth-velocity filter show significant potential for real-time applications. / Graduate / 0544
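The velocity-selectivity idea generalised by these 5-D filters can be seen in a toy 2-D (one spatial axis plus time) analogue: the spectrum of a point translating at constant velocity v lies on the plane omega_t + v * omega_x = 0, so a narrow frequency-domain mask around that plane passes it while attenuating other velocities. Grid size, velocities and the tolerance below are all illustrative, not values from the dissertation:

```python
import numpy as np

N = 64
v_pass, v_other = 1.0, -1.0

def moving_point(v):
    """A point translating v pixels per frame (circularly); axes (t, x)."""
    s = np.zeros((N, N))
    for t in range(N):
        s[t, int(v * t) % N] = 1.0
    return s

wt = np.fft.fftfreq(N)[:, None]          # temporal frequencies
wx = np.fft.fftfreq(N)[None, :]          # spatial frequencies
# Wrapped distance from the plane wt + v_pass * wx = 0 (mod 1), where
# the spectrum of a point moving at v_pass is concentrated.
dist = np.abs((wt + v_pass * wx + 0.5) % 1.0 - 0.5)
mask = dist <= 0.03

def passed_fraction(v):
    """Fraction of signal energy that survives the frequency mask."""
    S = np.fft.fft2(moving_point(v))
    return np.sum(np.abs(S[mask]) ** 2) / np.sum(np.abs(S) ** 2)

print(passed_fraction(v_pass), passed_fraction(v_other))
```

The matching-velocity signal passes essentially untouched while the opposite-velocity signal is almost entirely rejected, which is the planar (here: linear) ROS selectivity the 5-D hyperfan filters exploit.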
64

Il linguaggio dei blogs artistici / THE LANGUAGE OF ART BLOGS

ALLAIS, CATERINA 17 March 2016 (has links)
In questo studio si analizza il linguaggio utilizzato nei blog che si occupano di storia dell'arte, genere diffuso online, tramite un corpus di 54 blog pubblicati nel 2013. Nell'ambito della linguistica dei corpora, l'analisi permette di ridefinire le tradizionali categorie utilizzate per descrivere i blog, fornisce una descrizione tipologica dei post e dei commenti tramite l'utilizzo di un apposito software, oltre a descrivere il livello di specializzazione del linguaggio. Vengono infine riportati esempi di creatività linguistica in questo genere digitale. / This study analyses the language of “art blogs”, i.e. blogs dealing with art, through a specialised corpus of fifty-four blogs published during 2013. Both posts and comments are included in the present investigation into the linguistic character of art blogs. The methodological choices reflect the need for a multifaceted analysis covering different aspects, from text typology to linguistic creativity and popularisation. A review of the relevant literature on blogs brings to light the need for a specific characterisation of art blogs, since they tend to have a blended style which cannot be ascribed to the traditional categories of personal and thematic blogging. The distinctive features of posts and comments are then investigated through a multidimensional analysis, which reveals that posts and comments are two different text types. Subsequently, the corpus is compared to a specialised corpus of art announcements within the field of popularised and specialised discourse. Finally, several examples of linguistic creativity are explored and presented, showing that traditional descriptive paradigms are unsuited to analysing the output of art bloggers.
65

Évaluation et modélisation de la qualité de vie des patients traités pour un cancer de la prostate. / Evaluation and modelisation of quality of life of patients treated for prostate cancer.

Phung Thi Thanh, Tu 11 March 2011 (has links)
Évaluation et modélisation de la qualité de vie des patients traités pour un cancer de la prostate. Le cancer de la prostate « CaP » est le cancer le plus fréquent chez l'homme en France, avec 72 500 nouveaux cas chaque année. Il y a quelques années, la problématique du CaP était d'augmenter le taux de survie ; à présent elle se concentre sur l'amélioration de la qualité de vie « QdV » des patients. La plupart des études de QdV de patients atteints de CaP utilisent des questionnaires de QdV qui conduisent à la construction d'échelles de QdV ne donnant pas d'information sur l'utilité individuelle. Il nous paraît donc essentiel d'évaluer d'une part la QdV des cas incidents ou avec progression / récidive (tous types confondus) afin de connaître la QdV associée à chaque type de prise en charge, et d'estimer d'autre part son utilité par la méthode dite du Standard Gamble. Ces deux aspects de la QdV répondent à une réalité clinique et à un besoin évaluatif et comparatif, voire médico-économique (analyse en QALYs, ou espérance de vie ajustée sur la QdV). Nous avons mené une étude de cohorte prospective de 305 patients traités pour un CaP dans les départements de l'Hérault et du Gard. Le recueil de données était effectué à partir de 7 questionnaires de QdV « standardisés et validés en France », et l'utilité était renseignée avant le début du traitement, puis à 2, 6 et 12 mois après le début du traitement. L'étude retrouvait que la QdV des patients était différente selon les modalités thérapeutiques. Les patients du groupe « prostatectomie » avaient plus de dysfonctions sexuelles et de problèmes d'incontinence urinaire. Les patients du groupe « radiothérapie » avaient plus de dysfonctions intestinales et de symptômes liés aux traitements hormonaux du CaP. Les symptômes liés aux troubles hormonaux du CaP, les dysfonctions sexuelles, l'insomnie, l'incontinence urinaire et la somme des co-morbidités sont des facteurs prédictifs significatifs de l'utilité.
/ Treatments for prostate cancer "PCa" are almost equivalent in terms of survival; however, they can provoke different side effects affecting patients' quality of life "QoL". It is therefore essential to assess, on the one hand, the QoL of patients newly diagnosed with PCa as well as those newly diagnosed with a recurrence of PCa, to know the quality of life associated with each kind of treatment, and on the other hand to evaluate their utility scores by the Standard Gamble method. We conducted a prospective cohort study of 305 patients treated for PCa in the departments of Hérault and Gard. The study found that the QoL of patients differed according to treatment modality. Patients in the "prostatectomy" group had more sexual dysfunction and urinary incontinence. Patients in the "radiotherapy" group had more bowel dysfunction and symptoms related to hormone treatment. Symptoms related to hormonal disorders of PCa, sexual dysfunction, insomnia, urinary incontinence, and the sum of co-morbidities are significant predictors of the utility score.
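The QALY analysis mentioned above reduces to weighting each health state's utility (0 = death, 1 = perfect health) by the time spent in that state. A purely hypothetical illustration over the study's one-year follow-up windows (the utility values below are invented, not study results):

```python
# Each entry pairs a hypothetical utility score with the duration
# (in years) of the corresponding follow-up window.
periods = [
    (0.85, 2.0 / 12),   # treatment start to month 2
    (0.90, 4.0 / 12),   # months 2 to 6
    (0.95, 6.0 / 12),   # months 6 to 12
]
qalys = sum(utility * years for utility, years in periods)
print(f"{qalys:.3f} QALYs over the first year")
```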
66

Mapeamento de dados multi-dimensionais - integrando mineração e visualização / Multidimensional data mapping - integrating mining and visualization

Paulovich, Fernando Vieira 07 October 2008 (has links)
As técnicas de projeção ou posicionamento de pontos no plano, que servem para mapear dados multi-dimensionais em espaços visuais, sempre despertaram grande interesse da comunidade de visualização e análise de dados por representarem uma forma útil de exploração baseada em relações de similaridade e correlação. Apesar disso, muitos problemas ainda são encontrados em tais técnicas, limitando suas aplicações. Em especial, as técnicas de projeção multi-dimensional de maior qualidade têm custo computacional proibitivo para grandes conjuntos de dados. Adicionalmente, problemas referentes à escalabilidade visual, isto é, à capacidade da metáfora visual empregada de representar dados de forma compacta e amigável, são recorrentes. Esta tese trata o problema da projeção multi-dimensional de vários pontos de vista, propondo técnicas que resolvem, até certo ponto, cada um dos problemas verificados. Também é fato que a complexidade e o tamanho dos conjuntos de dados indicam que a visualização deve trabalhar em conjunto com técnicas de mineração, tanto embutidas no processo de mapeamento, como por meio de ferramentas auxiliares de interpretação. Nesta tese incorporamos alguns aspectos de mineração integrados ao processo de visualização multi-dimensional, principalmente na aplicação de projeções para visualização de coleções de documentos, propondo uma estratégia de extração de tópicos. Como suporte ao desenvolvimento e teste dessas técnicas, foram criados diferentes sistemas de software. O principal inclui as técnicas desenvolvidas e muitas das técnicas clássicas de projeção, podendo ser usado para exploração de conjuntos de dados multi-dimensionais em geral, com funcionalidade adicional para mapeamento de coleções de documentos. 
Como principal contribuição desta tese propomos um entendimento mais profundo dos problemas encontrados nas técnicas de projeção vigentes e o desenvolvimento de técnicas de projeção (ou mapeamento) que são rápidas, tratam adequadamente a formação visual de grupos de dados altamente similares, separam satisfatoriamente esses grupos no layout, e permitem a exploração dos dados em vários níveis de detalhe / Projection or point placement techniques, useful for mapping multidimensional data into visual spaces, have always attracted interest in the visualization and data analysis communities because they can support data exploration based on similarity or correlation relations. Despite that interest, various problems arise when dealing with such techniques, impairing their widespread application. In particular, the projections that yield the highest-quality layouts have prohibitive computational cost for large data sets. Additionally, there are issues regarding visual scalability, i.e., the capability to visually fit the individual points in the exploration space as the data set grows large. This thesis treats the problems of projections from various perspectives, presenting novel techniques that solve, to a certain extent, several of the identified problems. It is also a fact that the size and complexity of data sets suggest the integration of data mining capabilities into the visualization pipeline, both during the mapping process and as tools to extract additional information after the data have been laid out. This thesis also adds some aspects of mining to the multidimensional visualization process, mainly for the particular application of analysis of document collections, proposing and implementing an approach for topic extraction. As supporting tools for testing these techniques and comparing them to existing ones, different software systems were written.
The main one includes the techniques developed here as well as several of the classical projection and dimensionality reduction techniques, and can be used for exploring various kinds of data sets, with additional functionality to support the mapping of document collections. This thesis contributes to the understanding of the projection or mapping problem and develops new techniques that are fast, treat adequately the visual formation of groups of highly related data items, separate those groups properly, and allow exploration of data at various levels of detail.
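As a minimal example of the kind of point-placement problem under discussion, here is classical multidimensional scaling, one of the standard techniques the text mentions (not one of the thesis's own algorithms): map multidimensional points to 2-D so that layout distances approximate the original pairwise distances.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 8))                     # multidimensional data
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # target distances

n = len(X)
J = np.eye(n) - np.ones((n, n)) / n              # centring matrix
B = -0.5 * J @ (D ** 2) @ J                      # double-centred Gram matrix
vals, vecs = np.linalg.eigh(B)                   # ascending eigenvalues
# 2-D layout from the two largest eigenpairs (clamp tiny negatives).
Y = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0.0))

d_layout = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
iu = np.triu_indices(n, k=1)
corr = np.corrcoef(D[iu], d_layout[iu])[0, 1]
print(f"distance correlation between data and layout: {corr:.2f}")
```

The positive correlation between original and layout distances is exactly the similarity-preservation property that projection-based exploration relies on; the quality and cost trade-offs of stronger methods are what the thesis addresses.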
67

Multi-period optimization of pavement management systems

Yoo, Jaewook 30 September 2004 (has links)
The purpose of this research is to develop a model and solution methodology for selecting and scheduling timely and cost-effective maintenance, rehabilitation, and reconstruction (M & R) activities for each pavement section in a highway network and allocating the funding levels through a finite multi-period horizon within the constraints imposed by budget availability in each period, frequency availability of activities, and specified minimum pavement quality requirements. M & R is defined as a chronological sequence of reconstruction, rehabilitation, and major/minor maintenance, including a "do nothing" activity. A procedure is developed for selecting an M & R activity for each pavement section in each period of a specified extended planning horizon. Each activity in the sequence consumes a known amount of capital and generates a known amount of effectiveness measured in pavement quality. The effectiveness of an activity is the expected value of the overall gains in pavement quality rating due to the activity performed on a highway network over an analysis period. It is assumed that the unused portion of the budget for one period can be carried over to subsequent periods. Dynamic Programming (DP) and Branch-and-Bound (B-and-B) approaches are combined to produce a hybrid algorithm for solving the problem under consideration. The algorithm is essentially a DP approach in the sense that the problem is divided into smaller subproblems corresponding to each single-period problem. However, the idea of fathoming partial solutions that could not lead to an optimal solution is incorporated within the algorithm to reduce storage and computational requirements in the DP framework using the B-and-B approach. The imbedded-state approach is used to reduce a multi-dimensional DP to a one-dimensional DP. For bounding at each stage, the problem is relaxed in a Lagrangean fashion so that it separates into longest-path network model subproblems.
The values of the Lagrangean multipliers are found by a subgradient optimization method, while the Ford-Bellman network algorithm is employed at each iteration of the subgradient optimization procedure to solve the longest-path network problem as well as to obtain improved lower and upper bounds. If the gap between the lower and upper bounds is sufficiently small, then we may choose to accept the best known solutions as being sufficiently close to optimal and terminate the algorithm rather than continue to the final stage.
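The period-by-period structure with budget carry-over can be sketched as a tiny memoised DP for a single pavement section (a drastic simplification of the networked, bounded formulation above; the activities, costs and effectiveness values below are invented):

```python
# Each activity: (name, capital consumed, effectiveness generated).
activities = [("do nothing", 0, 0.0), ("minor maint.", 2, 1.0),
              ("rehabilitation", 5, 3.0), ("reconstruction", 9, 6.0)]
periods, per_period_budget = 3, 5

memo = {}  # (period, carried-over budget) -> best remaining effectiveness

def solve(t, leftover):
    """Max total effectiveness from period t with `leftover` budget,
    where each period adds a fixed budget and unspent funds carry over."""
    if t == periods:
        return 0.0
    if (t, leftover) not in memo:
        avail = leftover + per_period_budget
        memo[(t, leftover)] = max(
            eff + solve(t + 1, avail - cost)
            for _, cost, eff in activities if cost <= avail)
    return memo[(t, leftover)]

print(solve(0, 0))   # -> 9.0
```

Note how carry-over matters: doing nothing in period 1 makes the expensive reconstruction affordable in period 2, which ties the value of scheduling three rehabilitations; the thesis's hybrid DP/B-and-B handles this search over a whole network with bounds instead of exhaustive enumeration.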
68

CSOM/PL : a virtual machine product line

Haupt, Michael, Marr, Stefan, Hirschfeld, Robert January 2011 (has links)
CSOM/PL is a software product line (SPL) derived from applying multi-dimensional separation of concerns (MDSOC) techniques to the domain of high-level language virtual machine (VM) implementations. For CSOM/PL, we modularised CSOM, a Smalltalk VM implemented in C, using VMADL (virtual machine architecture description language). Several features of the original CSOM were encapsulated in VMADL modules and composed in various combinations. In an evaluation of our approach, we show that applying MDSOC and SPL principles to a domain as complex as that of VMs is not only feasible but beneficial, as it improves understandability, maintainability, and configurability of VM implementations without harming performance. / CSOM/PL ist eine Softwareproduktfamilie (software product line, SPL), die erstellt wurde, indem Techniken der mehrdimensionalen Belangtrennung (multi-dimensional separation of concerns, MDSOC) auf die Domäne der virtuellen Maschinen (VM) für höhere Programmiersprachen angewendet wurden. Dazu wurde CSOM, eine in C implementierte Smalltalk-VM, mittels VMADL (virtual machine architecture description language) in Module zerlegt. Etliche Eigenschaften von CSOM wurden in VMADL-Module gekapselt und auf unterschiedliche Weisen komponiert. Die Auswertung des Ansatzes zeigt, dass die Anwendung von MDSOC- und SPL-Prinzipien auf die komplexe VM-Domäne nicht nur machbar ist, sondern darüber hinaus auch Vorteile mit sich bringt, da die Verständlichkeit, Wartbarkeit und Konfigurierbarkeit von VM-Implementierungen ohne Beeinträchtigung der Ausführungsgeschwindigkeit verbessert werden.
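A toy illustration, in Python rather than C/VMADL, of the feature-composition idea behind CSOM/PL: independent VM "features" written as separate modules and composed into different product-line variants (all class names here are hypothetical, not CSOM's actual modules):

```python
class CoreInterpreter:
    """Base concern: execute a list of opcodes."""
    def run(self, ops):
        self.trace = []
        for op in ops:
            self.execute(op)
        return self.trace

    def execute(self, op):
        self.trace.append(op)

class TracingFeature(CoreInterpreter):
    """Separable concern: log each executed opcode."""
    def execute(self, op):
        print(f"executing {op}")
        super().execute(op)

class CountingFeature(CoreInterpreter):
    """Separable concern: count executed opcodes."""
    count = 0
    def execute(self, op):
        self.count += 1
        super().execute(op)

# Two variants composed from the same modules, as an SPL would derive:
MinimalVM = CoreInterpreter
InstrumentedVM = type("InstrumentedVM",
                      (CountingFeature, TracingFeature), {})

vm = InstrumentedVM()
vm.run(["push", "add"])
print(vm.count)   # -> 2
```

Each feature stays oblivious of the others and cooperates only through the shared execute hook, which is the composability property VMADL's module composition provides at much larger scale.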
69

Financial Derivatives Pricing and Hedging - A Dynamic Semiparametric Approach

Huang, Shih-Feng 26 June 2008 (has links)
A dynamic semiparametric pricing method is proposed for financial derivatives, including European and American options and convertible bonds. The proposed method is an iterative procedure which uses nonparametric regression to approximate derivative values and parametric asset models to derive the continuation values. An extension to higher-dimensional option pricing is also developed, in which the dependence structure of financial time series is modeled by copula functions. In the simulation study, we evaluate one-dimensional American options and convertible bonds, and multi-dimensional American geometric-average options and max options. The one-dimensional underlying asset models considered include the Black-Scholes, jump-diffusion, and nonlinear asymmetric GARCH models; for the multivariate case we study copula models such as the Gaussian, Clayton and Gumbel copulae. Convergence of the method is proved under a continuity assumption on the transition densities of the underlying asset models, and the orders of the sup-norm errors are derived. Both the theoretical findings and the simulation results show that the proposed approach is tractable for numerical implementation and provides a unified and accurate technique for financial derivative pricing. The second part of this thesis studies option pricing and hedging problems for conditionally leptokurtic returns, an important feature of financial data. The risk-neutral models for log and simple return models with heavy-tailed innovations are derived by an extended Girsanov change of measure. The result is applicable to option pricing under the GARCH model with t innovations (GARCH-t) for simple return series. The dynamic semiparametric approach is extended to compute option prices for conditionally leptokurtic returns.
The hedging strategy consistent with the extended Girsanov change of measure is constructed and is shown to have smaller cost variation than the commonly used delta hedging under the risk-neutral measure. Simulation studies are also performed to show the effect of using GARCH-normal models to compute the option prices and delta hedges of a GARCH-t model for plain vanilla and exotic options. The results indicate that there is little pricing and hedging difference between the normal and t innovations for plain vanilla and Asian options, yet significant disparities arise for barrier and lookback options due to the improper distributional setting of the GARCH innovations.
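The iterative procedure described (regression for continuation values on top of a parametric asset model) is in the spirit of Longstaff-Schwartz least-squares Monte Carlo. A minimal sketch for an American put under Black-Scholes dynamics, with a polynomial fit standing in for the thesis's semiparametric regression; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
steps, paths = 50, 20000
dt = T / steps

# Simulate Black-Scholes paths under the risk-neutral measure.
z = rng.standard_normal((paths, steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1))

cash = np.maximum(K - S[:, -1], 0.0)           # exercise value at maturity
for t in range(steps - 2, -1, -1):
    cash *= np.exp(-r * dt)                    # discount one step back
    itm = K - S[:, t] > 0.0                    # regress on in-the-money paths
    if itm.sum() > 10:
        coeffs = np.polyfit(S[itm, t], cash[itm], 3)
        continuation = np.polyval(coeffs, S[itm, t])
        exercise = K - S[itm, t]
        # Exercise where the immediate payoff beats the fitted continuation.
        cash[itm] = np.where(exercise > continuation, exercise, cash[itm])

price = np.exp(-r * dt) * cash.mean()
print(f"American put price ~ {price:.2f}")
```

The same backward-iteration skeleton carries over when the regression is semiparametric or the simulated dynamics are GARCH-type, which is what makes the approach a unified pricing technique.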
70

Implementation and evaluation of a two-dimensional laser doppler vibrometer system for non-contact monitoring of external stress loading of aluminum samples

Langston, Paul Wesley 08 April 2009 (has links)
This thesis details the development and validation of a laser Doppler vibrometer-based measurement system that is capable of quantifying not only the normal vibration of a solid body but also the component of vibration that is parallel to the plane containing the surface of interest. LDV manufacturers produce various devices that capture 3D measurements in a beam configuration that can be used to decompose the measured signals into not only the normal velocity of the surface of an object but also two orthogonal in-plane components of the vibration. A slightly simplified two-dimensional version of this approach was chosen for implementation in the Wave Physics Lab, using individual components to create a cheaper, more flexible system than those produced by companies such as Polytec. The goal of this system is to facilitate the exploration and discovery of areas and applications where 2D measurement may provide a more complete and precise view of the physics of different phenomena. Upon completion of the system development and validation, a study was done that sought to measure the acoustoelastic effect in an aluminum plate by measuring how increasing loads alter both the normal and in-plane components of Lamb wave propagation in the plate. The acoustoelastic effect is the effect of stress on wave speed caused by non-linearity in the propagation medium.
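The 2-D decomposition reduces to solving a small linear system: each beam measures the projection of the surface velocity onto its own line of sight, so two non-parallel beams determine both the normal and the in-plane component. A hypothetical two-beam geometry (the +/-30 degree angle and velocities are invented, not the lab's actual setup):

```python
import numpy as np

theta = np.deg2rad(30.0)                # beam angle from the surface normal
v_true = np.array([0.8, 0.3])           # [normal, in-plane] velocity, m/s

# Each row projects (v_normal, v_inplane) onto one beam's line of sight.
A = np.array([[np.cos(theta),  np.sin(theta)],
              [np.cos(theta), -np.sin(theta)]])
measured = A @ v_true                   # the two LDV channel outputs

v_recovered = np.linalg.solve(A, measured)
print(v_recovered)                      # ~ [0.8, 0.3]
```

In practice the measured channels carry noise, so the recovery is approximate, and a three-beam 3-D system simply extends A to a 3x3 projection matrix.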
