231

Woven Forms : creating three-dimensional objects transformed from flat woven textile

Burkhardt, Leonie Annett January 2022
Technological developments in digital Jacquard weaving, together with material research, strongly influence today's possibilities for textile production. These advancements make it possible to shift the perspective from textile as a flat surface to textile as a three-dimensional form, pushing two-dimensional weaving into the third dimension. By applying multi-layer weaving techniques and embedding heat-reactive shrinking material, the Woven Forms research explores construction through weaving as a forming method, creating abstract forms transformed from flat textiles and investigating their textile-form properties of shape, texture, color, and scale. The developed method of Embedded Form Weaving is set within experimental design research and structures a systematic approach to generating three-dimensional forms activated from flat surfaces. The outcome, a set of abstract, self-supporting textile-forms, showcases the multitude of form expressions and the variety of formal variables within two construction-form-thinking families. This research contributes to the field of 3D weaving, demonstrates the potential for further research and application in other disciplines and fields, and evaluates the potential of treating the weaving loom as a forming tool. While the fundamental basis remains the interlacement of warp and weft, technology, material science, and textile engineering shift the perception of woven textiles: from a rectangular piece of cloth to the opportunity to construct textile-forms.
232

A General Model for Continuous Noninvasive Pulmonary Artery Pressure Estimation

Smith, Robert Anthony 15 December 2011
Elevated pulmonary artery pressure (PAP) is a significant healthcare risk. Continuous monitoring of patients with elevated PAP is crucial for effective treatment, yet the most accurate method is invasive and expensive, and cannot be performed repeatedly. Noninvasive methods exist but are inaccurate, expensive, and cannot be used for continuous monitoring. We present a machine learning model based on heart sounds that estimates pulmonary artery pressure accurately enough to rule out an invasive diagnostic procedure, allowing consistent monitoring of heart condition in suspect patients without the cost and risk of invasive monitoring. We conduct a greedy search through 38 possible features, using 109-patient cross-validation to find the most predictive ones. Our best general model has a standard error of the estimate (SEE) of 8.28 mmHg, which outperforms the previous best result in the literature on a general set of unseen patient data.
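The greedy, cross-validated feature search this abstract describes can be sketched as follows. This is an illustrative reconstruction, not the author's code: the regressor, fold count, stopping rule, and patient grouping are all assumptions.

```python
# Illustrative sketch, not the author's code: greedy forward feature
# selection under patient-grouped cross-validation, as the abstract
# describes. The regressor, fold count, and stopping rule are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GroupKFold, cross_val_predict

def greedy_feature_search(X, y, groups, max_features=10):
    """Repeatedly add the feature that most lowers the cross-validated
    standard error of the estimate (SEE, in mmHg for PAP targets)."""
    remaining = list(range(X.shape[1]))
    selected, best_see = [], np.inf
    cv = GroupKFold(n_splits=5)  # keeps each patient's data in one fold
    while remaining and len(selected) < max_features:
        see = {}
        for f in remaining:
            cols = selected + [f]
            pred = cross_val_predict(LinearRegression(), X[:, cols], y,
                                     groups=groups, cv=cv)
            see[f] = np.sqrt(np.mean((y - pred) ** 2))
        best = min(see, key=see.get)
        if see[best] >= best_see:
            break  # no candidate improves the estimate; stop early
        best_see = see[best]
        selected.append(best)
        remaining.remove(best)
    return selected, best_see
```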
233

Increasing speaker invariance in unsupervised speech learning by partitioning probabilistic models using linear siamese networks

Fahlström Myrman, Arvid January 2017
Unsupervised learning of speech is concerned with automatically finding patterns such as words or speech sounds, without supervision in the form of orthographic transcriptions or a priori knowledge of the language. A fundamental problem, however, is that unsupervised speech learning methods tend to discover highly speaker-specific and context-dependent representations of speech. We propose a method for improving the quality of posteriorgrams generated from an unsupervised model through partitioning of the latent classes discovered by the model. We do this by training a sparse siamese model to find a linear transformation of input posteriorgrams, extracted from the unsupervised model, to lower-dimensional posteriorgrams. The siamese model makes use of same-category and different-category speech fragment pairs obtained through unsupervised term discovery. After training, the model is converted into an exact partitioning of the posteriorgrams. We evaluate the model on the minimal-pair ABX task in the context of the Zero Resource Speech Challenge, and demonstrate that our method significantly reduces the dimensionality of standard Gaussian mixture model posteriorgrams while also making them more speaker invariant. This suggests that the model may be viable as a general post-processing step to improve probabilistic acoustic features obtained by unsupervised learning.
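The final step, converting the trained linear transform into an exact partition of the latent classes, can be illustrated with a short sketch. This is a hypothetical reconstruction based only on the abstract; the training procedure, sparsity mechanism, and exact weight semantics of the thesis are not reproduced here.

```python
# Hypothetical sketch based only on the abstract: converting a trained
# linear transform of posteriorgrams into an exact partition of the
# latent classes.
import numpy as np

def partition_from_transform(W):
    """W: (n_out, n_in) weights mapping input classes (e.g. GMM
    components) to lower-dimensional output classes. Assigning each
    input class to its highest-weight output class yields a hard,
    exact partition."""
    assignment = W.argmax(axis=0)              # input class -> output class
    P = np.zeros_like(W)
    P[assignment, np.arange(W.shape[1])] = 1.0
    return P

def reduce_posteriorgram(posteriors, P):
    """posteriors: (n_frames, n_in). The reduced posteriorgram sums the
    probability mass of all input classes mapped to the same output
    class, so each frame still sums to one."""
    return posteriors @ P.T
```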
234

Feature Extraction and Feature Selection for Object-based Land Cover Classification : Optimisation of Support Vector Machines in a Cloud Computing Environment

Stromann, Oliver January 2018
Mapping the Earth's surface and its rapid changes with remotely sensed data is a crucial tool for understanding the impact of an increasingly urban world population on the environment. However, the impressive amount of freely available Copernicus data is only marginally exploited in common classifications. One of the reasons is that measuring the properties of training samples, the so-called 'features', is costly and tedious. Furthermore, handling large feature sets is not easy in most image classification software. This often leads to the manual choice of a few, allegedly promising features. In this Master's thesis degree project, I use the computational power of Google Earth Engine and Google Cloud Platform to generate an oversized feature set in which I explore feature importance and analyse the influence of dimensionality reduction methods. I use Support Vector Machines (SVMs) for object-based classification of satellite images - a commonly used method. A large feature set is evaluated to find the features that are most relevant for discriminating the classes and thereby contribute most to high classification accuracy. In doing so, one can bypass the sensitive, knowledge-based but sometimes arbitrary selection of input features.

Two kinds of dimensionality reduction methods are investigated: the feature extraction methods Linear Discriminant Analysis (LDA) and Independent Component Analysis (ICA), which transform the original feature space into a projected space of lower dimensionality, and the filter-based feature selection methods chi-squared test, mutual information, and Fisher criterion, which rank and filter the features according to a chosen statistic. I compare these methods against the default SVM in terms of classification accuracy and computational performance. Classification accuracy is measured in overall accuracy, prediction stability, inter-rater agreement, and sensitivity to training set sizes. Computational performance is measured in the decrease in training and prediction times and the compression factor of the input data. Based on this analysis, I conclude on the best-performing classifier with the most effective feature set.

In a case study of mapping urban land cover in Stockholm, Sweden, based on multitemporal stacks of Sentinel-1 and Sentinel-2 imagery, I demonstrate the integration of Google Earth Engine and Google Cloud Platform for an optimised supervised land cover classification. I use dimensionality reduction methods provided in the open-source scikit-learn library and show how they can improve classification accuracy and reduce the data load. At the same time, this project gives an indication of how the exploitation of big Earth observation data can be approached in a cloud computing environment.

The preliminary results highlighted the effectiveness and necessity of dimensionality reduction methods, but also strengthened the need for inter-comparable object-based land cover classification benchmarks to fully assess the quality of the derived products. To meet this need and encourage further research, I plan to publish the datasets (i.e. imagery, training and test data) and provide access to the developed Google Earth Engine and Python scripts as Free and Open Source Software (FOSS).
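The comparison this abstract describes maps naturally onto scikit-learn, which the thesis names as its dimensionality reduction library. The following is an illustrative sketch under assumed data shapes and hyperparameters (component counts, k values, and the random stand-in data are all hypothetical); scikit-learn's f_classif (ANOVA F-value) stands in for the Fisher criterion.

```python
# Illustrative sketch, not the thesis code: the named feature extraction
# and filter-based selection methods, each ahead of an SVM.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, chi2, f_classif, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(500, 120)        # stand-in for object-based features
y = np.random.randint(0, 8, 500)    # stand-in for 8 land cover classes

candidates = {
    "svm_only": Pipeline([("scale", StandardScaler()), ("svm", SVC())]),
    "lda_svm": Pipeline([("scale", StandardScaler()),
                         ("lda", LinearDiscriminantAnalysis(n_components=7)),
                         ("svm", SVC())]),
    "ica_svm": Pipeline([("scale", StandardScaler()),
                         ("ica", FastICA(n_components=30, random_state=0)),
                         ("svm", SVC())]),
    # chi2 requires non-negative inputs, hence no standardisation first
    "chi2_svm": Pipeline([("select", SelectKBest(chi2, k=30)), ("svm", SVC())]),
    "mi_svm": Pipeline([("scale", StandardScaler()),
                        ("select", SelectKBest(mutual_info_classif, k=30)),
                        ("svm", SVC())]),
    "fisher_svm": Pipeline([("scale", StandardScaler()),
                            ("select", SelectKBest(f_classif, k=30)),
                            ("svm", SVC())]),
}
for name, pipe in candidates.items():
    print(name, cross_val_score(pipe, X, y, cv=5).mean())
```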
235

Information Retrieval Performance Enhancement Using The Average Standard Estimator And The Multi-criteria Decision Weighted Set

Ahram, Tareq 01 January 2008
Information retrieval at scale is much more challenging than traditional retrieval from small document collections. The main difference is the importance of correlations between related concepts in complex data structures, which several information retrieval systems have studied. This research began with a comprehensive review and comparison of several matrix dimensionality estimation techniques and their respective effects on retrieval performance using singular value decomposition (SVD) and latent semantic analysis. Two novel techniques are introduced to enhance intrinsic dimensionality estimation: the Multi-criteria Decision Weighted model, which estimates matrix intrinsic dimensionality for large document collections, and the Average Standard Estimator (ASE), which estimates data intrinsic dimensionality from the singular value decomposition. ASE estimates the level of significance of the singular values produced by the decomposition, assuming that variables with deep relations have sufficient correlation and that only relationships with high singular values are significant and should be maintained. Experimental results over all possible dimensions indicated that ASE improved matrix intrinsic dimensionality estimation by accounting for both the rate of decrease in singular value magnitude and random noise distractors. Analysis based on selected performance measures indicates that for each document collection there is a region of lower dimensionalities associated with improved retrieval performance, although the various performance measures clearly disagreed on which model performed best. Introducing the multi-weighted model and Analytic Hierarchy Process (AHP) analysis helped rank the dimensionality estimation techniques and made it possible to satisfy overall model goals by balancing contradicting constraints and information retrieval priorities. ASE provided the best estimate of MEDLINE intrinsic dimensionality among all tested techniques, improving precision and relative relevance by 10.2% and 7.4% respectively. AHP analysis ranked ASE and the weighted model best among the methods, with 30.3% and 20.3% in satisfying overall model goals for MEDLINE and 22.6% and 25.1% for CRANFIELD. The weighted model improved MEDLINE relative relevance by 4.4%, while the scree plot, the weighted model, and ASE estimated the intrinsic dimensionality of the CRANFIELD collection better than Kaiser-Guttman and percentage of variance. ASE also estimated CISI intrinsic dimensionality better than all other tested methods, which tend to underestimate it, improving CISI average relative relevance and average search length by 28.4% and 22.0% respectively. This research provides evidence that a system using a weighted multi-criteria performance evaluation technique yields better overall performance than a single-criterion ranking model; the weighted multi-criteria model with dimensionality reduction is thus a more efficient implementation for information retrieval than a full-rank model.
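As an illustration of the setting, the following sketch truncates a latent semantic analysis model at an estimated intrinsic dimensionality. The cutoff rule shown (keep singular values above their mean) is only a placeholder in the spirit of the significance idea the abstract describes; the actual ASE formula is not given in the abstract and is not reproduced here.

```python
# Hypothetical illustration: truncating an LSA model at an estimated
# intrinsic dimensionality. The cutoff rule is a stand-in, not ASE.
import numpy as np

def estimate_rank(singular_values):
    """Count the singular values whose magnitude exceeds the average,
    a simple significance rule in the spirit the abstract describes."""
    return int(np.sum(singular_values > singular_values.mean()))

def lsa_truncate(term_doc):
    """term_doc: (n_terms, n_docs) weighted term-document matrix."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    k = estimate_rank(s)
    # Rank-k semantic space: queries and documents are compared after
    # projection onto the first k singular vectors.
    return U[:, :k], s[:k], Vt[:k, :], k
```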
236

Dynamic risk assessment of sexual offenders in the real world : study of predictive validity and dimensionality of the Static-99R and Stable-2007 on a French-speaking Canadian sample

Brien-Robidoux, Emmanuelle 09 1900
The first phase of this study aimed to evaluate, based on field data, the predictive validity of the tools most widely used to predict the risk of recidivism among sexual offenders, the Static-99R and the Stable-2007. Their predictive validity was first assessed using forensic assessment data from a sample of 797 male sex offenders. These data were obtained by reviewing the archives of the Centre d'Intervention en Délinquance sexuelle (CIDS) in Quebec, Canada, from 1998 to 2021, and matched with official recidivism data obtained from the Sûreté du Québec for the same period. The total scores and risk categories assessed by the Static-99R significantly predicted general, sexual, and violent (non-sexual) recidivism. Although this was not the case for the Stable-2007, its scores and risk categories, when added to the Static-99R, helped predict all three types of recidivism. The second part of this study focused on the latent dimensions of these instruments. Exploratory factor analyses identified three dimensions for the Static-99R, namely Youth/Single, Persistence (sexual and non-sexual), and Detached Predatory Conduct, similar to some of those identified by Barbaree et al. (2006). For the Stable-2007, two dimensions were identified: Antisociality and Sexual Deviance. However, none of the dimensions extracted for the Static-99R and the Stable-2007 significantly predicted sexual recidivism. The interpretation of these results, the limitations of this study, and possible implications for further research are discussed.
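Predictive validity in studies of this kind is conventionally reported as the area under the ROC curve, and the dimensionality analysis rests on exploratory factor analysis. A minimal sketch of both steps, on entirely hypothetical stand-in data (the real item-level scores and outcome coding are not public), might look like:

```python
# Minimal sketch on hypothetical stand-in data: predictive validity as
# ROC AUC, then exploratory factor analysis on item-level scores.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
static99r_total = rng.integers(0, 12, 797)   # stand-in tool scores
recidivated = rng.integers(0, 2, 797)        # stand-in 0/1 outcome
print("AUC:", roc_auc_score(recidivated, static99r_total))

items = rng.random((797, 10))                # stand-in (offenders x items)
fa = FactorAnalysis(n_components=3, random_state=0)
factor_scores = fa.fit_transform(items)      # per-offender dimension scores
loadings = fa.components_                    # (3, n_items) factor loadings
```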
237

Feature Extraction using Dimensionality Reduction Techniques: Capturing the Human Perspective

Coleman, Ashley B. January 2015
No description available.
238

Modifying Cellular Behavior Through the Control of Insoluble Matrix Cues: The Influence of Microarchitecture, Stiffness, Dimensionality, and Adhesiveness on Cell Function

Hogrebe, Nathaniel James January 2016
No description available.
239

Extracting key features for analysis and recognition in computer vision

Gao, Hui 13 March 2006
No description available.
240

REGION-BASED GEOMETRIC ACTIVE CONTOUR FOR CLASSIFICATION USING HYPERSPECTRAL REMOTE SENSING IMAGES

Yan, Lin 20 October 2011
No description available.
