31

Type I multiplier representations of locally compact groups /

Holzherr, A. K. January 1982 (has links) (PDF)
Thesis (Ph. D.)--University of Adelaide, Dept. of Pure Mathematics, 1984. / Includes bibliographical references.
32

Integration in locally compact spaces by means of uniformly distributed sequences

Post, Karel Albertus. January 1900 (has links)
Doctoral thesis (proefschrift)--Eindhoven. / "Stellingen" (propositions): [6] p. inserted. Summary in Dutch. Bibliography: p. 77-78.
33

Robust algorithms for linear regression and locally linear embedding

Rettes, Julio Alberto Sibaja January 2017 (has links)
RETTES, Julio Alberto Sibaja. Robust algorithms for linear regression and locally linear embedding. 2017. 105 f. Dissertation (Master's in Computer Science)--Universidade Federal do Ceará, Fortaleza, 2017. / Nowadays a very large quantity of data flows around our digital society, and there is growing interest in converting this large amount of data into valuable and useful information. Machine learning plays an essential role in the transformation of data into knowledge. However, the likelihood of outliers in the data is too high to dismiss the importance of robust algorithms. To understand this, various models of outliers are studied. In this work, several robust estimators within the generalized linear model framework for regression are discussed and analyzed: namely, the M-estimator, the S-estimator, the MM-estimator, RANSAC and the Theil-Sen estimator. This choice is motivated by the need to examine algorithms with different working principles. In particular, the M-, S- and MM-estimators are based on a modification of the least-squares criterion, whereas RANSAC is based on finding the smallest subset of points that guarantees a predefined model accuracy. The Theil-Sen estimator, on the other hand, uses the median of least-squares models fitted to subsets of the points. The performance of the estimators under a wide range of experimental conditions is compared and analyzed. In addition to the linear regression problem, the dimensionality reduction problem is considered. More specifically, locally linear embedding, principal component analysis and some robust variants of them are treated. To add robustness to the LLE algorithm, the RALLE algorithm is proposed. Its main idea is to use different sizes of neighborhoods to construct the weights of the points; to achieve this, RAPCA is executed on each set of neighbors and the risky points are discarded from the corresponding neighborhood. The performance of LLE, RLLE and RALLE over several datasets is evaluated.
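Of the estimators this abstract discusses, three have readily available scikit-learn implementations, which makes the contrast in working principles easy to demonstrate. The sketch below is illustrative only: it uses simulated contaminated data, takes Huber's estimator as the representative of the M-family, and omits the S- and MM-estimators, which scikit-learn does not provide.

```python
import numpy as np
from sklearn.linear_model import (HuberRegressor, LinearRegression,
                                  RANSACRegressor, TheilSenRegressor)

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-5.0, 5.0, size=(n, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(scale=0.5, size=n)

# Contaminate 10% of the responses with gross outliers.
out = rng.choice(n, size=n // 10, replace=False)
y[out] += rng.normal(loc=30.0, scale=5.0, size=out.size)

estimators = {
    "OLS (non-robust baseline)": LinearRegression(),
    "M-estimator (Huber)": HuberRegressor(),
    "RANSAC": RANSACRegressor(random_state=0),
    "Theil-Sen": TheilSenRegressor(random_state=0),
}
for name, est in estimators.items():
    est.fit(X, y)
    # RANSAC wraps a base estimator; its coefficients live on .estimator_.
    slope = est.estimator_.coef_[0] if name == "RANSAC" else est.coef_[0]
    print(f"{name:26s} slope: {slope:6.3f}   (true slope: 2.000)")
```

On data like this, the OLS baseline is pulled noticeably toward the outliers while the three robust fits stay close to the true slope, which is the behavioral difference the abstract's comparison is about.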
34

Locally D-optimal Designs for Generalized Linear Models

January 2018 (has links)
Generalized Linear Models (GLMs) are widely used for modeling responses with non-normal error distributions. When the values of the covariates in such models are controllable, finding an optimal (or at least efficient) design can greatly facilitate the work of collecting and analyzing data. In fact, many theoretical results are obtained on a case-by-case basis, while in other situations researchers rely heavily on computational tools for design selection. Three topics are investigated in this dissertation, each focusing on one type of GLM. Topic I considers GLMs with factorial effects and one continuous covariate. Factors can have interactions among each other and there is no restriction on the possible values of the continuous covariate. The locally D-optimal design structures for such models are identified, and results for obtaining smaller optimal designs using orthogonal arrays (OAs) are presented. Topic II considers GLMs with multiple covariates under the assumption that all but one covariate are bounded within specified intervals and interaction effects among the bounded covariates may also exist. An explicit formula for D-optimal designs is derived, and OA-based smaller D-optimal designs for models with one or two two-factor interactions are also constructed. Topic III considers multiple-covariate logistic models in which all covariates are nonnegative and there is no interaction among them. Two types of D-optimal design structures are identified and their global D-optimality is proved using the celebrated equivalence theorem. / Dissertation/Thesis / Doctoral Dissertation, Statistics, 2018
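As a concrete illustration of local D-optimality in the simplest setting underlying these topics, the sketch below numerically recovers the classical two-point locally D-optimal design for a single-covariate logistic model, whose support sits where the linear predictor equals ±1.5434. This is a standard textbook result, not a construction from the dissertation itself.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_det_info(c):
    # Logistic Fisher-information weight at eta = +/-c (symmetric in c).
    p = 1.0 / (1.0 + np.exp(-c))
    w = p * (1.0 - p)
    # Information matrix of the two-point design {(+c, 1/2), (-c, 1/2)}
    # for eta = b0 + b1*x evaluated at b0 = 0, b1 = 1; by symmetry the
    # off-diagonal terms cancel, so M = w * diag(1, c^2).
    M = w * np.diag([1.0, c ** 2])
    return -np.log(np.linalg.det(M))

res = minimize_scalar(neg_log_det_info, bounds=(0.1, 5.0), method="bounded")
print(f"optimal support at eta = ±{res.x:.4f}  (known value: ±1.5434)")
```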
35

Direct Images Of Locally Constant Sheaves on Complements to Plane Line Arrangements

Alvarinho Gonçalves, Iara January 2015 (has links)
No description available.
36

Locally linear embedding algorithm: extensions and applications

Kayo, O. (Olga) 25 April 2006 (has links)
Abstract: Raw data sets taken with various capturing devices are usually multidimensional and need to be preprocessed before applying subsequent operations such as clustering, classification, outlier detection or noise filtering. One of the steps of data preprocessing is dimensionality reduction, which aims to reduce or eliminate information of secondary importance, and to retain or highlight meaningful information while reducing the dimensionality of the data. Since the nature of real-world data is often nonlinear, linear dimensionality reduction techniques such as principal component analysis (PCA) fail to preserve the structure and relationships of a high-dimensional space when data are mapped into a low-dimensional space. This means that nonlinear dimensionality reduction methods are in demand in this case. Among them is a method called locally linear embedding (LLE), which is the focus of this thesis. Its main attractive characteristics are that it has few free parameters to set and a non-iterative solution that avoids convergence to a local minimum. In this thesis, several extensions to the conventional LLE are proposed, which help overcome some limitations of the algorithm. The study presents a comparison between LLE and three nonlinear dimensionality reduction techniques applied to manifold learning: isometric feature mapping (Isomap), the self-organizing map (SOM) and fast manifold learning based on Riemannian normal coordinates (S-LogMap). This comparison is of interest, since all of the listed methods reduce high-dimensional data in different ways, and it is worth knowing in which cases a particular method outperforms the others. A number of applications of dimensionality reduction techniques exist in data mining. One of them is visualization of high-dimensional data sets. The main goal of data visualization is to find a one-, two- or three-dimensional descriptive data projection that captures and highlights important knowledge about the data while minimizing information loss. This process helps people explore and understand the data structure, which facilitates the choice of a proper method for the data analysis, e.g., selecting a simple or complex classifier. The application of LLE to visualization is described in this research. The benefits of dimensionality reduction are also commonly exploited to obtain a compact data representation before applying a classifier. In this case, the main goal is to obtain a low-dimensional data representation that possesses good class separability. For this purpose, a supervised variant of LLE (SLLE) is proposed in this thesis.
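As a minimal illustration of the conventional LLE that the thesis extends, the sketch below runs scikit-learn's implementation on the classic swiss-roll manifold. The dataset and parameter values are illustrative assumptions, not taken from the thesis; the neighborhood size k is the main free parameter the abstract refers to, and the embedding itself is solved as an eigenproblem rather than by iteration.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# A 3-D manifold that is intrinsically 2-D: the standard LLE test case.
X, color = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                             method="standard", random_state=0)
Y = lle.fit_transform(X)  # (1500, 2) low-dimensional embedding
print(Y.shape, f"reconstruction error: {lle.reconstruction_error_:.2e}")
```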
37

Locally Nilpotent Derivations on Polynomial Rings in Two Variables over a Field of Characteristic Zero.

Nyobe Likeng, Samuel Aristide January 2017 (has links)
The main goal of this thesis is to present the theory of locally nilpotent derivations and to show how it can be used to investigate the structure of the polynomial ring in two variables k[X,Y] over a field k of characteristic zero. The thesis gives a complete proof of Rentschler's Theorem, which describes all locally nilpotent derivations of k[X,Y]. Then we present Rentschler's proof of Jung's Theorem, which partially describes the group of automorphisms of k[X,Y]. Finally, we present the proof of the Structure Theorem for the group of automorphisms of k[X,Y].
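For readers unfamiliar with the objects involved, the following is the standard textbook example, not material quoted from the thesis: the partial derivative with respect to Y is a locally nilpotent derivation of k[X,Y], and Rentschler's Theorem says that, up to conjugation by an automorphism, every locally nilpotent derivation of k[X,Y] is a k[X]-multiple of it.

```latex
% D is a k-derivation of k[X,Y]; local nilpotency means every element
% is annihilated by a sufficiently high power of D.
\[
  D = \frac{\partial}{\partial Y}, \qquad
  D^{\,n+1} f = 0 \quad \text{whenever } \deg_Y f \le n,
\]
% Rentschler's Theorem (char k = 0): for every locally nilpotent
% derivation D of k[X,Y] there exist an automorphism \varphi of k[X,Y]
% and a polynomial a(X) such that
\[
  \varphi \circ D \circ \varphi^{-1} = a(X)\,\frac{\partial}{\partial Y},
  \qquad a(X) \in k[X].
\]
```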
38

Dysphagia progression-free survival in patients with locally advanced and metastatic oesophageal cancer receiving palliative radiation therapy

Bhim, Nazreen 04 January 2021 (has links)
Purpose: In patients with advanced oesophageal carcinoma, palliation of dysphagia is important for maintaining a reasonable quality of life. The primary aim of this study was to determine the dysphagia progression-free survival (DPFS) of patients with advanced oesophageal carcinoma treated with palliative radiotherapy (RT). Methods: The medical records of all patients with oesophageal carcinoma presenting to Groote Schuur Hospital, Cape Town between January 2015 and December 2016 were reviewed, and patients who were not candidates for curative treatment and received palliative RT were selected. For these patients, the dysphagia score (DS) was recorded prior to RT, 6 weeks after RT and at each follow-up visit. DPFS was calculated as the time from completion of RT to worsening of the DS by ≥1 point, or until death. Other outcomes measured were objective change in DS and survival post RT. Results: The study population comprised 84 patients. Squamous cell cancer was the primary histological subtype (93%). The median duration of DPFS after RT was 73 days, with approximately two-thirds of patients remaining able to swallow at least liquids and a soft diet until death. The difference in median duration of DPFS was not statistically significant in stented versus non-stented patients (54 days vs 83 days; p = 0.224). The mean change in DS was 0.45 ± 0.89 points following RT, and post-RT survival was significantly shorter in patients with stent insertion (81 days vs 123 days; p = 0.042). Conclusion: Palliative RT can be used successfully to prolong DPFS in patients with locally advanced and metastatic squamous cell cancer of the oesophagus.
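The DPFS definition above translates directly into a small computation over follow-up records. The sketch below is a hypothetical illustration: the field names, the choice of reference score, and the handling of patients with neither event observed are assumptions, not details from the study.

```python
from datetime import date

def dpfs_days(rt_end, ds_reference, visits, death_date=None):
    """Days from RT completion to the first follow-up visit at which the
    dysphagia score worsens by >= 1 point over the reference score, or to
    death if no worsening was recorded (censoring is ignored here)."""
    for visit_date, ds in sorted(visits):
        if ds >= ds_reference + 1:  # worsening by >= 1 point
            return (visit_date - rt_end).days
    if death_date is not None:
        return (death_date - rt_end).days
    return None  # neither event observed

# Hypothetical patient: RT ended 1 Feb, DS 1 after RT, worsened to 3 in May.
print(dpfs_days(date(2015, 2, 1), 1,
                [(date(2015, 3, 15), 1), (date(2015, 5, 10), 3)],
                death_date=date(2015, 7, 20)))  # -> 98
```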
39

Equal pay for equal work and work of equal value : bridging the gender pay gap

Basson, Devon January 2019 (has links)
Bridging the gender wage gap; South African history on women and the disadvantages suffered; South African legislation governing discrimination; international instruments governing equal pay between genders; international instruments on how to bridge the gender wage gap; considering international instruments in South Africa to bridge the gender wage gap. / Mini Dissertation (LLM)--University of Pretoria, 2019. / Mercantile Law / LLM / Unrestricted
40

Methodological advances in benefit transfer and hedonic analysis

Puri, Roshan 19 September 2023 (has links)
This dissertation introduces advanced statistical and econometric methods in two distinct areas of non-market valuation: benefit transfer (BT) and hedonic analysis. While the first and third chapters address the challenge of estimating the societal benefits of prospective environmental policy changes by adopting the locally weighted regression (LWR) technique in an environmental valuation context, the second chapter combines the output of traditional hedonic regression and matching estimators and provides guidance on the choice of model with low risk of bias in housing market studies. The economic and societal benefits associated with various environmental conservation programs, such as improvements in water quality or increments in wetland acreage, can be estimated directly using primary studies. However, conducting primary studies can be highly resource-intensive and time-consuming, as they typically involve extensive data collection, sophisticated models, and a considerable investment of financial and human resources. As a result, BT offers a practical alternative: employing valuation estimates, functions, or models from prior primary studies to predict the societal benefit of conservation policies at a policy site. Existing studies typically fit a single regression model to all observations within the given metadata and generate a single set of coefficients to predict welfare (willingness-to-pay) at a prospective policy site. However, a single set of coefficients may not reflect the true relationship between dependent and independent variables, especially when multiple source studies/locations are involved in the data-generating process, which in turn degrades the predictive accuracy of the given meta-regression model (MRM). To address this shortcoming, we employ the LWR technique in an environmental valuation context. LWR allows the estimation of a different set of coefficients for each location to be used for BT prediction. However, the empirical exercise carried out in the existing literature is computationally demanding and cumbersome to adopt in practice. In the first chapter, we simplify the experimental setup required for LWR-BT analysis by taking a closer look at the choice of weight variables for different window sizes and weight-function settings. We propose a pragmatic solution by suggesting "universal weights" instead of striving to identify the best of thousands of different weight-variable settings. We use the water quality metadata employed in the published literature and show that our universal weights generate more efficient and equally plausible BT estimates for policy sites than the best weight-variable settings that emerge from a time-consuming cross-validation search over the entire universe of individual variable combinations. The third chapter expands the scope of LWR to wetland metadata. We use a conceptually similar set of weight variables as in the first chapter and replicate the methodological approach of that chapter. We show that LWR, under our proposed weight settings, generates substantial gains in both predictive accuracy and efficiency compared to those of a standard globally linear MRM.
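The following sketch illustrates the general LWR idea described here: metadata observations are kernel-weighted by their similarity to the policy site, and a site-specific weighted least-squares fit supplies the coefficients used for BT prediction. The simulated metadata, the single weight variable, and the Gaussian kernel are illustrative assumptions, not the dissertation's actual weight settings.

```python
import numpy as np

def lwr_predict(X, y, z, x_query, z_query, bandwidth=1.0):
    """Weighted least squares at one query (policy) site.
    X: (n, p) regressors; y: (n,) WTP estimates from prior studies;
    z: (n,) similarity/weight variable; x_query, z_query: policy site."""
    d = (z - z_query) / bandwidth
    w = np.exp(-0.5 * d ** 2)           # Gaussian kernel weights
    Xd = np.column_stack([np.ones(len(y)), X])
    W = np.diag(w)
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)  # site-specific fit
    return np.concatenate([[1.0], np.atleast_1d(x_query)]) @ beta

rng = np.random.default_rng(1)
z = rng.uniform(0, 10, 300)             # e.g. a site characteristic
X = rng.normal(size=(300, 1))
y = (1 + 0.5 * z) * X[:, 0] + rng.normal(scale=0.2, size=300)  # slope varies by site
# A globally linear fit would average the slopes; LWR recovers the local one.
print(lwr_predict(X, y, z, x_query=[1.0], z_query=8.0, bandwidth=1.5))
```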
Our second chapter delves into a separate yet interrelated realm of non-market valuation: hedonic analysis. Here, we explore the combined inferential power of traditional hedonic regression and matching estimators to provide guidance on model choice for housing market studies in which researchers aim to estimate an unbiased binary treatment effect in the presence of unobserved spatial and temporal effects. We examine the potential sources of bias within both hedonic regression and basic matching, discuss the theoretical routes to mitigating these biases, and assess their feasibility in practical contexts. We propose a novel route towards unbiasedness, the "cancellation effect", and illustrate its empirical feasibility while estimating the impact of flood hazards on housing prices. / Doctor of Philosophy / This dissertation introduces novel statistical and econometric methods to better understand the value of environmental resources that do not have an explicit market price, such as the benefits we get from changes in water quality or the size of wetlands, or the impact of flood-risk zoning on the sales prices of residential properties. The first and third chapters tackle the challenge of estimating the value of environmental changes, such as cleaner water or more wetlands. To figure out how much people benefit from these changes, we can look at how much they would be willing to pay for improved water quality or increased wetland area. This typically requires conducting a primary survey, which is expensive and time-consuming. Instead, researchers can draw insights from prior studies to predict welfare at a new policy site. This approach is analogous to applying the methodology and/or findings from one research work to another. However, the direct application of findings from one context to another assumes uniformity across the different studies, which is unlikely, especially when past studies are associated with different spatial locations. To address this, we propose a "local weighting" technique, which places greater emphasis on the studies that closely align with the characteristics of the new (policy) context. Determining the weight variables/factors that dictate this alignment is a question that requires empirical investigation. One recent study attempts this local-weighting technique to estimate the benefits of improved water quality and suggests experimenting with different factors to find the similarity between past and new studies. However, their approach is computationally intensive, making it impractical to adopt. In our first chapter, we propose a more pragmatic solution: using a "universal weight" that does not require assessing multiple factors. With our proposed weights, in an otherwise similar setting, we find more efficient and equally plausible estimates of the benefits than previous studies. We expand the scope of local weighting to the valuation of gains or losses in wetland areas in the third chapter. We use a conceptually similar set of weight variables and replicate the empirical exercise from the first chapter. We show that the local-weighting technique, under our proposed settings, substantially improves the accuracy and efficiency of the estimated benefits associated with changes in wetland acreage. This highlights the diverse potential of the local-weighting technique in environmental valuation. The second chapter of this dissertation attempts to understand the impact of flood risk on housing prices. We can use "hedonic regression" to understand how different features of a house, like its size, location, sales year, amenities, and flood-zone location, affect its price. However, if we do not correctly specify this function, then the estimates will be misleading. Alternatively, we can use a "matching" technique, where we pair houses inside and outside of the flood zone on all observable characteristics and compare their prices to estimate the flood-zone impact. However, finding houses identical in all household and neighborhood characteristics is practically impossible. We propose that any leftover differences in the features of the matched houses can be balanced out by considering where the houses are located (school zone, for example) and when they were sold. We refer to this route as the "cancellation effect" and show that it can indeed be achieved in practice, especially when we pair a single house in a flood zone with many houses outside that zone. This not only allows us to accurately estimate the effect of flood zones on housing prices but also reduces the uncertainty around our findings.
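For concreteness, the sketch below shows the kind of baseline hedonic regression this chapter starts from: a log-price regression with a flood-zone dummy, run here on simulated data. It is not the dissertation's matching-based "cancellation effect" estimator, only the textbook specification against which that estimator is compared.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
sqft = rng.uniform(800, 3500, n)
age = rng.uniform(0, 60, n)
flood = rng.binomial(1, 0.2, n)          # 1 = inside flood zone
log_price = (11.0 + 0.0004 * sqft - 0.003 * age
             - 0.06 * flood              # "true" flood-zone discount
             + rng.normal(scale=0.15, size=n))

# Hedonic OLS: regress log price on attributes plus the treatment dummy.
X = np.column_stack([np.ones(n), sqft, age, flood])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(f"estimated flood-zone effect on log price: {beta[3]:.4f} (true: -0.06)")
```

With simulated data the dummy coefficient is recovered cleanly; the chapter's concern is precisely that, with real data, omitted spatial and temporal effects bias this coefficient unless the specification or the matching design absorbs them.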
