271

A legitimação de posse na dimensão registrária da regularização fundiária / The legitimation of tenure in the registry dimension of land regularization

Khuri, Naila de Rezende 09 February 2017 (has links)
This study examines legitimization of tenure as regulated by Law no. 11.977 of 2009, with emphasis on the registration dimension of the land regularization of informal settlements occupied by low-income populations. The registration dimension aims at titling the occupants of these settlements, on the premise of conferring legal security on possession. The work addresses the housing deficit that gave rise to informal settlements, and land regularization as an instrument for its solution in light of the right to decent housing and the right to a just city. Finally, the steps of land regularization before the Real Estate Registry, and their interface with the legitimization of tenure, are discussed.
272

Usucapião coletiva urbana e regularização fundiária em favelas paulistanas / Collective urban adverse possession and land regularization in São Paulo favelas

Chaccur, Ricardo Cotrim 29 August 2014 (has links)
This dissertation analyzes collective urban adverse possession as an urban policy instrument for promoting the land tenure regularization and urbanization of favelas in São Paulo. Brazilian law currently provides several mechanisms for realizing the social right to housing through the acquisition of real property, aimed at the social inclusion of people living in precarious or substandard dwellings and, consequently, at the development of citizenship. The right to housing is a social and human right, recognized by art. 6, caput, of the Federal Constitution of 1988 and by the Habitat Agenda of the United Nations Conference on Human Settlements (Habitat II), held in Istanbul in 1996. The need to overcome the land tenure problem in irregular settlements and favelas located in major Brazilian urban centers motivates the present work, which seeks to identify the causes of land conflict in the country and the resulting problems that affect the lives of urban dwellers, and analyzes one of the legal instruments for addressing this problem.
273

Determinação de espectros de energia de elétrons clínicos do eixo central a partir de curvas de porcentagem de dose em profundidade de feixes largos / Determination of central axis energy spectra of clinical electron beam from percentage depth dose curves of broad beams

Jorge Homero Wilches Visbal 15 August 2018 (has links)
In radiotherapy, the energy spectrum is the most critical component of an electron beam. Knowledge of the energy spectrum is important for accurate dose calculation, treatment planning applications, and realistic simulations. Inverse reconstruction derives the energy spectrum from measured percentage depth dose curves using an appropriate mathematical model. It is considered the best among several approaches because: i) it requires no supplementary equipment and no detailed knowledge of the geometry and composition of the accelerator head; ii) the equipment for measuring percentage depth dose curves is standard and already available in any clinic; and iii) it is computationally fast. In this work, we used an inverse reconstruction method based on the synergy of generalized simulated annealing and Tikhonov regularization. The reconstruction was validated via the gamma index under restrictive clinical acceptance criteria, comparing measured and reconstructed percentage depth dose curves. Results show that the reconstructed energy spectra accurately reproduce the clinical percentage depth dose as well as off-axis dose values. We therefore conclude that the method is effective at reconstructing energy spectra that faithfully represent the accelerator spectra reaching the phantom surface; consequently, within certain limits, they could aid realistic treatment simulations.
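The inverse reconstruction described above reduces, in its simplest form, to recovering non-negative spectral weights w from a measured percentage depth dose modeled as a weighted sum of mono-energetic depth-dose kernels, PDD(z) = Σᵢ wᵢ pᵢ(z), with a Tikhonov penalty stabilizing the fit. The sketch below illustrates only that regularized model with a projected-gradient solver; the thesis itself couples the Tikhonov term with generalized simulated annealing, and the kernels and names here are hypothetical.

```python
import numpy as np

# Minimal sketch (not the thesis code): recover non-negative spectral weights
# w from a measured percentage depth dose, assuming the linear model
#   PDD(z) = sum_i w_i * p_i(z),
# where p_i(z) are precomputed mono-energetic depth-dose kernels (hypothetical
# inputs here), stabilized by a Tikhonov (L2) penalty.
def reconstruct_spectrum(pdd, kernels, lam=1e-2, iters=5000, lr=1e-3):
    """pdd: (nz,) measured curve; kernels: (ne, nz) mono-energetic curves."""
    ne = kernels.shape[0]
    w = np.full(ne, 1.0 / ne)                # uniform initial spectrum
    for _ in range(iters):
        resid = kernels.T @ w - pdd          # forward model minus measurement
        grad = kernels @ resid + lam * w     # least-squares + Tikhonov gradient
        w = np.maximum(w - lr * grad, 0.0)   # projection keeps w >= 0
    return w / w.sum()                       # normalize to a unit spectrum
```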
274

Programa de regularização fundiária sustentável de assentamentos irregulares de São Bernardo do Campo: avanços e desafios ao processo de regularização fundiária no âmbito da política habitacional municipal / Program of sustainable land regularization of irregular settlements of São Bernardo do Campo: advances and challenges to the land regularization process within the municipal housing policy

Brandimiller, Tatiana Urbanovik 19 April 2017 (has links)
This research critically identifies and analyzes the advances brought and the challenges faced in implementing the Sustainable Land Tenure Regularization Program in the municipality of São Bernardo do Campo, state of São Paulo. In addressing land tenure regularization, the research seeks to understand its role as a strategic public policy for promoting access to housing and to the city for the low-income population. The work intends to contribute a critical panorama of the different understandings of, and proposed approaches to, Land Tenure Regularization of Social Interest. To that end, it problematizes the issue in its various meanings and conceptions, approaching it critically and relating it to the different lines of thought on the subject. Besides the conceptual framework and the specific literature, the research draws on the author's professional experience over the past five years implementing this public policy in São Bernardo do Campo.
275

Data Poisoning Attacks on Linked Data with Graph Regularization

January 2019 (has links)
abstract: Social media has become the default medium of communication, and its usage has increased exponentially over the last decade. Myriad social media services such as Facebook, Twitter, Snapchat, and Instagram allow people to connect freely with their friends and followers. The number of attackers trying to take advantage of this situation has grown at a similar rate. Every social media service runs its own recommender systems and user-profiling algorithms, which use current user information to make recommendations. The data arising from social media services is often linked data, as each item/user is usually linked with other users/items. Because of their ubiquitous and prominent nature, recommender systems are prone to several forms of attack, a major one being poisoning of the training set: since recommender systems use current user/item information as the training set for their recommendations, an attacker can modify the training set so that the recommender system benefits the attacker or gives incorrect recommendations, thereby failing in its basic functionality. Most existing training-set attack algorithms work with "flat" attribute-value data, which is typically assumed to be independent and identically distributed (i.i.d.). However, the i.i.d. assumption does not hold for social media data, since it is inherently linked as described above. Exploiting user similarity with a graph regularizer when morphing the training data produces the best results for the attacker. This thesis demonstrates this through experiments on collaborative filtering with multiple datasets. / Dissertation/Thesis / Masters Thesis Computer Science 2019
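For readers unfamiliar with the setup, the defender's model that such an attack targets can be written as matrix factorization whose loss adds a graph (Laplacian) penalty tying linked users' latent factors together. The sketch below is an illustrative rendering of that objective under assumed shapes and names, not code or notation from the thesis.

```python
import numpy as np

# Illustrative sketch of the defender's objective targeted by the attack:
# matrix factorization whose loss adds a graph (Laplacian) penalty tying
# linked users' latent factors together. Shapes and names are assumptions,
# not the thesis' notation.
def graph_reg_mf_loss(R, mask, U, V, A, lam=0.1, gamma=0.1):
    """R: (n_users, n_items) ratings; mask: 1 where observed;
    U: (n_users, k), V: (n_items, k) latent factors;
    A: (n_users, n_users) symmetric user-user adjacency (the 'links')."""
    fit = ((mask * (R - U @ V.T)) ** 2).sum()   # error on observed ratings
    L = np.diag(A.sum(axis=1)) - A              # combinatorial graph Laplacian
    smooth = np.trace(U.T @ L @ U)              # sum_ij A_ij ||u_i - u_j||^2 / 2
    return fit + lam * ((U ** 2).sum() + (V ** 2).sum()) + gamma * smooth
```

A poisoning attacker would then search for training-set modifications that most degrade (or bias) the minimizer of this loss, exploiting the graph term rather than treating ratings as i.i.d.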
276

η′ Decay to π⁺π⁻π⁺π⁻

Jafari, Ehsan 01 January 2018 (has links)
Using the chiral theory of mesons [1], [2], we evaluate the decay rate of η′ → π⁺π⁻π⁺π⁻. Our theoretical study of this problem differs from the previous theoretical study [3], and our predicted result is in good agreement with experiment. Within this chiral theory we evaluate Feynman diagrams up to one loop, and the decay rate is calculated using triangle and box diagrams; the ρ⁰ meson enters both types of diagrams as a resonance state. Divergent integrals in the loop calculations are regularized with the n-dimensional 't Hooft-Veltman technique. In the last step, the phase-space integral is calculated to obtain the decay rate.
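As a point of reference, the decay rate follows from the squared amplitude integrated over four-body phase space; this is textbook kinematics, not the specific chiral-model amplitude of the thesis. The factor 1/4 is the symmetry factor for the two pairs of identical pions:

```latex
\Gamma(\eta' \to \pi^+\pi^-\pi^+\pi^-)
  = \frac{1}{4}\,\frac{1}{2\,m_{\eta'}}
    \int \overline{|\mathcal{M}|^{2}}\;
    (2\pi)^{4}\,\delta^{4}\!\Big(P-\textstyle\sum_{i=1}^{4}p_{i}\Big)
    \prod_{i=1}^{4}\frac{d^{3}p_{i}}{(2\pi)^{3}\,2E_{i}}
```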
277

Measuring Linguistic and Cultural Evolution Using Books and Tweets

Gray, Tyler 01 January 2019 (has links)
Written language provides a snapshot of linguistic, cultural, and current-events information for a given time period. Aggregating these snapshots by studying many texts over time reveals trends in the evolution of language, culture, and society. The ever-increasing amount of electronic text, from the digitization of books and other paper documents to the growing use of electronic text as a means of communication, has given us an unprecedented opportunity to study these trends. In this dissertation, we use hundreds of thousands of books scanned by Google spanning two centuries, and over 100 billion messages, or 'tweets', posted to the social media platform Twitter over the course of a decade, to study the English language, as well as the evolution of culture and society as inferred from changes in language. We begin by studying the current state of verb regularization and how this compares between the more formal writing of books and the more colloquial writing of tweets on Twitter. We find that the extent of verb regularization is greater on Twitter, taken as a whole, than in English Fiction books, and also for tweets geotagged in the United States relative to American English books, but the opposite is true for tweets geotagged in the United Kingdom relative to British English books. We also find interesting regional variations in regularization across counties in the United States. However, once differences in population are accounted for, we do not identify strong correlations with socio-demographic variables. Next, we study stretchable words, a fundamental aspect of spoken language that, until the advent of social media, was rarely observed within written language. We examine the frequency distributions of stretchable words and introduce two central parameters that capture their main characteristics of balance and stretch. We explore their dynamics by creating visual tools we call 'balance plots' and 'spelling trees'. We also discuss how the tools and methods we develop could be used to study mistypings and misspellings, and may have further applications both within and beyond language. Finally, we take a closer look at the English Fiction n-gram dataset created by Google. We begin by explaining why using token counts as a proxy of word, or more generally, 'n-gram', importance is fundamentally flawed. We then devise a method to rebuild the Google Books corpus so that meaningful linguistic and cultural trends may be reliably discerned. We use book counts as the primary ranking for an n-gram and use subsampling to normalize across time to mitigate the extraneous results created by the underlying exponential increase in data volume over time. We also combine the subsampled data over a number of years as a method of smoothing. We then use these improved methods to study linguistic and cultural evolution across the last two centuries. We examine the dynamics of Zipf distributions for n-grams by measuring the churn of language reflected in the flux of n-grams across rank boundaries. Finally, we examine linguistic change using wordshift plots and a rank divergence measure with a tunable parameter to compare the language of two different time periods. Our results address several methodological shortcomings associated with the raw Google Books data, strengthening the potential for cultural inference by word changes.
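The verb-regularization measurement at the heart of the first study reduces to a simple frequency ratio: the share of regular past-tense tokens among all past-tense tokens of a verb. A toy sketch with invented counts (not data from the dissertation):

```python
# Toy sketch of the core measurement: a verb's regularization fraction is the
# share of regular past-tense tokens among all its past-tense tokens. The
# counts below are invented for illustration, not data from the dissertation.
def regularization_fraction(regular_count, irregular_count):
    total = regular_count + irregular_count
    return regular_count / total if total else float("nan")

# e.g. 'burned' (regular) vs 'burnt' (irregular) tokens in some corpus slice
print(regularization_fraction(regular_count=9200, irregular_count=800))  # 0.92
```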
278

Graph-based Latent Embedding, Annotation and Representation Learning in Neural Networks for Semi-supervised and Unsupervised Settings

Kilinc, Ismail Ozsel 30 November 2017 (has links)
Machine learning has been immensely successful in supervised learning, with outstanding examples in major industrial applications such as voice and image recognition. Following these developments, the most recent research has begun to focus primarily on algorithms that can exploit very large sets of unlabeled examples to reduce the amount of manually labeled data required for existing models to perform well. In this dissertation, we propose graph-based latent embedding/annotation/representation learning techniques in neural networks tailored for semi-supervised and unsupervised learning problems. Specifically, we propose a novel regularization technique called Graph-based Activity Regularization (GAR) and a novel output layer modification called Auto-clustering Output Layer (ACOL), which can be used separately or collaboratively to develop scalable and efficient learning frameworks for semi-supervised and unsupervised settings. First, using the GAR technique alone, we develop a framework providing an effective and scalable graph-based solution for semi-supervised settings in which there exists a large number of observations but only a small subset with ground-truth labels. The proposed approach is natural for the classification framework on neural networks, as it requires no additional task of calculating the reconstruction error (as in autoencoder-based methods) or implementing a zero-sum game mechanism (as in adversarial-training-based methods). We demonstrate that GAR effectively and accurately propagates the available labels to unlabeled examples. Our results show performance comparable with state-of-the-art generative approaches for this setting using an easier-to-train framework. Second, we explore a different type of semi-supervised setting where a coarse level of labeling is available for all the observations but the model has to learn a fine, deeper level of latent annotations for each one. Problems in this setting are likely to be encountered in many domains such as text categorization, protein function prediction, and image classification, as well as in exploratory scientific studies such as medical and genomics research. We consider this setting as simultaneously performed supervised classification (per the available coarse labels) and unsupervised clustering (within each of the coarse labels) and propose a novel framework combining GAR with ACOL, which enables the network to perform concurrent classification and clustering. We demonstrate how the coarse label supervision impacts performance and how the classification task actually helps propagate useful clustering information between sub-classes. Comparative tests on the most popular image datasets rigorously demonstrate the effectiveness and competitiveness of the proposed approach. The third and final setup builds on the prior framework to unlock fully unsupervised learning, where we propose to substitute real, yet unavailable, parent-class information with pseudo class labels. In this novel unsupervised clustering approach, the network can exploit hidden information indirectly introduced through a pseudo classification objective. We train an ACOL network through this pseudo supervision together with an unsupervised objective based on GAR, and ultimately obtain a k-means-friendly latent representation. Furthermore, we demonstrate how the chosen transformation type impacts performance and helps propagate the latent information that is useful in revealing unknown clusters.
Our results show state-of-the-art performance for unsupervised clustering tasks on MNIST, SVHN and USPS datasets with the highest accuracies reported to date in the literature.
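As a rough illustration of the graph-based regularization idea (the exact GAR formulation in the dissertation may differ), one can penalize disagreement between the network outputs of examples that are neighbors in a similarity graph, so label information propagates along edges:

```python
import numpy as np

# Hedged sketch of a graph-based activity regularizer: penalize disagreement
# between the class-probability outputs of examples that are neighbors in a
# similarity graph, so labels propagate along edges. The exact GAR
# formulation in the dissertation may differ.
def graph_activity_penalty(probs, W):
    """probs: (n, c) softmax outputs; W: (n, n) symmetric affinity matrix."""
    diff = probs[:, None, :] - probs[None, :, :]   # (n, n, c) pairwise gaps
    return (W * (diff ** 2).sum(axis=2)).sum() / (2 * probs.shape[0])
```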
279

Characterization of Hydrogeological Media Using Electromagnetic Geophysics

Linde, Niklas January 2005 (has links)
Radio magnetotellurics (RMT), crosshole ground penetrating radar (GPR), and crosshole electrical resistance tomography (ERT) were applied in a range of hydrogeological applications where geophysical data could improve hydrogeological characterization. A profile of RMT data collected over highly resistive granite was used to map subhorizontal fracture zones below 300 m depth, as well as a steeply dipping fracture zone that was also observed on a coinciding seismic reflection profile. One-dimensional inverse modelling and 3D forward modelling with displacement currents included were necessary to test the reliability of features found in the 2D models, whose forward models neglected displacement currents and considered only lower frequencies. An inversion code for RMT data was developed and applied to RMT data with an azimuthal electrical anisotropy signature collected over a limestone formation. The results indicated that RMT is a faster and more reliable technique for studying electrical anisotropy than azimuthal resistivity surveys. A new sequential inversion method to estimate hydraulic conductivity fields using crosshole GPR and tracer test data was applied to 2D synthetic examples. Given careful surveying, the results indicated that regularization of hydrogeological inverse problems using geophysical tomograms can improve models of hydraulic conductivity. A method to regularize geophysical inverse problems using geostatistical models was developed and applied to crosshole ERT and GPR data collected in unsaturated sandstone. The resulting models were geologically more reasonable than models whose regularization was based on traditional smoothness constraints. Electromagnetic geophysical techniques provide an inexpensive data source for estimating qualitative hydrogeological models, but hydrogeological data must be incorporated to make quantitative estimation of hydrogeological systems feasible.
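The contrast drawn above between smoothness constraints and geostatistical regularization can be made concrete in the linear inverse setting: replace the usual roughness penalty with one built from a model covariance C around a prior model m0. A minimal sketch under assumed inputs (not code from the thesis):

```python
import numpy as np

# Sketch of the contrast drawn above: instead of a generic smoothness penalty,
# regularize the linear inverse problem with a geostatistical model covariance
# C around a prior model m0. G, d, m0, C are placeholder inputs; this is not
# code from the thesis.
def geostat_solution(G, d, m0, C, lam=1.0):
    """Minimize ||G m - d||^2 + lam * (m - m0)^T C^{-1} (m - m0)."""
    Ci = np.linalg.inv(C)
    A = G.T @ G + lam * Ci                 # normal equations with prior term
    return np.linalg.solve(A, G.T @ d + lam * Ci @ m0)
```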
280

Compressive Sensing for 3D Data Processing Tasks: Applications, Models and Algorithms

January 2012 (has links)
Compressive sensing (CS) is a novel sampling methodology representing a paradigm shift from conventional data acquisition schemes. The theory of compressive sensing ensures that, under suitable conditions, compressible signals or images can be reconstructed from far fewer samples or measurements than are required by the Nyquist rate. So far in the literature, most work on CS concentrates on one-dimensional or two-dimensional data. However, besides involving far more data, three-dimensional (3D) data processing has particularities that require the development of new techniques in order to make successful transitions from theoretical feasibility to practical capability. This thesis studies several issues arising from the application of the CS methodology to 3D image processing tasks. Two specific applications are hyperspectral imaging and video compression, where 3D images are either directly unmixed or recovered as a whole from CS samples. The main issues include CS decoding models, preprocessing techniques and reconstruction algorithms, as well as CS encoding matrices in the case of video compression. Our investigation involves three major parts. (1) Total variation (TV) regularization plays a central role in the decoding models studied in this thesis. To solve such models, we propose an efficient scheme to implement the classic augmented Lagrangian multiplier method and study its convergence properties. The resulting Matlab package TVAL3 is used to solve several models. Computational results show that, thanks to its low per-iteration complexity, the proposed algorithm is capable of handling realistic 3D image processing tasks. (2) Hyperspectral image processing typically demands heavy computational resources due to the enormous amount of data involved. We investigate low-complexity procedures to unmix, sometimes blindly, CS-compressed hyperspectral data to directly obtain material signatures and their abundance fractions, bypassing the high-complexity task of reconstructing the image cube itself. (3) To overcome the "cliff effect" suffered by current video coding schemes, we explore a compressive video sampling framework to improve scalability with respect to channel capacities. We propose and study a novel multi-resolution CS encoding matrix, and a decoding model with a TV-DCT regularization function. Extensive numerical results are presented, obtained from experiments that use not only synthetic data, but also real data measured by hardware. The results establish the feasibility and robustness, to varying extents, of the proposed 3D data processing schemes, models and algorithms. Many challenges remain to be resolved in each area, but hopefully the progress made in this thesis represents a useful first step towards meeting them in the future.
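The TV-regularized decoding model at the center of part (1) has the generic form min_x ½‖Ax − b‖² + λ·TV(x). The 1D sketch below solves a smoothed version by plain gradient descent purely to illustrate the model; TVAL3 itself implements the classic augmented Lagrangian scheme discussed in the thesis, and A, b here are assumed inputs.

```python
import numpy as np

# Minimal 1D sketch of the TV-regularized decoding model
#   min_x 0.5 * ||A x - b||^2 + lam * TV(x),
# solved by gradient descent on a smoothed TV term. TVAL3 itself uses the
# classic augmented Lagrangian scheme; this toy version only illustrates
# the model, with A, b as assumed inputs.
def tv_cs_decode(A, b, lam=0.1, iters=3000, lr=1e-3, eps=1e-6):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g_fit = A.T @ (A @ x - b)                  # data-fidelity gradient
        dx = np.diff(x)                            # forward differences
        s = dx / np.sqrt(dx ** 2 + eps)            # smoothed sign(dx)
        g_tv = np.concatenate(([-s[0]], -np.diff(s), [s[-1]]))
        x -= lr * (g_fit + lam * g_tv)
    return x
```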
