Automatische Generalisierungsverfahren zur Vereinfachung von Kartenvektordaten unter Berücksichtigung der Topologie und Echtzeitfähigkeit / Automatic generalisation methods for the simplification of map vector data with regard to topology and real-time capability

Hahmann, Stefan, 21 September 2010
MapChart GmbH offers a software service for creating vector-based maps from base geometries that are partly vendor-supplied and partly customer-specific. The primary output medium is the on-screen map rendered by a Java client in the web browser; PDF export and printing are also supported. Map production is not tied to predefined scales: users are free to choose which area is displayed and at what size. This poses complex tasks for cartographic generalisation. The first part of the thesis discusses these problems and the company's solutions to date, briefly reviewing scientific work on specific subtasks of generalisation. Selection and shape simplification are considered the most important generalisation steps. While selection can be implemented fairly easily with the standard methods of geodatabases, shape simplification is a substantial problem. The thesis therefore concentrates on computer-assisted line generalisation, with the goal of removing superfluous vertices by means of line simplification algorithms. The results are faster transmission of map vector data to the user and faster spatial analyses such as polygon intersection; improvements to the map display are also sought. A suitable algorithm makes modest demands on running time and memory; the achievable degree of vertex reduction at acceptable map quality is equally decisive, and topological aspects and implementation effort must be considered as well. The thesis gives a comprehensive overview of existing approaches to line generalisation and, from a discussion of their advantages and disadvantages, derives two algorithms suitable for implementation in Java. The results of the Douglas-Peucker and Visvalingam methods are compared with respect to running time, degree of vertex reduction and quality of the map display, with the Visvalingam variant showing slight advantages. A parameter configuration for deploying the simplification method in the MapChart GmbH GIS is proposed. The simplification of polygon meshes is an extension of the line generalisation problem: topological constraints must be respected, which is particularly difficult when the input data are not topologically structured. A new algorithm was developed for this task and likewise implemented in Java. Its implementation and the results achievable with it are presented on two test datasets; however, the important real-time requirement is not met, so the mesh simplification algorithm should only be used offline.
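The abstract names Douglas-Peucker and Visvalingam as the two algorithms implemented in Java but reproduces no code. Purely as an illustration, a minimal Java sketch of the Visvalingam idea (the variant the comparison slightly favours) could look like the following; the class name, the O(n^2) structure and the area threshold are assumptions for this example, not the thesis's implementation:

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal Visvalingam-Whyatt line simplification sketch: repeatedly drop
 *  the interior vertex whose "effective area" (the triangle it forms with
 *  its neighbours) is smallest, until every remaining vertex exceeds the
 *  area threshold. O(n^2) for clarity; a heap-based variant is faster. */
public class Visvalingam {

    /** Area of triangle (a, b, c) via the cross product. */
    static double triangleArea(double[] a, double[] b, double[] c) {
        return Math.abs((b[0] - a[0]) * (c[1] - a[1])
                      - (c[0] - a[0]) * (b[1] - a[1])) / 2.0;
    }

    static List<double[]> simplify(List<double[]> pts, double minArea) {
        List<double[]> result = new ArrayList<>(pts);
        while (result.size() > 2) {
            int smallest = -1;
            double smallestArea = Double.MAX_VALUE;
            for (int i = 1; i < result.size() - 1; i++) {
                double area = triangleArea(result.get(i - 1), result.get(i), result.get(i + 1));
                if (area < smallestArea) {
                    smallestArea = area;
                    smallest = i;
                }
            }
            if (smallestArea >= minArea) break; // all remaining vertices are significant
            result.remove(smallest);            // drop the least significant vertex
        }
        return result;
    }

    public static void main(String[] args) {
        List<double[]> line = new ArrayList<>();
        line.add(new double[]{0, 0});
        line.add(new double[]{1, 0.1}); // nearly collinear -> removed first
        line.add(new double[]{2, 0});
        line.add(new double[]{3, 5});
        line.add(new double[]{4, 0});
        for (double[] p : simplify(line, 0.5)) {
            System.out.printf("(%.1f, %.1f)%n", p[0], p[1]);
        }
    }
}
```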

Disaster medicine - performance indicators, information support and documentation: a study of an evaluation tool

Rüter, Anders, January 2006
Diss. (summary), Linköping: Linköpings universitet, 2006. Accompanied by 5 papers.

Complex Word Identification for Swedish

Smolenska, Greta, January 2018
Complex Word Identification (CWI) is the task of identifying complex words in text data; it is often viewed as a subtask of Automatic Text Simplification (ATS), where the main task is making a complex text simpler. How a text should be simplified depends on the target readers, such as second-language learners or people with reading disabilities. In this thesis, we focus on Complex Word Identification for Swedish. First, in addition to exploring existing resources, we collect a new dataset for Swedish CWI. We then build several classifiers of Swedish simple and complex words and use the findings to analyze the characteristics of lexical complexity in Swedish and English. Our method for collecting training data based on second-language learning material achieved positive evaluation scores and resulted in a new dataset for Swedish CWI. Additionally, the complex word classifiers reach an accuracy at least as good as that of similar systems for English. Finally, the analysis of the selected features confirms the findings of previous studies and reveals some interesting characteristics of lexical complexity.
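The abstract does not list the features behind the classifiers. As a hedged toy illustration of the kind of surface features common in CWI work (word length and corpus frequency), the following Java sketch flags a word as complex when a weighted feature score crosses a threshold; the feature set, weights and threshold are invented for this example:

```java
import java.util.Map;

/** Toy feature-based complex word identification: a word is flagged as
 *  complex when a weighted combination of simple features crosses a
 *  threshold. Features, weights and threshold are hypothetical, not the
 *  thesis's actual model. */
public class CwiExample {

    /** Relative frequency lookup (per million tokens), e.g. from a corpus. */
    static double frequency(Map<String, Double> freqTable, String word) {
        return freqTable.getOrDefault(word.toLowerCase(), 0.0);
    }

    static boolean isComplex(String word, Map<String, Double> freqTable) {
        double lengthScore = word.length() / 20.0;                     // long words tend to be harder
        double rarityScore = 1.0 / (1.0 + frequency(freqTable, word)); // rare words tend to be harder
        double score = 0.5 * lengthScore + 0.5 * rarityScore;
        return score > 0.6;
    }

    public static void main(String[] args) {
        Map<String, Double> freq = Map.of("och", 30000.0, "sjukvårdsförsäkring", 0.4);
        System.out.println(isComplex("och", freq));                 // common, short -> false
        System.out.println(isComplex("sjukvårdsförsäkring", freq)); // long, rare -> true
    }
}
```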

An Effective Approach to Biomedical Information Extraction with Limited Training Data

January 2011
In the current millennium, extensive use of computers and the internet has caused an exponential increase in information. Few research areas are as important as information extraction, which primarily involves extracting concepts and the relations between them from free text. Limitations in the size of training data, lack of lexicons and lack of relationship patterns are major causes of poor performance in information extraction. This is because the training data cannot possibly contain all concepts and their synonyms, and it contains only limited examples of relationship patterns between concepts. Creating training data, lexicons and relationship patterns is expensive, especially in the biomedical domain (including clinical notes) because of the depth of domain knowledge required of the curators. Dictionary-based approaches to concept extraction in this domain are not sufficient to overcome the complexities that arise from the descriptive nature of human languages. For example, there is a relatively higher proportion of abbreviations (not all of them present in lexicons) compared to everyday English text. Sometimes abbreviations are modifiers of an adjective (e.g. CD4-negative) rather than nouns (and hence not usually considered named entities). There are many chemical names containing numbers, commas, hyphens and parentheses (e.g. t(3;3)(q21;q26)), which most tokenizers will split apart. In addition, partial words are used in place of full words (e.g. up- and downregulate), and some of the vocabulary is highly specialized for the domain. Clinical notes contain peculiar drug names, anatomical nomenclature, and other specialized names and phrases that are not standard in everyday English or in published articles (e.g. "l shoulder inj"). State-of-the-art concept extraction systems use machine learning algorithms to overcome some of these challenges, but they need a large annotated corpus for every concept class that needs to be extracted. A novel natural language processing approach to minimize this limitation in concept extraction is proposed here using distributional semantics. Distributional semantics is an emerging field arising from the notion that the meaning, or semantics, of a piece of text (discourse) depends on the distribution of the elements of that discourse in relation to its surroundings. Distributional information from large unlabeled data is used to automatically create lexicons for the concepts to be tagged, clusters of contextually similar words, and thesauri of distributionally similar words. These automatically generated lexical resources are shown here to be more useful than manually created lexicons for extracting concepts from both literature and narratives. Further, machine learning features based on distributional semantics are shown to improve the accuracy of BANNER, and could be used in other machine learning systems such as cTakes to improve their performance. In addition, to simplify sentence patterns and facilitate association extraction, a new algorithm using a "shotgun" approach is proposed. The goal of sentence simplification has traditionally been to reduce the grammatical complexity of sentences while retaining the relevant information content and meaning, enabling better readability for humans and enhanced processing by parsers. Sentence simplification is shown here to improve the performance of association extraction systems for both biomedical literature and clinical notes: it improves the accuracy of protein-protein interaction extraction from the literature and also improves relationship extraction from clinical notes (such as between medical problems, tests and treatments). Overall, the two main contributions of this work are the application of sentence simplification to association extraction, as described above, and the use of distributional semantics for concept extraction. The proposed work on concept extraction amalgamates for the first time two diverse research areas: distributional semantics and information extraction. The approach retains the advantages offered by other semi-supervised machine learning systems and, unlike other proposed semi-supervised approaches, can be used on top of different basic frameworks and algorithms. / Ph.D. dissertation, Biomedical Informatics, 2011
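The dissertation's concept-extraction approach relies on the distributional hypothesis that words occurring in similar contexts have similar meanings. As a hedged illustration of how co-occurrence vectors and distributional similarity can be computed, the following Java example counts context words within a +/-2 token window and compares words by cosine similarity; the window size and the toy corpus are assumptions for the example, not the dissertation's actual setup:

```java
import java.util.*;

/** Minimal distributional-semantics sketch: represent each word by the
 *  counts of words co-occurring within a +/-2 token window, then compare
 *  words by cosine similarity. */
public class DistributionalSim {

    static Map<String, Map<String, Integer>> contextVectors(List<String[]> sentences) {
        Map<String, Map<String, Integer>> vectors = new HashMap<>();
        for (String[] tokens : sentences) {
            for (int i = 0; i < tokens.length; i++) {
                Map<String, Integer> vec = vectors.computeIfAbsent(tokens[i], k -> new HashMap<>());
                for (int j = Math.max(0, i - 2); j <= Math.min(tokens.length - 1, i + 2); j++) {
                    if (j != i) vec.merge(tokens[j], 1, Integer::sum); // count context word
                }
            }
        }
        return vectors;
    }

    static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Integer> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
            na += e.getValue() * e.getValue();
        }
        for (int v : b.values()) nb += v * v;
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        List<String[]> corpus = List.of(
            "the drug inhibits the kinase".split(" "),
            "the compound inhibits the kinase".split(" "));
        Map<String, Map<String, Integer>> v = contextVectors(corpus);
        // "drug" and "compound" share contexts, so they score as similar
        System.out.println(cosine(v.get("drug"), v.get("compound")));
    }
}
```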

Překladová čeština a její charakteristiky / Translated Czech and Its Characteristics

Chlumská, Lucie, January 2015
Title: Translated Czech and Its Characteristics Author: Mgr. Lucie Chlumská Department: Institute of the Czech National Corpus Supervisor: doc. Mgr. Václav Cvrček, Ph.D. Abstract: Despite the fact that translated literature accounts for more than one third of all written publications in the Czech Republic, Czech in translations has not yet been systematically analyzed from a quantitative point of view. The main objective of this corpus-based dissertation is to identify characteristic features of translated Czech compared to Czech in original, i.e. non-translated texts. The analysis was based on a large monolingual comparable corpus Jerome, created for the purposes of this study. It includes both fiction and non-fiction texts and its design reflects the real Czech situation regarding the translations' source languages, i.e. translations from English prevail. The research was inspired by the theory of translation universals (typical linguistic features common to any translated text) and focused mainly on simplification, convergence and general frequency characteristics, including parts-of-speech distribution and n-gram analysis. The findings have supported the hypothesis that translated Czech, as reflected in the Jerome corpus, is different from the non-translated Czech in terms of higher degree of...

Evaluation of an Appearance-Preserving Mesh Simplification Scheme for CET Designer

Hedin, Rasmus, January 2018
To decrease the rendering time of a mesh, Levels of Detail can be generated by reducing the number of polygons based on some geometrical error. While this works well for most meshes, it is not suitable for meshes with an associated texture atlas. By iteratively collapsing edges based on an extended version of the Quadric Error Metric that takes both spatial and texture coordinates into account, textured meshes can also be simplified. Results show that constraining edge collapses in the seams of a mesh gives poor geometrical appearance when the mesh is reduced to a few polygons. By allowing seam edge collapses and by using a pull-push algorithm to fill areas located outside the seam borders of the texture atlas, the appearance of the mesh is better preserved.
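The abstract builds on the Quadric Error Metric. In its spatial form, each vertex accumulates a 4x4 quadric from the planes of its incident triangles, and the cost of collapsing an edge into position v is v^T (Q1 + Q2) v. The following Java sketch shows only this spatial metric; the texture-coordinate extension evaluated in the thesis is not reproduced here, and all names are illustrative:

```java
/** Sketch of the spatial Quadric Error Metric (QEM) used in edge-collapse
 *  simplification: a plane contributes the outer product of its
 *  coefficients, quadrics add, and the error of a position is a
 *  quadratic form in homogeneous coordinates. */
public class QuadricError {

    /** Quadric of plane ax + by + cz + d = 0 (normal assumed unit length). */
    static double[][] planeQuadric(double a, double b, double c, double d) {
        double[] p = {a, b, c, d};
        double[][] q = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                q[i][j] = p[i] * p[j];
        return q;
    }

    static double[][] add(double[][] q1, double[][] q2) {
        double[][] s = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                s[i][j] = q1[i][j] + q2[i][j];
        return s;
    }

    /** Collapse cost: squared distance sum of v = (x, y, z, 1) under quadric q. */
    static double error(double[][] q, double x, double y, double z) {
        double[] v = {x, y, z, 1.0};
        double e = 0;
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                e += v[i] * q[i][j] * v[j];
        return e;
    }

    public static void main(String[] args) {
        // Two planes meeting at the x-axis: z = 0 and y = 0.
        double[][] q = add(planeQuadric(0, 0, 1, 0), planeQuadric(0, 1, 0, 0));
        System.out.println(error(q, 5, 0, 0)); // on both planes -> 0.0
        System.out.println(error(q, 0, 1, 2)); // off both planes -> 1 + 4 = 5.0
    }
}
```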

e-Rural: ambiente web para geração de conteúdos considerando a cultura e o nível de letramento do aprendiz / e-Rural: a web environment for content generation considering the learner's culture and literacy level

Magalhães, Vanessa Maia Aguiar de, 31 May 2011
This work describes the development process and the high-fidelity prototype of a computational environment for producing culturally contextualized web hyperdocuments, adapted and delivered according to the learner's literacy level, so that learners can access and understand technical information and knowledge, supporting their personal and professional development in the area in which they work. The work focuses on health education and promotes inclusive, continued learning, using HCI approaches that take the learner's literacy level into account and providing lexical and syntactic simplification, analogies, and processing based on common-sense knowledge as an expression of the learner's cultural context. The generated hyperdocuments are aimed primarily at users involved in Brazilian milk production who have difficulty reading and accessing technical information, since they present different literacy levels and cultural profiles. To provide this inclusive and culturally diverse environment, hyperdocuments first need to be produced by translating vocabularies and identifying meanings, adapting them to users' linguistic and cognitive needs in order to help them understand the content and construct meaning. These hyperdocuments must be adapted and delivered according to the users' literacy level, helping them overcome the cultural, social, emotional, perceptual, technological and cognitive barriers that block their access to knowledge. To this end, the content must be delivered with textual equivalents that replace or complement the hyperdocument's text, whether as images, videos, text narration or audio. To observe the development process of the environment and its functional prototype, and to collect the opinions of the target audience and partners, feasibility studies and experiments with computer science specialists and agribusiness researchers were carried out, as reported in this work, to verify the feasibility and applicability of the proposal in a real scenario. As a result, it is expected that researchers and agricultural technicians will be able to create contextualized content for diverse producers with different literacy levels, and that these producers will be able to understand the digital technical knowledge made available and, consequently, incorporate the technologies offered to improve the quality and productivity of milk and other foods in their day-to-day work.

Simplificação e praticabilidade no direito tributário / Simplification and practicality in tax law

Sergio Sydionir Saad, 07 April 2014
Nowadays, owing to several factors, it is becoming increasingly impracticable for the public administration to guarantee tax collection and enforcement. Simplifying rules, created in the name of practicality, are the compromise that guarantees that everyone is taxed without the unreasonable cost of an administrative apparatus that examines each concrete case individually. Waiving the individual assessment of each case when applying tax law, however, may offend the principles of legal certainty, legality, equality and ability to pay, among others. This dissertation analyzes solutions that meet the demand for practicality without harming the individual justice guaranteed by constitutional principles. Among the simplification techniques addressed, presumptions and fictions stand out. Simplifying rules, as the object of study, are identified within the universe of tax rules and their extrafiscal purpose is examined. Practicality is studied and given a concept that identifies its principle-like character and its relation to the principle of efficiency. As a principle, practicality is confronted with legal certainty, legality, equality, ability to pay, tax justice, the constitutional allocation of powers, proportionality, reasonableness, neutrality and the prohibition of confiscation, identifying the legal limits imposed on its use. In the case studies, these limits are verified and compared with the most recent positions in case law.

Legal aspects of the current mining situation and the impact of the package of new legislative decrees on mining activity / Aspectos legales de la situación actual de la minería y el impacto del paquete de nuevos decretos legislativos en la actividad minera. Entrevista a Luis Carlos Rodrigo

Cueva Chauca, Sergio, 12 April 2018
In this interview, Luis Carlos Rodrigo tells us about the new measures taken by the Government to unblock investments and invigorate the economy. He also discusses the participation of new entities in the granting of mining concessions, the impact of these measures on the mining formalization process, their effects on administrative simplification, and relevant aspects not contemplated by these rules.
