  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Ground penetrating radar techniques for the determination of subsurface moisture variability

Charlton, Matthew January 2002 (has links)
No description available.
12

3-D Lokalbebentomographie der südlichen Anden zwischen 36° und 40°S / 3-D local earthquake tomography of the southern Andes between 36° and 40°S

Bohm, Mirjam, January 2004 (has links)
Thesis (doctoral)--Freie Universität Berlin, 2004. / Title from cover. "Dezember 2004"--P. [2] of cover. Vita. Includes bibliographical references (p. 103-113). Also available via the World Wide Web.
13

Morphologie et remplissage des vallées fossiles sud-armoricaines : apport de la stratigraphie sismique / Morphology and infill of the South Armorican fossil valleys: contribution of seismic stratigraphy

Menier, David. January 2004 (has links)
Thesis (doctoral)--Université de Rennes, 2003. / Includes bibliographical references (p. 186-202). Also available on the World Wide Web.
14

Essays on recommender systems : impact of sparse data and an information theoretic segmentation approach /

Mescioglu, Ibrahim, January 2008 (has links)
Thesis (Ph.D.)--University of Texas at Dallas, 2008. / Includes vita. Includes bibliographical references (leaves 128-130).
15

Characterization of micro-scale surface features using partial differential equations

Gonzalez Castro, Gabriela, Spares, Robert, Ugail, Hassan, Whiteside, Benjamin R., Sweeney, John January 2010 (has links)
Mass production of components with micro- and nano-scale surface features is known as micromoulding and is very sensitive to a number of variables that can cause important changes in the surface geometry of the components. The surface itself is regarded as a key element in determining the product's functionality and as such must be subject to thorough quality control procedures. To that end, a number of surface measurement techniques have been employed, namely White Light Interferometry (WLI) and Atomic Force Microscopy (AFM), whose resulting data are given in the form of large and rather unmanageable Cartesian point clouds. This work uses Partial Differential Equations (PDEs) as a means of efficiently characterizing the surfaces associated with these data sets. This is carried out by solving the Biharmonic equation subject to a set of boundary conditions describing outer surface contours extracted from the raw measurement data. Design parameters are expressed as a function of the coefficients associated with the analytic solution of the Biharmonic equation and are then compared against the design parameters describing an ideal surface profile. Thus, the technique proposed here offers a means of quality assessment using compressed data sets.
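The boundary-value problem the abstract refers to can be written, in generic notation (the symbols below are illustrative, not necessarily the thesis's own), as the biharmonic equation over a parameter domain with the extracted outer surface contours imposed as boundary conditions:

```latex
% Sketch of the PDE surface formulation: X(u,v) is the surface patch and
% P_0..P_3 are boundary curves taken from the outer contours of the
% measured point cloud (assumed notation, for illustration only).
\[
  \left( \frac{\partial^{2}}{\partial u^{2}} + \frac{\partial^{2}}{\partial v^{2}} \right)^{\!2}
  \mathbf{X}(u,v) = \mathbf{0}, \qquad (u,v) \in [0,1]^{2},
\]
\[
  \mathbf{X}(0,v) = \mathbf{P}_{0}(v), \quad
  \mathbf{X}(1,v) = \mathbf{P}_{1}(v), \quad
  \mathbf{X}(u,0) = \mathbf{P}_{2}(u), \quad
  \mathbf{X}(u,1) = \mathbf{P}_{3}(u).
\]
```

The design parameters mentioned above would then be read off the coefficients of the analytic solution of this system rather than from the raw point cloud, which is what allows the comparison against an ideal profile to work on compressed data.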
16

Tempest: A Framework for High Performance Thermal-Aware Distributed Computing

Pyla, Hari Krishna 08 June 2007 (has links)
Compute clusters are consuming more power at higher densities than ever before. This results in increased thermal dissipation, the need for powerful cooling systems, and ultimately a reduction in system reliability as temperatures increase. Over the past several years, the research community has reacted to this problem by producing software tools such as HotSpot and Mercury to estimate system thermal characteristics and validate thermal-management techniques. While these tools are flexible and useful, they suffer several limitations: for the average user such simulation tools can be cumbersome to use, and they may take significant time and expertise to port to different systems. Further, such tools produce significant detail and accuracy at the expense of execution time, enough to prohibit iterative testing. We propose a fast, easy-to-use, accurate, portable software framework called Tempest (for temperature estimator) that leverages emergent thermal sensors to enable users to profile, evaluate, and reduce the thermal characteristics of systems and applications. In this thesis, we illustrate the use of Tempest to analyze the thermal effects of various parallel benchmarks in clusters. We also show how users can analyze the effects of thermal optimizations on cluster applications. Dynamic Voltage and Frequency Scaling (DVFS) reduces the power consumption of high-performance clusters by reducing processor voltage during periods of low utilization. We designed Tempest to measure the runtime effects of processor frequency on thermals. Our experiments indicate that HPC workload characteristics greatly impact the effects of DVFS on temperature. We propose a thermal-aware DVFS scheduling approach that proactively controls processor voltage across a cluster by evaluating and predicting trends in processor temperature. We identify approaches that can maintain temperature thresholds and reduce temperature with minimal impact on performance. Our results indicate that proactive, temperature-aware scheduling of DVFS can reduce cluster-wide processor thermals by more than 10 degrees Celsius, the threshold for improving electronic reliability by 50%. / Master of Science
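Tempest itself is not reproduced here, but the core idea of proactive, temperature-aware DVFS described above (sample a thermal sensor, extrapolate the trend, and lower the frequency cap before a threshold is crossed) can be sketched roughly as follows. The Linux sysfs paths and the 70 °C threshold are assumptions for illustration, not values taken from the thesis.

```python
import glob
import time

TEMP_LIMIT_C = 70.0        # assumed threshold, not a figure from the thesis
SAMPLE_PERIOD_S = 1.0

def read_cpu_temp_c(zone="/sys/class/thermal/thermal_zone0/temp"):
    """Read a CPU temperature from the Linux thermal sysfs (millidegrees C)."""
    with open(zone) as f:
        return int(f.read().strip()) / 1000.0

def set_max_freq_khz(freq_khz):
    """Cap every core's frequency via cpufreq's scaling_max_freq (needs root)."""
    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq"):
        with open(path, "w") as f:
            f.write(str(freq_khz))

def control_loop(freq_steps_khz):
    """Proactive loop: linearly extrapolate the temperature trend and step the
    frequency down before the predicted value crosses the limit, stepping back
    up once there is headroom again."""
    level = len(freq_steps_khz) - 1          # start at the fastest step
    prev = read_cpu_temp_c()
    while True:
        time.sleep(SAMPLE_PERIOD_S)
        temp = read_cpu_temp_c()
        predicted = temp + (temp - prev)     # one-step linear extrapolation
        if predicted > TEMP_LIMIT_C and level > 0:
            level -= 1                       # throttle ahead of the threshold
        elif predicted < TEMP_LIMIT_C - 5 and level < len(freq_steps_khz) - 1:
            level += 1                       # restore performance when cool
        set_max_freq_khz(freq_steps_khz[level])
        prev = temp

if __name__ == "__main__":
    control_loop([1_200_000, 1_800_000, 2_400_000])   # example steps in kHz
```

A scheduler like the one the thesis evaluates would coordinate such decisions across a whole cluster and weigh the performance cost of each step, which this single-node sketch ignores.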
17

Constructing user behavioral profiles using data-mining-based approach

Gao, Wei January 2005 (has links)
User profiling has wide applications such as personalization, intrusion detection, and online customer analysis in e-business environments. In the past decade, most research on user profiling focused on factual profile construction and applications; a few researchers studied application-oriented behavioral profiling problems. In light of the advantages of behavioral profiles over their factual counterparts and the importance of a fundamental understanding of them, this dissertation probes into the theoretical foundation, modeling, and data-mining-based heuristic techniques for constructing behavioral profiles. We first propose a research framework for behavioral profiling and define the fundamentals. We build an optimization model for describing and solving a general type of behavioral profile construction problem. Analysis of the optimization model's analytic properties reveals a strong connection between feasible solutions to the model and independent dominating sets in a graph derived from the model's input. Based on this finding, we employed two solution-searching approaches, brute force and a Genetic Algorithm, and performed numerical analysis on a synthetic small-sized profiling problem. The results demonstrate the effectiveness of the Genetic Algorithm for producing approximate optimal solutions to the CH optimization problem. We then propose an innovative data-mining-based heuristic approach, hierarchical characteristic pattern mining, to find solutions to the profile construction optimization problem. This approach builds behavioral profiles based on a new type of pattern, the characteristic pattern, and is appropriate for large-scale problems. Experiments using relatively large amounts of synthetic data were conducted to test the performance of this approach; the results show that the data-mining-based approach outperforms the Genetic Algorithm when characteristic patterns exist. Finally, a particular behavioral profile application, web user identification, is introduced to present the problems and solutions that arise when applying the data-mining-based behavioral profile construction approach to a real-world profiling application. Experiments performed on a real-world dataset produced positive results for our approach in terms of effectiveness, efficiency, and interpretability. The main contributions of the dissertation are: (1) proposing a comprehensive profiling research framework; (2) building an optimization model for solving a general type of profile construction problem; and (3) developing an innovative data-mining-based heuristic approach to building behavioral profiles.
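The dissertation's own encoding is not given in the abstract; purely as an illustration of the route it describes (feasible solutions corresponding to independent dominating sets, searched with a Genetic Algorithm), a minimal sketch on a toy graph might look like this. The fitness weights and the toy graph are assumptions, not the dissertation's formulation.

```python
import random

def fitness(bits, adj):
    """Score a candidate vertex subset (bit i set means vertex i is selected).
    Violations of independence (two selected vertices adjacent) and domination
    (an unselected vertex with no selected neighbour) are penalized heavily,
    then smaller sets are preferred."""
    selected = {i for i, b in enumerate(bits) if b}
    independence = sum(1 for u in selected for v in adj[u]
                       if v in selected and v > u)
    domination = sum(1 for u in adj
                     if u not in selected and not (adj[u] & selected))
    return -(10 * (independence + domination) + len(selected))

def genetic_search(adj, pop_size=60, generations=200, p_mut=0.02):
    """Plain generational GA: tournament selection, one-point crossover,
    bit-flip mutation, keeping the two best individuals each generation."""
    n = len(adj)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda b: fitness(b, adj), reverse=True)
        next_pop = ranked[:2]
        while len(next_pop) < pop_size:
            p1, p2 = (max(random.sample(ranked, 3), key=lambda b: fitness(b, adj))
                      for _ in range(2))
            cut = random.randrange(1, n)
            child = [b ^ (random.random() < p_mut) for b in p1[:cut] + p2[cut:]]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=lambda b: fitness(b, adj))

# Toy 4-cycle 0-1-2-3-0: {0, 2} and {1, 3} are its independent dominating sets.
adjacency = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
best = genetic_search(adjacency)
print(sorted(v for v, b in enumerate(best) if b))
```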
18

Técnicas de profiling para o co-projeto de hardware e software baseado em computação reconfigurável aplicadas ao processador softcore Nios II da Altera / Hardware and software codesign profiling techniques based on reconfigurable computing applied to Altera's Nios II soft-core processor

Kiehn, Luiz Henrique 21 September 2012 (has links)
With the advance of electronic-system development paradigms, new concepts, models, and techniques have emerged, producing more efficient and objective tools. Among them, system-level (ESL) electronic design automation (EDA) tools have brought a considerable productivity increase to the construction of electronic systems, including embedded systems. Regarding the performance of the resulting system, monitoring its execution and determining its operating profile are essential tasks for assessing, from its behaviour, which points represent bottlenecks or hot spots that affect its overall efficiency. It is therefore necessary to investigate verification and optimization principles that are better adapted to these new development paradigms. This work implements a data-collection and processing module for profiling programs written in C and executed on soft-core processors such as Altera's Nios II. Unlike the statistics offered by the GProf (GNU Profiling) tool for performance analysis, in which each sample obtained increments a counter for the function caught executing, this work focuses on profiling heap memory usage, namely the allocated volume observed in each sample. For different samples of the same function, the quantity of interest is the largest amount of memory the function used across all collected samples; instead of incrementing a counter per sample, the module records the maximum number of bytes of memory usage observed for each function. Its main features are: (a) storing the heap-usage information obtained during profiling in a format suitable for later use by hardware/software co-design applications; and (b) generating profiling reports that show the volume of dynamic memory allocated while the analysed programs run, so that the most critical allocation sites can be identified, allowing the designer to decide whether to rework the source code, increase the memory to be installed in the system, or redesign the architecture more broadly.
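The thesis's module samples C programs running on the Nios II; as a host-side analogue only (using Python's tracemalloc in place of the actual sampling mechanism, with hypothetical helper names), the "record the maximum, not a counter" rule described above could be sketched as:

```python
import tracemalloc
from collections import defaultdict
from functools import wraps

peak_heap = defaultdict(int)   # function name -> largest traced heap size (bytes)

def profile_peak_heap(func):
    """Keep, for each decorated function, the largest heap usage observed across
    all of its calls, instead of incrementing a per-sample counter."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        if not tracemalloc.is_tracing():
            tracemalloc.start()
        result = func(*args, **kwargs)
        _, peak = tracemalloc.get_traced_memory()
        peak_heap[func.__name__] = max(peak_heap[func.__name__], peak)
        tracemalloc.reset_peak()   # Python 3.9+; isolates the next call's peak
        return result
    return wrapper

@profile_peak_heap
def build_table(n):
    return [bytes(1024) for _ in range(n)]

build_table(10)
build_table(1000)
print(dict(peak_heap))   # reports only the largest allocation volume seen per function
```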
20

Gärningsmannaprofilering - konst eller vetenskap? / Offender profiling - art or science?

Halilovic, Melda January 2012 (has links)
Offender profiling (gärningsmannaprofilering, GMP) is a police method used in hard-to-solve investigations. In Sweden the method is practised by a central unit at the Rikskriminalpolisen, the so-called GMP group. The overall aim of the study is to understand and explain GMP as a police method and to examine its effectiveness, accuracy, and scientific basis. A specific aim is to test learning theory and routine activity theory against the GMP method in order to discuss possible connections. The study is qualitatively oriented and builds on the hermeneutic theory of source interpretation; it is based on an extensive systematic literature review combined with two interviews. In summary, the analysis concludes that GMP is a method that in many cases appears to be effective and that often generates accurate offender profiles. However, more knowledge is needed about the scientific grounding of GMP. A connection was also found between GMP and both routine activity theory and learning theory, although the connection with learning theory is marginal.
