11

Modelo de servidor web com quatro módulos de atendimento de requisições (SWMAR) / Web server model with four request attending modules (SWMAR)

Guiesi Junior, Geraldo 30 May 2008 (has links)
This master's dissertation presents the implementation and validation of a web server model that divides the web server's operation into four modules, each responsible for one stage a request passes through during its processing. The modules are: request admission (module 1), file transfer to main memory (module 2), dynamic request processing (module 3), and file delivery to the client (module 4). The four modules are interconnected and are fed by an initial load produced by the W4Gen workload generator; every request necessarily passes through modules 1, 2 and 4, in that order, while module 3 is used only for dynamic requests. When a request is served by a module, it is assigned an execution time (the time the request takes to be processed by that module); these times were based on published benchmarks of real web servers. The results of this work are intended primarily to be integrated with the web server simulation work of the Distributed Systems and Concurrent Programming group (LaSDPC), so as to obtain results close to those observed on real web servers.
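The request flow described in the abstract (module 1 to module 2, optionally module 3, then module 4) can be illustrated with a small simulation sketch. The per-module service times and the dynamic-request ratio below are made-up placeholders, not the benchmark-derived values used in the dissertation, and the code is only an illustration of the pipeline, not the SWMAR implementation itself.

```python
import random

# Placeholder per-module service times in milliseconds (assumed values,
# not the benchmark-derived times used in the SWMAR model).
SERVICE_MS = {1: 0.2, 2: 1.5, 3: 4.0, 4: 2.5}
DYNAMIC_RATIO = 0.3  # assumed fraction of dynamic requests

def process_request(is_dynamic: bool) -> float:
    """Total service demand for one request through modules 1 -> 2 -> (3) -> 4."""
    path = [1, 2] + ([3] if is_dynamic else []) + [4]
    return sum(SERVICE_MS[m] for m in path)

def simulate(n_requests: int = 10_000, seed: int = 42) -> float:
    """Mean service demand per request over a synthetic load."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_requests):
        total += process_request(random.random() < DYNAMIC_RATIO)
    return total / n_requests

if __name__ == "__main__":
    print(f"mean service demand: {simulate():.2f} ms per request")
```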
12

  • Avaliação da eficiência de cursos de graduação através da análise envoltória de dados: um estudo de caso na Universidade Federal Fluminense / Evaluating the efficiency of undergraduate courses through data envelopment analysis: a case study at Universidade Federal Fluminense

Tavares, Rafael Santos 18 September 2015 (has links)
This study evaluates the efficiency of undergraduate courses offered by Universidade Federal Fluminense in terms of their ability to add knowledge to students during their academic education, using the technique known as data envelopment analysis (DEA). The study uses data provided by INEP on the grades obtained in the National Student Performance Exam (ENADE) by students completing the 2010, 2011 and 2012 cycles and by those entering in the preceding cycle, so as to reflect the period the student spends at the university. It also uses data on the faculty of each undergraduate course and data reflecting each course's degree of retention, through the index known as the graduation success rate (TSG). A literature review confirmed the importance of methods that help measure efficiency in higher education, since higher education institutions are under growing pressure to deliver increasingly significant results using the smallest possible amount of resources. By proposing three distinct scenarios for evaluating the efficiency of a total of 38 undergraduate courses, this study identifies the courses with the best performance and the courses that need improvement to reach the efficiency frontier. The targets proposed for the outputs of the inefficient units and their respective reference sets were then analyzed, respecting the characteristic profile of each course, after grouping the courses according to the knowledge area to which they belong.
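As a companion to the abstract, the sketch below shows how an input-oriented CCR efficiency score of the kind produced by data envelopment analysis can be computed as a linear program. The tiny input/output matrices are invented placeholders, not the ENADE/TSG data used in the study, and the formulation is a generic textbook DEA model rather than the specific scenarios evaluated in the dissertation.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs: np.ndarray, outputs: np.ndarray, unit: int) -> float:
    """Input-oriented CCR efficiency of one DMU (column `unit`).

    inputs:  (m, n) matrix of m input kinds for n units
    outputs: (s, n) matrix of s output kinds for n units
    """
    m, n = inputs.shape
    s, _ = outputs.shape
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # sum_j lambda_j * x_ij - theta * x_i,unit <= 0  (input constraints)
    a_in = np.hstack([-inputs[:, [unit]], inputs])
    # -sum_j lambda_j * y_rj <= -y_r,unit            (output constraints)
    a_out = np.hstack([np.zeros((s, 1)), -outputs])
    A_ub = np.vstack([a_in, a_out])
    b_ub = np.r_[np.zeros(m), -outputs[:, unit]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]  # efficiency score in (0, 1]

# Toy example: 2 inputs (e.g. entry score, faculty index) and 1 output
# (e.g. exit score) for 4 hypothetical courses -- invented numbers.
X = np.array([[40.0, 55.0, 45.0, 60.0],
              [10.0, 12.0,  9.0, 15.0]])
Y = np.array([[52.0, 60.0, 58.0, 61.0]])
for j in range(X.shape[1]):
    print(f"course {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```

Units with a score of 1 lie on the efficiency frontier; for the others, the optimal lambdas identify the reference set the abstract mentions.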
14

Efficient Source Selection For SPARQL Endpoint Query Federation

Saleem, Muhammad 13 May 2016 (has links)
The Web of Data has grown enormously over the last years. Currently, it comprises a large compendium of linked and distributed datasets from multiple domains. Due to the decentralised architecture of the Web of Data, several of these datasets contain complementary data, so running complex queries on this compendium often requires accessing data from different data sources within one query. The abundance of datasets and the need to run complex queries have thus motivated a considerable body of work on SPARQL query federation systems, the dedicated means to access data distributed over the Web of Data. This thesis addresses two key areas of federated SPARQL query processing: (1) efficient source selection, and (2) comprehensive SPARQL benchmarks to test and rank federated SPARQL engines as well as triple stores.

Efficient source selection: Efficient source selection is one of the most important optimization steps in federated SPARQL query processing. An overestimation of query-relevant data sources increases network traffic, produces irrelevant intermediate results, and can significantly affect the overall query processing time. Previous work has focused on generating optimized query execution plans for fast result retrieval; devising source selection approaches beyond triple-pattern-wise source selection has received much less attention, as has the effect of duplicated data on federated querying. This thesis presents HiBISCuS and TBSS, novel hypergraph-based source selection approaches, and DAW, a duplicate-aware source selection approach to federated querying over the Web of Data. Each of these approaches can be combined directly with existing SPARQL query federation engines to achieve the same recall while querying fewer data sources. We combined the three source selection approaches (HiBISCuS, DAW, and TBSS) with query rewriting to form a complete SPARQL query federation engine named Quetsal. Furthermore, we present TopFed, a federated query processing engine tailored to The Cancer Genome Atlas (TCGA) that exploits the data distribution to perform intelligent source selection while querying large TCGA SPARQL endpoints. Finally, we address rights management and privacy when accessing sensitive resources: we present SAFE, a global source selection approach that enables decentralised, policy-aware access to sensitive clinical information represented as distributed RDF Data Cubes.

Comprehensive SPARQL benchmarks: Benchmarking is indispensable when assessing technologies with respect to their suitability for given tasks. While several benchmarks and benchmark generation frameworks have been developed to evaluate federated SPARQL engines and triple stores, they mostly provide a one-size-fits-all solution to the benchmarking problem, which is unsuitable for evaluating the performance of a triple store for a given application with particular requirements. The fitness of current SPARQL query federation approaches for real applications is difficult to evaluate with current benchmarks, as these are either synthetic or too small in size and complexity. Furthermore, state-of-the-art federated SPARQL benchmarks have mostly focused on a single performance criterion, the overall query runtime, and thus cannot provide a fine-grained evaluation of the systems. We address these drawbacks by presenting FEASIBLE, an automatic approach for generating benchmarks from the query history of applications (i.e., query logs), and LargeRDFBench, a billion-triple benchmark for SPARQL query federation that encompasses real data as well as real queries pertaining to real biomedical use cases. Our evaluation results show that HiBISCuS, TBSS, TopFed, DAW, and SAFE can all significantly reduce the total number of sources selected and thus improve overall query performance. In particular, TBSS is the first source selection approach to remain under 5% overestimation of relevant sources overall. Quetsal reduces the number of sources selected (without losing recall), the source selection time, and the overall query runtime compared with state-of-the-art federation engines. The LargeRDFBench evaluation results suggest that the performance of current SPARQL query federation systems on simple queries does not reflect their performance on more complex queries; moreover, current federation systems seem unable to deal with many of the challenges that await them in the age of Big Data. Finally, FEASIBLE's evaluation results show that it generates better sample queries than the state of the art; the better query selection and the larger set of query types used lead to triple-store rankings that partly differ from the rankings generated by previous works.
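For context, the baseline this thesis improves on, triple-pattern-wise source selection, can be approximated by probing every endpoint with a SPARQL ASK query per triple pattern and keeping only the endpoints that answer true. The sketch below uses the SPARQLWrapper library and placeholder endpoint URLs and patterns; it is a simplified illustration of that baseline, not of HiBISCuS, TBSS, or DAW themselves.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint URLs (assumptions for illustration only).
ENDPOINTS = [
    "http://example.org/sparql/drugbank",
    "http://example.org/sparql/kegg",
]

# Triple patterns of a federated query, written as SPARQL fragments.
TRIPLE_PATTERNS = [
    "?drug <http://example.org/vocab/interactsWith> ?target .",
    "?target <http://www.w3.org/2000/01/rdf-schema#label> ?label .",
]

def relevant_sources(pattern: str) -> list[str]:
    """Triple-pattern-wise selection: ASK every endpoint for the pattern."""
    sources = []
    for endpoint in ENDPOINTS:
        sparql = SPARQLWrapper(endpoint)
        sparql.setQuery(f"ASK WHERE {{ {pattern} }}")
        sparql.setReturnFormat(JSON)
        if sparql.query().convert().get("boolean", False):
            sources.append(endpoint)
    return sources

if __name__ == "__main__":
    for tp in TRIPLE_PATTERNS:
        print(tp, "->", relevant_sources(tp))
```

Hypergraph-based approaches such as HiBISCuS prune this baseline further, for example by reasoning over the URI authorities appearing in the patterns, which is how they keep recall while contacting fewer sources.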
15

Enhancing the Accuracy of Synthetic File System Benchmarks

Farhat, Salam 01 January 2017 (has links)
File system benchmarking plays an essential part in assessing a file system's performance. It is especially difficult to measure and study file system performance because it involves several layers of hardware and software. Furthermore, different systems have different workload characteristics, so while a file system may be optimized for one given workload, it might not perform optimally under other types of workloads. It is therefore imperative that the file system under study be examined with a workload equivalent to its production workload, to ensure that it is optimized according to its usage. The most widely used benchmarking method is synthetic benchmarking, due to its ease of use and flexibility. The flexibility of synthetic benchmarks allows system designers to produce a variety of different workloads that provide insight into how the file system will perform under slightly different conditions. The downside of synthetic workloads is that they produce generic workloads that do not have the same characteristics as production workloads. For instance, synthetic benchmarks do not take into consideration the effects of the cache, which can greatly impact the performance of the underlying file system, and they do not model the variation in a given workload. This can lead to file systems that are not optimally designed for their usage. This work enhanced synthetic workload generation methods by taking into consideration how file system operations are satisfied by lower-level function calls, and by modeling the variations of the workload's footprint when present. The first step in the methodology was to run a given workload and trace it with a tool called tracefs. The collected traces contained data on the file system operations and the lower-level function calls that satisfied those operations. The trace was then divided into chunks small enough that the workload characteristics of each chunk could be considered uniform. A configuration file modeling each chunk was then generated and supplied to FileRunner, a synthetic workload generator tool created as part of this work. The workload definition for each chunk allowed FileRunner to generate a synthetic workload that produced the same workload footprint as the corresponding segment of the original workload; in other words, the synthetic workload exercised the lower-level function calls in the same way as the original workload. Furthermore, FileRunner generated a synthetic workload for each specified segment in the order in which it appeared in the trace, resulting in a final workload that mimics the variation present in the original workload. The results indicated that the methodology can create a workload whose throughput is within 10% of the original and whose operation latencies, with the exception of create latencies, are within the allowable 10% difference, and in some cases within the 15% maximum allowable difference. The work was also able to accurately model the I/O footprint: in some cases the difference was negligible, and in the worst case it was 2.49%.
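The chunking step described above can be illustrated with a small sketch that splits a trace into fixed-length time windows and summarises the operation mix of each window. The trace record layout is an assumption for illustration; it is not the tracefs output format or the FileRunner configuration format used in the dissertation.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class TraceRecord:
    timestamp: float   # seconds since start of trace
    operation: str     # e.g. "read", "write", "create"
    nbytes: int

def chunk_profiles(trace: list[TraceRecord], chunk_seconds: float = 5.0):
    """Split a trace into fixed-length chunks and profile each chunk.

    Each profile records the operation mix and total bytes moved, roughly
    the information a per-chunk workload definition would carry.
    """
    profiles: dict[int, dict] = {}
    for rec in trace:
        idx = int(rec.timestamp // chunk_seconds)
        prof = profiles.setdefault(idx, {"ops": Counter(), "bytes": 0})
        prof["ops"][rec.operation] += 1
        prof["bytes"] += rec.nbytes
    return [profiles[i] for i in sorted(profiles)]

# Tiny made-up trace: a read-heavy phase followed by a write-heavy phase.
trace = [TraceRecord(t * 0.5, "read", 4096) for t in range(10)] + \
        [TraceRecord(5 + t * 0.5, "write", 8192) for t in range(10)]
for i, prof in enumerate(chunk_profiles(trace)):
    print(f"chunk {i}: {dict(prof['ops'])}, {prof['bytes']} bytes")
```

Generating the synthetic workload per chunk, in trace order, is what lets the overall run reproduce the variation of the original workload rather than a single averaged profile.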
16

Benchmarks experimentais e modelação numérica por elementos finitos de processos de conformação plástica / Experimental benchmarks and finite element numerical modelling of plastic forming processes

Teixeira, Pedro Manuel Cardoso January 2005 (has links)
Master's thesis. Mechanical Engineering. Faculdade de Engenharia, Universidade do Porto, 2005.
17

Predicting college readiness in STEM: a longitudinal study of Iowa students

Rickels, Heather Anne 01 May 2017 (has links)
The demand for STEM college graduates is increasing. However, recent studies show there are not enough STEM majors to fulfill this need. This deficiency can be partially attributed to a gender discrepancy in the number of female STEM graduates and to the high rate of attrition among STEM majors. Since STEM attrition has been associated with students being unprepared for STEM coursework, it is important to understand how STEM graduates change in achievement levels from middle school through high school and to have accurate readiness indicators for first-year STEM coursework. This study addressed these issues by comparing the achievement growth of STEM majors to non-STEM majors, by gender, in Science, Math, and Reading from Grade 6 to Grade 11 through latent growth models (LGMs). STEM Readiness Benchmarks were then established in Science and Math on the Iowas (IAs) for typical first-year STEM courses, and validity evidence was provided for the benchmarks. Results from the LGM analyses indicated that STEM graduates start at higher achievement levels in Grade 6 and maintain higher achievement levels through Grade 11 in all subjects; gender differences were also examined. The findings indicate that students with high achievement levels self-select as STEM majors, regardless of gender, and suggest that students who are not on track for a STEM degree may need to begin remediation prior to high school. Results from the benchmark analyses indicate that STEM coursework is more demanding and that students need to be better prepared academically in science and math if they plan to pursue a STEM degree. The STEM Readiness Benchmarks were also more accurate in predicting success in STEM courses than general college readiness benchmarks, and students who met the STEM Readiness Benchmarks were more likely to graduate with a STEM degree. This study provides valuable information on STEM readiness to students, educators, and college admissions officers. Its findings can be used to better understand the level of academic achievement necessary to succeed as a STEM major and to provide guidance for students considering STEM majors in college. If students are being encouraged to pursue STEM majors, it is important that they have accurate information regarding their chances of success in STEM coursework.
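The abstract does not spell out how the STEM Readiness Benchmarks were derived, so the sketch below only illustrates one common way a readiness benchmark can be set: fit a logistic regression of course success on test score and take the score at which the predicted probability of success reaches 50%. The data are invented and the method is an assumption for illustration, not the procedure used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented data: test scores and whether the student succeeded (e.g. earned
# a B or higher) in a first-year STEM course.
rng = np.random.default_rng(0)
scores = rng.uniform(15, 36, size=500)            # ACT-like scale, made up
p_success = 1 / (1 + np.exp(-(scores - 25) / 2))  # assumed true relationship
success = rng.random(500) < p_success

model = LogisticRegression()
model.fit(scores.reshape(-1, 1), success)

# Score at which the predicted probability of success crosses 0.5
# (for a single predictor this is -intercept / slope).
benchmark = -model.intercept_[0] / model.coef_[0][0]
print(f"illustrative readiness benchmark: {benchmark:.1f}")
```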
18

How Do Firms Use Discretion in Deferred Revenue?

Caylor, Marcus Lamar 27 April 2006 (has links)
I conduct an examination of the deferred revenue account. I provide descriptive evidence of deferred revenue at both the industry level and the macro level, and I examine whether managers use discretion in deferred revenue around earnings benchmarks. I develop a model to measure the normal change in short-term deferred revenue and examine how the abnormal change varies across the pre-managed distribution of three common earnings benchmarks. My results show that managers delay recognition of revenue using deferred revenue when pre-managed earnings exceed benchmarks by a large margin, and accelerate the recognition of revenue using deferred revenue when pre-managed earnings just miss or miss benchmarks by a large amount. I document the prevalence of accelerated revenue recognition and show that meeting or just beating the annual consensus analyst forecast is where the most cases of suspected accelerated revenue recognition occur. The results are next strongest for the earnings-decrease avoidance benchmark and weakest for the loss avoidance benchmark. I examine whether conventional abnormal accrual models reflect discretion in deferred revenue, and whether discretion in deferred revenue is associated with lower earnings quality. I show that deferred revenue changes are a leading indicator of future earnings. My results indicate that discretion in revenue can lower the predictability of sales, regardless of whether it is of an aggressive or conservative nature.
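The abstract refers to a model of the normal change in short-term deferred revenue without giving its form, so the sketch below shows only the generic pattern such discretionary-accrual-style models follow: regress the change in deferred revenue on plausible determinants and treat the residual as the abnormal (discretionary) component. The regressors and data are invented placeholders, not the specification estimated in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # firm-years, invented

# Invented determinants of the "normal" change in deferred revenue
# (such models typically scale everything by lagged total assets).
sales_growth = rng.normal(0.08, 0.05, n)
lagged_deferred_revenue = rng.normal(0.10, 0.03, n)
change_deferred_revenue = (0.5 * sales_growth
                           - 0.2 * lagged_deferred_revenue
                           + rng.normal(0, 0.01, n))

# Estimate the normal-change model by OLS.
X = np.column_stack([np.ones(n), sales_growth, lagged_deferred_revenue])
beta, *_ = np.linalg.lstsq(X, change_deferred_revenue, rcond=None)

# Abnormal change = actual change minus the model's fitted (normal) change.
abnormal = change_deferred_revenue - X @ beta
print("mean abnormal change:", abnormal.mean().round(4))
print("std of abnormal change:", abnormal.std().round(4))
```

In a study of this kind, the abnormal component would then be compared across firms just beating versus just missing the earnings benchmarks.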
19

Entanglement quantification and quantum benchmarking of optical communication devices

Killoran, Nathan January 2012 (has links)
In this thesis, we develop a number of operational tests and tools for benchmarking the quantum nature of optical quantum communication devices. Using the laws of quantum physics, ideal quantum devices can fundamentally outperform their classical counterparts, or even achieve objectives which are classically impossible. Actual devices will not be ideal, but they may still be capable of facilitating quantum communication. Benchmarking tests, based on the presence of entanglement, can be used to verify whether or not imperfect quantum devices offer any advantage over their classical analogs. The general goal in this thesis is to provide strong benchmarking tools which simultaneously require minimal experimental resources and offer a wide range of applicability. Another major component is the extension of existing qualitative benchmarks ('Is it quantum or classical?') to more quantitative forms ('How quantum is it?'). We provide a number of benchmarking results applicable to two main situations, namely discrete remote state preparation protocols and continuous-variable quantum device testing. The theoretical tools derived throughout this thesis are also applied to the tasks of certifying a remote state preparation experiment and a continuous-variable quantum memory.
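A concrete example of the kind of qualitative benchmark the thesis extends: for remote preparation of arbitrary pure qubit states, a measure-and-prepare (classical) strategy cannot exceed an average fidelity of 2/3, so an observed average fidelity above that threshold witnesses genuinely quantum operation. The sketch below just computes an average fidelity from state pairs and compares it with that threshold; the sample states are invented, and the benchmarks developed in the thesis are more general than this single test.

```python
import numpy as np

def fidelity(psi: np.ndarray, phi: np.ndarray) -> float:
    """Fidelity |<psi|phi>|^2 between two pure qubit states."""
    return abs(np.vdot(psi, phi)) ** 2

def random_qubit(rng) -> np.ndarray:
    """Random pure qubit state (normalised complex 2-vector)."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(7)
CLASSICAL_LIMIT = 2 / 3  # best average fidelity of a measure-and-prepare strategy

# Invented experiment: target states vs. slightly noisy prepared states.
fids = []
for _ in range(1000):
    target = random_qubit(rng)
    noise = 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))
    prepared = target + noise
    prepared /= np.linalg.norm(prepared)
    fids.append(fidelity(target, prepared))

avg = float(np.mean(fids))
print(f"average fidelity = {avg:.3f}; quantum benchmark "
      f"{'passed' if avg > CLASSICAL_LIMIT else 'not passed'}")
```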
20

A New I/O Scheduler for Solid State Devices

Dunn, Marcus P. August 2009 (has links)
Since the emergence of solid state devices onto the storage scene, improvements in capacity and price have brought them to the point where they are becoming a viable alternative to traditional magnetic storage for some applications. Current file system and device level I/O scheduler design is optimized for rotational magnetic hard disk drives. Since solid state devices have drastically different properties and structure, we may need to rethink the design of some aspects of the file system and scheduler levels of the I/O subsystem. In this thesis, we consider the current approach to I/O scheduling and show that the current scheduler design may not be ideally suited to solid state devices. We also present a framework for extracting some device parameters of solid state drives. Using the information from the parameter extraction, we present a new I/O scheduler design which utilizes the structure of solid state devices to efficiently schedule writes. The new scheduler, implemented on a 2.6 Linux kernel, shows up to 25% improvement for common workloads.
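The abstract describes the scheduler only at a high level, so the sketch below illustrates one plausible SSD-aware policy in that spirit: batch pending writes by the erase-block-sized region they fall in and dispatch a whole region's writes together, instead of sorting by seek distance as rotational-disk schedulers do. The block size and the policy details are assumptions for illustration, not the design implemented in the thesis.

```python
from collections import defaultdict

ERASE_BLOCK_BYTES = 512 * 1024  # assumed erase-block size; real values vary by device

def schedule_writes(pending: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Group pending writes (offset, length) by erase-block-aligned region.

    Writes that land in the same region are dispatched back to back, so the
    device sees region-local bursts rather than interleaved scattered writes.
    """
    by_region: dict[int, list[tuple[int, int]]] = defaultdict(list)
    for offset, length in pending:
        by_region[offset // ERASE_BLOCK_BYTES].append((offset, length))
    ordered = []
    for region in sorted(by_region):               # deterministic region order
        ordered.extend(sorted(by_region[region]))  # in-region order by offset
    return ordered

# Invented queue of (offset, length) writes scattered across three regions.
queue = [(5 * 2**20, 4096), (10 * 2**20, 4096), (5 * 2**20 + 8192, 4096),
         (0, 4096), (10 * 2**20 + 4096, 4096)]
for offset, length in schedule_writes(queue):
    print(f"write {length} bytes at offset {offset}")
```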
