321

Applications of linear block codes to the McEliece cryptosystem

El Rifai, Ahmed Mahmoud 12 1900 (has links)
No description available.
322

The effects of response probability on commission errors in high go low no-go dual response versions of the sustained attention to response task (SART)

Bedi, Aman January 2015 (has links)
In the current investigation, we modified the high Go low No-Go Sustained Attention to Response Task (SART) by replacing the single response on Go trials with a dual response (dual response SART, or DR SART). In three experiments, a total of 80 participants completed the SART and versions of the DR SART in which response probabilities varied from 50-50, through 70-30, to 90-10. The probability of No-Go withhold stimuli was .11 in all experiments. Using a dynamic utility-based model proposed by Peebles and Bothell (2004), we predicted that the 50-50 DR SART would dramatically reduce commission errors. The model also predicted that the probability of commission errors would be an increasing function of response frequency. Both predictions were confirmed. Although the increasing rate of commission errors with response probability can also be accommodated by the rationale originally proposed for the SART by its creators (Robertson, Manly, Andrade, Baddeley, & Yiend, 1997), the fact that the current DR SART results, and SART findings in general, can be accommodated by a utility model without the need for any attention processes challenges views that ascribe commission errors to lapses of sustained attention.
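A toy illustration of the utility-based account (a sketch only, not a reimplementation of Peebles and Bothell's ACT-R model; all parameters are hypothetical): each trial the simulated participant takes the action with the higher noise-perturbed utility, and the chosen action's utility is updated toward the obtained reward, so responding becomes increasingly dominant as Go probability rises and commission errors on No-Go trials rise with it.

    import numpy as np

    def simulate_sart(p_go, n_trials=20000, lr=0.05, noise=0.5, seed=1):
        # Toy utility-learning account of SART commission errors.
        rng = np.random.default_rng(seed)
        u = {"respond": 0.0, "withhold": 0.0}   # running utility estimates
        commissions = nogo = 0
        for _ in range(n_trials):
            go = rng.random() < p_go
            # Take the action whose noisy utility is currently higher.
            act = max(u, key=lambda a: u[a] + rng.normal(0.0, noise))
            # Correct actions earn +1, incorrect actions -1.
            correct = (act == "respond") == go
            reward = 1.0 if correct else -1.0
            u[act] += lr * (reward - u[act])    # delta-rule utility update
            if not go:
                nogo += 1
                commissions += act == "respond"  # commission error
        return commissions / nogo

    for p in (0.5, 0.7, 0.9):
        print(f"Go probability {p:.0%}: commission error rate {simulate_sart(p):.2f}")

Run as-is, the commission error rate increases monotonically with Go probability, reproducing the qualitative prediction without invoking any sustained-attention mechanism.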
323

Price uncertainty, investment and consumption

Ercolani, Marco G. January 1999 (has links)
No description available.
324

Iterative decoding of concatenated codes

Fagervik, Kjetil January 1998 (has links)
No description available.
325

Combination of Reliability-based Automatic Repeat ReQuest with Error Potential-based Error Correction for Improving P300 Speller Performance

Furuhashi, Takeshi, Yoshikawa, Tomohiro, Takahashi, Hiromu January 2010 (has links)
Session ID: SA-B1-3 / SCIS & ISIS 2010, Joint 5th International Conference on Soft Computing and Intelligent Systems and 11th International Symposium on Advanced Intelligent Systems. December 8-12, 2010, Okayama Convention Center, Okayama, Japan
326

An improved error correction algorithm for multicasting over LTE networks / Johannes Mattheus Cornelius

Cornelius, Johannes Mattheus January 2014 (has links)
Multicasting in Long-Term Evolution (LTE) environments poses several challenges if it is to be reliably implemented. Neither retransmission schemes nor Forward Error Correction (FEC), the traditional error correction approaches, can be readily applied to this system of communication if bandwidth and resources are to be used efficiently. A large number of network parameters and topology variables can influence the cost of telecommunication in such a system. These need to be considered when selecting an appropriate error correction technique for a certain LTE multicast deployment. This dissertation develops a cost model to investigate the costs associated with over-the-air LTE multicasting when different error correction techniques are applied. The benefit of this simplified model is an easily implementable and fast method to evaluate the communications costs of different LTE multicast deployments with the application of error correction techniques. / MIng (Computer and Electronic Engineering), North-West University, Potchefstroom Campus, 2014
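As a rough illustration of why such a cost model is needed (a sketch under simplifying assumptions, not the dissertation's model): with per-packet multicast retransmission, the expected number of transmissions grows with the receiver count, while the cost of an idealised rateless FEC is receiver-independent. The loss rate, receiver counts, and FEC overhead eps below are hypothetical.

    def arq_transmissions(n_receivers, loss, max_rounds=200):
        # Expected transmissions of one packet until every receiver has it,
        # assuming independent losses and multicast retransmission:
        # E[T] = sum_{m>=0} 1 - (1 - loss**m)**N.
        return sum(1.0 - (1.0 - loss**m) ** n_receivers for m in range(max_rounds))

    def fec_transmissions(loss, eps=0.05):
        # Expected transmissions per source packet with an idealised rateless
        # code: receivers decode once they collect enough symbols, so the
        # cost does not depend on the receiver count.
        return (1.0 + eps) / (1.0 - loss)

    for n in (1, 10, 100, 1000):
        print(f"N={n:5d}  ARQ={arq_transmissions(n, 0.1):5.2f}  "
              f"FEC={fec_transmissions(0.1):5.2f} tx/packet")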
327

Honest Approximations to Realistic Fault Models and Their Applications to Efficient Simulation of Quantum Error Correction

Daniel, Puzzuoli January 2014 (has links)
Understanding the performance of realistic noisy encoded circuits is an important task for the development of large-scale practical quantum computers. Specifically, the development of proposals for quantum computation must be well informed by both the qualities of the low-level physical system of choice, and the properties of the high-level quantum error correction and fault-tolerance schemes. Gaining insight into how a particular computation will play out on a physical system is in general a difficult problem, as the classical simulation of arbitrary noisy quantum circuits is inefficient. Nevertheless, important classes of noisy circuits can be simulated efficiently. Such simulations have led to numerical estimates of threshold error rates and resource estimates in topological codes subject to efficiently simulable error models. This thesis describes and analyzes a method that my collaborators and I have introduced for leveraging efficient simulation techniques to understand the performance of large quantum processors that are subject to errors lying outside of the efficient simulation algorithm's applicability. The idea is to approximate an arbitrary gate error with an error from the efficiently simulable set in a way that "honestly" represents the original error's ability to preserve or distort quantum information. After introducing and analyzing the individual gate approximation method, its utility as a means for estimating circuit performance is studied. In particular, the method is tested within the use-case for which it was originally conceived: understanding the performance of a hypothetical physical implementation of a quantum error-correction protocol. It is found that the method performs exactly as desired in all cases. That is, the circuits composed of the approximated error models honestly represent the circuits composed of the errors derived from the physical models.
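One standard way to map an arbitrary gate error onto the efficiently simulable set is the Pauli twirl, which keeps only the diagonal of the process matrix in the Pauli basis; the sketch below illustrates that mapping for an amplitude-damping error. This is shown for illustration only: the thesis's honest approximation is a different construction that optimises over the simulable set rather than twirling.

    import numpy as np

    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    PAULIS = [I, X, Y, Z]

    def pauli_twirl_probs(kraus_ops):
        # Probabilities (p_I, p_X, p_Y, p_Z) of the Pauli channel obtained
        # by twirling: p_k = sum_j |Tr(P_k K_j)|^2 / 4.
        return np.array([
            sum(abs(np.trace(P @ K)) ** 2 for K in kraus_ops) / 4.0
            for P in PAULIS
        ])

    # Example: amplitude damping with decay probability gamma, a realistic
    # non-Pauli error, replaced by its twirled (efficiently simulable) version.
    gamma = 0.1
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)

    probs = pauli_twirl_probs([K0, K1])
    print(dict(zip("IXYZ", np.round(probs, 4))), "sum =", probs.sum().round(6))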
328

Automated Error Assessment in Spherical Near-Field Antenna Measurements

Pelland, Patrick 27 May 2011 (has links)
This thesis focuses on spherical near-field (SNF) antenna measurements and the methods developed or modified in this work to estimate the uncertainty in a particular far-field radiation pattern. We discuss the need for error assessment in SNF antenna measurements and propose a procedure that, in an automated fashion, determines the overall uncertainty in the measured far-field radiation pattern of a particular antenna. This overall uncertainty results from combining several known sources of error common to SNF measurements. The procedure consists of several standard SNF measurements, some newly developed tests, and several stages of post-processing of the measured data. The automated procedure is tested on four antennas of various operating frequencies and directivities to verify its functionality. Finally, total uncertainty data are presented in several formats.
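The conventional way to combine several independent error terms into an overall pattern uncertainty is a root-sum-of-squares (RSS), sketched below. The term names and magnitudes are hypothetical stand-ins for the measured contributions such a procedure would derive.

    import math

    # Hypothetical 1-sigma uncertainty contributions, in dB.
    error_terms_db = {
        "probe pattern":         0.05,
        "channel balance":       0.03,
        "alignment":             0.04,
        "receiver nonlinearity": 0.02,
        "room scattering":       0.10,
        "leakage/crosstalk":     0.03,
        "repeatability":         0.06,
    }

    def rss(terms):
        # Root-sum-of-squares of independent uncertainty terms.
        return math.sqrt(sum(t * t for t in terms))

    total = rss(error_terms_db.values())
    print(f"combined pattern uncertainty: {total:.3f} dB")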
329

A methodology for detecting transient faults in parallel applications on multicore clusters

Montezanti, Diego Miguel January 2014 (has links)
The increase in integration scale, aimed at improving the performance of current processors, together with the growth of computing systems, has made reliability a relevant concern. In particular, the growing vulnerability to transient faults has become critical, because of the capacity of these faults to corrupt application results. Historically, transient faults have been a concern in the design of critical systems, such as flight systems or high-availability servers, where the consequences of a fault can be disastrous. Although they are temporary, these faults can alter the behaviour of the computing system. Since 2000, reports of significant malfunctions in supercomputers caused by transient faults have become more frequent. The impact of transient faults becomes even more relevant in the context of High Performance Computing (HPC). Even though the mean time between failures (MTBF) is on the order of two years for a commercial processor, in a supercomputer with hundreds or thousands of processors cooperating on a single task, the MTBF decreases as the number of processors grows. This situation is aggravated by the advent of multicore processors and multicore cluster architectures, which incorporate a high degree of hardware-level parallelism. The incidence of transient faults is even greater for long-running applications that handle large volumes of data, given the high cost (in time and resources) of relaunching the execution from the beginning if the results turn out to be incorrect because of a fault. These factors justify the need to develop specific strategies to improve reliability in HPC systems; in this regard, it is crucial to be able to detect so-called silent faults, which alter application results but are not intercepted by the operating system or any other software layer, and therefore do not cause the execution to end abruptly. In this context, this work analyses a distributed, software-based methodology, designed for message-passing parallel scientific applications, capable of detecting transient faults by validating the contents of the messages that are about to be sent to another process of the application. This previously published methodology addresses a problem not covered by existing proposals, detecting transient faults that allow the execution to continue but can corrupt the final results, thereby improving system reliability and reducing the time after which the application can be relaunched, which is especially useful for long executions.
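A much-simplified sketch of the core idea, validating message contents before sending (the published methodology uses replicated threads and further machinery; compute_block and validated_send are hypothetical names, and mpi4py is assumed):

    import numpy as np
    from mpi4py import MPI

    def compute_block(data):
        # Stand-in for the application's computation of an outgoing message.
        return np.sin(data) * 2.0

    def validated_send(comm, data, dest, tag=0, retries=1):
        # Compute the outgoing message twice and only send when the replicas
        # agree; a mismatch signals a transient fault and triggers
        # recomputation instead of silently propagating a corrupted payload.
        for _ in range(retries + 1):
            first = compute_block(data)
            second = compute_block(data)        # redundant execution
            if np.array_equal(first, second):   # contents validated
                comm.Send(first, dest=dest, tag=tag)
                return True
        raise RuntimeError("persistent mismatch: possible non-transient fault")

    comm = MPI.COMM_WORLD
    if comm.Get_rank() == 0 and comm.Get_size() > 1:
        validated_send(comm, np.arange(4, dtype=np.float64), dest=1)
    elif comm.Get_rank() == 1:
        buf = np.empty(4, dtype=np.float64)
        comm.Recv(buf, source=0)
        print("rank 1 received validated message:", buf)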
330

The management of error in construction projects

Atkinson, Andrew Robin January 1999 (has links)
The 'defects problem' has demanded considerable attention in recent years, with much emphasis given to the technical causes of failure. This research project examines the problem from a different point of view: that of human error. Taking technical publications in the construction industry as its starting point, the research reviews human error literature from a variety of industries and perspectives and synthesises a model of error causation covering organisations in a construction project context. This model is then progressively tested in four studies: a general preliminary survey and three more detailed studies of house-building. The conclusions support the view that errors leading to failure in complex socio-technical systems often exhibit systems characteristics and involve the whole managerial structure. An improved model is proposed, which emphasises the importance of both project and general management errors.
