231

COMPARISON OF FILE TRANSFER USING SCPS FP AND TCP/IP FTP OVER A SIMULATED SATELLITE CHANNEL

Horan, Stephen, Wang, Ru-hai 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The CCSDS SCPS FP file transfer performance is compared with that of TCP/IP FTP in a simulated satellite channel environment. The comparison is made as a function of channel bit error rate and forward/return data rates. From these simulations, we see that both protocols work well when the channel error rate is low (below 10^-6) and the SCPS FP generally performs better when the error rate is higher. We also noticed a strong effect on the SCPS FP throughput as a function of forward transmission rate when running unbalanced channel tests.
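As a rough illustration of why the comparison above hinges on channel error rate (my own sketch, not from the paper): assuming independent bit errors, the probability that a whole packet arrives intact falls off quickly with BER, and retransmission-based file transfer degrades accordingly.

```python
def packet_error_rate(ber: float, packet_bits: int) -> float:
    """Probability that at least one bit in a packet is corrupted,
    assuming independent bit errors on the channel."""
    return 1.0 - (1.0 - ber) ** packet_bits

# A 1500-byte packet at error rates around the paper's 10^-6 threshold:
for ber in (1e-8, 1e-6, 1e-4):
    per = packet_error_rate(ber, 1500 * 8)
    print(f"BER={ber:.0e} -> packet error rate = {per:.4f}")
```

At a BER of 10^-4, most full-size packets are corrupted, which is the regime where protocol differences such as SCPS FP's error handling start to dominate throughput.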
232

SMART SENSORS VS DISTRIBUTED DATA ACQUISITION

Myers, Robert L. 10 1900 (has links)
International Telemetering Conference Proceedings / October 22-25, 2001 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Distributed processing is coming to data acquisition. The desire for smart sensors that can preprocess data is growing. Making sensors themselves intelligent will reverse the historic trend toward smaller and cheaper sensors. Incorporating current sensor technology into data acquisition nodes in a network will create a distributed data acquisition (DAQ) environment that can acquire data from around the world over the Internet. The future is now.
233

Improving the convergence of IP routing protocols

Francois, Pierre 30 October 2007 (has links)
The IP protocol suite was initially designed to provide best-effort reachability among the nodes of a network or an inter-network. The goal was to design a set of routing solutions that would allow routers to automatically provide end-to-end connectivity among hosts. The solution was also meant to recover connectivity upon the failure of one or multiple devices supporting the service, without the need for manual, slow, and error-prone reconfigurations. In other words, the requirement was to have an Internet that "converges" on its own. Along with the "Internet Boom", network availability expectations increased, as e-business emerged and companies started to associate loss of Internet connectivity with loss of customers... and money. So, Internet Service Providers (ISPs) relied on best-practice rules for the design and configuration of their networks, in order to improve their Quality of Service. The goal of this thesis is to complement the IP routing suite so as to improve its resiliency. It provides enhancements to routing protocols that reduce IP packet losses when an IP network reacts to a change in its topology. It also provides techniques that allow ISPs to perform reconfigurations of their networks that do not lead to packet losses.
234

Essays on the Empirical Analysis of Patent Systems

van Zeebroeck, Nicolas 13 March 2008 (has links)
1. The context: The European patent system has been affected by substantial changes over the past three decades, which have raised vigorous debates at different levels. The main objective of the present dissertation is to contribute to these debates through an exploratory analysis of different changes in patenting practices – in particular the way applications are drafted and filed to patent offices –, their drivers, association with the value of patents, and potential impact on the patent system. The coming essays are therefore empirical in their essence, but are inspired by economic motivations and concerns. Their originality is threefold: it resides in the novelty of the main questions discussed, the comprehensive database specifically built to address them, and the range of statistical methods used for this purpose. The main argument throughout these pages is that patenting practices have significantly evolved in the past decades and that these developments have affected the patent system and could compromise its ability to fulfil its economic purpose. The economic objective of patents is to encourage innovation and its diffusion through the public disclosure of the inventions made. But their exploitation in the knowledge economy has assumed so many different forms that inventors have supposedly developed new patenting and filing strategies to deal with these market conditions or reap the maximum benefits from their patents. The present thesis aims at better understanding the dimensions, determinants, and some potential consequences of these developing practices. 2. The evolution: Chapter 2 presents a detailed descriptive analysis of the evolution in the size of patent applications filed to the European Patent Office (EPO). In this chapter, we propose two measures of patent voluminosity and identify the main patterns in their evolution. 
Based on a dataset with about 2 million documents filed at the EPO, the results show that the average voluminosity of patent applications – measured in terms of the number of pages and claims contained in each document – has doubled over the past 25 years. Nevertheless, this evolution varies widely across countries, technologies and filing procedures chosen by the applicant. This increasing voluminosity of filings has a strong impact on the workload of the EPO, which justifies the need for regulatory and policy actions. 3. The drivers: The evolution in patent voluminosity observed in chapter 2 calls for a multivariate analysis of its determinants. Chapter 3 therefore proposes and tests four hypotheses that may contribute to explaining the observed inflation in size: the influence of national laws and practices and their diffusion to other countries with the progressive globalization of patenting procedures, the growing complexity of research activities and inventions, the emergence of new sectors with less established norms and vocabularies, and the construction of patent portfolios. The econometric results first reveal that the four hypotheses are significantly associated with longer documents and are therefore empirically supported. It appears, however, that the first hypothesis – the diffusion of national drafting practices through international patenting procedures – is the strongest contributor of all, resulting in a progressive harmonization of drafting styles toward American standards, which are longer by nature. The portfolio construction hypothesis seems a less important driver but nevertheless highlights substantial changes in patenting practices. These results raise two questions: Do these evolving patenting practices indicate more valuable patents? Do they induce any embarrassment for the patent system? 4. Measuring patent value: If the former of these two questions is to be addressed, measures are needed to identify higher-value patents.
Chapter 4 therefore proposes a review of the state of the art on patent value indicators and analyses several issues in their measurement and interpretation. Five classes of indicators proposed in the literature may be obtained directly from patent databases: the number of countries in which each patent is enforced, the number of years during which each patent has been renewed, the grant decision taken, the number of citations received from subsequent patents, and whether it has been opposed by a third party before the EPO. Because the former two measures are closely connected (the geographical scope of protection and length of maintenance can hardly be observed independently), they have been subjected to closer scrutiny in the first section of chapter 4, which shows that these two dimensions have experienced opposite evolutions. A composite measure – the Scope-Year Index – reveals that the overall trend is oriented downwards, which may suggest a substantial decline in the average value of patents. The second section of chapter 4 returns to the five initial classes of measures and underlines their main patterns. It appears that most of them exhibit the well-known properties of patent value: a severe skewness and large country and technology variations. A closer look at their relationships, however, reveals a high degree of orthogonality between them and opposite trends in their evolution, suggesting that they actually capture different dimensions of a patent's value and therefore do not always pinpoint the same patents as being the most valuable. This result strongly discourages reliance on only one of the available indicators and opens an avenue for the creation of a composite index of value based upon the five indicators, to maximize the chances of capturing all potentially valuable patents in a large database. The proposed index reflects the intensity of the signal provided by all five constituent indicators on the potential value of each patent.
Its declining trend reflects a rarefaction of this signal on average, leading to different plausible interpretations. 5. The links with patent value: Based upon the six indicators of value proposed in chapter 4 (the five classical ones plus the composite), the question of the association between filing strategies and the value of patents may be analysed. This question is empirically addressed in chapter 5, which focuses on all EPO patents filed between 1990 and 1995. The first section presents a comprehensive review of the existing evidence on the determinants of patent value. The numerous contributions in the field differ widely along three dimensions (the indicator of value chosen as dependent variable, the sampling methodology, and the set of variables tested as determinants), which have translated into many ambiguities across the literature. Section 2 proposes measures to identify different dimensions of filing strategies, which are essentially twofold: they relate to the routes followed by patent filings toward the EPO (PCT, accelerated processing), and to their form (excess claims, share of claims lost in examination), and construction (by assembly or disassembly, divisional). These measures are then included into an econometric model based upon the framework provided by the literature. The proposed model, which integrates the set of filing strategy variables along with some of the classical determinants, is regressed on the six available indicators separately over the full sample. In addition, the sensitivity of the available results to the indicator and the sampling methodology is assessed through 18 geographic and 14 industrial clustered regressions and about 30 regressions over random samples for each indicator. The estimates are then compared across countries, industries and indicators. These results first reveal that filing strategies are indicative of more valuable patents and provide the most stable determinants of all. 
And third, the results do confirm some classical determinants in their positive association with patent value, but highlight a high degree of sensitivity of most of them to the indicator or the sample chosen for the analysis, requiring much care in generalizing such empirical results. 6. The links with patent length: Chapter 6 focuses on one particular dimension of patent value: the length of patents. Here, the censored nature of the dependent variable (the time elapsed between the filing of a patent application and its ultimate fall into the public domain) dictates recourse to a survival time model as proposed by Cox (1972). The analysis is original in three main respects. First of all, despite the fact that renewal data have been exploited for about two decades to obtain estimates of patent value (Pakes and Schankerman, 1984), this chapter provides – to the best of our knowledge – the first comprehensive analysis of the determinants of patent length. Second, whereas most of the empirical literature in the field focuses on granted patents and investigates their maintenance, the analysis reported here includes all patent applications. This comprehensive approach is dictated by the provisional rights that pending applications provide to their holders and by the legal uncertainty these represent for competitors. And third, the model integrates a wide set of explanatory variables, starting with the filing strategy variables proposed in chapter 5. The main results are threefold: first, they clearly show that patent rights have significantly increased in length over the past decades despite a small apparent decline in the average grant rate, largely due to the expansion of the examination process. Second, they indicate that most filing strategies induce considerable delays in the examination process, possibly to the benefit of the patentee, but most certainly at the expense of legal uncertainty on the markets.
And third, they confirm that more valuable patents (more cited or covering a larger geographical scope) take more time to process and live longer, whereas more complex applications are associated with longer decision lags but also with lower grant and renewal rates. 7. Conclusions: The potential economic consequences and some policy implications of the findings of the dissertation are discussed in chapter 7. The evolution of patenting practices analysed in these works has some direct consequences for the stakeholders of the patent system. For the EPO, they generate a considerable increase in workload, resulting in growing backlogs and processing lags. For innovative firms, this phenomenon translates into an undesired increase in legal uncertainty, for it complicates the assessment of the limits to each party's rights and hence of the freedom to operate in a market, which is precisely what the so-called 'patent trolls' and 'submariners' may be looking for. Although empirical evidence is lacking, some fear that this may result in underinvestment in research, development or commercialization activities (e.g. Hall and Harhoff, 2004). In addition, legal uncertainty is synonymous with an increased risk of litigation, which may hamper the development of SMEs and reduce the level of entrepreneurship. Finally, for society, we are left with a contrasting picture, which is hard to interpret. The European patent system wishes to maintain high quality standards to reduce business uncertainty around granted patents, but it is overloaded with the volume of applications filed, resulting in growing backlogs which translate into legal uncertainty surrounding pending applications.
The filing strategies that contribute to this situation might reflect a legitimate need for more time and flexibility in filing more valuable patents, but they could also easily turn into real abuses of the system, allowing some patentees to obtain and artificially maintain provisional rights conferred by pending applications on inventions that might not meet the patentability requirements. Distinguishing between these two cases goes beyond the scope of the present dissertation, but should they be found abusive, they should be fought for they consume resources and generate uncertainty. And if legitimate, then they should be understood and the system adapted accordingly (e.g. by adjusting fees to discourage some strategies, raising the inventive step, fine-tuning the statutory term in certain technologies, providing more legal tools for patent examiners to reject unpatentable applications, etc.) so as to better serve the need of inventors for legal protection in a more efficient way, and to adapt the patent system to the challenges it is or will be facing.
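The composite-index idea described in this abstract can be sketched simply. The sketch below is my own illustration with placeholder thresholds (the abstract does not state the dissertation's actual cut-offs): count how many of the five classical indicators signal potential value for a given patent.

```python
def composite_value_index(enforced_countries: int, renewal_years: int,
                          granted: bool, citations: int, opposed: bool) -> int:
    """Count how many of the five value indicators fire for one patent.
    Thresholds are illustrative placeholders, not the thesis's values."""
    signals = [
        enforced_countries >= 3,   # broad geographical scope of protection
        renewal_years >= 10,       # long maintenance
        granted,                   # positive grant decision
        citations >= 5,            # cited by subsequent patents
        opposed,                   # attracted a third-party opposition
    ]
    return sum(signals)

# A widely enforced, long-lived, granted, well-cited but unopposed patent:
print(composite_value_index(5, 12, True, 8, False))  # prints 4
```

Summing binary signals this way is one simple response to the orthogonality problem noted above: a patent scoring high on any subset of indicators is retained as potentially valuable.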
235

Contributions to modelling of internet traffic by fractal renewal processes.

Arfeen, Muhammad Asad January 2014 (has links)
The principle of parsimonious modelling of Internet traffic states that a minimal number of descriptors should be used for its characterization. Until the early 1990s, the conventional Markovian models for voice traffic had been considered suitable and parsimonious for data traffic as well. Later, with the discovery of strong correlations and increased burstiness in Internet traffic, various self-similar count models were proposed. But such models are strictly mono-fractal and applicable at coarse time scales, whereas Internet traffic modelling is about modelling traffic at fine and coarse time scales; modelling traffic which can be mono- and multi-fractal; modelling traffic at interarrival-time and count levels; modelling traffic at access and core tiers; and modelling all three structural components of Internet traffic, that is, packets, flows and sessions. The philosophy of this thesis can be described as: "the renewal of renewal theory in Internet traffic modelling". Renewal theory has great potential for modelling statistical characteristics of Internet traffic belonging to individual users, access and core networks. In this thesis, we develop an Internet traffic modelling framework based on fractal renewal processes, that is, renewal processes whose interarrival-time distributions are heavy-tailed. The proposed renewal framework covers packets, flows and sessions as structural components of Internet traffic and is applicable for modelling traffic at fine and coarse time scales. The properties of superposition of renewal processes can be used to model traffic in higher tiers of the Internet hierarchy. As the framework is based on renewal processes, Internet traffic can be modelled at both interarrival-time and count levels.
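The heavy-tailed renewal processes at the core of this framework are easy to sketch. The following is a minimal illustration (my own, not the thesis code): interarrival times drawn from a Pareto distribution with 1 < alpha < 2 have a finite mean but infinite variance, so the resulting arrival counts stay bursty across time scales rather than smoothing out the way a Poisson (exponential-interarrival) process would.

```python
import random

def pareto_interarrival(alpha: float, xm: float, rng: random.Random) -> float:
    """Inverse-transform sample from a Pareto(alpha, xm) distribution.
    For 1 < alpha < 2 the variance is infinite, giving the bursty,
    fractal-like arrival pattern described above."""
    u = rng.random()                          # uniform in [0, 1)
    return xm / (1.0 - u) ** (1.0 / alpha)

def arrival_times(n: int, alpha: float = 1.5, xm: float = 1.0, seed: int = 42):
    """Cumulative arrival instants of a fractal renewal process."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n):
        t += pareto_interarrival(alpha, xm, rng)
        times.append(t)
    return times

times = arrival_times(10000)
# Binning these instants at coarser and coarser scales keeps showing
# bursts, unlike the rapidly smoothing counts of a Poisson process.
```

Superposing many such independent processes is one way to model the aggregate traffic of higher tiers, as the abstract notes.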
236

Internet congestion control for variable-rate TCP traffic

Biswas, Md. Israfil January 2011 (has links)
The Transmission Control Protocol (TCP) has been designed for reliable data transport over the Internet. The performance of TCP is strongly influenced by its congestion control algorithms, which limit the amount of traffic a sender can transmit based on end-to-end available capacity estimations. These algorithms proved successful in environments where applications' rate requirements can be easily anticipated, as is the case for traditional bulk data transfer or interactive applications. However, an important new class of Internet applications has emerged that exhibits significant variations of transmission rate over time. Variable-rate traffic poses a new challenge for congestion control, especially for applications that need to share the limited capacity of a bottleneck over a long-delay Internet path (e.g., paths that include satellite links). This thesis first analyses the TCP performance of bursty applications that do not send data continuously, but generate data in bursts separated by periods in which little or no data is sent. Simulation analysis shows that standard TCP methods do not provide efficient support for bursty applications that produce variable-rate traffic, especially over long-delay paths. Although alternative forms of congestion control like TCP-Friendly Rate Control and the Datagram Congestion Control Protocol have been proposed, they did not achieve widespread deployment. Therefore many current applications that rely upon the User Datagram Protocol are not congestion controlled. The use of non-standard or proprietary methods decreases the effectiveness of Internet congestion control and poses a threat to Internet stability. Solutions are therefore needed to allow bursty applications to use TCP. Chapter three evaluates Congestion Window Validation (CWV), an IETF experimental specification that was proposed to improve support for bursty applications over TCP.
This evaluation concluded that CWV is too conservative to support many bursty applications and does not provide an incentive to encourage use by application designers. Instead, application designers often avoid generating variable-rate traffic by padding idle periods, which has been shown to waste network resources. CWV is therefore shown not to provide an acceptable solution for variable-rate traffic. In response to this shortfall, a new modification to TCP, TCP-JAGO, is proposed. This allows variable-rate traffic to restart quickly after an inactive (i.e., idle) period and to effectively utilise available network resources while sending at a lower rate than the available rate (i.e., during an application-limited period). The analysis in Chapter five shows that JAGO provides faster convergence to a steady-state rate and improves throughput by more efficiently utilising the network. TCP-JAGO is also shown to provide an appropriate response when congestion is experienced after restart. Variable-rate TCP traffic can also be impacted by the Initial Window algorithm at the start or during the restart of a session. Chapter six considers this problem, where TCP has no prior indication of the network state. A recent proposal for a larger initial window (IW) is analysed. Issues and advantages of using a large IW over a range of scenarios are discussed. The thesis concludes by presenting recommendations to improve TCP support for bursty applications. This also provides an incentive for application designers to choose TCP for variable-rate traffic.
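The idle-restart behaviour that motivates this work can be sketched with a toy model. This is my own illustration under a simplified assumption (halve the congestion window once per RTO of idle time, in the spirit of RFC 2861's Congestion Window Validation); it is not the thesis's implementation, and the restart-window floor of 10 segments is a placeholder.

```python
def cwv_window_after_idle(cwnd: float, idle_time: float, rto: float,
                          restart_window: float = 10.0) -> float:
    """Halve the congestion window (in segments) once per RTO of idle
    time, never dropping below the restart window."""
    while idle_time >= rto and cwnd > restart_window:
        cwnd = max(restart_window, cwnd / 2.0)
        idle_time -= rto
    return cwnd

# After 4 RTOs of idle, a 160-segment window collapses to the floor,
# forcing a slow climb back even if path capacity is unchanged:
print(cwv_window_after_idle(160.0, idle_time=4.0, rto=1.0))  # prints 10.0
```

This rapid decay during idle periods is the conservatism the abstract criticises: a bursty sender loses its capacity estimate between bursts, which is exactly the behaviour a quick-restart scheme aims to avoid.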
237

Design and Implementation of an Economical, Distributed IP Contact Center for Teaching Purposes

Tchernitchin Lapin, Nikolai January 2007 (has links)
No description available.
238

Implementation of Mobile IP between mobile networks and WLANs

Salvatierra León, Karen Andrea January 2012 (has links)
Electrical Civil Engineering degree / The evolution of mobile communication networks is moving toward the unification of a heterogeneous environment in which networks with different access technologies interconnect. In this scenario, maintaining users' Internet sessions as they move from one network to another emerges as one of the main challenges to be faced. Introducing a mobility protocol into the network appears as the solution to this problem, facilitating roaming between networks and avoiding the reconnection of user sessions. Deploying wireless local area networks (WLANs) is an economical means of accessing the Internet compared to cellular networks with packet-data support. While WLANs offer significantly higher bandwidth than mobile networks, the latter provide wide coverage, which motivates the interconnection of the two technologies. This work develops a model for interconnecting cellular networks and WLANs using Mobile IP. Simulations are carried out in the OPNET software, and mobility support is addressed at the IP level using the Mobile IP protocol in a global scope and Proxy Mobile IP in a localized scope, both in their IPv4 versions. The MIP results show an asymmetry between end-to-end delays caused by the triangular routing of data traffic. This leads to a routing inefficiency that mainly affects real-time applications. Roaming to the WLAN reduces end-to-end delays in both directions by between 56% and 71% when the Internet path has low latency. TCP session establishment increases by 7%, while FTP download application control decreases by 29%. In the case of PMIP, the 3-way handshake decreases by 14% and FTP application control by 41%.
This protocol shows no routing inefficiency, and end-to-end delays decrease even with a network latency of 100 ms. The benefit of addressing mobility at the IP level lies in its independence from the link-layer access technology. The advantage of PMIP over MIP is that it requires no specialized software on the client, thus reducing the signalling costs of establishing the mobility service. MIP, for its part, is well suited to a global interconnection scheme, so its integration with PMIP can be seen as complementary, establishing a hierarchical scheme in which PMIP provides mobility at the local level.
239

Optimization of resources allocation for H.323 endpoints and terminals over VoIP networks

27 January 2014 (has links)
M.Phil. (Electrical & Electronic Engineering) / Without any doubt, the entire range of voice and TV signals will migrate to the packet network. The universal addressing scheme of the Internet Protocol (IP) and the interfacing framing structure of Ethernet are the main reasons behind the success of TCP/IP and Ethernet as packet network and network access mechanisms. Unfortunately, the very success of the Internet has been a problem for real-time traffic such as voice, prompting further studies in the domain of teletraffic engineering; and the lack of a resource reservation mechanism in Ethernet, a serious shortcoming for a switching mechanism, raises further challenges for such a migration. In that context, the ITU-T has released a series of Recommendations under the umbrella of H.323 to guarantee the required Quality of Service (QoS) for such services. Although "utilisation" is not a good parameter in terms of traffic and QoS, we propose a multiplexing scheme with a queuing solution that takes into account the positive correlations of the packet arrival process experienced at the multiplexer input, with the aim of optimizing buffer and bandwidth utilisation on the one hand, and an ITU-T H.323 Endpoints and Terminals configuration that can sustain such a multiplexing scheme on the other. We consider the solutions of the models from M/M/1 up to G/G/1 queues based on Kolmogorov's analysis to justify our approach. This solution, the diffusion approximation, is the limit of the fluid process, which has not been widely used as a queuing solution in the domain of networking.
Driven by the results of the fluid method and the Gaussian distribution resulting from the diffusion approximation, applying the asymptotic properties of Maximum Likelihood Estimation (MLE) together with the central limit theorem allowed us to capture the fluctuations and therefore filter out the positive correlations in the queue system. The result is a queue system able to serve 1 erlang (100% of transmission link capacity) of traffic intensity without any extra delay, with a queue length that amounts to 60% of the buffer utilization of the ordinary Poisson queue.
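For context on the queuing claims above, here is a brief sketch (my own illustration, not from the thesis) of the classical M/M/1 results that the G/G/1 and diffusion analyses generalize: mean queue length and sojourn time blow up as utilisation approaches 1 erlang, which is what makes serving 1 erlang without extra delay notable.

```python
def mm1_metrics(lam: float, mu: float):
    """Return (utilisation rho, mean number in system L, mean sojourn
    time W) for a stable M/M/1 queue with arrival rate lam and service
    rate mu."""
    rho = lam / mu
    if rho >= 1.0:
        raise ValueError("M/M/1 is unstable at rho >= 1")
    L = rho / (1.0 - rho)      # mean number in system
    W = 1.0 / (mu - lam)       # mean time in system; Little's law: L = lam * W
    return rho, L, W

# Delay grows without bound as the offered load approaches 1 erlang:
for lam in (0.5, 0.9, 0.99):
    rho, L, W = mm1_metrics(lam, mu=1.0)
    print(f"rho={rho:.2f}  L={L:.1f}  W={W:.1f}")
```

The divergence of W as rho approaches 1 is exactly the behaviour a correlation-aware multiplexing scheme has to overcome to sustain full link utilisation.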
240

SW modul TCP/IP a Modbus pro OS FreeRTOS / TCP/IP and Modbus modules for OS FreeRTOS

Šťastný, Ladislav January 2012 (has links)
The aim of this work is to become familiar with the FreeRTOS operating system and its use in device design. It also explains the use of the SW modules LwIP (a TCP/IP stack) and FreeMODBUS. An example device, a simple operator panel, is then designed. The panel communicates over an Ethernet interface with PLCs connected on a separate network, using the Modbus TCP protocol. It also functions as a web server, an HID, and a real-time clock source.
