  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
301

Patterns in network security: an analysis of architectural complexity in securing recursive inter-network architecture networks

Small, Jeremiah January 2012 (has links)
Recursive Inter-Network Architecture (RINA) networks have a shorter protocol stack than the current architecture (the Internet), relying instead on the separation of mechanism from policy and on recursive deployment to achieve large-scale networks. Because of this smaller protocol stack, fewer networking mechanisms, security-related or otherwise, should be needed to secure RINA networks. This thesis examines the security protocols of the Internet Protocol Suite that are commonly deployed on existing networks and shows that, because of the design principles of the current architecture, these protocols are forced to include many redundant non-security mechanisms; as a consequence, RINA networks can deliver the same security services with substantially less complexity.
302

TCP performance over mobile data networks. / Transmission control protocol performance over mobile data networks / CUHK electronic theses & dissertations collection

January 2013 (has links)
The number of Internet users connected via mobile networks such as 3G and LTE has increased dramatically in recent years. It is well known that wireless networks in general, and mobile data networks in particular, exhibit characteristics very different from their wired counterparts. Nevertheless, the fundamental building block of most Internet applications, the Transmission Control Protocol (TCP), is still largely rooted in wired networks. This dissertation investigates the performance of TCP over modern mobile data networks through extensive measurements and experiments carried out in multiple production networks, ranging from 3G and HSPA to the latest LTE networks. Despite the rapid increases in mobile network bandwidth, our measurements consistently reveal that existing TCP implementations perform sub-optimally in practice, failing to utilize the abundant bandwidth available in high-speed mobile networks. This work tackles the performance limitations of TCP using a novel approach, transparent protocol optimization, which significantly improves TCP's throughput through on-the-fly protocol optimization carried out by an intermediate network device between the TCP end-hosts.
Specifically, this work develops (i) a novel opportunistic transmission algorithm to overcome the TCP’s flow control bottleneck; (ii) a transmission rate control algorithm to tackle TCP’s congestion control bottleneck; (iii) a new opportunistic retransmission algorithm to improve TCP’s performance during packet loss recovery; (iv) a stochastic model to quantify the impact of TCP throughput performance on mobile network capacity; and (v) a new queue length estimation algorithm which opens a new avenue for congestion control and network monitoring. In addition, the proposed protocol optimization techniques have been fully implemented into a mobile accelerator device which has been successfully field trialed in three different production 3G/LTE mobile networks, consistently increasing TCP’s throughput by 48% to 163%. In contrast to inventing a new transport protocol or modifying an existing TCP implementation, the proposed approach does not require any modification to the existing TCP implementation at the client/server hosts, does not require any reconfiguration of the server or client, and hence can be deployed readily in today’s 3G and 4G mobile networks, raising the throughput performance of all existing network applications running atop TCP. / Detailed summary in vernacular field only. / Liu, Ke. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2013. / Includes bibliographical references (leaves 166-174). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts also in Chinese. 
Contents:
  Abstract (p.2); Acknowledgement (p.6)
  Chapter 1. Introduction (p.1): 1.1 Motivation (p.1); 1.2 Contributions (p.3); 1.3 Structure of the Thesis (p.6)
  Chapter 2. Flow and Congestion Control (p.9): 2.1 TCP Performance Bottlenecks (p.9); 2.2 Background and Related Works (p.16); 2.3 Transparent Protocol Optimization (p.20), with 2.3.1 Opportunistic Transmission (p.20), 2.3.2 Transmission Rate Control (p.22), 2.3.3 Lost Packet Recovery (p.27); 2.4 Modeling and Analysis (p.28), with 2.4.1 Background and Assumptions (p.28), 2.4.2 Queue Length at the Radio Interface (p.31), 2.4.3 Queue Length Bounds (p.38), 2.4.4 Guaranteeing Full Bandwidth Utilization (p.45), 2.4.5 Link Buffer Size Requirement (p.47); 2.5 Performance Evaluation (p.53), with 2.5.1 Parameter Tuning (p.53), 2.5.2 Bandwidth Efficiency (p.56)
  Chapter 3. Packet Loss Recovery (p.62): 3.1 Introduction (p.62); 3.2 TCP Loss Recovery Revisited (p.64), with 3.2.1 Standard TCP Loss Recovery Algorithm (p.64), 3.2.2 Loss Recovery Algorithm in Linux (p.66), 3.2.3 Loss Recovery Algorithm in A-TCP (p.67); 3.3 Efficiency of TCP Loss Recovery Algorithms (p.68), with 3.3.1 Standard TCP Loss Recovery Algorithm (p.70), 3.3.2 TCP Loss Recovery in Linux (p.72), 3.3.3 Loss Recovery Algorithm Used in A-TCP (p.72), 3.3.4 Discussions (p.73); 3.4 Opportunistic Retransmission (p.74), with 3.4.1 Applications and Performance Analysis (p.76), 3.4.2 Bandwidth Utilization During Loss Recovery (p.78); 3.5 Experimental Results (p.81), with 3.5.1 Model Validation (p.85), 3.5.2 Impact of Loss Recovery Phase on TCP Throughput (p.85), 3.5.3 A-TCP with Opportunistic Retransmission (p.86); 3.6 Summary (p.87)
  Chapter 4. Impact on Mobile Network Capacity (p.89): 4.1 Introduction (p.89); 4.2 Background and Related Work (p.91), with 4.2.1 TCP Performance over Mobile Data Networks (p.91), 4.2.2 Modeling of Mobile Data Networks (p.92); 4.3 System Model (p.94), with 4.3.1 Mobile Cell Bandwidth Allocation (p.95), 4.3.2 Markov Chain Model (p.96), 4.3.3 Performance Metric for Mobile Internet (p.98), 4.3.4 Protocol-limited Capacity Loss (p.100), 4.3.5 Channel-limited Capacity Loss (p.101); 4.4 Performance Evaluation (p.102), with 4.4.1 Service Response Time (p.103), 4.4.2 Network Capacity Loss (p.105)
  Chapter 5. Mobile Link Queue Length Estimation (p.114): 5.1 Introduction (p.115); 5.2 Sum-of-Delay (SoD) Algorithm Revisited (p.117), with 5.2.1 Queue Length and Link Buffer Size Estimation (p.117), 5.2.2 A Bound on Estimation Error (p.120), 5.2.3 Impact of Uplink Delay Variations (p.122); 5.3 Uplink Delay Variation Compensation (p.127), with 5.3.1 Exploiting the TCP Timestamp Option (p.127), 5.3.2 TCP Timestamp Granularity (p.130); 5.4 Performance Evaluation (p.131), with 5.4.1 Link Buffer Size Estimation under Uplink Delay Variations (p.132), 5.4.2 Queue Length Estimation under Uplink Delay Variations (p.136); 5.5 Summary (p.136)
  Chapter 6. Summary and Future Works (p.139): 6.1 Transparent Protocol Optimization (p.139); 6.2 Cross-Layer Modeling and Optimization of Mobile Networks (p.141)
  Appendices: A. Derivation of Equations (2.24) and (2.25) (p.143); B. Proof of Theorem 2.1 (p.145); C. Proof of Theorem 2.2 (p.147); D. Proof of Theorem 2.3 (p.150); E. Proof of Theorem 2.4 (p.151); F. Proof of Theorem 2.5 (p.152); G. Proof of Theorem 2.6 (p.153); H. Proof of Theorem 2.7 (p.156); I. Proof of Theorem 2.8 (p.157); J. Proof of Theorem 3.2 (p.161); K. Theorem 3.4 (p.163); L. Theorem 3.5 (p.164)
  Bibliography (p.166)
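Chapter 5 of the record above covers a Sum-of-Delay queue-length estimator for the mobile link. The thesis's algorithm is not reproduced in the abstract, but the general idea behind delay-based queue estimation can be sketched as follows (a hedged illustration only, not the thesis's SoD algorithm; the RTT samples and `bottleneck_rate_bps` value are invented):

```python
def estimate_queue_bytes(rtt_samples, bottleneck_rate_bps):
    """Delay-based queue estimate: treat the excess of the current RTT
    over the minimum observed RTT as queuing delay, then convert that
    delay into queued bytes at the bottleneck rate."""
    base_rtt = min(rtt_samples)           # propagation + transmission floor
    current_rtt = rtt_samples[-1]         # most recent sample
    queuing_delay = max(0.0, current_rtt - base_rtt)
    return queuing_delay * bottleneck_rate_bps / 8.0  # bits -> bytes

# Example: base RTT 50 ms, current RTT 90 ms over a 10 Mbit/s link gives
# 40 ms of queuing delay, i.e. about 50 000 bytes in the queue.
print(estimate_queue_bytes([0.050, 0.062, 0.090], 10_000_000))
```

Such an estimate is what makes transparent rate control possible: the intermediate device can slow its sending rate as the inferred queue grows, without any cooperation from the end hosts.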
303

TCP veno: end-to-end congestion control over heterogeneous networks. / CUHK electronic theses & dissertations collection

January 2001 (has links)
by Fu Chengpeng. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (p. 102-119). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web. / Abstracts in English and Chinese.
304

The global universal addressing model for IP mobility and the cellular universal IP. / CUHK electronic theses & dissertations collection

January 2007 (has links)
In 3GPP Release 5 and beyond, an All-IP architecture has been specified. This indicates that the convergence of mobile applications such as voice, video and gaming to IP is no longer a "trend" but a reality. IP mobility has therefore been studied intensively in recent years. The majority of existing IP mobility schemes, including Mobile IPv6 (MIPv6), the current de facto standard solution for IP mobility, are designed around a two-tier addressing model. In this model, while visiting a foreign link, a mobile node (MN) is identified by its home address assigned by its home link but is located by the care-of-address (CoA) acquired from the foreign link. Incoming packets for the MN are routed to its home link as usual, but are intercepted by the home agent and tunneled to the CoA. This model is simple and well accepted. However, for real-time applications it is known to be ineffective in terms of handoff delay and bandwidth consumption, due respectively to its lengthy CoA acquisition and the extra IP header for tunneling. The latter is especially expensive for real-time applications because of the excessive overhead the extra IP header (20 bytes for IPv4 and 40 bytes for IPv6) adds to the packet payload (~20-160 bytes). / Much work has been devoted to improving the two-tier addressing model, including various local mobility schemes such as HAWAII and Cellular IP. These schemes eliminate the CoA acquisition when MNs move within one domain, but revert to the two-tier addressing model when mobility crosses domains (so-called global mobility). They therefore inherit all the drawbacks of the two-tier addressing model for global mobility. It has been argued that mobility across domains is rare. However, this assumption does not hold for the upcoming fourth-generation (4G) wireless architecture, in which MNs can dynamically choose the best-connected wireless interface among heterogeneous networks (e.g., WiFi, WiMAX) of different domains as they move. An efficient solution is therefore also needed to handle frequent inter-domain mobility, or global mobility, in the form of heterogeneous handoffs. / To efficiently support global mobility, a universal addressing model, under which a mobile node is always identified and located by the same IP address globally, is an obvious answer to the problems of the two-tier addressing model. However, the universal addressing model has been considered infeasible due to difficulties in (i) inter-domain (or cross-prefix) IP routing and (ii) routing table scaling. / In this thesis, we show that (i) can be overcome when a direct Layer-3 connection between the home domain and any particular visiting domain is available, so that inter-domain routing effectively becomes routing within the same logical hierarchy. We call a global network formed by such directly Layer-3-connected domains the Global Universal Addressing (GUA) framework. When deployed on the GUA framework, existing local mobility schemes can easily be upgraded to support global mobility as seamlessly as local mobility, with no modification needed. / To address (ii), we propose a new IP mobility scheme called Cellular Universal IP (CUIP), which runs on the GUA framework and makes use of a home route concept also proposed in this thesis. The home route concept integrates the efficiency of prefix routing with the flexibility of full-address routing to achieve high performance and routing scalability under the universal addressing model. In addition, based on IPv6, CUIP uses the IPv6 option header to embed an MN's route-update information in outgoing data packets for a short period after handoff, so that global routing information is effectively updated along the path traversed by the packets. We study the performance of CUIP quantitatively and show the following: (1) the average number of routers updated per handoff is less than three, so the average handoff delay is minimal; (2) the routing table complexity is asymptotically independent of the depth and monotonically decreasing with the width of the network hierarchy, so routing scalability is not a concern even in large networks. / by Lam, Pak Kit. / "June 2007." / Adviser: Soung Liew. / Source: Dissertation Abstracts International, Volume: 69-01, Section: B, page: 0553. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 128-130). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
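The home route concept in the record above integrates prefix routing with full-address routing. A minimal sketch of why this combination works with ordinary longest-prefix matching (an illustration of the routing principle only, not the CUIP implementation; the addresses are made up):

```python
import ipaddress

def longest_prefix_match(table, addr):
    """Return the most specific matching route. A host route (/128)
    installed for a visiting mobile node automatically wins over the
    domain's aggregate prefix, while other traffic still follows the
    efficient aggregate entry."""
    addr = ipaddress.ip_address(addr)
    matches = [net for net in table if addr in net]
    return max(matches, key=lambda net: net.prefixlen, default=None)

# Aggregate prefix for a domain plus one host route for a visiting MN:
table = [ipaddress.ip_network("2001:db8::/32"),
         ipaddress.ip_network("2001:db8::42/128")]

print(longest_prefix_match(table, "2001:db8::42"))  # host route wins
print(longest_prefix_match(table, "2001:db8::7"))   # falls back to the prefix
```

In this picture, routing scalability depends on how many host routes must exist at once, which is why the thesis's result that few routers need updating per handoff matters.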
305

Desenvolvimento e teste de um monitor de barramento I2C para proteção contra falhas transientes / Development and test of an I2C bus monitor for protection against transient faults

Carvalho, Vicente Bueno January 2016 (has links)
The communication between integrated circuits has evolved in performance and reliability over the years. Early designs used parallel buses, which need a large number of lines, consume many input/output pins on the integrated circuits, and are highly susceptible to electromagnetic interference (EMI) and electrostatic discharge (ESD). It subsequently became clear that the serial bus model had a large advantage over its predecessor: it uses fewer lines, which simplifies board layout, improves signal integrity, and allows much higher speeds despite the smaller number of lines. This work compares the main low- and medium-speed serial protocols, highlighting the strengths and weaknesses of each and, as a result, matching each protocol to its most appropriate application segment. The objective is to use this comparative analysis to propose a hardware apparatus that fills a gap in the I2C serial protocol, which is widely used in industry but has limitations when an application requires high reliability. The apparatus, here called the I2C Bus Monitor, verifies data integrity, reports metrics on communication quality, detects transient faults and permanent errors on the bus, and acts on the devices connected to the bus to recover from such errors, thereby avoiding failures. A fault injection mechanism was developed to simulate faults in devices connected to the bus and thus verify the monitor's response. Results on the Cypress PSoC 5 show that the proposed solution has a low cost in terms of area and no impact on communication performance.
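The abstract describes the monitor's duties (integrity checking, quality metrics, error detection, recovery) without giving its logic. A toy sketch of how such error accounting might distinguish transient faults from permanent errors (the class, the threshold, and the `crc_ok` integrity flag are assumptions for illustration, not the thesis design):

```python
class I2CBusMonitor:
    """Toy model of a bus monitor: record per-transfer outcomes, keep
    error metrics, and request bus recovery once consecutive errors
    suggest a permanent fault rather than a transient glitch."""
    def __init__(self, permanent_threshold=3):
        self.permanent_threshold = permanent_threshold
        self.consecutive_errors = 0
        self.total = 0
        self.errors = 0

    def observe(self, ack_ok, crc_ok=True):
        """Record one transfer; return 'ok', 'transient' or 'recover'."""
        self.total += 1
        if ack_ok and crc_ok:
            self.consecutive_errors = 0
            return "ok"
        self.errors += 1
        self.consecutive_errors += 1
        if self.consecutive_errors >= self.permanent_threshold:
            self.consecutive_errors = 0   # recovery resets the streak
            return "recover"              # e.g. clock out SCL pulses, reset devices
        return "transient"

    def error_rate(self):
        """Fraction of observed transfers that failed (a quality metric)."""
        return self.errors / self.total if self.total else 0.0
```

A single NACK would be reported as a transient fault, while three in a row would trigger the recovery action; the error-rate counter plays the role of the quality metrics the abstract mentions.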
306

Procedimento para uniformização de espectros de solos (VIS-NIR-SWIR) / Proceeding for standardization of soils spectra (VIS-NIR-SWIR)

Danilo Jefferson Romero 04 December 2015 (has links)
Remote sensing techniques have evolved within soil science to overcome the time and cost limitations of the chemical analyses traditionally used to quantify soil attributes. Spectral analysis has long proven to be a complement to traditional analyses and is now considered a consolidated, widely used technique. Studies in spectral pedology have used wavelengths from 350 to 25000 nm, but most often focus on the 350 to 2500 nm region, which is divided into the visible (VIS, 350-700 nm), near infrared (NIR, 700-1000 nm) and shortwave infrared (SWIR, 1000-2500 nm). As with the laboratory techniques traditionally used in soil analysis, standards are needed to enable worldwide scientific communication in soil spectroscopy. With the future of soil spectroscopy in view, this study evaluates the effect of using standard samples in the acquisition of spectral data from tropical soils, under three acquisition geometries and on three spectroradiometers (350-2500 nm). Ninety-seven soil samples registered in the Brazilian Soil Spectral Library (BESB), from the state of Mato Grosso do Sul and provided by the AGSPEC project, were used, together with two white master samples as reference standards, originating from the beach dunes of Wylie Bay (WB, 99% quartz) and Lucky Bay (LB, 90% quartz and 10% aragonite) in south-western Australia. To evaluate the standardization, the morphology of the spectral curves was examined for curvature, absorption features and albedo. Complementing these descriptive observations, reflectance differences between configurations (sensor x geometry x correction) were studied by analysis of variance and Tukey's test at 5% significance in three averaged spectral bands (VIS-NIR-SWIR), and clay content was modelled by partial least squares regression (PLSR) with cross-validation for each configuration, plus a simulated mixed spectral library composed of combinations of the configurations. The proposed standardization method reduces the differences between spectra obtained on different sensors and geometries. Clay prediction from a spectral library combining data with different configurations is improved by the standardization, with R² rising from 0.83 to 0.85 after correction, indicating the validity of unifying spectra with the proposed technique.
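A standardization of this kind typically rescales each measured spectrum using a reference standard measured in the same sensor/geometry session. A small sketch of that general idea (the reflectance numbers are invented, and the thesis's exact correction procedure may differ):

```python
def standardisation_factor(master_measured, master_reference):
    """Per-wavelength correction factors from a standard sample: the ratio
    of its certified reference spectrum to the spectrum actually measured
    on this sensor/geometry combination."""
    return [ref / meas for ref, meas in zip(master_reference, master_measured)]

def apply_correction(spectrum, factor):
    """Rescale a soil spectrum with the factors from the same session."""
    return [r * f for r, f in zip(spectrum, factor)]

# Hypothetical 3-band example (values are made up for illustration):
master_ref  = [0.90, 0.92, 0.95]   # certified spectrum of the white standard
master_meas = [0.85, 0.88, 0.90]   # same standard as seen by this sensor
factor = standardisation_factor(master_meas, master_ref)
soil = apply_correction([0.30, 0.40, 0.50], factor)
```

Because every sensor/geometry session is corrected toward the same standard, spectra from different configurations become directly comparable, which is what allows a mixed spectral library to be modelled together.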
307

A protocol for the conservation of the built heritage of Suakin

Ashley, Katherine S. January 2015 (has links)
The conservation of built heritage is increasingly recognised as promoting cultural sustainability and encouraging the inclusion of culture in the sustainable development of the built environment. Reflecting this recognition is the advocacy of a dynamic, integrated conservation approach that considers built heritage within its historic, physical, social, and cultural contexts. Yet the cultural context of built heritage remains one of the most challenging and neglected aspects of conservation practice. In the specific case of Sudan's historic port town of Suakin, previous research has identified a number of recurrent obstacles to the site's conservation, together with potential enablers for addressing them. However, previous investigations have lacked an essential local socio-cultural perspective. Furthermore, the lack of a strategy or framework for Suakin's conservation has so far prevented the coordination of its stakeholders and the consequent implementation of potential enablers to address its conservation challenges. This thesis concludes a four-year EngD research project that developed a protocol for the conservation of the built heritage of Suakin. It begins with an introduction to the context, justification and scope of the research, and the research aim and objectives. A review of the literature on issues related to the research subject and to the methodology employed is then presented. The research methods used, including a literature review, a mixed-method case study, questionnaire surveys, and a series of participatory action research focus groups, are explained and their results discussed. The research findings result in a protocol for Suakin's conservation consisting of five themes emerging from the research stages. These are: ownership; finances and planning; stakeholder inclusion and collaboration; conservation knowledge and awareness; and response to the local context. Each theme comprises one or more challenges with corresponding solutions. Furthermore, the findings define a protocol implementation strategy, setting out Suakin's stakeholders' suggested implementation of, and responsibility for, the protocol solutions. The collaborative stakeholder process established by the research, and the resulting protocol and its implementation strategy, are a new development in the approach to Suakin's conservation. The potential long-term impact of the research is indicated by NCAM's adoption of the protocol implementation strategy as a formal approach to Suakin's conservation. The thesis concludes with a critical review of the research stages and key recommendations for the research sponsor, Suakin's stakeholders, the built heritage conservation industry, and further research. The findings of this research were published in four peer-reviewed papers.
308

Quality of service for high-speed interconnection networks onboard spacecraft

Ferrer Florit, Albert January 2013 (has links)
State-of-the-art onboard spacecraft avionics use SpaceWire networks to interconnect payload data-handling sub-systems, including high data-rate sensors and instruments, processing units, and memory devices. SpaceWire is an interconnection network composed of nodes and routers connected by bi-directional, point-to-point, high-speed serial communication links. It is established as one of the main onboard data-handling protocols and is used on many ESA, NASA and JAXA spacecraft. SpaceWire has been very successful because it is fast, flexible, and simple to use and implement. However, it does not provide Quality of Service mechanisms, which aim to give guarantees of reliability and timely delivery for data generated by network clients. Quality of Service is increasingly deployed in commercial ground technologies, and its availability for space applications, which require high reliability and performance, is of great interest to the space community. This thesis investigates how Quality of Service can be provided in existing SpaceWire networks. Existing solutions for ground-based technologies cannot be used directly because of the constraints imposed by space-qualified electronics. Owing to these limitations, SpaceWire uses wormhole routing, which has many benefits but makes it more challenging to obtain timing guarantees and deterministic behaviour. These challenges are addressed in this work with a careful analysis of existing Quality of Service techniques and the implementation of a novel set of protocols designed specifically for SpaceWire networks. These new protocols target specific use cases and employ different mechanisms to achieve the required reliability, timely delivery and determinism. Traditional and novel techniques are deployed for the first time in SpaceWire networks.
In particular, segmentation, acknowledgements, retry, time-division multiplexing and cross-layer techniques are considered, analysed, implemented and evaluated with extensive prototyping. SpaceWire provides high-rate data transfers, but the next generation of payload instruments will require multi-gigabit capabilities. SpaceFibre is a new onboard networking technology under development which aims to satisfy these requirements while keeping compatibility with SpaceWire user applications. As a new standard, SpaceFibre offers the opportunity to implement Quality of Service techniques without the limitations imposed by the SpaceWire standard. The last part of this thesis contributes to the specification of the SpaceFibre standard in order to provide the Quality of Service required by the next generation of space applications. This work includes analytical studies, software simulations, and hardware prototyping of new concepts which form the basis of the Quality of Service mechanisms defined in the new SpaceFibre standard. It thereby makes a critical contribution to the definition and evaluation of a novel Quality of Service solution that provides high reliability, bandwidth reservation, priority and deterministic delivery over SpaceFibre links.
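The Quality of Service mechanisms mentioned above combine priority with bandwidth reservation across virtual channels. A toy sketch of how a link arbiter might combine the two (illustrative only, not the actual SpaceFibre arbitration rules; the field names and numbers are made up):

```python
def select_virtual_channel(vcs):
    """Pick the next virtual channel to transmit: among channels with data
    ready and positive bandwidth credit, choose the highest priority
    (lowest number), breaking ties by remaining credit. Credits model a
    per-channel bandwidth reservation that is replenished over time."""
    eligible = [vc for vc in vcs if vc["ready"] and vc["credit"] > 0]
    if not eligible:
        return None
    return min(eligible, key=lambda vc: (vc["priority"], -vc["credit"]))["name"]

vcs = [
    {"name": "housekeeping", "priority": 1, "credit": 10, "ready": True},
    {"name": "payload",      "priority": 2, "credit": 50, "ready": True},
    {"name": "debug",        "priority": 3, "credit": 5,  "ready": False},
]
print(select_virtual_channel(vcs))  # priority beats credit: housekeeping
```

The key property this illustrates is that a high-priority channel cannot starve the others indefinitely: once its credit is exhausted, lower-priority channels with remaining reservation get the link.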
309

Transcostal focused ultrasound surgery : treatment through the ribcage

Gao, Jing January 2012 (has links)
Two issues hindering the clinical application of image-guided transcostal focused ultrasound surgery (FUS) are the organ motion caused by cardiac and respiratory movements and the presence of the ribcage. Intervening ribs absorb and reflect the majority of ultrasound energy excited by an acoustic source, resulting in insufficient energy delivered to the target organs of the liver, kidney, and pancreas. Localized hot spots also exist at the interfaces between the ribs and soft tissue and in highly absorptive regions such as the skin. The aim of this study is to assess the effects of transmitted beam distortion and frequency-dependent rib heating during trans-costal FUS, and to propose potential solutions to reduce the side effects of rib heating and increase ultrasound efficacy. Direct measurements of the transmitted beam propagation were performed on a porcine rib cage phantom, an epoxy rib cage phantom and an acoustic absorber rib cage phantom, in order of their similarities to the human rib cage. Finite element analysis was used to investigate the rib cage geometry, the position of the target tissue relative to the rib cage, and the geometry and operating frequency of the transducer. Of particular importance, frequency-dependent heating at the target and the intervening ribs were estimated along with experimental verification. The ratio of ultrasonic power density at the target and the ribs, the time-varying spatial distribution of temperature, and the ablated focus of each sonication are regarded as key indicators to determine the optimal frequency. Following that, geometric rib-sparing was evaluated by investigating the operation of 2D matrix arrays to optimize focused beam shape and intensity at target. Trans-costal FUS is most useful in treating tumours that are small and near the surface of the abdominal organs, such as the liver, kidney and pancreas. 
For targets deep inside these organs, however, severe attenuation of acoustic energy occurs, suggesting that pure ultrasound thermal ablation, whatever the heating pattern, will have limited effect in improving treatment efficacy. Results also demonstrate that the optimal ultrasound frequency is around 0.8 MHz for the configurations considered, but that it may shift to higher frequencies as the axial and lateral positions of the tumours change. In this work, I aimed to reduce the side effects of rib heating and to increase the ultrasound efficacy at the focal point in transcostal treatment; more advanced techniques still need to be explored to further enhance localized heating in transcostal FUS.
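The frequency trade-off underlying the optimal-frequency result above (focal gain improves with frequency while depth attenuation worsens) can be illustrated with a minimal one-dimensional model. All parameters below (the attenuation coefficient, the rib and target depths, and the f² focal-gain law) are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Illustrative soft-tissue attenuation: ~0.5 dB/cm/MHz ~= 5.76 Np/m/MHz (assumed)
ALPHA = 5.76      # Np / (m * MHz)
Z_RIB = 0.02      # assumed depth of the rib layer [m]
Z_TARGET = 0.08   # assumed depth of the focal target [m]

def heating_ratio(f_mhz):
    """Relative target-to-rib heating ratio at frequency f.

    Focal intensity gain is taken as proportional to f^2 (the
    diffraction-limited focal spot area shrinks as lambda^2), while the
    beam decays exponentially with depth at a rate proportional to f.
    """
    gain = f_mhz ** 2
    decay = np.exp(-2 * ALPHA * f_mhz * (Z_TARGET - Z_RIB))
    return gain * decay

f = np.linspace(0.1, 10.0, 2000)          # candidate frequencies [MHz]
f_opt = f[np.argmax(heating_ratio(f))]
# Analytically, the maximum of f^2 * exp(-k f) lies at f = 2/k.
k = 2 * ALPHA * (Z_TARGET - Z_RIB)
print(f"optimal frequency ~ {f_opt:.2f} MHz (analytic {2 / k:.2f} MHz)")
```

This toy model omits the strongly frequency-dependent absorption of bone itself; adding a realistic rib-absorption term pushes the optimum downward, qualitatively consistent with the ~0.8 MHz reported in the thesis.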
310

Routage inter-domaine / Inter-domain routing

Sarakbi, Bakr 10 February 2011 (has links)
The Internet is the largest network humanity has ever built. It provides a vast range of services to more than two billion users. This complex and growing topology lacks stability, which users notice when a voice call is dropped, when a web page must be refreshed, and so on. The initiator of this instability is the stream of frequent events across the Internet, which motivates a thorough study of Internet stability and of solutions to this concern. The Internet is divided into two basic levels: the AS (Autonomous System) level and the router level. This distinction is reflected in the routing protocols that control Internet traffic, which come in two types: exterior (inter-AS) and interior (intra-AS). The only exterior routing protocol in use is the external mode of BGP (Border Gateway Protocol), while several interior routing protocols coexist. Stabilizing the Internet is therefore correlated with the stability of its routing protocols, which directs such efforts towards investigating the behaviour of BGP. Studies of BGP behaviour show that its external mode (eBGP) suffers from long convergence times, which lie behind its slow response to topology events and, in turn, behind traffic loss. To reason about BGP stability, a routing model is needed that formulates the decision procedure and the signalling flow; moreover, BGP improvements cannot easily be validated in the live Internet, making such models indispensable for formal validation. The first step in studying inter-domain routing is therefore to define an appropriate model that formulates its operation and allows its correctness to be proved. We proposed two complementary models, topological and functional, with which we formulated the routing convergence process and proved the safety and robustness of our inter- and intra-AS solutions.
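The BGP decision procedure discussed above follows a well-known ordered comparison (RFC 4271 plus common vendor tie-breakers). The sketch below encodes that textbook tie-breaking order as a sort key; it is standard background, not the thesis's formal model, and the attribute values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Route:
    local_pref: int      # higher wins (set by import policy)
    as_path: tuple       # shorter wins
    origin: int          # IGP(0) < EGP(1) < INCOMPLETE(2); lower wins
    med: int             # lower wins (comparable neighbours assumed)
    ebgp_learned: bool   # eBGP-learned preferred over iBGP-learned
    igp_cost: int        # lower IGP cost to NEXT_HOP wins
    router_id: str       # final deterministic tie-breaker

def preference_key(r: Route):
    """Sort key implementing the classic BGP decision process:
    the best route is the one with the smallest key tuple."""
    return (-r.local_pref, len(r.as_path), r.origin, r.med,
            not r.ebgp_learned, r.igp_cost, r.router_id)

def best_route(routes):
    return min(routes, key=preference_key)

r1 = Route(100, ("65001", "65002"), 0, 10, True, 5, "10.0.0.1")
r2 = Route(100, ("65003",), 0, 10, False, 1, "10.0.0.2")
print(best_route([r1, r2]).as_path)   # shorter AS path wins: ('65003',)
```

Because each step is only consulted when all earlier steps tie, instability questions of the kind the thesis studies hinge on how policies (e.g. LOCAL_PREF settings) interact across ASes, not on the comparison itself.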
BGP's internal mode (iBGP), in turn, suffers from several types of routing anomalies that cause divergence. We therefore propose enhancements to both BGP modes, eBGP and iBGP, designed to meet the following objectives: scalability, safety, robustness, correctness, and backward compatibility with the current version of BGP. Our eBGP proposal eliminates the transient disconnectivity caused by slow convergence by precomputing a backup strategy to be used when a failure occurs. Our iBGP proposal (skeleton) offers an alternative to existing internal configurations, which we prove to be free of anomalies. Validation methods are essential to show that the suggested enhancements satisfy the intended objectives; since the subject is inter-domain routing, validation in the real Internet is not possible, so we devised several validation methods instead. We used a simulation environment to implement the eBGP backup solution and to observe its convergence time and continuous connectivity, relying on two tools, BRITE and Rocketfuel, for inter- and intra-AS topologies respectively. To prove the safety of our approaches we employed an algebraic framework and made use of its results.
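The abstract does not detail how the eBGP backup strategy is precomputed. As a generic illustration of the precomputed-backup idea, the sketch below applies the loop-free alternate (LFA) condition from IP fast-reroute (RFC 5286): a neighbour N of source S is a safe backup towards destination D if dist(N, D) < dist(N, S) + dist(S, D), which guarantees N will not hand the traffic back to S. This is an analogous intra-domain technique, not the thesis's algorithm; the topology is a toy example.

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src in an undirected weighted graph."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def loop_free_alternates(graph, s, d, primary):
    """Neighbours N of s (other than the primary next hop) satisfying
    dist(N, d) < dist(N, s) + dist(s, d): traffic handed to such a
    neighbour cannot loop back through s after a failure of the primary."""
    dist = {n: dijkstra(graph, n) for n in graph}
    return [n for n in graph[s]
            if n != primary and dist[n][d] < dist[n][s] + dist[s][d]]

# Toy topology: s reaches d via n1 (primary); n2 is a loop-free backup.
g = {
    "s":  {"n1": 1, "n2": 1},
    "n1": {"s": 1, "d": 1},
    "n2": {"s": 1, "d": 2},
    "d":  {"n1": 1, "n2": 2},
}
print(loop_free_alternates(g, "s", "d", primary="n1"))   # ['n2']
```

Precomputing such alternates is what allows a router to switch over immediately on failure instead of waiting out the protocol's convergence, which is the same transient-disconnectivity problem the thesis's eBGP proposal targets.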
