  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
731

台灣地區共同基金績效持續性及證券投資信託事業開放影響之研究 / The Study on Consistency of Mutual Fund's Performance and on Impact of Security Investment Trust Open to Public (Taiwan)

徐嘉慶, Hsu, Chia Ching (date unknown)
The main purpose of this study is to apply theoretical performance-evaluation models for mutual funds to the funds investing in the domestic securities market, evaluating their performance separately in bull and bear markets and testing the related hypotheses. The central test is whether these funds' performance persists across different periods. In addition, the Securities and Exchange Commission's decision to license new securities investment trust companies, and the impact these new competitors will have on the investment trust business and on the securities market as a whole in the near future, is a further topic this study attempts to explore. The study reaches the following conclusions. (1) Fund performance varies with the evaluation period and the performance index used: in the bull period the Formosa (福爾摩莎) and Fu Yuan (福元) funds performed best while the Kuo Min (國民) and Chung Hwa (中華) funds performed worst; in the bear period the Taiwan (台灣) and Fu Yuan funds performed best while the Kwang Hua (光華) and Hung Yun (鴻運) funds performed worst. (2) As for the choice of index, the Sharpe and M.C.V. indices can be used to evaluate overall fund performance, while the net selectivity and diversification measures of the Fama model evaluate stock-selection and diversification ability. (3) The average return of domestic funds was not superior to that of the market portfolio. (4) The performance difference between open-end and closed-end funds is not significant. (5) The performance differences among the four incumbent investment trusts are not significant. (6) Statistical tests find no significant persistence in domestic funds' performance between the earlier and later periods of the study. (7) Likewise, the investment performance of the four incumbent trusts shows no significant persistence across periods. (8) By ownership structure, the new investment trusts fall into professionally managed, securities-firm-led, and conglomerate-led types; each type has its advantages and disadvantages, and the firms differ in professional staffing, fund management, promotion, and investment strategy. (9) The competent authority has adopted a free-competition stance toward licensing new investment trusts. (10) The entry of newly licensed investment trusts will have an impact on the securities market and on the existing trusts.
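As one concrete illustration, the Sharpe index used in this record to rank overall fund performance divides mean excess return by total risk (the standard deviation of excess returns). The sketch below uses hypothetical monthly fund returns, not the study's data:

```python
import statistics

def sharpe_ratio(returns, risk_free_rate):
    """Sharpe index: mean excess return over the risk-free rate,
    divided by the standard deviation of excess returns (total risk)."""
    excess = [r - risk_free_rate for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

# Hypothetical monthly returns for two funds; rf is a monthly risk-free rate.
fund_a = [0.021, 0.013, -0.004, 0.030, 0.017]   # higher mean, more volatile
fund_b = [0.009, 0.011, 0.008, 0.012, 0.010]    # lower mean, very steady
rf = 0.005

# The steadier fund wins on a risk-adjusted basis despite its lower mean.
better = max(("fund_a", fund_a), ("fund_b", fund_b),
             key=lambda nf: sharpe_ratio(nf[1], rf))[0]
```

This shows why a fund ranking can change with the index chosen: a mean-return ranking would favour `fund_a`, while the Sharpe index favours `fund_b`.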
732

參與的理想或授能的幻想?從民主行政重構政府績效管理制度 / The ideal of participation or the illusion of empowerment? Reframing government performance management from democratic administration

黃重豪, Huang, Chung Hao (date unknown)
[Research purpose] The shift in management theory from plain performance appraisal to developmental performance management reflects a trend from centralized control toward decentralized participation, with greater emphasis on human resource development; for the public sector, it also serves to consolidate the neutral competence of the civil service. Yet when Taiwan reformed its civil-service appraisal system in the name of performance management, the debate seemed to remain fixated on command-and-control instruments. This study therefore first reviews the strengths and weaknesses of the existing participation mechanisms in the appraisal system, then explores civil servants' perceptions of democratizing the appraisal system, analyzes the difficulties such institutional change may encounter, and finally reconstructs a participatory appraisal system grounded in practical feasibility. [Research questions and methods] Using qualitative methods, the study addresses four questions in turn: Must the appraisal system embody democratic values? Do appraisal committees help democratize organizational management? Do civil servants endorse an appraisal system embodying democratic values? How can a participatory appraisal system be designed from the perspective of democratic administration? Secondary data and literature analysis are used to establish the necessity of democratic values in appraisal and the institutional origin of appraisal committees; in-depth interviews are used to analyze whether the committees function as intended and how civil servants perceive and accept democratization of appraisal; finally, all qualitative data are synthesized into reform proposals for a participatory appraisal system. [Findings] Theory and interviews both indicate that centralizing appraisal power stifles the professional capacity of the civil service, and that although appraisal committees seek procedural justice through democratic participation, they replicate the hierarchical power structure inside a nominally independent committee and thereby lose their horizontal checking function. Pilot experience with performance management in some agencies, together with hypothetical questions posed in the interviews, shows that administrators generally cannot accept opening the appraisal system to democratic participation; they hope only to learn the reasons behind their ratings, and so still entrust institutional justice to the rater. The causes lie in the logic of collective action, the weakness of horizontal organizational forces, and a culture of paternalistic leadership, which together devalue organizational democracy against pragmatic considerations. Finally, without encroaching on managerial authority, the study proposes transparency of appraisal information and "substantive participation" in appraisal committees, from the angle of information disclosure, as its recommendations for reform.
733

Monitoring as an instrument for improving environmental performance in public authorities: Experience from Swedish Infrastructure Management

Lundberg, Kristina, January 2009
Monitoring is an important tool for gaining insight into an organisation’s environmental performance and for learning about the environmental condition and the effectiveness of environmental management measures. Development of environmental monitoring has generally relied on research aiming at improving monitoring methodology, technique or practice within a particular management tool. Little empirical research has taken into account the organisation’s reality, where several management tools are used in parallel. This thesis analyses the practice of environmental monitoring in public authorities with the aim of identifying barriers and possibilities for environmental monitoring as an instrument for improving environmental performance, using the Swedish Rail Administration as a case organisation. The study identified two different types of environmental monitoring: environmental performance measurement (EPM) and activity monitoring, both important for achieving environmental improvements. EPM involves gathering and evaluating data to determine whether the organisation is meeting the criteria for environmental performance set by the management of the organisation. EPM can further be used for judging the success and failure of environmental objectives and strategies. Activity monitoring provides each project of the organisation with information to minimise the negative effects on the natural environment or human health and to ensure that the organisation’s operations conform with regulations. Problems encountered included a variety of poorly co-ordinated monitoring activities, poor utilisation of monitoring results, and limited internal feedback on those results. Some of the problems identified seem to be an effect of the management transition from a traditional ‘command and control’ system to a self-administered organisation managed by economic incentives and voluntary management systems.
This thesis suggests several improvements to make monitoring more efficient. Primarily, each monitoring system must have a clear structure and be adapted to its specific function. The EPM system would benefit from being integrated with the organisation’s central performance measurement, presenting progress towards organisational strategic objectives as well as operational objectives. The system for activity monitoring must not only focus on inputs and outputs to the system but must also include the environmental condition of the system. In order to improve communication and learning, monitoring data within both EPM and activity monitoring must be better transmitted and utilised within the structure of the permanent organisation. Experience from all monitoring activities that is now scattered and inaccessible to the individuals of the organisation could beneficially be stored within a well-structured organisational ‘memory’. Such a system would facilitate an iterative management process where the monitoring results and the knowledge gained are used for making future plans and projects more adaptive, thereby improving the environmental performance of the organisation.
734

Scalable download protocols

Carlsson, Niklas, 15 December 2006
Scalable on-demand content delivery systems, designed to effectively handle increasing request rates, typically use service aggregation or content replication techniques. Service aggregation relies on one-to-many communication techniques, such as multicast, to efficiently deliver content from a single sender to multiple receivers. With replication, multiple geographically distributed replicas of the service or content share the load of processing client requests and enable delivery from a nearby server.

Previous scalable protocols for downloading large, popular files from a single server include batching and cyclic multicast. Analytic lower bounds developed in this thesis show that neither of these protocols consistently yields performance close to optimal. New hybrid protocols are proposed that achieve within 20% of the optimal delay in homogeneous systems, as well as within 25% of the optimal maximum client delay in all heterogeneous scenarios considered.

In systems utilizing both service aggregation and replication, well-designed policies determining which replica serves each request must balance the objectives of achieving high locality of service, and high efficiency of service aggregation. By comparing classes of policies, using both analysis and simulations, this thesis shows that there are significant performance advantages in using current system state information (rather than only proximities and average loads) and in deferring selection decisions when possible. Most of these performance gains can be achieved using only local (rather than global) request information.

Finally, this thesis proposes adaptations of already proposed peer-assisted download techniques to support a streaming (rather than download) service, enabling playback to begin well before the entire media file is received. These protocols split each file into pieces, which can be downloaded from multiple sources, including other clients downloading the same file.
Using simulations, a candidate protocol is presented and evaluated. The protocol includes a piece selection technique that effectively mediates the conflict between achieving high piece diversity and the in-order requirements of media file playback, as well as a simple on-line rule for deciding when playback can safely commence.
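The piece-selection tension this record describes — rarest-first requests keep piece diversity high for the swarm, while playback needs pieces in order — can be sketched as a simple two-tier rule. The window size and availability counts below are hypothetical, and this is an illustration rather than the thesis's actual policy:

```python
def select_piece(needed, availability, playback_pos, window=4):
    """Pick the next piece to request.

    Pieces inside the playback window are fetched in order to keep
    playback fed; outside the window, prefer the rarest piece
    (lowest availability among peers) to maximise piece diversity.
    """
    urgent = [p for p in needed if playback_pos <= p < playback_pos + window]
    if urgent:
        return min(urgent)                                   # in-order, imminent
    return min(needed, key=lambda p: (availability[p], p))   # rarest-first

# availability[p] = number of peers currently holding piece p (hypothetical)
availability = {0: 9, 1: 8, 2: 7, 3: 1, 4: 5, 5: 2}

# Playback at piece 2: piece 2 itself is urgent, so it is fetched next.
print(select_piece({2, 3, 4, 5}, availability, playback_pos=2))   # → 2
# Playback far behind the needed pieces: the rare piece 3 is fetched.
print(select_piece({3, 4, 5}, availability, playback_pos=0, window=2))   # → 3
```

The tuple key `(availability[p], p)` breaks rarity ties deterministically in favour of the earlier piece, a small bias toward playback order.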
735

Scalable and Highly Available Database Systems in the Cloud

Minhas, Umar Farooq, January 2013
Cloud computing allows users to tap into a massive pool of shared computing resources such as servers, storage, and network. These resources are provided as a service to the users, allowing them to “plug into the cloud” similar to a utility grid. The promise of the cloud is to free users from the tedious and often complex task of managing and provisioning computing resources to run applications. At the same time, the cloud brings several additional benefits including a pay-as-you-go cost model, easier deployment of applications, elastic scalability, high availability, and a more robust and secure infrastructure. One important class of applications that users are increasingly deploying in the cloud is database management systems. Database management systems differ from other types of applications in that they manage large amounts of state that is frequently updated, and that must be kept consistent at all scales and in the presence of failure. This makes it difficult to provide scalability and high availability for database systems in the cloud. In this thesis, we show how we can exploit cloud technologies and relational database systems to provide a highly available and scalable database service in the cloud. The first part of the thesis presents RemusDB, a reliable, cost-effective high availability solution that is implemented as a service provided by the virtualization platform. RemusDB can make any database system highly available with little or no code modifications by exploiting the capabilities of virtualization. In the second part of the thesis, we present two systems that aim to provide elastic scalability for database systems in the cloud using two very different approaches. The three systems presented in this thesis bring us closer to the goal of building a scalable and reliable transactional database service in the cloud.
736

我國各縣市整體環保績效之研究 / The Performance Evaluation of Environmental Protection in Taiwan’s Local Governments

游京晶, Yu, Jing Jing (date unknown)
Since Taiwan's economic take-off in the 1970s, rising prosperity has come at the expense of the environment; in pursuit of sustainable development, the public has come to value environmental protection and to demand that government safeguard quality of life, making the efficiency of environmental-protection spending an important research topic. This study builds an objective input-output model using data envelopment analysis (DEA) to evaluate the performance of county and city environmental protection agencies in air pollution control, noise control, water pollution control, and waste management (resource recycling) from 2001 to 2010; it then assesses each agency's overall performance and uses a Tobit regression model to examine how the four evaluation dimensions contribute to overall environmental performance. The empirical results show that, in overall terms, although Taipei City and Kaohsiung City command more resources than other cities, their technology is insufficient for their heavy environmental and population loads, so their input-output efficiency trails the other counties and cities. Trend analysis of each dimension shows that recycling-rate efficiency improved the most, while water pollution control was the least efficient. In the Tobit regression, all four dimensions are significantly and positively related to overall environmental efficiency, with the recycling-rate efficiency score having the largest effect, consistent with the study's expectations.
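The CCR ratio idea behind DEA can be illustrated in the single-input, single-output case, where each unit's efficiency reduces to its output/input ratio relative to the best observed ratio. The cities and figures below are hypothetical; the study's full model uses multiple inputs and outputs and requires solving a linear program per unit:

```python
def ccr_efficiency(units):
    """Input-oriented CCR efficiency for the single-input,
    single-output case: each unit's output/input ratio divided by
    the best ratio on the frontier.  With one input and one output
    this closed form equals the optimum of the CCR linear program."""
    best = max(y / x for x, y in units.values())
    return {name: (y / x) / best for name, (x, y) in units.items()}

# Hypothetical units: (environmental budget, tonnes of waste recycled)
units = {"City A": (10.0, 40.0), "City B": (20.0, 60.0), "City C": (15.0, 75.0)}
scores = ccr_efficiency(units)
# City C attains the best ratio (5.0 tonnes per budget unit) and scores 1.0;
# the others are scored against that frontier.
```

A score of 1.0 marks a unit on the efficient frontier; scores below 1.0 say how far a unit's inputs could shrink while keeping its output, which is the sense in which the study calls an agency inefficient.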
737

Design and analysis of medium access control protocols for ad hoc and cooperative wireless networks

Alonso Zárate, Jesús, 25 February 2009
This thesis aims at contributing to the incessant evolution of wireless communications. The focus is on the design of medium access control (MAC) protocols for ad hoc and cooperative wireless networks. A comprehensive state of the art and a background on the topic are provided in a first preliminary part of this dissertation. The motivations and key objectives of the thesis are also presented in this part. Then, the contributions of the thesis are divided into two fundamental parts. The first part of the thesis is devoted to the design, analysis, and performance evaluation of a new high-performance MAC protocol. It is the Distributed Queueing MAC Protocol for Ad hoc Networks (DQMAN) and constitutes an extension and adaptation of the near-optimum Distributed Queueing with Collision Avoidance (DQCA) protocol, designed for infrastructure-based networks, to operate over networks without infrastructure. DQMAN introduces a new access paradigm in the context of distributed networks: the integration of a spontaneous, dynamic, and soft-binding master-slave clustering mechanism together with a high-performance infrastructure-based MAC protocol. Theoretical analysis and computer-based simulation show that DQMAN outperforms the IEEE 802.11 Standard.
The main characteristic of the protocol is that it behaves as a random access protocol when the traffic load is low and switches smoothly and automatically to a reservation protocol as the traffic load grows. In addition, its performance is almost independent of the number of users of the network. The random-access-based clustering algorithm allows for the coexistence and intercommunication of stations using DQMAN with stations based only on the legacy IEEE 802.11 Standard. This assessment is also presented in this first part of the dissertation and constitutes a key contribution in light of the commercial application of DQMAN. Indeed, the rationale presented in this first part of the thesis to extend DQCA into DQMAN for operation over distributed networks can be used to extend the operation of any other infrastructure-based MAC protocol to ad hoc networks. In order to exemplify this, a case study is presented to conclude the first part of the thesis: the Distributed Point Coordination Function (DPCF) MAC protocol is presented as the extension of the PCF of the IEEE 802.11 Standard for use in ad hoc networks. The second part of the thesis turns the focus to a specific kind of cooperative communications: Cooperative Automatic Retransmission Request (C-ARQ) schemes. The main idea behind C-ARQ is that when a packet is received with errors at a receiver, a retransmission can be requested not only from the source but also from any of the users which overheard the original transmission. These users can become spontaneous helpers that assist in the failed transmission by forming a temporary ad hoc network. Although such a scheme may provide cooperative diversity gain, involving a number of users in the communication between two users entails a complicated coordination task that has a certain cost. This cost has typically been neglected in the literature, assuming that the relays can attain a perfect scheduling and transmit one after another.

In this second part of the thesis, the cost of the MAC layer in C-ARQ schemes is analyzed and two novel MAC protocols for C-ARQ are designed, analyzed, and comprehensively evaluated: the DQCOOP and the Persistent Relay Carrier Sensing Multiple Access (PRCSMA) protocols. The former is based on DQMAN and the latter is based on the IEEE 802.11 Standard. A comparison with non-cooperative ARQ schemes (retransmissions performed only from the source) and with ideal C-ARQ (with perfect scheduling among the relays) is included to provide actual reference benchmarks for the novel proposals. The main results show that an efficient design of the MAC protocol is crucial in order to actually obtain the benefits associated with the C-ARQ schemes.
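The MAC coordination cost that this thesis quantifies for C-ARQ can be illustrated with a toy slotted-contention model: relays that overheard the failed packet draw random backoffs, and a retransmission only goes through when one relay's backoff is uniquely smallest. This is a simplified illustration of relay contention, not PRCSMA itself:

```python
import random

def carq_contention_rounds(num_relays, contention_window, rng):
    """Rounds of slotted contention until exactly one relay holds the
    smallest backoff, i.e. a collision-free retransmission opportunity.
    An illustrative model of relay contention in a C-ARQ scheme."""
    rounds = 0
    while True:
        rounds += 1
        backoffs = [rng.randrange(contention_window) for _ in range(num_relays)]
        if backoffs.count(min(backoffs)) == 1:   # unique earliest relay wins
            return rounds
        # two or more relays chose the same earliest slot: collision, retry

rng = random.Random(7)
avg_rounds = sum(carq_contention_rounds(5, 16, rng) for _ in range(2000)) / 2000
# With more relays or a smaller window, collisions (and hence the MAC
# coordination overhead the thesis analyses) grow.
```

Even this toy model makes the thesis's point visible: the "free" diversity gain of extra relays is paid for in contention rounds at the MAC layer.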
738

Contributions to channel modelling and performance estimation of HAPS-based communication systems regarding IEEE Std 802.16™

Palma Lázgare, Israel Romualdo, 24 October 2011
New and future telecommunication networks are, and will remain, broadband. Existing terrestrial and space radio communication infrastructures may be supplemented by new wireless networks that make use of aeronautical technology. This study concerns radio communications based on radio stations aboard a stratospheric platform, termed a HAPS (High Altitude Platform Station) by the ITU-R. Such networks have been proposed within the ITU framework as an alternative technology for providing various narrowband and broadband communication services. With a telecommunications payload aboard an aircraft or balloon (HAPS), radio communications can provide backbone connections on the ground and broadband access points for ground terminals. The latter implies complex radio network planning, so outdoor and indoor radio coverage analysis becomes an important issue in the design of new radio systems. The contribution of this doctoral thesis concerns the application of HAPS to terrestrial fixed broadband communications. The HAPS was modelled as a quasi-static platform at stratospheric altitude, and the contribution was realized by simulating outdoor-indoor coverage in downlink mode with a simple, computationally efficient model. The work assessed the ITU-R recommendations in the bands recognised for HAPS-based networks, contemplating operation around 2 GHz (1820 MHz, specifically), a band recognised as an alternative for HAPS networks that can provide IMT-2000 and IMT-Advanced services. The global broadband radio communication model comprised three parts: transmitter, channel, and receiver.
The transmitter and receiver were based on the specifications of IEEE Std 802.16™-2009 (with its digital transmission techniques for a robust, reliable link), while the channel was analysed through radio modelling of the HAPS and terrestrial (outdoor plus indoor) segments. Channel modelling used a two-state characterisation, i.e. state-oriented channel modelling, where the states correspond to physical situations associated with the transmitted/received signals. One channel state captured transmission conditions with a direct path between transmitter and receiver; the other captured shadowing conditions. These states depended on the elevation angle in the ray-tracing analysis: within the propagation environment, a representative portion of the signal's total energy was received via a direct or diffracted wave, the remainder arriving via a specular wave, to which were added the scattered and random rays constituting the diffuse wave. For the indoor case, variations of the transmitted signal additionally accounted for building penetration, construction material, angle of incidence, floor height, terminal position in the room, and indoor fading; indoor radio communications also presented different path types to the receiver: obscured LOS, non-LOS (NLOS), and hard NLOS. The feasible performance of the HAPS-to-ground-terminal link was evaluated by means of thorough simulations, with results presented as BER versus Eb/N0 plots, yielding significantly positive conclusions for this kind of system as a HAPS-based access-network technology.
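A BER-versus-Eb/N0 evaluation of the kind described can be sketched with a Monte Carlo simulation of hard-decision BPSK over AWGN. BPSK and the parameters below are illustrative stand-ins, not the 802.16 transmission chain actually simulated in the thesis:

```python
import math
import random

def bpsk_ber_estimate(ebn0_db, num_bits, rng):
    """Monte Carlo BER for BPSK over AWGN.  For unit-energy symbols,
    the noise standard deviation is sigma = sqrt(1 / (2 * Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))
    errors = 0
    for _ in range(num_bits):
        bit = rng.choice((1, -1))              # antipodal symbol
        received = bit + rng.gauss(0, sigma)   # AWGN channel
        if (received >= 0) != (bit == 1):      # hard decision error
            errors += 1
    return errors / num_bits

# Theory gives BER = Q(sqrt(2*Eb/N0)); at 4 dB that is about 1.25e-2,
# and the Monte Carlo estimate converges to it as num_bits grows.
ber = bpsk_ber_estimate(4, 200_000, random.Random(1))
```

In a full link simulation the hard decision would be replaced by the standard's coding and modulation chain, and the channel by the two-state HAPS model, but the estimation loop has this same shape.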
740

Towards Ideal Network Traffic Measurement: A Statistical Algorithmic Approach

Zhao, Qi, 03 October 2007
With the emergence of computer networks as one of the primary platforms of communication, and with their adoption for an increasingly broad range of applications, there is a growing need for high-quality network traffic measurements to better understand, characterize and engineer network behaviors. Due to the inherent lack of fine-grained measurement capabilities in the original design of the Internet, there is not enough data or information to compute or even approximate some traffic statistics such as traffic matrices and per-link delay. While it is possible to infer these statistics from indirect aggregate measurements that are widely supported by network measurement devices (e.g., routers), how to obtain the best possible inferences is often a challenging research problem. We call this the "too little data" problem, after its root cause. Interestingly, while "too little data" is clearly a problem, "too much data" is not a blessing either. With the rapid increase of network link speeds, even keeping sampled, summarized network traffic (for inferring various network statistics) at low sample rates results in too much data to be stored, processed, and transmitted by measurement devices. In summary, high-quality measurement in today's Internet is very challenging due to resource limitations and lack of built-in support, manifested as either "too little data" or "too much data". We present some new practices and proposals to alleviate these two problems. The contribution is fourfold: i) designing universal methodologies towards ideal network traffic measurements; ii) providing accurate estimations for several critical traffic statistics guided by the proposed methodologies; iii) offering multiple useful and extensible building blocks which can be used to construct a universal network measurement system in the future; iv) leading to some notable mathematical results, such as a new large deviation theorem that finds applications in various areas.
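The "too much data" side of the problem is usually attacked by packet sampling. A minimal sketch of an inverse-probability (Horvitz-Thompson) volume estimator, with hypothetical packet sizes, shows the basic trade-off between data kept and estimation error:

```python
import random

def sampled_byte_estimate(packet_sizes, sample_prob, rng):
    """Estimate total traffic volume from independently sampled packets,
    scaling each sampled size by 1/p (Horvitz-Thompson).  The estimator
    is unbiased; its variance shrinks as the sampling probability grows."""
    seen = [s for s in packet_sizes if rng.random() < sample_prob]
    return sum(seen) / sample_prob

rng = random.Random(42)
packets = [rng.randrange(64, 1500) for _ in range(50_000)]   # hypothetical sizes
true_total = sum(packets)

# Keep roughly 1% of the packets, yet estimate the full byte count.
estimate = sampled_byte_estimate(packets, 0.01, rng)
# The 1% sample typically lands within a few percent of the true total.
```

Per-flow or per-matrix-element statistics are harder than totals, since rare flows may never be sampled, which is the gap the thesis's estimation methods target.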
