471

Design and Analysis of Algorithms for Efficient Location and Service Management in Mobile Wireless Systems

Gu, Baoshan 01 December 2005 (has links)
Mobile wireless environments present new challenges to the design and validation of the system support needed to facilitate the development of mobile applications. This dissertation concerns two major system-support mechanisms in mobile wireless networks, namely location management and service management. We address this research issue by considering three topics: location management, service management, and integrated location and service management. A location management scheme must handle both user location-update and location-search operations effectively and efficiently. We first quantitatively analyze a class of location management algorithms and identify conditions under which one algorithm performs better than others. From insight gained from the quantitative analysis, we design and analyze a hybrid replication-with-forwarding algorithm that outperforms the individual algorithms, and we show that such a hybrid algorithm can be applied uniformly to mobile users with distinct call and mobility characteristics, simplifying the system design without sacrificing performance. For service management, we explore the notion of location-aware personal proxies that cooperate with the underlying location management system with the goal of minimizing the network communication cost caused by service management operations. We show that for cellular wireless networks providing packet services, given a set of model parameters characterizing the network and workload conditions, there exists an optimal proxy service area size that minimizes the overall network communication cost for service operations. These proxy-based mobile service management schemes are shown to outperform non-proxy-based schemes over a wide range of identified conditions. We then investigate a class of integrated location and service management schemes in which service proxies are tightly integrated with location databases to further reduce the overall network signaling and communication cost. We show analytically and by simulation that, given a user's mobility and service characteristics, there exists an optimal integrated location and service management scheme that minimizes the overall network communication cost for servicing location and service operations. We demonstrate that the best integrated scheme identified always performs better than the best decoupled scheme that considers location and service management separately. / Ph. D.
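As an illustration of the two mechanisms such a hybrid combines, here is a minimal Python sketch that maintains per-user location replicas plus a chain of forwarding pointers left behind on moves. The class, its method names, and the hop accounting are invented for exposition and are not the dissertation's algorithm; the trade-off the sketch exposes (cheap updates versus longer searches) is the one the quantitative analysis balances.

```python
class LocationDB:
    def __init__(self):
        self.home = {}       # user -> cell registered at the home database
        self.forward = {}    # (user, cell) -> next cell in the forwarding chain
        self.replicas = {}   # user -> {caller_site: cached cell}

    def move(self, user, old_cell, new_cell, update_home=False):
        """On each move, either pay the cost of updating the home database
        or cheaply leave a forwarding pointer at the old cell."""
        if update_home:
            self.home[user] = new_cell
            # A home update lets us discard the user's stale pointer chain.
            self.forward = {k: v for k, v in self.forward.items() if k[0] != user}
        else:
            self.forward[(user, old_cell)] = new_cell

    def search(self, user, caller_site):
        """A location search first consults a local replica, then chases the
        forwarding chain; the hop count is the extra search cost paid in
        exchange for cheaper updates."""
        cell = self.replicas.get(user, {}).get(caller_site, self.home[user])
        hops = 0
        while (user, cell) in self.forward:
            cell = self.forward[(user, cell)]
            hops += 1
        return cell, hops

db = LocationDB()
db.home["u1"] = "c1"
db.move("u1", "c1", "c2")         # cheap update: forwarding pointer only
db.move("u1", "c2", "c3")
print(db.search("u1", "site-A"))  # ('c3', 2): two pointer hops to resolve
```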
472

Linking Governance and Performance: ICANN as an Internet Hybrid

Lee, Maeng Joo 25 August 2008 (has links)
The Internet Corporation for Assigned Names and Numbers (ICANN) is a hybrid organization managing the most critical piece of Internet infrastructure: the Domain Name System. ICANN represents a new, emerging Internet self-governance model in which the private sector takes the lead and the government sector plays a more marginal role. Little is known, however, about what is actually happening in this new organization. The dissertation (a) systematically assesses ICANN's overall performance against a set of evaluative criteria drawn from its mission statements; (b) explores the factors and actors that influence ICANN's overall performance by tracing the governance processes in three cases, using a preliminary conceptual framework; and (c) suggests practical and theoretical implications of ICANN's governance and performance in its broader institutional context. The study finds that although differing governance processes have led to different performance outcomes (Lynn et al. 2000), "stability" has been the defining value shaping the overall path of ICANN's governance and performance. The study characterizes ICANN as a conservative hybrid captured, on specific issues, by the technical and governmental communities. It also proposes the concept of "technical capture" to describe how technical experts can exert significant, but often implicit, influence over the policy development process in organizations. / Ph. D.
473

Strength and Life Prediction of FRP Composite Bridge Deck

Majumdar, Prasun Kanti 30 April 2008 (has links)
Fiber reinforced polymer (FRP) composites are considered very promising for infrastructure applications such as the repair, rehabilitation, and replacement of deteriorated bridge decks. However, the structural behavior of FRP decks is not yet properly understood. For example, because the load localizes under a truck tire, the conventionally used uniform patch loading is not suitable for evaluating the performance of FRP composite deck systems with cellular geometry and relatively low modulus (compared to concrete decks). In this study, a simulated tire patch loading profile is proposed for the testing and analysis of FRP decks. The tire patch produced a significantly different failure mode (local transverse failure under the tire patch) compared to the punching-shear mode obtained with the conventional rectangular steel plate. The local response of a cellular FRP composite deck is analyzed using finite element simulation, and the results are compared with full-scale laboratory experiments on the bridge deck and structure. Parametric studies show that design criteria based on global deck displacement are inadequate for cellular FRP decks and that local deformation behavior must be considered. An adhesive bonding method is implemented for joining bridge deck panels, and the response of the structural joint is analyzed experimentally. Strength, failure mode, and fatigue life prediction methodologies for a cellular FRP bridge deck are presented in this dissertation. / Ph. D.
474

Institutional segmentation of equity markets: causes and consequences

Hosseinian, Amin 27 July 2022 (has links)
We re-examine the determinants of institutional ownership (IO) from a segmentation perspective -- i.e., accounting for a hypothesized systematic exclusion of stocks that cause high implementation or agency costs. Incorporating segmentation effects substantially improves both the explained variance in IO and model parsimony (essentially requiring just one input: market capitalization). Our evidence clearly establishes a role for both implementation costs and agency considerations in explaining segmentation effects. Implementation costs bind for larger, less diversified, and higher-turnover institutions. Agency costs bind for smaller institutions and clienteles sensitive to fiduciary diligence. Agency concerns dominate: characteristics relating to the agency hypothesis have far more explanatory power in identifying the cross-section of segmentation effects than characteristics relating to the implementation hypothesis. Importantly, our study finds evidence of an interior optimum with respect to an institution's scale, due to the counteracting effects of implementation and agency frictions. We then explore three implications of segmentation for the equity market. First, a mass exodus of publicly listed stocks predicted to fall outside institutions' investable universe helps explain the listing puzzle; there has been no comparable exit by institutionally investable stocks. Second, institutional segmentation can lead to narrow investment opportunity sets, which limit money managers' ability to take advantage of profitable opportunities outside their investment segment. In this respect, we construct pricing factors that are feasible (ex ante) for institutions and benchmark their performance. We find evidence consistent with the demand-based asset pricing view: specifically, IO return factors yield higher return premia and worsened institutional performance relative to standard benchmarks in an expanding institutional setting (pre-millennium). Third, we use our logistic model to examine the effect of aggregated segmentation on institutions' portfolio returns. Our findings suggest that investment constraints cut off profitable opportunities and restrict institutions from generating alpha. In addition, we find that stocks with abnormal institutional ownership generate significant positive returns, suggesting that institutions' actions are informed. / Doctor of Philosophy / We demonstrate that implementation and agency frictions restrict professional money managers from owning particular stocks. We characterize this systematic exclusion of stocks as segmentation and show that a specification accommodating the segmentation effect substantially improves the empirical fit of institutional demand. The adjusted R-squared increases substantially, the residuals are better behaved, and the dimensionality of institutions' demand for stock characteristics drops from a list of 8-10 standard characteristics (e.g., market cap, liquidity, index membership, volatility, beta) to just one: a stock's market capitalization. Our evidence identifies a prominent role for both implementation costs and agency costs as determinants of institutional segmentation: implementation costs bind for larger, less diversified, and higher-turnover institutions, while agency costs bind for smaller institutions and clienteles sensitive to fiduciary diligence.
In fact, we find that segmentation arises from a trade-off between implementation costs (which bind for larger institutions) and agency considerations (which bind for smaller institutions). Agency concerns dominate: characteristics relating to the agency hypothesis have far more explanatory power in identifying the cross-section of segmentation effects than characteristics relating to the implementation hypothesis. More importantly, we find evidence of an interior optimum with respect to an institution's scale, due to the counteracting effects of implementation and agency frictions. This conclusion matters for considerations of scale economies and diseconomies in investment management, because the agency story runs opposite to the conventional wisdom underlying scale arguments. We then explore three implications of segmentation for the equity market. First, our evidence suggests that institutional segmentation, coupled with growing institutional dominance in public equity markets, may have had a truncating effect on the universe of listed stocks: stocks predicted to fall outside institutions' investable universe were common prior to the 1990s but are now almost nonexistent, while stocks predicted to fall within institutions' investable universe have not declined over time. Second, institutional segmentation can lead to narrow investment opportunity sets, which limit money managers' ability to take advantage of profitable opportunities outside their investment segment. In this respect, we construct pricing factors that are feasible (ex ante) for institutions and benchmark their performance. We find evidence consistent with the demand-based asset pricing view: specifically, feasible return factors yield higher return premia and worsened institutional performance relative to standard benchmarks in an expanding institutional setting (pre-millennium). Third, we use our logistic specification to examine the effect of aggregated segmentation on institutions' portfolio returns. Our findings suggest that investment constraints cut off profitable opportunities and restrict institutions from generating alpha. In addition, we find that stocks with high (low) abnormal institutional ownership generate significant positive (negative) returns, suggesting that institutions' actions are informed.
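A hedged sketch of the kind of logistic specification described above: modeling whether a stock falls inside institutions' investable universe as a function of (log) market capitalization alone. The simulated data, the size cutoff, and the use of scikit-learn are illustrative assumptions, not the paper's actual estimation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Simulated log market caps; in the paper the single required input is
# a stock's market capitalization.
log_mktcap = rng.normal(loc=20.0, scale=2.0, size=5000)
# Invented data-generating process: probability of being "investable"
# rises steeply with size around an arbitrary cutoff of 19.
p_true = 1 / (1 + np.exp(-(log_mktcap - 19.0)))
investable = rng.binomial(1, p_true)

model = LogisticRegression().fit(log_mktcap.reshape(-1, 1), investable)
print("slope on log market cap:", model.coef_[0][0])
# Fitted probability of institutional investability for a small vs large stock.
print(model.predict_proba([[18.0], [22.0]])[:, 1])
```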
475

Automated Landing Site Evaluation for Semi-Autonomous Unmanned Aerial Vehicles

Klomparens, Dylan 27 October 2008 (has links)
A system is described for identifying obstacle-free landing sites for a vertical-takeoff-and-landing (VTOL) semi-autonomous unmanned aerial vehicle (UAV) from point cloud data obtained from a stereo vision system. The relatively inexpensive, commercially available Bumblebee stereo vision camera was selected for this study. A "point cloud viewer" computer program was written to analyze point cloud data obtained from 2D images transmitted from the UAV to a remote ground station. The program divides the point cloud data into segments, identifies the best-fit plane through the data for each segment, and performs an independent analysis on each segment to assess the feasibility of landing in that area. The program also rapidly presents the stereo vision information and analysis to the remote mission supervisor who can make quick, reliable decisions about where to safely land the UAV. The features of the program and the methods used to identify suitable landing sites are presented in this thesis. Also presented are the results of a user study that compares the abilities of humans and computer-supported point cloud analysis in certain aspects of landing site assessment. The study demonstrates that the computer-supported evaluation of potential landing sites provides an immense benefit to the UAV supervisor. / Master of Science
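The core geometric step, fitting a best-fit plane to each point cloud segment and scoring the segment's suitability, can be sketched as follows. The SVD-based least-squares fit is a standard technique; the tilt and roughness thresholds are invented here and are not the thesis's values.

```python
import numpy as np

def evaluate_segment(points, max_tilt_deg=10.0, max_roughness=0.05):
    """points: (N, 3) array of x, y, z samples from one grid segment.
    Returns (landable?, tilt in degrees, RMS roughness)."""
    centroid = points.mean(axis=0)
    # The normal of the least-squares plane is the right singular vector
    # associated with the smallest singular value of the centered points.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    tilt = np.degrees(np.arccos(abs(normal[2])))  # angle from the vertical
    dist = (points - centroid) @ normal           # signed distance to plane
    roughness = np.sqrt(np.mean(dist ** 2))       # RMS deviation from plane
    return tilt <= max_tilt_deg and roughness <= max_roughness, tilt, roughness

# Example: a gently sloped, slightly noisy patch passes the check.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 1, size=(500, 2))
z = 0.05 * xy[:, 0] + rng.normal(0, 0.01, 500)
ok, tilt, rough = evaluate_segment(np.column_stack([xy, z]))
print(ok, round(tilt, 2), round(rough, 4))
```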
476

An energy-efficient and scalable slot-based privacy homomorphic encryption scheme for WSN-integrated networks

Verma, Suraj, Pillai, Prashant, Hu, Yim Fun 04 1900 (has links)
Yes / With the advent of Wireless Sensor Networks (WSNs) and their immense popularity in a wide range of applications, security has been a major concern for these resource-constrained systems. Alongside security, WSNs are currently being integrated with existing technologies such as the Internet, satellite, WiMAX, and Wi-Fi in order to transmit data over long distances and hand over network load to more powerful devices. With the focus currently on integrating WSNs with existing technologies, security becomes a major concern. The main security requirement for WSN-integrated networks is end-to-end security combined with in-network processing techniques such as data aggregation. This can be achieved with homomorphic encryption schemes, which are computationally inexpensive but induce considerable communication overheads. This paper addresses the ID issue of the commonly used Castelluccia-Mykletun-Tsudik (CMT) [12] homomorphic scheme by proposing an ID-slotting mechanism that carries information about the security keys responsible for encrypting individual sensor data. The proposed scheme proves to be 93.5% lighter in terms of induced overheads and 11.86% more energy efficient, while also providing efficient WSN scalability compared to the existing scheme. The paper provides analytical results comparing the proposed scheme with the existing one, justifying that this modification to the existing scheme can prove highly efficient for resource-constrained WSNs.
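For context, a minimal sketch of the additively homomorphic CMT construction the paper builds on: each node encrypts as c = (m + k) mod M, an aggregator simply adds ciphertexts, and the sink subtracts the keystream of every contributing node. This is why the contributing node IDs must reach the sink, which is the overhead the proposed ID-slotting mechanism targets. The SHA-256 keystream and the parameters below are illustrative stand-ins, not the paper's exact instantiation.

```python
import hashlib

M = 2**32  # modulus, chosen large enough to hold the aggregate sum

def keystream(node_key: bytes, epoch: int) -> int:
    # Illustrative PRF; CMT assumes any secure stream cipher/PRF here.
    digest = hashlib.sha256(node_key + epoch.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big")

def encrypt(m: int, node_key: bytes, epoch: int) -> int:
    return (m + keystream(node_key, epoch)) % M       # c = (m + k) mod M

def aggregate(ciphertexts) -> int:
    return sum(ciphertexts) % M                        # additive aggregation

def decrypt_sum(c_agg: int, node_keys, epoch: int) -> int:
    # The sink must know WHICH nodes contributed, to subtract their keystreams.
    return (c_agg - sum(keystream(k, epoch) for k in node_keys)) % M

keys = [b"node-1", b"node-2", b"node-3"]
readings = [21, 19, 23]                                # e.g. temperature samples
c = aggregate(encrypt(m, k, epoch=7) for m, k in zip(readings, keys))
print(decrypt_sum(c, keys, epoch=7) == sum(readings))  # True
```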
477

Web Prefetching Techniques in Real Environments

DE LA OSSA PÉREZ, BERNARDO ANTONIO 10 May 2012 (has links)
This thesis studies the application of prefetching techniques to the World Wide Web (WWW) from a realistic and practical point of view. Prefetching is applied to the web to reduce the latency perceived by users: essentially, it consists of predicting and pre-processing users' next accesses. Until now, the available literature on web prefetching has concentrated on theoretical questions and has not considered some of the problems that appear when the technique is implemented under real conditions. Moreover, existing research works use simplified evaluation models that do not consider how practical aspects actually affect the implementation of a prefetching technique. In addition, barely a few works have used performance indexes that are relevant to users when evaluating the benefits prefetching can achieve. To overcome these three restrictions, Delfos has been developed: a web prefetching environment that implements prediction and prefetching in a real system, can be integrated into the web architecture without modifying standard web protocols, and is compatible with existing software. Delfos can also be used to evaluate and compare prefetching techniques and prediction algorithms, and to assist in the design of new ones, since it provides detailed statistical information about the experiments carried out. As an example, Delfos has been used to propose, test, and evaluate a new technique (Prediction at Prefetch, PAP) that considerably reduces user-perceived latency at no additional cost over the basic prefetching mechanism. The prediction algorithms proposed in the research literature that achieve the highest precision incur a high computational cost, which is a problem for their inclusion in real systems. To mitigate this drawback, this thesis proposes a new low-cost prediction algorithm, Referrer Graph (RG). / De La Ossa Pérez, BA. (2011). Web Prefetching Techniques in Real Environments [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/15574
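A minimal sketch, under our own assumptions, of prediction from a referrer graph: arcs accumulate referrer-to-resource request counts, and the prefetch hints are the most frequent successors of the current page. The real RG algorithm additionally distinguishes primary resources from embedded objects; the class below does not.

```python
from collections import defaultdict

class ReferrerGraph:
    def __init__(self):
        # referrer URI -> {requested URI: observed count}
        self.arcs = defaultdict(lambda: defaultdict(int))

    def observe(self, referrer: str, uri: str):
        """Train on one HTTP request carrying a Referer header."""
        if referrer:
            self.arcs[referrer][uri] += 1

    def predict(self, current: str, threshold=0.5, max_hints=3):
        """Return prefetch hints: successors whose conditional frequency
        from the current page exceeds the confidence threshold."""
        succ = self.arcs.get(current, {})
        total = sum(succ.values())
        ranked = sorted(succ.items(), key=lambda kv: kv[1], reverse=True)
        return [u for u, n in ranked[:max_hints] if total and n / total >= threshold]

g = ReferrerGraph()
for ref, uri in [("/", "/news"), ("/", "/news"), ("/", "/about"), ("/news", "/news/1")]:
    g.observe(ref, uri)
print(g.predict("/"))  # ['/news']: only /news clears the 0.5 confidence bar
```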
478

Accurate workload design for web performance evaluation

Peña Ortiz, Raúl 13 February 2013 (has links)
New web applications and services, increasingly popular in our daily lives, have completely changed the way users interact with the Web. In less than half a decade, the role played by users has evolved from mere passive consumers of information to active collaborators in the creation of the dynamic content typical of the current Web, and this trend is expected to grow and consolidate over time. This dynamic user behavior is one of the main keys to defining workloads suitable for accurately estimating the performance of web systems. Nevertheless, the intrinsic difficulty of characterizing user dynamism and applying it in a workload model means that many research works still employ workloads that are not representative of current web navigation. This doctoral thesis focuses on characterizing and reproducing, for performance evaluation studies, a more realistic type of web workload capable of mimicking the behavior of current web users. The state of the art in workload modeling and generation for web performance studies presents several gaps regarding models and software applications that represent the different levels of user dynamism. This motivates us to propose a more precise model and to develop a new workload generator based on it. Both proposals have been validated against a traditional web workload generation approach. To this end, a new experimental environment capable of reproducing traditional and dynamic web workloads has been developed by integrating the proposed generator with a commonly used benchmark. This doctoral thesis also analyzes and evaluates, for the first time to our knowledge, the impact of using dynamic workloads on the performance metrics. / Peña Ortiz, R. (2013). Accurate workload design for web performance evaluation [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/21054
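To make the contrast concrete, here is an illustrative generator (invented, not the thesis's software) in which each emulated user selects the next request from a per-page behavior model and pauses for a think time, so the request stream emerges from user dynamism rather than from replaying a fixed URL list. The transition probabilities and think times are arbitrary.

```python
import random
import time

BEHAVIOR = {  # page -> [(next_page, probability), ...]; rows sum to 1.0
    "home":         [("search", 0.6), ("post_comment", 0.2), ("home", 0.2)],
    "search":       [("results", 1.0)],
    "results":      [("home", 0.5), ("search", 0.5)],
    "post_comment": [("home", 1.0)],  # a write: users actively create content
}

def emulate_user(steps=10, mean_think=0.01):
    """Walk the behavior model, returning the generated request sequence."""
    page, trace = "home", []
    for _ in range(steps):
        trace.append(page)
        nxt, r, acc = None, random.random(), 0.0
        for candidate, p in BEHAVIOR[page]:   # sample the next navigation
            acc += p
            if r <= acc:
                nxt = candidate
                break
        time.sleep(random.expovariate(1 / mean_think))  # exponential think time
        page = nxt or "home"
    return trace

print(emulate_user())
```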
479

Performance evaluation of a brackish water reverse osmosis pilot-plant desalination process under different operating conditions: Experimental study

Ansari, M., Al-Obaidi, Mudhar A.A.R., Hadadian, Z., Moradi, M., Haghighi, A., Mujtaba, Iqbal 28 March 2022 (has links)
Yes / The input parameters of Reverse Osmosis (RO) play key roles in mass transport and in the process performance indicators. Several related studies can be found in the open literature; however, experimental research evaluating the influence of brackish water RO input parameters on the performance metrics, while justifying the interactions between them via a robust model, has not yet been addressed. This paper aims to design, construct, and experimentally evaluate the performance of a 50 m³/d RO pilot plant desalinating brackish water at Shahid Chamran University of Ahvaz, Iran. Water samples with salinities ranging from 1000 to 5000 ppm were fed to a semi-permeable membrane under operating pressures varying from 5 to 13 bar. By evaluating the permeate flux and brine flowrate, the permeate and brine salinities, the membrane water recovery, and the salt rejection, some consistent relations were derived. The results indicate that the performance of an RO unit depends largely on feed pressure and feed salinity. At a fixed feed concentration, an almost linear relationship was found between feed pressure and both the permeate and brine flowrates. Statistically, a feed pressure of 13 bar resulted in a maximum salt rejection of 98.8% at a minimum permeate concentration of 12 ppm. Moreover, a 73.3% reduction in permeate salinity and a 30.8% increase in brine salinity are reported when the feed pressure increases from 5 to 13 bar. Finally, it is concluded that the water transport coefficient is a function of feed pressure, salinity, and temperature; it is experimentally estimated to be 2.8552 L/(m² h bar).
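As a worked example of how the reported coefficient is used, the standard solution-diffusion relation Jw = A(ΔP − Δπ), with the paper's A = 2.8552 L/(m² h bar), gives the expected permeate flux at a given operating point. The osmotic-pressure rule of thumb below is our assumption, not a value from the study.

```python
# Solution-diffusion flux estimate at one operating point of the pilot plant.
A = 2.8552               # water transport coefficient, L/(m^2 h bar), from the paper
feed_ppm = 3000          # feed salinity within the tested 1000-5000 ppm range
dP = 13.0                # applied feed pressure, bar (the paper's upper setting)
dPi = 0.0008 * feed_ppm  # assumed ~0.0008 bar per ppm NaCl osmotic rule of thumb
Jw = A * (dP - dPi)      # Jw = A * (dP - dPi)
print(f"permeate flux ~ {Jw:.1f} L/(m^2 h)")  # ~30.3 L/(m^2 h) at these settings
```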
480

Measuring the efficiency of two stage network processes: a satisficing DEA approach

Mehdizadeh, S., Amirteimoori, A., Vincent, Charles, Behzadi, M.H., Kordrostami, S. 2020 March 1924 (has links)
No / Regular network data envelopment analysis (NDEA) models evaluate the performance of a set of decision-making units (DMUs) with a two-stage structure in the context of a deterministic data set. In the real world, however, observations may display stochastic behavior. To the best of our knowledge, despite the existing research on different data types, studies on two-stage processes with stochastic data are still very limited. This paper proposes a two-stage network DEA model with stochastic data. The stochastic two-stage network DEA model is formulated based on the satisficing DEA models of chance-constrained programming and on leader-follower concepts. Using the properties of the probability distribution and under the assumption of a single random factor in the data, the probabilistic form of the model is transformed into its equivalent deterministic linear programming model. In addition, the relationship between the two stages, as leader and follower respectively, is discussed at different confidence levels and under different aspiration levels. The proposed model is further applied to a real case concerning 16 commercial banks in China to confirm the applicability of the proposed approach at different confidence levels and under different aspiration levels.
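For orientation, a minimal sketch of the deterministic building block underlying such models: an input-oriented CCR efficiency score solved as a linear program. The paper's contributions (chance constraints on stochastic data and the leader-follower link between the two stages) are layered on top of LPs of this shape; the three-DMU bank data below are invented.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR score of DMU o.
    X: (m, n) inputs, Y: (s, n) outputs, columns are DMUs.
    Solves: min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # variables: [theta, lam_1..lam_n]
    A_in = np.c_[-X[:, [o]], X]                  # X @ lam - theta * x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]          # -Y @ lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n    # theta free, lambdas nonnegative
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x[0]

X = np.array([[2.0, 4.0, 8.0],                   # input 1 (e.g. employees)
              [3.0, 1.0, 2.0]])                  # input 2 (e.g. fixed assets)
Y = np.array([[1.0, 1.0, 2.0]])                  # stage output (e.g. deposits)
print([round(ccr_efficiency(X, Y, o), 3) for o in range(3)])
```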
