521

In silico modeling for uncertain biochemical data

Gusenleitner, Daniel. January 2009.
Analyzing and modeling data is a well-established research area, and a vast variety of methods has been developed over the last decades. Most of these methods assume fixed positions of data points; only recently has uncertainty in the data attracted attention as a potentially useful source of information. To provide deeper insight into this subject, this thesis addresses the following question: can information on the uncertainty of feature values be exploited to improve in silico modeling? To this end, a state-of-the-art random forest algorithm is implemented in MATLAB. In addition, three techniques for handling uncertain numeric features are presented and incorporated into modified versions of the random forest. To test the hypothesis, six real-world data sets were provided by AstraZeneca. The data describe biochemical features of chemical compounds, including the results of an Ames test, a widely used assay for determining the mutagenicity of chemical substances. Each data set contains a single uncertain numeric feature, represented as an expected value and an error estimate. The modified algorithms are then applied to the six data sets to obtain classifiers able to predict the outcome of an Ames test. The hypothesis is tested using a paired t-test, and the results reveal that information on uncertainty can indeed improve the performance of in silico models.
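The abstract does not spell out the three uncertainty-handling techniques, so the sketch below shows one generic possibility under stated assumptions: resample the uncertain feature from a Gaussian N(mu, sigma^2) on each training pass and average the resulting forests. The synthetic data, the Gaussian error model, and the use of scikit-learn in place of the thesis's MATLAB implementation are all illustrative assumptions, not the author's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Illustrative stand-in data: 200 compounds, 5 exact features and one
# uncertain numeric feature given as an expected value and an error estimate.
n = 200
X_exact = rng.normal(size=(n, 5))
mu = rng.normal(size=n)                      # expected values
sigma = rng.uniform(0.1, 0.5, size=n)        # per-compound error estimates
y = ((mu + X_exact[:, 0]) > 0).astype(int)   # toy binary "Ames outcome"

idx_train, idx_test = train_test_split(np.arange(n), random_state=0)

# Monte Carlo treatment of the uncertain feature (an assumed technique,
# not necessarily one of the thesis's three): train each forest on a fresh
# draw of the feature from N(mu, sigma^2) and average the predicted votes.
prob = np.zeros(len(idx_test))
n_draws = 10
for _ in range(n_draws):
    col = rng.normal(mu, sigma)              # one realisation of the feature
    X = np.column_stack([X_exact, col])
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[idx_train], y[idx_train])
    prob += clf.predict_proba(X[idx_test])[:, 1]
prob /= n_draws
print("mean predicted P(mutagenic):", round(prob.mean(), 3))
```

Averaging over draws lets the ensemble see the stated measurement error directly, rather than treating the expected value as exact.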
522

A quasi-random-walk to model a biological transport process

Keller, Peter; Roelly, Sylvie; Valleriani, Angelo. January 2013.
Transport molecules play a crucial role in cell viability. Among others, linear motors transport cargos along rope-like structures from one location of the cell to another in a stochastic fashion, each step of the motor, forwards or backwards, bridging a fixed distance. While moving along the rope, the motor can also detach, in which case it is lost. We give here a mathematical formalization of such dynamics as a random process extending the classical random walk with an absorbing state that models the detachment of the motor from the rope. We derive particular properties of such processes that have not been available before; our results include a description of the maximal distance reached from the starting point and of the position at which detachment takes place. Finally, we apply our theoretical results to an established model of the transport molecule Kinesin V.
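A minimal simulation of the process described here, a nearest-neighbour walk with an absorbing "detached" state, makes the two quantities of interest concrete. The step and detachment probabilities below are assumptions for illustration, not fitted Kinesin V parameters.

```python
import random

# Motor as a nearest-neighbour random walk with an absorbing "detached"
# state. Probabilities are illustrative assumptions: detach with P_DETACH,
# otherwise step forward with P_FWD or backward with the remainder.
P_DETACH, P_FWD = 0.05, 0.60
rng = random.Random(1)

def run_motor():
    pos, max_pos = 0, 0
    while True:
        u = rng.random()
        if u < P_DETACH:                  # absorption: motor leaves the rope
            return pos, max_pos
        pos += 1 if u < P_DETACH + P_FWD else -1
        max_pos = max(max_pos, pos)

runs = [run_motor() for _ in range(10_000)]
mean_detach = sum(p for p, _ in runs) / len(runs)
mean_max = sum(m for _, m in runs) / len(runs)
print(f"mean detachment position {mean_detach:.2f}, "
      f"mean maximal excursion {mean_max:.2f}")
```

The paper derives these distributions analytically; the simulation only illustrates the process being formalized.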
523

Distributed Random Set Theoretic Soft/Hard Data Fusion

Khaleghi, Bahador. January 2012.
Research on multisensor data fusion aims at providing the enabling technology to combine information from several sources into a unified picture. The literature on fusion of conventional data provided by non-human (hard) sensors is vast and well established. In contrast to conventional fusion systems, where input data are generated by calibrated electronic sensor systems with well-defined characteristics, research on soft data fusion considers combining human-generated data, expressed preferably in unconstrained natural language form. Fusion of soft and hard data is even more challenging, yet necessary in some applications, and has received little attention in the past. Being a rather new area of research, soft/hard data fusion is still at a fledgling stage, with even its central problems yet to be adequately defined and explored. This dissertation develops a framework for fusing both soft and hard data, with random set (RS) theory as the underlying mathematical foundation. Random set theory is an emerging theory within the data fusion community that, owing to its powerful representational and computational capabilities, is gaining increasing attention among data fusion researchers. Motivated by the unique characteristics of random set theory and by the main challenge of soft/hard data fusion, i.e. the need for a unifying framework capable of processing both unconventional soft data and conventional hard data, this dissertation argues in favor of a random set theoretic approach as a first step towards realizing a soft/hard data fusion framework. Several challenging problems related to soft/hard fusion systems are addressed in the proposed framework. First, an extension of the well-known Kalman filter within random set theory, called the Kalman evidential filter (KEF), is adopted as a common data processing framework for both soft and hard data. Second, a novel ontology (syntax plus semantics) is developed to allow modeling of soft (human-generated) data, with target tracking as the application. Third, as soft/hard data fusion is mostly aimed at large information-processing networks, a new approach is proposed to enable distributed estimation from soft as well as hard data, addressing the scalability requirement of such fusion systems. Fourth, a method for modeling trust in human agents is developed, which enables the fusion system to protect itself from erroneous or misleading soft data by discounting such data on the fly. Fifth, leveraging recent developments in the RS-theoretic data fusion literature, a novel soft data association algorithm is developed and deployed to extend the proposed target tracking framework to the multi-target case. Finally, the multi-target tracking framework is complemented by a distributed classification approach applicable to target classes described with soft human-generated data. In addition, this dissertation presents a novel data-centric taxonomy of data fusion methodologies: several categories of fusion algorithms are identified and discussed based on the data-related challenges they address. It is intended to provide the reader with a generic and comprehensive view of the contemporary data fusion literature, and could also serve as a reference for data fusion practitioners, offering design guidelines, in terms of algorithm choice, for the specific data-related challenges expected in a given application.
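The KEF itself is random-set-theoretic and well beyond a short sketch, but the classical Kalman filter it extends is worth seeing concretely, since it is the common baseline for processing hard sensor data in tracking. The scalar model and all noise variances below are assumptions for illustration only.

```python
import numpy as np

# Classical scalar Kalman filter (the hard-data baseline the KEF extends).
# F, Q, H, R are assumed model parameters: state transition, process noise,
# observation map, and measurement noise variance.
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.25):
    x_pred = F * x                            # predict state
    P_pred = F * P * F + Q                    # predict variance
    K = P_pred * H / (H * P_pred * H + R)     # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)     # update with hard measurement z
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
truth, x, P = 0.0, 0.0, 1.0
for _ in range(50):
    truth += rng.normal(0, 0.1)               # true target drifts
    z = truth + rng.normal(0, 0.5)            # noisy hard-sensor reading
    x, P = kalman_step(x, P, z)
print(f"estimate {x:.3f} vs truth {truth:.3f} (posterior variance {P:.3f})")
```

The dissertation's contribution is replacing the point measurement z with random-set observations, so that natural-language (soft) reports can enter the same update loop.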
524

Vision-Based Localization Using Reliable Fiducial Markers

Stathakis, Alexandros. 05 January 2012.
Vision-based positioning systems are founded primarily on a simple image-processing technique: identifying visually significant key points in an image and relating them to a known coordinate system in the scene. Fiducial markers provide the scene with a number of specific key points, or features, that computer vision algorithms can quickly identify within a captured image. This thesis proposes a reliable vision-based positioning system that utilizes a unique pseudo-random fiducial marker. The marker itself offers 49 distinct feature points for position estimation. The designed marker is detected through an integrated process of adaptive thresholding, k-means clustering, color classification, and data verification. The ultimate goal of such a system is indoor localization on low-cost autonomous mobile platforms.
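The first three detection stages named above can be sketched with standard OpenCV calls. The synthetic two-dot image, the colours, and the cluster count are assumptions standing in for the actual 49-point marker, whose layout the abstract does not describe; data verification against the known layout would follow as a fourth stage.

```python
import cv2
import numpy as np

# Synthetic stand-in image with two coloured "feature points".
img = np.full((200, 200, 3), 255, np.uint8)
cv2.circle(img, (60, 60), 10, (0, 0, 255), -1)    # red dot (BGR)
cv2.circle(img, (140, 90), 10, (255, 0, 0), -1)   # blue dot

# 1. Adaptive thresholding isolates dark regions under uneven lighting.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                             cv2.THRESH_BINARY_INV, 31, 10)

# 2. k-means clustering groups foreground pixels into candidate key points.
pts = np.column_stack(np.nonzero(mask)).astype(np.float32)   # (row, col)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pts, 2, None, criteria, 5,
                                cv2.KMEANS_RANDOM_CENTERS)

# 3. Colour classification tags each cluster by the pixel at its centre.
for c in centers:
    y, x = int(c[0]), int(c[1])
    print("cluster centre", (x, y), "BGR colour", img[y, x].tolist())
```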
525

Scalable and Reliable Searching in Unstructured Peer-to-peer Systems

Ioannidis, Efstratios. 01 March 2010.
The subject of this thesis is searching in unstructured peer-to-peer systems. Such systems have been used for a variety of different applications, including file-sharing, content distribution and video streaming. These applications have been very popular; they contribute to a large percentage of today's Internet traffic and their users typically number in the millions. By searching, we refer to the process of locating content stored by peers. Searching in unstructured peer-to-peer systems poses a challenge because of high churn: both the topology and the content stored by peers can change quickly as peers arrive and depart, while the network formed under this churn process can be arbitrary at any point in time. As a result, a search mechanism must operate without any a priori assumptions on this dynamic topology. Ideally, a search mechanism should be scalable: as, typically, peers have limited bandwidth, the traffic generated by queries should not grow significantly as the peer population increases. Moreover, a search mechanism should also be reliable: if certain content is in the system, searching should locate it with reasonable guarantees. These two goals can be conflicting, as generating more queries increases a mechanism's reliability but decreases its scalability. Hence, a fundamental question regarding searching in unstructured systems is whether a mechanism can exhibit both properties, despite the network's dynamic and arbitrary nature. In this thesis, we show this is indeed the case, by proposing a novel mechanism that is both scalable and reliable. This is shown under a mathematical model that captures the evolution of both network and content in an unstructured system, but is also verified through simulations. To the best of our knowledge, this is the first provably scalable and reliable search mechanism for unstructured peer-to-peer systems. In addition to the above problem, we also consider a hybrid peer-to-peer system, in which the peer-to-peer network co-exists with a central server. The purpose of this hybrid architecture is to reduce the server's traffic by delegating part of it to its clients, i.e. the peers: a peer wishing to retrieve certain content first propagates a query over the peer-to-peer network, and downloads the content from the server only if the query fails. This hybrid architecture can be used to partially decentralize a content distribution server, a search engine, an online encyclopedia, etc. The trade-off between scalability and reliability translates, in the hybrid case, to a trade-off between the peer and the server traffic loads. We propose a search mechanism under which both loads remain bounded as the peer population grows. This is surprising, and has an important implication: one can construct hybrid peer-to-peer systems that can handle traffic generated by a large (unbounded) peer population, even when both the server and peer bandwidth capacities are limited. Again, this is proved under a model capturing the hybrid system's dynamic nature and verified through simulations. To the best of our knowledge, our work is the first to show that hybrid systems with such properties exist.
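The scalability/reliability trade-off can be made tangible with a toy experiment: a query that does a TTL-limited random walk over an arbitrary overlay succeeds more often with a larger TTL, at the cost of more per-query traffic. The graph model, TTL values, and replication rate below are assumptions for illustration; they are not the thesis's actual mechanism, which comes with provable guarantees under churn.

```python
import random
import networkx as nx

rng = random.Random(0)
G = nx.gnp_random_graph(1000, 0.01, seed=0)      # arbitrary overlay stand-in
holders = set(rng.sample(sorted(G.nodes), 10))   # peers storing the content

def query(start, ttl):
    """TTL-limited random walk; returns hop count on success, None on failure
    (in the hybrid setting, failure means falling back to the server)."""
    node = start
    for hops in range(ttl):
        if node in holders:
            return hops
        nbrs = list(G.neighbors(node))
        if not nbrs:
            return None
        node = rng.choice(nbrs)
    return None

for ttl in (10, 50, 250):
    hits = sum(query(rng.randrange(1000), ttl) is not None for _ in range(500))
    print(f"TTL {ttl:4d}: success rate {hits / 500:.2f}")
```

Raising the TTL buys reliability with traffic, which is exactly the tension the thesis resolves with a mechanism that is scalable and reliable at once.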
526

The Role of Amenities in the Location Decisions of Ph.D. Recipients in Science and Engineering

Sumell, Albert Joseph. 09 January 2006.
Location-specific amenities have been shown to play an increasingly important role in individual migration decisions. The role certain amenities play in the location decisions of the highly educated may be the cause of persistent regional differences in certain types of human capital, and consequently in regional productivity. This dissertation examines the determinants of the location decisions of new Ph.D. recipients in science and engineering (S&E). A discrete choice random utility model of the city location decisions of new Ph.D.s is developed to estimate preferences for city attributes as well as willingness to pay for improved amenity quality. By estimating the value Ph.D.s place on various urban amenities, the results of this research help inform policymakers as to their ability (or inability) to attract and retain highly educated workers to their region through public investment in amenity quality. To link the choice of city with the geographic attributes of cities, a unique micro dataset is used which reports the planned employment city location of S&E Ph.D. recipients in the U.S. at the time of degree. The primary data comes from the 1997-1999 Survey of Earned Doctorates (SED), administered by Science Resources Statistics of the National Science Foundation. The SED is given to all new doctorate recipients in the U.S. at or near the time of degree, and has a response rate over 90%. The application focuses on approximately 23,000 new Ph.D.s who received their degree in one of twelve S&E fields during the period 1997-1999, and who had made a definite commitment to an employer in a known U.S. metropolitan area. The results consistently suggest that natural amenities, such as summer or winter temperatures, play a larger role in the location decisions of new S&E Ph.D.s than reproducible amenities, such as crime or air quality. The implication is that policymakers have only a limited ability to improve the composition of their workforce through amenity investment. The results also indicate that the influence of amenities on location choice is related to a number of observable characteristics such as age, race, marital status, citizenship, and Ph.D.s’ previous migration behavior.
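The discrete choice random utility model at the core of this analysis has a compact form: Ph.D. i chooses the city j maximizing U_ij = x_j'beta + eps_ij, and with i.i.d. Gumbel shocks this yields conditional-logit choice probabilities. The sketch below illustrates the mechanics; the amenity columns and coefficient values are invented for illustration and are not SED estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cities = 30
# Assumed amenity columns: winter temp, summer temp, crime rate, air quality.
X = rng.normal(size=(n_cities, 4))
beta = np.array([0.8, 0.5, -0.2, 0.1])   # natural amenities weighted higher,
                                         # mirroring the dissertation's finding

v = X @ beta                             # systematic utility of each city
p = np.exp(v - v.max())
p /= p.sum()                             # conditional-logit choice probabilities
print("most attractive city:", p.argmax(), "with probability", round(p.max(), 3))

# One simulated chooser: add i.i.d. Gumbel shocks and take the argmax.
eps = rng.gumbel(size=n_cities)
print("city chosen by one simulated Ph.D.:", int((v + eps).argmax()))
```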
527

Low Order Modeling of Seemingly Random Systems with Application to Stock Market Securities

Surendran, Arun. 14 March 2013.
Even simple observation of stock price graphs can reveal dominant patterns. In our work, we refer to such recurring, dominant patterns as "coherent structures", a term borrowed from the theory of turbulence in fluid dynamics. Stock price performance exhibits coherent structures, which by definition make it non-random, even though a price-versus-time graph may look entirely chaotic to the naked eye. A novel low-order modeling technique for such seemingly random systems has been developed. Though stock market data are used to formulate and verify the technique, its applicability in diverse fields is also demonstrated. The dissertation discusses the salient features of the novel technique along with a dynamic-system analogy. The technique alleviates many of the significant limitations associated with traditional methods such as Fourier analysis and digital filters. Applications of the technique to a nonlinear dynamical system and to meteorological data are presented, alongside the primary application to stock market securities.
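The abstract does not disclose the novel technique itself, but the fluid-dynamics analogy it invokes has a standard counterpart: extracting dominant coherent structures from an ensemble of signals via the singular value decomposition (the proper orthogonal decomposition used in turbulence). The sketch below applies that standard method to synthetic price-like paths; it is a point of comparison, not the dissertation's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 250)
# 20 synthetic "securities": a shared trend plus a shared oscillation
# (the planted coherent structures) plus noise.
paths = (np.outer(rng.normal(1.0, 0.2, 20), 2 * t)
         + np.outer(rng.normal(0.5, 0.1, 20), np.sin(8 * np.pi * t))
         + 0.1 * rng.normal(size=(20, 250)))

mean_path = paths.mean(axis=0)
U, s, Vt = np.linalg.svd(paths - mean_path, full_matrices=False)
energy = s**2 / (s**2).sum()
print("variance captured by 2 modes:", round(energy[:2].sum(), 3))

# Rank-2 low-order model: keep only the two dominant structures.
low_order = U[:, :2] * s[:2] @ Vt[:2] + mean_path
```

A low-order model in this sense keeps only the few modes that carry most of the variance, discarding the apparently random remainder.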
528

Leakage Resilience and Black-box Impossibility Results in Cryptography

Juma, Ali. 31 August 2011.
In this thesis, we present constructions of leakage-resilient cryptographic primitives, and we give black-box impossibility results for certain classes of constructions of pseudo-random number generators. The traditional approach for preventing side-channel attacks has been primarily hardware-based. Recently, there has been significant progress in developing algorithmic approaches for preventing such attacks. These algorithmic approaches involve modeling side-channel attacks as leakage on the internal state of a device; constructions secure against such leakage are leakage-resilient. We first consider the problem of storing a key and computing on it repeatedly in a leakage-resilient manner. For this purpose, we define a new primitive called a key proxy. Using a fully-homomorphic public-key encryption scheme, we construct a leakage-resilient key proxy. We work in the "only computation leaks" leakage model, tolerating a logarithmic number of bits of polynomial-time computable leakage per computation and an unbounded total amount of leakage. We next consider the problem of verifying that a message sent over a public channel has not been modified, in a setting where the sender and the receiver have previously shared a key, and where the adversary controls the public channel and is simultaneously mounting side-channel attacks on both parties. Using only the assumption that pseudo-random generators exist, we construct a leakage-resilient shared-private-key authenticated session protocol. This construction tolerates a logarithmic number of bits of polynomial-time computable leakage per computation and an unbounded total amount of leakage, occurring on the entire state, input, and randomness of the party performing the computation. Finally, we consider the problem of constructing a large-stretch pseudo-random generator from a one-way permutation or from a smaller-stretch pseudo-random generator. The standard approach involves repeatedly composing the given object with itself. We provide evidence that this approach is necessary: we consider three classes of constructions of pseudo-random generators from pseudo-random generators of smaller stretch or from one-way permutations, and for each class we give a black-box impossibility result demonstrating a contrast between the stretch achievable by adaptive and by non-adaptive black-box constructions.
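The "repeated composition" the abstract refers to is the classical sequential construction: from a generator G stretching n bits to n+1, iterate G, output one piece per step, and feed the remainder back in as the next seed. Each call to G depends on the previous one, which is what makes the construction adaptive. The sketch below illustrates the mechanics; the SHA-256-based G is an insecure toy stand-in, not a provably secure generator, and it emits whole bytes rather than single bits for brevity.

```python
import hashlib

N = 16  # seed length in bytes

def G(seed: bytes) -> bytes:
    """Toy stand-in generator stretching N bytes to N+1 bytes.
    A real instantiation would use a provably secure PRG, not a hash."""
    return hashlib.sha256(seed).digest()[: N + 1]

def stretch(seed: bytes, m: int) -> bytes:
    """Adaptive (sequential) composition: output one byte per call to G,
    feeding the remaining N bytes back in as the next seed."""
    out = bytearray()
    state = seed
    for _ in range(m):
        block = G(state)
        out.append(block[0])   # emit one byte (a single bit, classically)
        state = block[1:]      # remaining N bytes become the next seed
    return bytes(out)

print(stretch(b"\x00" * N, 8).hex())
```

The thesis's impossibility results say, roughly, that non-adaptive black-box constructions, where all calls to G could be prepared in advance, cannot match the stretch this sequential scheme achieves.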
