1 |
Robust optimization with applications in maritime inventory routing. Zhang, Chengliang. 27 May 2016.
In recent years, the importance of incorporating uncertainty into planning models for logistics and transportation systems has been widely recognized in the Operations Research and transportation science communities. Maritime transportation, as a major mode of transport in the world, is subject to a wide range of disruptions at the strategic, tactical and operational levels. This thesis is mainly concerned with the development of robust planning strategies that can mitigate the effects of some major types of disruptions for an important class of optimization problems in the shipping industry. Such problems arise in the creation and negotiation of long-term delivery contracts with customers who require on-time deliveries of high-value goods throughout the year. In this thesis, we consider disruptions that can increase travel times between ports and ultimately affect one or more scheduled deliveries to the customers. Computational results show that our integrated solution procedure and robust planning strategies can generate delivery plans that are both economical and robust against uncertain disruptions.
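As a hypothetical illustration of the kind of feasibility check that robust planning strategies of this type rely on (a sketch under a budgeted-uncertainty assumption, with made-up numbers, not the model developed in the thesis), the Python snippet below tests whether a fixed sequence of sailing legs still meets a delivery deadline when at most a given number of legs suffer their worst-case travel-time increase.

def worst_case_arrival(leg_times, leg_delays, budget):
    # Worst-case total travel time when at most `budget` legs are hit by
    # their maximum delay (budgeted uncertainty): add the largest delays.
    return sum(leg_times) + sum(sorted(leg_delays, reverse=True)[:budget])

def delivery_is_robust(leg_times, leg_delays, deadline, budget):
    # True if the delivery still meets its deadline under the worst
    # admissible combination of travel-time disruptions.
    return worst_case_arrival(leg_times, leg_delays, budget) <= deadline

# Hypothetical example: three sailing legs (days) with possible delays.
print(delivery_is_robust([4.0, 6.0, 5.0], [1.0, 2.0, 1.5], deadline=18.0, budget=2))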
|
2 |
Airline Integrated Planning and Operations. Gao, Chunhua. 09 May 2007.
Efficient integrated/robust planning and recovery models were studied. The research focus was to integrate fleet assignment and crew scheduling and, in addition, to provide solutions robust to real-time operations. The contributions include: (1) To understand how the schedule development and fleet assignment stages influence crew scheduling performance, schedule analysis methods were proposed to evaluate the crew friendliness of a schedule for a given fleet. (2) To meet the computational challenges of crew scheduling in integrated planning, a duty flow model was proposed which can efficiently find suboptimal legal pairing solutions. (3) A new robust crew scheduling method based on spoke purity was proposed. Computational results indicated that, with little or no extra cost, more robust crew pairing solutions can be expected. (4) By imposing station purity, an integrated and robust planning model which integrates fleet assignment and crew connections was proposed. The impact of crew base purities and fleet purities on fleet assignment model (FAM) profit, crew scheduling, and computational efficiency was investigated. (5) An airline integrated recovery method was studied. A recovery scope for integrated recovery was proposed to limit the ripple effect caused by disruptions. Based on the defined recovery scope, a new integrated recovery model and a Benders decomposition solution approach were studied.
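As a rough sketch of the station purity idea in contribution (4) (an illustration with hypothetical data and a made-up helper, not the thesis's formulation), the snippet below counts how many distinct fleet types serve each station under a fleet assignment and flags stations exceeding a purity limit; constraining this count is what pushes the fleet assignment toward crew-friendly solutions.

from collections import defaultdict

def station_fleet_purity(assignments, max_fleets_per_station=1):
    # assignments: list of (origin, destination, fleet_type) per flight leg.
    # Returns the fleet types seen at each station and the stations that
    # violate the purity limit.
    fleets_at = defaultdict(set)
    for origin, destination, fleet in assignments:
        fleets_at[origin].add(fleet)
        fleets_at[destination].add(fleet)
    violations = {s: sorted(f) for s, f in fleets_at.items()
                  if len(f) > max_fleets_per_station}
    return dict(fleets_at), violations

# Hypothetical legs; in practice the purity limit is imposed on spoke
# stations, with the hub typically exempt.
legs = [("ATL", "MCO", "B757"), ("MCO", "ATL", "B757"),
        ("ATL", "BHM", "MD88"), ("BHM", "ATL", "B737"),
        ("ATL", "JFK", "B757")]
_, impure = station_fleet_purity(legs, max_fleets_per_station=1)
print(impure)   # stations served by more than one fleet type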
|
3 |
Real-time Dynamic Programming for Markov Decision Processes with Imprecise Probabilities. Dias, Daniel Baptista. 28 November 2014.
In sequential decision-making problems modeled as Markov Decision Processes (MDPs), it may not be possible to obtain an exact measure of the state transition probabilities. Markov Decision Processes with Imprecise Transition Probabilities (MDP-IPs) were introduced to address this situation. However, while MDP-IPs are a robust framework for real-world planning applications, their solutions are very time-consuming in practice. Aiming to improve these solutions, previous work proposed efficient synchronous dynamic programming algorithms for MDP-IPs with a factored representation of the probabilistic transition and reward functions, called factored MDP-IPs. However, when the initial state of a Stochastic Shortest Path problem (SSP MDP) is given, those solutions do not exploit this information. This work introduces the Stochastic Shortest Path problem with Imprecise Probabilities (SSP MDP-IP), in both enumerative and factored form. An asynchronous dynamic programming algorithm for enumerative SSP MDP-IPs with interval-based imprecision was proposed by Buffet and Aberdeen (2005). In general, however, a problem is given in factored form, i.e., in terms of state variables, and in that case the algorithm no longer applies, even assuming interval-based imprecision over the variables, because the joint transition probabilities become multilinear in the imprecise parameters. We show that factored SSP MDP-IPs are more expressive than enumerative ones, and that moving from the enumerative SSP MDP-IP to the general factored case turns the objective of the Bellman backup from a linear into a nonlinear function. We also propose enumerative algorithms, called RTDP-IP (Real-time Dynamic Programming with Imprecise Transition Probabilities), LRTDP-IP (Labeled Real-time Dynamic Programming with Imprecise Transition Probabilities), SSiPP-IP (Short-Sighted Probabilistic Planner with Imprecise Transition Probabilities) and LSSiPP-IP (Labeled Short-Sighted Probabilistic Planner with Imprecise Transition Probabilities), as well as factored algorithms called factRTDP-IP (factored RTDP-IP) and factLRTDP-IP (factored LRTDP-IP). These algorithms are evaluated against the synchronous dynamic programming algorithms in terms of solution convergence time and scalability.
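To make the linear-to-nonlinear point concrete, here is a minimal Python sketch (an illustration under an interval-imprecision assumption, not the implementation evaluated in this work) of a pessimistic Bellman backup for an enumerative SSP MDP-IP and one RTDP-style trial built on it. The model callbacks (actions, cost, succ, lower, upper) are assumed interfaces supplied by the caller; the inner maximization of expected cost over an interval credal set is solved greedily, which is exactly the step that stops being a linear program in the factored case.

import random

def worst_case_expectation(values, lower, upper):
    # Solve max_p sum_s p[s]*values[s] subject to
    # lower[s] <= p[s] <= upper[s] and sum_s p[s] == 1 (assumed feasible).
    # Greedy: start from the lower bounds and push the remaining probability
    # mass toward the successors with the highest cost-to-go first.
    p = list(lower)
    slack = 1.0 - sum(lower)
    for s in sorted(range(len(values)), key=lambda i: values[i], reverse=True):
        extra = min(upper[s] - lower[s], slack)
        p[s] += extra
        slack -= extra
    return p, sum(p[s] * values[s] for s in range(len(values)))

def robust_backup(state, V, actions, cost, succ, lower, upper):
    # Pessimistic Bellman backup: min over actions of the worst-case expected cost.
    best_action, best_q = None, float("inf")
    for a in actions(state):
        nxt = succ(state, a)                      # list of successor states
        vals = [V.get(s2, 0.0) for s2 in nxt]     # 0 is an admissible heuristic
        _, wc = worst_case_expectation(vals, lower(state, a), upper(state, a))
        q = cost(state, a) + wc
        if q < best_q:
            best_action, best_q = a, q
    return best_action, best_q

def rtdp_ip_trial(s0, goal, V, actions, cost, succ, lower, upper, max_steps=100):
    # One RTDP-IP trial: act greedily, back up the visited state, and sample
    # the next state from the worst-case distribution found by the backup.
    s = s0
    for _ in range(max_steps):
        if goal(s):
            break
        a, q = robust_backup(s, V, actions, cost, succ, lower, upper)
        V[s] = q
        nxt = succ(s, a)
        p, _ = worst_case_expectation([V.get(s2, 0.0) for s2 in nxt],
                                      lower(s, a), upper(s, a))
        s = random.choices(nxt, weights=p)[0]
    return V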
|
4 |
Factored Markov Decision Processes with Imprecise Transition Probabilities. Delgado, Karina Valdivia. 19 January 2010.
In general, when modeling real-world probabilistic planning problems with the framework of Markov Decision Processes (MDPs), it is difficult to obtain an exact estimate of the transition probabilities. Uncertainty arises naturally in the specification of a domain, for example during the elicitation of transition probabilities from an expert or from data observed through sampling techniques, or from non-stationary transition distributions due to insufficient domain knowledge. In order to obtain a robust policy under this transition uncertainty, Markov Decision Processes with Imprecise Transition Probabilities (MDP-IPs) have been used to model such scenarios. Unfortunately, although several solution algorithms exist for MDP-IPs, they often require external calls to optimization routines that can be extremely costly. To address this deficiency, this thesis introduces the factored MDP-IP and proposes efficient mathematical programming and dynamic programming methods that exploit the structure of the application domain. The mathematical programming method yields efficient approximate solutions for factored MDP-IPs, extending previous linear programming approaches for factored MDPs; it is based on a multilinear formulation for robust approximations of the state value function and exploits the factored representation of an MDP-IP, reducing solution time by orders of magnitude relative to previously proposed non-factored approaches. The second method, based on dynamic programming, addresses the computational bottleneck of existing dynamic programming solutions for MDP-IPs: the need to repeatedly solve nonlinear constrained optimization problems. We show how to represent the value function compactly using a new data structure called Parameterized Algebraic Decision Diagrams, and how to apply approximation techniques to drastically reduce the computational overhead of the calls to a nonlinear optimizer, producing approximately optimal solutions with bounded error. Our results show up to two orders of magnitude speedup over traditional flat (enumerative) dynamic programming approaches and up to an order of magnitude speedup over the extension of approximate value iteration techniques for factored MDPs to MDP-IPs, while producing the lowest error among all approximation algorithms evaluated.
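As a small hypothetical example (toy data, not the thesis's representation) of why factored imprecision leads to nonlinear optimization: when each state variable has its own imprecise CPT parameter, the joint transition probability is a product of per-variable factors, so the expected next-state value inside the Bellman backup becomes a multilinear function of the parameters rather than a linear one.

# Hypothetical factored transition for two binary next-state variables X1', X2'.
# Each variable's CPT entry is an imprecise parameter p_i lying in an interval.
intervals = {"p1": (0.6, 0.8), "p2": (0.3, 0.5)}

def joint_prob(next_state, params):
    # P(X1'=x1, X2'=x2 | s, a) as a product of per-variable factors.
    # next_state = (x1, x2); params = {'p1': value, 'p2': value}.
    x1, x2 = next_state
    f1 = params["p1"] if x1 == 1 else 1.0 - params["p1"]
    f2 = params["p2"] if x2 == 1 else 1.0 - params["p2"]
    return f1 * f2   # multilinear in (p1, p2): linear in each, but not jointly

def expected_next_value(V, params):
    # Expected next-state value: a multilinear function of the parameters,
    # so optimizing it over the parameter set is no longer a linear program.
    states = [(x1, x2) for x1 in (0, 1) for x2 in (0, 1)]
    return sum(joint_prob(s, params) * V[s] for s in states)

V = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 2.0, (1, 1): 5.0}   # toy value function
print(expected_next_value(V, {"p1": 0.6, "p2": 0.3}))
print(expected_next_value(V, {"p1": 0.8, "p2": 0.5}))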
|
5 |
Robust optimization of radiation therapy accounting for geometric uncertainty. Fredriksson, Albin. January 2013.
Geometric errors may compromise the quality of radiation therapy treatments. Optimization methods that account for errors can reduce their effects. The first paper of this thesis introduces minimax optimization to account for systematic range and setup errors in intensity-modulated proton therapy. The minimax method optimizes the worst case outcome of the errors within a given set. It is applied to three patient cases and shown to yield improved target coverage robustness and healthy structure sparing compared to conventional methods using margins, uniform beam doses, and density override. Information about the uncertainties enables the optimization to counterbalance the effects of errors. In the second paper, random setup errors of uncertain distribution (in addition to the systematic range and setup errors) are considered in a framework that enables scaling between expected value and minimax optimization. Experiments on a phantom show that the best and mean case tradeoffs between target coverage and critical structure sparing are similar between the methods of the framework, but that the worst case tradeoff improves with conservativeness. Minimax optimization only considers the worst case errors. When the planning criteria cannot be fulfilled for all errors, this may have an adverse effect on the plan quality. The third paper introduces a method for such cases that modifies the set of considered errors to maximize the probability of satisfying the planning criteria. For two cases treated with intensity-modulated photon and proton therapy, the method increased the number of satisfied criteria substantially. Grasping for a little less sometimes yields better plans. In the fourth paper, the theory for multicriteria optimization is extended to incorporate minimax optimization. Minimax optimization is shown to better exploit spatial information than objective-wise worst case optimization, which has previously been used for robust multicriteria optimization. The fifth and sixth papers introduce methods for improving treatment plans: one for deliverable Pareto surface navigation, which improves upon the Pareto set representations of previous methods; and one that minimizes healthy structure doses while constraining the doses of all structures not to deteriorate compared to a reference plan, thereby improving upon plans that have been reached with too weak planning goals.
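A minimal numerical sketch of the minimax idea in the first paper (illustrative only, with made-up random dose-influence matrices; the thesis works with clinical dose criteria and proper solvers): each error scenario k has its own dose mapping D_k, and the nonnegative fluence weights x are chosen to minimize the largest scenario penalty via a projected subgradient loop.

import numpy as np

rng = np.random.default_rng(0)
n_vox, n_beamlets, n_scen = 40, 12, 5
prescription = np.ones(n_vox)

# Hypothetical dose-influence matrices, one per error scenario (e.g. setup shifts).
D = [np.abs(rng.normal(1.0, 0.3, size=(n_vox, n_beamlets))) / n_beamlets
     for _ in range(n_scen)]

def penalty(x, Dk):
    # Quadratic deviation of the delivered dose from the prescription.
    r = Dk @ x - prescription
    return float(r @ r) / n_vox

x = np.full(n_beamlets, 1.0)           # nonnegative fluence weights
step = 0.5
for it in range(500):
    worst = max(range(n_scen), key=lambda k: penalty(x, D[k]))
    # A subgradient of the max-penalty at x is the gradient of the worst scenario.
    grad = 2.0 * D[worst].T @ (D[worst] @ x - prescription) / n_vox
    x = np.maximum(x - step * grad, 0.0)   # project onto x >= 0
    step *= 0.995                          # slowly shrink the step size

print("worst-case penalty:", max(penalty(x, Dk) for Dk in D))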
|
6 |
Full Automation of Air Traffic Management in High Complexity Airspace. Ehrmanntraut, Rüdiger. 20 May 2010.
The thesis is that automation of en-route Air Traffic Management in high complexity airspace can be achieved by combining automated tactic planning over a look-ahead horizon of up to two hours with automated tactic conflict resolution functions. The literature review reveals that no significant results have yet been obtained and that full automation could be approached through a complementary integration of automated tactic resolution AND planning. The focus therefore shifts to ‘planning for capacity’ and ‘planning for resolution’, and also – but not only – to ‘resolution’.
The work encompasses a theoretical part on planning and several small-scale studies of an empirical, mathematical or simulated nature.
The theoretical part of the thesis, on planning under uncertainties, attempts to conceive a theoretical model that abstracts the specificities of planning in Air Traffic Management into a generic planning model. The resulting abstract model treats entities such as the planner, the strategy, the plan and the actions, always considering the impact of uncertainties. The work innovates in specifying many links from the theory to its application in air traffic management planning, and especially to the new field of tactical capacity management.
The second main part of the thesis comprises smaller self-contained works on different aspects of the concept, grouped into a section on complexity, another on tactic planning actions, and a last one on planners. The studies cover empirical measures of conflicts and conflict densities to gain a better understanding of the complexity of air traffic; traffic organisation using tactical manoeuvres such as speed control, lateral offset and tactical direct routing, evaluated with fast-time simulation; and airspace design, such as sector optimisation and dynamic sectorisation, using optimisation techniques.
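As a hypothetical illustration of the basic computation behind such conflict counts and densities (not the models used in the thesis), the snippet below predicts the horizontal closest point of approach for two aircraft on straight, constant-speed trajectories and flags a conflict if they come within a separation minimum inside the look-ahead horizon.

import numpy as np

def predicted_conflict(p1, v1, p2, v2, separation_nm=5.0, horizon_min=120.0):
    # p1, p2: current positions (nm); v1, v2: velocities (nm per minute).
    # Returns (conflict?, time of closest approach in minutes, miss distance in nm)
    # for straight-line, constant-speed trajectories.
    dp = np.asarray(p1, float) - np.asarray(p2, float)
    dv = np.asarray(v1, float) - np.asarray(v2, float)
    if np.allclose(dv, 0.0):
        t_cpa = 0.0                      # same velocity: distance never changes
    else:
        t_cpa = -float(dp @ dv) / float(dv @ dv)
        t_cpa = min(max(t_cpa, 0.0), horizon_min)
    miss = float(np.linalg.norm(dp + t_cpa * dv))
    return miss < separation_nm, t_cpa, miss

# Two aircraft converging (roughly 480 kt and 420 kt); a conflict is
# predicted about five minutes ahead.
print(predicted_conflict(p1=(0, 0), v1=(8.0, 0.0),
                         p2=(40, 36), v2=(0.0, -7.0)))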
In conclusion, it is believed that this work will contribute to further automation attempts, especially through its innovative focus on planning, based on a theory of planning; its findings already influence newer developments.
|