Door scalability and performance

Authorization caching: create a proxy class for the authorization service that implements caching internally using static fields.

PoolManager / door integration: add weighted random selection of pools instead of pure cost-based selection. Predictive pool cost calculation is kept initially; as a next step, try not to predict pool cost at all. If this works, we can move PoolManager functionality into the doors. Gerd is going to run historical access patterns on top of the new cost function. The PoolManager configuration will be periodically transferred to each door. Each door makes its own selection of write pools and of read pools that already have the files online. The door still forwards p2p and stage requests to the central PoolManager. This work depends on the space manager / door communication modification discussed above. We should not take away from admins the possibility to decide with what preference freshly staged files are removed when empty space is available.

Alex K.: While calculating cost we do not consider file size, network bandwidth and many other parameters, which leads to an uneven distribution of file transfers across pools.

Timur: The pools' ability to reject a transfer could be used to force queuing of transfer requests in the doors when no transfer slots are available, thus avoiding situations where some pools have a large number of transfers queued while others are idle.

Dmitry: Refactoring PoolManager so that pool selection becomes a pluggable module or interface is the first step towards the dCache transfer-scheduling improvements discussed so far.

Build infrastructure: we move the packages into a common tree, but add tools that check dependencies between components and guarantee the absence of undesired dependencies.

Tigran: We will implement a new architecture for the doors based on the Grizzly NIO and Web framework (https://grizzly.dev.java.net/).
Protocol parsers will run under the management of Grizzly; they in turn will talk to IO adapters that encapsulate the interaction between a protocol parser and the rest of dCache. A common cell endpoint will be shared between all connections. Caching will be much easier to implement in this architecture.
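The weighted random pool selection proposed above could look like the following sketch. All class and method names here are illustrative, not the actual dCache PoolManager API; the idea is simply that each pool is chosen with probability inversely proportional to its cost, so load spreads across pools instead of piling onto the momentary cheapest one.

```java
import java.util.List;
import java.util.Random;

// Sketch of weighted random pool selection (hypothetical names, not the
// real dCache interfaces). Each pool is picked with probability
// proportional to 1/cost rather than always taking the cheapest pool.
public class WeightedPoolSelector {

    public static class Pool {
        public final String name;
        public final double cost; // higher cost = more loaded pool

        public Pool(String name, double cost) {
            this.name = name;
            this.cost = cost;
        }
    }

    private final Random random;

    public WeightedPoolSelector(Random random) {
        this.random = random;
    }

    // Selects one pool; probability of each pool is (1/cost) / sum(1/cost).
    public Pool select(List<Pool> pools) {
        double totalWeight = 0;
        for (Pool p : pools) {
            totalWeight += 1.0 / p.cost;
        }
        double r = random.nextDouble() * totalWeight;
        for (Pool p : pools) {
            r -= 1.0 / p.cost;
            if (r <= 0) {
                return p;
            }
        }
        return pools.get(pools.size() - 1); // guard against rounding error
    }
}
```

With this scheme a lightly loaded pool still receives most transfers, but busy pools are not starved of the information they need to age their cost values, and no single pool is flooded the moment it becomes the cheapest.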
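Dmitry's suggestion of making pool selection a pluggable module could be captured by an interface along these lines. The names are illustrative only; the existing pure cheapest-cost behaviour becomes just one implementation, and the weighted random scheme another.

```java
import java.util.Comparator;
import java.util.List;

// Sketch of a pluggable pool-selection strategy (hypothetical names).
// PoolManager would hold a PoolSelectionStrategy and delegate to it,
// so scheduling policies can be swapped without touching PoolManager.
public interface PoolSelectionStrategy {

    // Minimal pool descriptor for selection purposes.
    record PoolCost(String name, double cost) {}

    // Pick one pool from the candidate list for a transfer.
    PoolCost select(List<PoolCost> candidates);

    // The current behaviour, pure cheapest-cost selection, as one plug-in.
    static PoolSelectionStrategy cheapestCost() {
        return candidates -> candidates.stream()
                .min(Comparator.comparingDouble(PoolCost::cost))
                .orElseThrow();
    }
}
```

Once such an interface exists, a weighted random strategy, or one that accounts for file size and network bandwidth as Alex K. suggests, can be dropped in without further refactoring.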