Transmission schemes that retrieve content from multiple servers concurrently are attractive for their bandwidth aggregation, resilience to dynamic server departure, and load balancing. Previous application-layer approaches aggregate server bandwidth by slicing the content into fixed-size chunks and requesting each chunk from a different server. Because these large chunks arrive out of order, such approaches require a huge resequencing buffer in the receiver application. Transport-layer approaches shrink the resequencing buffer by reducing the chunk size to that of a segment. However, they do not account for sender overhead and, moreover, place an ever-greater burden on the sender as the number of senders participating in the connection grows. I propose a novel transport-layer protocol, MTCP, which fully aggregates bandwidth with minimal sender overhead while suppressing additional overhead at the receiver. With MTCP, a receiver can choose the most appropriate chunk to request from each server so as to achieve high locality in the senders' buffer caches and sequential arrival of chunks from all the servers, as well as full aggregation of the servers' output bandwidth.
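To illustrate the kind of receiver-driven chunk selection described above, here is a minimal sketch (the function name, window model, and proportional-share heuristic are my own illustrative assumptions, not the actual MTCP algorithm): the receiver assigns a contiguous run of chunks to each server, sized in proportion to that server's estimated bandwidth, with the fastest server taking the run at the head of the sequence. Contiguous runs favor locality in each sender's buffer cache, and placing the head of the sequence on the fastest server keeps arrivals near-sequential, bounding the resequencing buffer.

```python
# Hypothetical sketch of receiver-driven chunk scheduling; not the MTCP spec.
def schedule_chunks(next_chunk, window, bandwidths):
    """Assign chunks next_chunk .. next_chunk+window-1 to servers.

    bandwidths: {server_id: estimated bandwidth} as measured by the receiver.
    Returns {server_id: [chunk ids]}; each server gets one contiguous run,
    sized roughly in proportion to its bandwidth, fastest server first.
    """
    total = sum(bandwidths.values())
    assignment = {}
    cursor, remaining = next_chunk, window
    # Fastest server takes the head of the sequence so early data arrives first.
    for i, s in enumerate(sorted(bandwidths, key=bandwidths.get, reverse=True)):
        if i == len(bandwidths) - 1:
            share = remaining  # last server absorbs rounding leftovers
        else:
            share = min(max(1, round(window * bandwidths[s] / total)), remaining)
        assignment[s] = list(range(cursor, cursor + share))
        cursor += share
        remaining -= share
        if remaining == 0:
            break
    return assignment
```

For example, with a window of 10 chunks and servers whose bandwidths are in a 6:3:1 ratio, the fastest server is asked for chunks 0-5, the next for 6-8, and the slowest for chunk 9. A real scheduler would also re-estimate bandwidths and rebalance as the window slides.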