Federated learning is a machine learning technique that can overcome the privacy and bandwidth limitations of centralized learning, in which local data must be sent to a central server for training. In real-world scenarios, however, federated learning suffers from heterogeneous clients with varying computing power. This heterogeneity forces either the creation of multiple global models or the reduction of the global model to fit the least capable client, which degrades overall performance. In particular, clients with limited capabilities find it difficult to train large, computationally intensive machine learning models. To address these challenges, we propose a novel federated learning framework that tackles system heterogeneity. To enable training of large models despite limited client computing power, the proposed framework partitions a large model into client-side and server-side sub-models, with a partitioning point flexible enough to accommodate heterogeneous client capabilities. This approach allows the server's computational power to supplement the client's in training the larger model, thereby improving model performance. By providing flexible partitioning points for different clients, it also enables all clients to participate in training and reduces unnecessary use of server resources. Experiments show that the proposed algorithm effectively utilizes server power and outperforms baseline algorithms.
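The partitioning idea can be illustrated with a minimal sketch (not the paper's implementation): a model is treated as an ordered sequence of layers, each client is assigned a split point matched to its capability, the client runs the front layers locally, and the server runs the remainder. The capability score, the `choose_split` policy, and the toy layers are all hypothetical placeholders for illustration only.

```python
def make_layer(w):
    # A toy "layer": scale the activation by a fixed weight.
    return lambda x: x * w

# A "large model" represented as an ordered list of layers.
MODEL = [make_layer(w) for w in (2, 3, 5, 7)]

def choose_split(capability, n_layers):
    """Map a capability score in [0, 1] to a split point: more capable
    clients keep more layers on-device, reducing server load.
    (Hypothetical policy; the real framework may choose differently.)"""
    return max(1, min(n_layers - 1, round(capability * n_layers)))

def run_partitioned(x, capability):
    split = choose_split(capability, len(MODEL))
    # Client-side forward pass over the first `split` layers.
    h = x
    for layer in MODEL[:split]:
        h = layer(h)
    # Server-side forward pass over the remaining layers: the server's
    # compute finishes the large model on the client's behalf.
    for layer in MODEL[split:]:
        h = layer(h)
    return h, split

# A weak and a strong client compute the same full model, but offload
# different amounts of work to the server.
out_weak, split_weak = run_partitioned(1.0, capability=0.25)
out_strong, split_strong = run_partitioned(1.0, capability=0.75)
```

Both clients obtain the output of the full model; only the division of labor between client and server differs, which is what lets a weak client still benefit from a large model.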