As an in-memory distributed computing framework, Spark is being adopted by a growing number of enterprises. Spark typically runs in a multi-user, multi-job setting, in which many jobs contain a large amount of reusable work. Reuse here refers to the reuse of computation within jobs, and exploiting it can greatly shorten job execution time in Spark. This paper therefore proposes a scheduling-pool scheduling algorithm based on job reuse. The algorithm builds on Spark's original scheduling-pool scheduling algorithm and takes full advantage of the reusable parts of jobs. Experiments show that the new scheduling algorithm realizes job reuse and improves the execution efficiency of the cluster.
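The kind of computation reuse the abstract describes can be illustrated with a minimal sketch. This is plain Python, not Spark, and the job and stage names are hypothetical: two "jobs" depend on the same intermediate stage, and caching that stage lets the second job reuse the result instead of recomputing it.

```python
# Hypothetical sketch of computation reuse across jobs (plain Python, not Spark).
cache = {}
compute_calls = 0

def shared_stage(key):
    """Simulates a costly shared stage; its result is cached for reuse."""
    global compute_calls
    if key not in cache:
        compute_calls += 1
        cache[key] = sum(range(1_000_000))  # stand-in for real work
    return cache[key]

def job_a():
    # First job: triggers the shared stage, then does its own work.
    return shared_stage("shared_input") + 1

def job_b():
    # Second job: reuses the cached stage instead of recomputing it.
    return shared_stage("shared_input") * 2

job_a()
job_b()
print(compute_calls)  # the shared stage executed only once
```

With reuse, the shared stage runs once for both jobs; without the cache it would run once per job, doubling the work.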