The effectiveness of state-of-the-art deep learning (DL) models has empowered the development of the industrial Internet of Things (IIoT). Recently, to accommodate resource-constrained, privacy-sensitive IIoT devices, collaborative inference has been proposed: a DL model is split and deployed partly on IIoT devices and partly on an edge server. In this article, however, we argue that collaborative inference systems still suffer from severe privacy vulnerabilities, and we devise the first membership inference attack (MIA) against collaborative inference, which infers whether a particular data sample was used to train the model of an IIoT system. Existing MIAs assume either full access to the system's APIs or availability of the target model's parameters, neither of which is realistic in IIoT environments. In contrast to prior work, we propose transfer-inherit shadow learning, which relaxes these key assumptions. We evaluate our attack on multiple datasets under various settings, and the results demonstrate its high effectiveness.
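To make the collaborative-inference setting concrete, the following is a minimal sketch (not the paper's implementation) of splitting a tiny multilayer perceptron into a device-side part and a server-side part; all layer sizes and function names here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: a tiny MLP split into a device-side "head"
# and a server-side "tail" for collaborative inference. Layer sizes and
# names are assumptions for demonstration, not the paper's architecture.

rng = np.random.default_rng(0)

# Full model: input (8) -> hidden (16) -> output (4)
W1, b1 = rng.standard_normal((8, 16)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((16, 4)), rng.standard_normal(4)

def relu(x):
    return np.maximum(x, 0.0)

def device_part(x):
    """Runs on the resource-constrained IIoT device: first layer only."""
    # Only this intermediate feature leaves the device for the edge server.
    return relu(x @ W1 + b1)

def server_part(z):
    """Runs on the edge server: the remaining layers."""
    return z @ W2 + b2

def full_model(x):
    """Reference: the unsplit model, for comparison."""
    return relu(x @ W1 + b1) @ W2 + b2

x = rng.standard_normal(8)
# The split pipeline reproduces the full model's output exactly.
assert np.allclose(server_part(device_part(x)), full_model(x))
```

The raw input never leaves the device, only an intermediate representation does, which motivates the privacy analysis: the paper argues that even this split deployment remains vulnerable to membership inference.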