There is one TensorFlow. The differences between using TF internally and externally mostly come down to which RPC bindings it uses (the external one uses gRPC, which is open-source; the internal one uses the internal RPC framework, which is tied in with all of the internal cluster stuff, authentication, and whatnot), plus things like filesystems that only exist inside Google. Another difference is the linkage for TPUs, in addition to GPUs -- hardware that doesn't exist outside of Google. The final differences are just in how the BUILD files link against library files -- the external version downloads protobuf for you, the internal version assumes it's already there. Yadda yadda yadda.
You can see all of this in the code. It leaks out in places, such as:
https://github.com/tensorflow/tensorflow/blob/d0d975f8c3330b...
Yes, it's that _super secret_ use of a different integral_types.h header. (/sarcasm). If you grep for things like PLATFORM_GOOGLE in the defines, you'll see most of what differs, and it's incredibly boring. The core of TensorFlow's performance-related stuff is Eigen (or, thanks to Intel's recent contributions, Intel MKL) for executing tensor ops on CPU, or cuDNN for executing them on GPU. Just like every other freakin' framework out there. There's a reason that all of these things tend to reduce to the performance of cuDNN...
See also Pete Warden's article: https://www.oreilly.com/ideas/how-the-tensorflow-team-handle...
("we use almost exactly the same code base inside Google that we make available on GitHub").
(Source: I'm a part-time hanger-on on the Brain team, which develops TensorFlow. I'm also a Carnegie Mellon professor most of the time, and I despise marketing getting in the way of truth.)