Added a new API, dtensor.relayout_like, for relaying out a tensor according to the layout of another tensor.

TPUs have always used bfloat16 precision for certain ops, like matmul, when those ops had float32 inputs. Now, disabling TensorFloat-32 by calling tf.config.experimental.enable_tensor_float_32_execution(False) will cause TPUs to use full float32 precision for such ops instead of bfloat16.
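First, a minimal sketch of how dtensor.relayout_like might be used. It assumes a single-host setup where the one physical CPU is split into two logical devices to form a 2-device mesh; the mesh, layout, and variable names are illustrative:

```python
import tensorflow as tf
from tensorflow.experimental import dtensor

# Illustrative single-host setup: split the physical CPU into two
# logical devices so there is a 2-device mesh to shard across.
tf.config.set_logical_device_configuration(
    tf.config.list_physical_devices("CPU")[0],
    [tf.config.LogicalDeviceConfiguration()] * 2,
)

mesh = dtensor.create_mesh([("x", 2)], devices=["CPU:0", "CPU:1"])

# One layout sharded along mesh dimension "x", one fully replicated.
sharded = dtensor.Layout(["x", dtensor.UNSHARDED], mesh)
replicated = dtensor.Layout.replicated(mesh, rank=2)

a = dtensor.call_with_layout(tf.ones, sharded, shape=[4, 4])
b = dtensor.call_with_layout(tf.zeros, replicated, shape=[4, 4])

# relayout_like re-shards `b` to match `a`'s layout without naming the
# target layout explicitly (equivalent to dtensor.relayout(b, sharded)).
b_like_a = dtensor.relayout_like(b, a)
print(dtensor.fetch_layout(b_like_a))  # same layout as `a`
```

This is convenient when the target layout is only known from another tensor at the call site, e.g. when aligning an intermediate result with an existing sharded operand.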
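And for the TensorFloat-32 change, a short sketch of the toggle. The tensors here are illustrative, and the precision change described above only takes effect on TPU:

```python
import tensorflow as tf

# By default, TPUs compute float32 matmuls in bfloat16 precision.
# Disabling TensorFloat-32 now also opts TPUs into full float32.
tf.config.experimental.enable_tensor_float_32_execution(False)
assert not tf.config.experimental.tensor_float_32_execution_enabled()

a = tf.random.normal([128, 128])
b = tf.random.normal([128, 128])
c = tf.matmul(a, b)  # on TPU, now computed in float32, not bfloat16
```

Note the trade-off: full float32 is more accurate but slower than bfloat16 on TPU matrix units, so leave TensorFloat-32 enabled unless the extra precision matters.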