fnord123 · 2016-12-30 · Original thread
>We aren't sure why, but when we tried to delete a lot of data (~200GB from each machine which each contain several TB of data), our databases become unresponsive for an hour.

There used to be an issue on copy-on-write filesystems where users who hit their quota couldn't delete anything, because, counterintuitively, deleting a file requires writing new metadata first. The trick was to find some reasonably large file and truncate it in place with `echo 1 > large_file`, which frees enough space that you can begin removing files normally. Maybe this kind of trick could help you guys.
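
In case it's useful, here's a rough sketch of that workaround; the directory and file names below are made up:

```
# Find a few large files in the directory that's over quota
# (/srv/data is a hypothetical path):
du -ah /srv/data | sort -rh | head -n 10

# Truncate one of them in place; per the trick above, this frees its
# data blocks and gives the filesystem enough headroom again:
echo 1 > /srv/data/some_large_file.log

# With a bit of space back, normal deletes start working:
rm /srv/data/old_stuff/*
```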

That said, it's inadvisable to run a database on a copy-on-write file system like ZFS or btrfs if write performance matters to you. cf. PostgreSQL 9.0 High Performance by Gregory Smith (https://www.amazon.com/PostgreSQL-High-Performance-Gregory-S...)

and

https://blog.pgaddict.com/posts/postgresql-performance-on-ex...
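
If you want to see the effect on your own hardware, a quick (and admittedly crude) way to compare write-heavy throughput across filesystems is pgbench; the database name, scale, and durations below are just placeholders:

```
# Initialize a test database at scale factor 100 (~1.5 GB of data),
# then run a TPC-B-like write workload for 5 minutes.
createdb bench
pgbench -i -s 100 bench
pgbench -c 16 -j 4 -T 300 bench

# Repeat with the data directory on the other filesystem and compare
# the reported transactions per second.
```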