Coming here from mongodb/mongo-rust-driver#1589
Use-case
We receive a large number of user uploads that are only needed to finish a process, and afterwards for documentation and debugging purposes for at least a few months. After that time we need to clean up the files in our GridFS database, which can amount to a few tens of thousands of files.
Current state
As far as I have seen, there is currently no way to do this directly on GridFS, other than finding every file id and then deleting each file via a `.delete(id)` call, which, in our case, would result in tens of thousands of database requests.
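For illustration, the current per-file approach looks roughly like this (a sketch assuming a driver 3.x-style async API with the `mongodb`, `tokio`, and `futures-util` crates; database name, bucket options, and the `uploadDate` cutoff filter are placeholders):

```rust
use futures_util::TryStreamExt;
use mongodb::{bson::doc, bson::DateTime, Client};

#[tokio::main]
async fn main() -> mongodb::error::Result<()> {
    let client = Client::with_uri_str("mongodb://localhost:27017").await?;
    let bucket = client.database("app").gridfs_bucket(None);

    // Placeholder retention boundary; in practice this would be
    // "now minus a few months".
    let cutoff = DateTime::now();

    // One find, then one delete round trip per matching file.
    let mut cursor = bucket
        .find(doc! { "uploadDate": { "$lt": cutoff } })
        .await?;
    while let Some(file) = cursor.try_next().await? {
        bucket.delete(file.id).await?; // tens of thousands of requests
    }
    Ok(())
}
```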
The other way would be to use regular database queries: first find all matching documents in the `fs.files` collection, then find all related documents in the `fs.chunks` collection, and then delete everything, with at least two additional `delete_many` queries. But that only works if everything succeeds. Otherwise, you would need to check which files or chunks weren't deleted and try to delete them again, by whichever means/strategy/algorithm.
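The manual workaround could be sketched as follows (again assuming a driver 3.x-style API and default bucket collection names; the filter is a placeholder, and the deletion order is one possible choice, on the assumption that orphaned chunks are invisible to GridFS reads while an `fs.files` document without chunks is not):

```rust
use futures_util::TryStreamExt;
use mongodb::bson::{doc, Bson, DateTime, Document};
use mongodb::Client;

#[tokio::main]
async fn main() -> mongodb::error::Result<()> {
    let client = Client::with_uri_str("mongodb://localhost:27017").await?;
    let db = client.database("app");
    let files = db.collection::<Document>("fs.files");
    let chunks = db.collection::<Document>("fs.chunks");

    let filter = doc! { "uploadDate": { "$lt": DateTime::now() } };

    // 1) Collect the _ids of all matching files.
    let mut ids = Vec::<Bson>::new();
    let mut cursor = files.find(filter).await?;
    while let Some(file) = cursor.try_next().await? {
        if let Some(id) = file.get("_id") {
            ids.push(id.clone());
        }
    }

    // 2) Two bulk deletes instead of one delete per file. If either
    //    fails partway, leftover documents must be found and retried.
    files.delete_many(doc! { "_id": { "$in": ids.clone() } }).await?;
    chunks.delete_many(doc! { "files_id": { "$in": ids } }).await?;
    Ok(())
}
```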
Proposal
Provide a `delete_many` function that takes a filter as a parameter. This function should delete all matching `fs.files` documents and their related entries in `fs.chunks`, similar to a regular `delete_many`.
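Usage could then look something like this (hypothetical: `GridFsBucket::delete_many` does not exist in the driver today, and the filter is a placeholder):

```rust
use mongodb::bson::{doc, DateTime};

// `bucket` is a mongodb::gridfs::GridFsBucket.
// Proposed method, not part of the current driver:
let cutoff = DateTime::now(); // placeholder retention boundary
bucket
    .delete_many(doc! { "uploadDate": { "$lt": cutoff } })
    .await?;
```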