Deleting largish numbers of files

Mar 11, 2008 at 9:32 PM
I have a folder which I populate with about 6000 photos. I want to delete them en masse, but when I select all and then delete, the program freezes (it also takes about 60 seconds to display a listing of all those files, but I understand why). I thought maybe it was too much at once (though there is no progress bar or anything to show the progress), so I deleted only about 100 files, and even that took quite a few minutes (I stopped counting after 3). A progress bar would be a huge help, but with some worker threads it should be pretty easy to send simple delete operations very quickly. I don't see why a delete would take more than 100 ms per file; at that rate, 10 threads could handle about 100 files per second.
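
(For illustration, here is roughly what that worker-thread idea looks like. SpaceBlock itself is a .NET application, so this Python/boto3 sketch is only a hypothetical rendering of the arithmetic above; the bucket name and key list are made up.)

```python
# Hypothetical sketch of fanning per-file S3 deletes across a thread pool.
# At ~100 ms per DELETE round trip, one thread manages ~10 deletes/sec,
# so 10 threads give roughly 100 deletes/sec.
from concurrent.futures import ThreadPoolExecutor

import boto3

s3 = boto3.client("s3")
BUCKET = "my-photo-bucket"  # made-up bucket name

def delete_one(key):
    s3.delete_object(Bucket=BUCKET, Key=key)  # one DELETE request

keys = [f"photos/{i:04d}.jpg" for i in range(6000)]  # placeholder keys
with ThreadPoolExecutor(max_workers=10) as pool:
    list(pool.map(delete_one, keys))  # drain the iterator to surface errors
```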
Coordinator
Mar 15, 2008 at 6:39 PM
This is a reasonable request - there is currently very little (in fact, no) optimization for large-volume operations. That was one of the biggest reasons the new background job manager was introduced - perhaps it can be leveraged for these large delete operations as well (a rough sketch follows the link below)...

Added a work-item for this enhancement here:
http://www.codeplex.com/spaceblock/WorkItem/View.aspx?WorkItemId=1752
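
(Purely illustrative: a minimal sketch of what handing the bulk delete to a background job manager could look like, so the UI thread never blocks. The queue-and-worker design here is hypothetical, not SpaceBlock's actual .NET job manager.)

```python
# Hypothetical background job manager: the UI enqueues a job and returns
# immediately; a daemon worker thread runs it off the UI thread.
import queue
import threading

jobs = queue.Queue()

def job_worker():
    while True:
        job = jobs.get()
        try:
            job()  # run the (possibly long) job off the UI thread
        finally:
            jobs.task_done()

threading.Thread(target=job_worker, daemon=True).start()

def bulk_delete_job(keys):
    # Placeholder body: a real job would issue the deletes and could
    # report progress back to the UI (the progress bar requested above).
    for i, key in enumerate(keys, 1):
        print(f"deleted {key} ({i}/{len(keys)})")

jobs.put(lambda: bulk_delete_job([f"photos/{i:03d}.jpg" for i in range(5)]))
jobs.join()  # a real UI would not block here
```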
Mar 15, 2008 at 8:46 PM
I troubleshot it: the verify-folder step took about 6 seconds per delete, while the actual delete took only about 0.2 seconds. I commented the check out and was able to delete 6000 files in roughly the time it previously took to delete 100.
Coordinator
Mar 16, 2008 at 12:13 AM
Check out the latest release and let me know if that works for you.

The delete is backgrounded but not multi-threaded (there is no proper thread management within individual jobs yet). It should still be significantly faster due to an S3 batch-delete optimization: the folder-existence check now runs only on the last file deletion in a batch, not on every one.
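
(In rough pseudocode terms, the optimization amounts to the following; both helper functions are hypothetical stand-ins for SpaceBlock's internals, using the timings measured earlier in the thread.)

```python
# Sketch of the batch optimization: do the cheap per-file deletes
# (~0.2 s each) in a loop, then run the expensive folder-existence
# check (~6 s) once for the whole batch instead of once per file.

def delete_batch(keys, delete_file, ensure_folder_exists):
    for key in keys:
        delete_file(key)                # ~0.2 s per file
    if keys:
        ensure_folder_exists(keys[-1])  # ~6 s, now paid only once
```

For 6000 files that is roughly 6000 × 0.2 s plus a single 6 s check, versus 6000 × 6.2 s before, which lines up with the speedup reported above.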