This prevents too many pending writes from building up. Webseed peers re-request synchronously, while writes complete asynchronously, so they downloaded too quickly and there was no backpressure. Backpressure is now provided by the upper limit on outstanding requests per connection.
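A minimal sketch of that backpressure pattern, with hypothetical names (maxOutstandingRequests, peer, requestAndWrite are illustrative, not this library's API):

    package sketch

    const maxOutstandingRequests = 64 // illustrative per-connection cap

    // peer uses a counting semaphore so asynchronous storage writes
    // push back on a transport that re-requests synchronously.
    type peer struct {
        slots chan struct{} // one token per outstanding request
    }

    func newPeer() *peer {
        return &peer{slots: make(chan struct{}, maxOutstandingRequests)}
    }

    // requestAndWrite blocks once the cap is reached, and frees the
    // slot only when the asynchronous write has completed.
    func (p *peer) requestAndWrite(fetch func() []byte, write func([]byte) error) {
        p.slots <- struct{}{} // acquire; blocks at the cap
        data := fetch()       // synchronous request/response
        go func() {
            defer func() { <-p.slots }() // release when the write finishes
            _ = write(data)
        }()
    }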
New requests weren't being issued to the current peer when one of its requests was deleted. For webseeds, this would cause them to stop issuing new requests indefinitely.
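A sketch of the fix pattern; deleteRequest and updateRequests are hypothetical names for illustration:

    package sketch

    type request struct{ piece, begin, length int }

    type peer struct {
        outstanding map[request]struct{}
    }

    // updateRequests refills the request queue up to the cap (elided).
    func (p *peer) updateRequests() {}

    // deleteRequest removes a pending request and then reconsiders what
    // to request next. Without the updateRequests call a webseed peer
    // has no other trigger, and stops issuing requests indefinitely.
    func (p *peer) deleteRequest(r request) {
        delete(p.outstanding, r)
        p.updateRequests()
    }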
(cherry picked from commit 146a16df4ea26d33b0ce0391c8220de14c9e18f4)
* Rename Peer to PeerInfo, and unexport PeerInfos
* Break peer out from PeerConn
* Abstract out segments mapping and use it in mmap storage
* Got file storage working with segment index
* Fix race in webtorrent.TrackerClient.Run
* storage file implementation: Error on short writes
* Remove debug logging from storage file implementation
* cmd/torrent-verify: Fix piece hash output
* Support disabling webtorrent (see the config sketch after this list)
* Further progress on webseeding
* Handle webseed Client events
* Rename fastestConn->fastestPeer
* Add webseeds from magnet links
* Remove events from webseed
Manage these events inside the webseed peer instead.
* Make use of magnet source fields and expose Torrent.MergeSpec
* Add option to disable webseeds (see the config sketch after this list)
* Fix webseeds when info isn't available immediately
* Handle webseed request errors
* Tidy up the interface changes
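A usage sketch for the two toggles above, assuming the ClientConfig fields DisableWebtorrent and DisableWebseeds (treat the exact field names as illustrative):

    package sketch

    import "github.com/anacrolix/torrent"

    // newClientWithoutWebTransports turns off both web-based transports
    // and keeps every other default.
    func newClientWithoutWebTransports() (*torrent.Client, error) {
        cfg := torrent.NewDefaultClientConfig()
        cfg.DisableWebtorrent = true // no WebRTC (webtorrent) peers
        cfg.DisableWebseeds = true   // no HTTP webseed sources
        return torrent.NewClient(cfg)
    }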
Also run the storage failure test with the fast extension disabled for the seeder. This probably would have tickled some issues in the past, so it seems like a good place to try it out.
This should make the expected received-chunk counts match up correctly. It doesn't seem to affect tests at the moment, but then we don't verify that the expected received-chunk counts are correct either.
This can be racy: in TestReceiveChunkStorageFailure, when a storage write fails we request the chunk again, but the peer has sometimes already sent it, and we return from the connection read loop with an unexpected-chunk error after receiving it twice.
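An illustrative sketch of tolerating the race (all names hypothetical): the point is that a chunk can legitimately arrive after it is no longer outstanding, and the read loop should drop it rather than erroring out:

    package sketch

    type request struct{ piece, begin int }

    type conn struct {
        outstanding map[request]struct{}
        rerequest   func(request)      // re-issues a request (elided)
        write       func([]byte) error // storage write that may fail
    }

    // receiveChunk drops chunks that are no longer outstanding: after a
    // storage write failure the chunk is re-requested, but the peer may
    // already have sent the first copy, so it can arrive twice.
    func (c *conn) receiveChunk(r request, data []byte) {
        if _, ok := c.outstanding[r]; !ok {
            return // duplicate or stale chunk; ignore it
        }
        delete(c.outstanding, r)
        if err := c.write(data); err != nil {
            c.outstanding[r] = struct{}{} // outstanding again
            c.rerequest(r)                // may race with the duplicate
        }
    }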