This may help break the situation where anacrolix/torrent Clients connected to each other never release a connection until new connections appear that look more promising.
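A minimal sketch of the kind of heuristic involved: once a client is at its connection limit, drop the least promising connection even if no better candidate is waiting. The `conn` type, its `lastUsefulChunk` field, and `dropWorstConn` are illustrative assumptions, not the library's actual API.

```go
package connsketch

import "time"

type conn struct {
	// Assumed signal of how useful the connection has been recently.
	lastUsefulChunk time.Time
}

// worseThan reports whether c looks less promising than other.
func (c *conn) worseThan(other *conn) bool {
	return c.lastUsefulChunk.Before(other.lastUsefulChunk)
}

// dropWorstConn closes the least promising connection once the client is
// at its connection limit, freeing a slot for future peers rather than
// waiting for a better-looking candidate to arrive first.
func dropWorstConn(conns []*conn, limit int, drop func(*conn)) {
	if len(conns) == 0 || len(conns) < limit {
		return
	}
	worst := conns[0]
	for _, c := range conns[1:] {
		if c.worseThan(worst) {
			worst = c
		}
	}
	drop(worst)
}
```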
Pieces could get queued for hashing multiple times as chunks were received, if a piece started getting hashed before all of its chunks had been written out. This was only found because piece hashing currently checks only the incomplete data: after the first piece hash passes, the data is marked complete, so the subsequently queued hash has nothing to read.
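A rough sketch of the guard this implies, assuming a hypothetical `queuedForHash` flag and hash queue (not the library's actual internals): a piece is enqueued at most once until its hash completes, so further incoming chunks cannot re-queue it.

```go
package hashsketch

type piece struct {
	// Assumed flag: set while the piece is waiting for, or undergoing, a hash.
	queuedForHash bool
}

type torrent struct {
	pieces    []piece
	hashQueue chan int // indices of pieces awaiting hashing
}

// queuePieceCheck enqueues a piece for hashing at most once until the hash
// completes, so receiving more chunks can't queue a redundant check that
// would later find no incomplete data to read.
func (t *torrent) queuePieceCheck(index int) {
	p := &t.pieces[index]
	if p.queuedForHash {
		return
	}
	p.queuedForHash = true
	t.hashQueue <- index
}
```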
This prevents too many pending writes from building up. Webseed peers re-request synchronously while writes complete asynchronously, so they downloaded too quickly and there was no backpressure. Backpressure is now provided by the upper limit on outstanding requests per connection.
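A minimal sketch of the backpressure idea, using a counting semaphore to cap in-flight requests per connection. `maxOutstanding` and the fetch/write plumbing are assumptions for illustration, not the actual implementation.

```go
package webseedsketch

type requester struct {
	// Buffered channel acting as a counting semaphore over in-flight requests.
	outstanding chan struct{}
}

func newRequester(maxOutstanding int) *requester {
	return &requester{outstanding: make(chan struct{}, maxOutstanding)}
}

// doRequest blocks while maxOutstanding requests are already in flight, so
// the synchronous re-request loop can't outrun the asynchronous writes.
func (r *requester) doRequest(fetch func() []byte, write func([]byte)) {
	r.outstanding <- struct{}{} // acquire a slot; blocks when the cap is reached
	data := fetch()
	go func() {
		defer func() { <-r.outstanding }() // release the slot once the write lands
		write(data)
	}()
}
```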
New requests weren't being issued to the current peer when its requests were deleted. For webseeds, this meant they would stop issuing new requests indefinitely.
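A sketch of the shape of the fix, with placeholder names rather than the real types: after deleting a request, the same peer's request updater is kicked so replacements get issued immediately instead of only when other peers update.

```go
package peersketch

type request struct{ piece, begin int }

type peer struct {
	requests map[request]struct{}
}

// deleteRequest removes an outstanding request and immediately asks the
// same peer to consider issuing new ones; without that nudge, a webseed
// peer (which only re-requests when prompted) could stall indefinitely.
func (p *peer) deleteRequest(r request) {
	delete(p.requests, r)
	p.updateRequests()
}

func (p *peer) updateRequests() {
	// Request-filling logic elided in this sketch.
}
```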
(cherry picked from commit 146a16df4ea26d33b0ce0391c8220de14c9e18f4)
* Rename Peer to PeerInfo, and unexport PeerInfos
* Break peer out from PeerConn
* Abstract out segments mapping and use it in mmap storage (see the sketch after this list)
* Got file storage working with segment index
* Fix race in webtorrent.TrackerClient.Run
* storage file implementation: Error on short writes
* Remove debug logging from storage file implementation
* cmd/torrent-verify: Fix piece hash output
* Support disabling webtorrent
* Further progress on webseeding
* Handle webseed Client events
* Rename fastestConn->fastestPeer
* Add webseeds from magnet links
* Remove events from webseed
Manage these inside the webseed peer instead.
* Make use of magnet source fields and expose Torrent.MergeSpec
* Add option to disable webseeds
* Fix webseeds when info isn't available immediately
* Handle webseed request errors
* Tidy up the interface changes
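For the segments mapping mentioned above, a rough illustration of the underlying idea, under assumed names: given the lengths of the files in a torrent, locate a torrent-level extent (offset and length) across every file it spans, which is what multi-file and mmap-backed storage need when reading or writing piece data.

```go
package segmentsketch

type extent struct {
	start, length int64 // torrent-level offset and length
}

// locate calls found once per file that the extent overlaps, passing the
// file's index, the offset within that file, and the number of bytes of
// the extent that fall inside it.
func locate(fileLengths []int64, e extent, found func(fileIndex int, off, n int64)) {
	var fileStart int64
	for i, flen := range fileLengths {
		fileEnd := fileStart + flen
		if e.start < fileEnd && e.start+e.length > fileStart {
			off := int64(0)
			if e.start > fileStart {
				off = e.start - fileStart
			}
			end := e.start + e.length
			if end > fileEnd {
				end = fileEnd
			}
			found(i, off, end-(fileStart+off))
		}
		fileStart = fileEnd
	}
}
```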