It was incorrect to assume that piece hashing only operates on incomplete chunk data. This actually uncovered a bug: duplicate hash checks occurred, and the redundant checks would fail because they never read the completed data.
Pieces could get queued for hashing multiple times as chunks are received, if the piece starts getting hashed before we're done writing all the chunks out. This was only found because piece hashing currently checks only the incomplete data, which is missing after the first piece hash passes: the data is marked complete, so the subsequently queued hash has nothing to read.
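One way to prevent the duplicate checks is to track whether a piece is already queued or being hashed. A minimal sketch of that guard follows, with hypothetical names; the library's actual piece state and locking are more involved.

    package torrent

    import "sync"

    type piece struct {
            mu            sync.Mutex
            queuedForHash bool
            hashing       bool
    }

    // queueHash reports whether the caller should enqueue a hash check.
    // It returns false if one is already queued or running, so receiving
    // more chunks can't schedule a redundant check for the same piece.
    func (p *piece) queueHash() bool {
            p.mu.Lock()
            defer p.mu.Unlock()
            if p.queuedForHash || p.hashing {
                    return false
            }
            p.queuedForHash = true
            return true
    }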
This prevents too many pending writes from building up. Webseed peers re-request synchronously while writes complete asynchronously, so they downloaded too quickly and there was no backpressure. Backpressure is now provided by the upper limit on outstanding requests per connection.
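The limit acts like a semaphore: a peer that re-requests synchronously blocks once its slots are used up, until pending writes drain. A minimal sketch of that shape, with illustrative names (not the actual implementation):

    package torrent

    import "context"

    type conn struct {
            // Buffered channel used as a semaphore; its capacity is the
            // upper limit on outstanding requests for this connection.
            outstanding chan struct{}
    }

    func newConn(limit int) *conn {
            return &conn{outstanding: make(chan struct{}, limit)}
    }

    // request acquires a slot before issuing, so a synchronous requester
    // can't run arbitrarily far ahead of the asynchronous writes.
    func (c *conn) request(ctx context.Context, issue func() error) error {
            select {
            case c.outstanding <- struct{}{}:
            case <-ctx.Done():
                    return ctx.Err()
            }
            return issue()
    }

    // writeComplete releases a slot once the chunk has been written to
    // storage, restoring capacity for new requests.
    func (c *conn) writeComplete() {
            <-c.outstanding
    }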
There's a bug in crawshaw.io/sqlite, and some of its forks, where inserting []byte results in a text type instead of a blob. To ensure things work correctly, we coerce data to blob wherever we can. See https://github.com/crawshaw/sqlite/issues/94 and the fork that fixes it.
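A minimal sketch of the coercion, done via a cast in the SQL itself so the stored value keeps blob affinity even if the driver binds []byte as text; the table and column names here are hypothetical:

    package sqliteStorage

    import (
            "crawshaw.io/sqlite"
            "crawshaw.io/sqlite/sqlitex"
    )

    // putBlob inserts b through cast(? as blob), working around the
    // driver binding []byte with text affinity.
    func putBlob(conn *sqlite.Conn, name string, b []byte) error {
            return sqlitex.Exec(
                    conn,
                    "insert or replace into blob(name, data) values(?, cast(? as blob))",
                    nil, // no result rows expected
                    name, b,
            )
    }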
New requests weren't being issued to the current peer when its requests were deleted. For webseeds, this caused them to stop issuing new requests indefinitely.
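A minimal sketch of the fix's shape, with hypothetical names: deleting a request now also prompts the owning peer to update, since webseed peers only issue new requests when prompted.

    package torrent

    type request struct{ piece, begin, length int }

    type peer struct {
            requests map[request]struct{}
            // updateRequests asks the peer to refill its request queue;
            // webseed peers only issue new requests from here.
            updateRequests func()
    }

    // deleteRequest removes r and prompts this peer for new requests.
    // Previously only other peers were updated here, so a webseed whose
    // own request was deleted would never request again.
    func (p *peer) deleteRequest(r request) {
            delete(p.requests, r)
            p.updateRequests()
    }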
(cherry picked from commit 146a16df4ea26d33b0ce0391c8220de14c9e18f4)
Chunk write errors to storage can disable data download. Previously, Readers would wait indefinitely for the data to become available; this change returns an error instead of stalling.
This conforms more closely to the io.Reader contract. It's possible the old behaviour was better at reducing overhead, but that can be iterated on (or the rationale added as comments next time).
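A minimal sketch of the changed behaviour, with illustrative names: the Reader wakes on a sticky storage error and returns it, rather than waiting forever for data that will never arrive.

    package torrent

    import "sync"

    type reader struct {
            mu       sync.Mutex
            cond     *sync.Cond
            err      error // sticky chunk-write error from storage
            haveData bool
    }

    func newReader() *reader {
            r := &reader{}
            r.cond = sync.NewCond(&r.mu)
            return r
    }

    // setStorageErr records a write error and wakes blocked readers.
    func (r *reader) setStorageErr(err error) {
            r.mu.Lock()
            r.err = err
            r.mu.Unlock()
            r.cond.Broadcast()
    }

    // Read surfaces the stored error instead of stalling, per io.Reader's
    // contract that errors are returned to the caller.
    func (r *reader) Read(p []byte) (n int, err error) {
            r.mu.Lock()
            defer r.mu.Unlock()
            for !r.haveData {
                    if r.err != nil {
                            return 0, r.err
                    }
                    r.cond.Wait()
            }
            // ... copy available piece data into p ...
            return n, nil
    }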
Fixes an exploit where specially crafted infos could cause the client to write files to arbitrary locations on local storage when using file-based storages like mmap and file.
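The defence is to validate the file paths declared in the info before joining them onto the storage root. A minimal sketch of such a check (illustrative of the idea, not the exact code in the fix):

    package torrent

    import (
            "fmt"
            "path/filepath"
            "strings"
    )

    // safeFilePath joins untrusted path components onto root and rejects
    // any result that escapes it (e.g. via ".." components).
    func safeFilePath(root string, components ...string) (string, error) {
            p := filepath.Join(append([]string{root}, components...)...)
            prefix := filepath.Clean(root) + string(filepath.Separator)
            if !strings.HasPrefix(p, prefix) {
                    return "", fmt.Errorf("info path escapes storage root: %q", p)
            }
            return p, nil
    }

Called before a file-based storage opens anything, this rejects crafted names like "../../outside" because filepath.Join cleans the path, letting the prefix check catch the escape.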