udf: limit the maximum number of TD redirections

Filesystem fuzzing revealed that we could get stuck in the
udf_process_sequence() loop.

The maximum limit was chosen arbitrarily but fixes the problem I saw.

Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Authored by Vegard Nossum on 2015-12-10 17:13:41 +01:00, committed by Jan Kara
parent 331221fac7
commit e7a4eb8612
1 changed file with 14 additions and 0 deletions


@@ -1585,6 +1585,13 @@ static void udf_load_logicalvolint(struct super_block *sb, struct kernel_extent_
 	brelse(bh);
 }
 
+/*
+ * Maximum number of Terminating Descriptor redirections. The chosen number is
+ * arbitrary - just that we hopefully don't limit any real use of rewritten
+ * inode on write-once media but avoid looping for too long on corrupted media.
+ */
+#define UDF_MAX_TD_NESTING 64
+
 /*
  * Process a main/reserve volume descriptor sequence.
  *	@block		First block of first extent of the sequence.
@@ -1609,6 +1616,7 @@ static noinline int udf_process_sequence(
 	uint16_t ident;
 	long next_s = 0, next_e = 0;
 	int ret;
+	unsigned int indirections = 0;
 
 	memset(vds, 0, sizeof(struct udf_vds_record) * VDS_POS_LENGTH);
 
@@ -1679,6 +1687,12 @@ static noinline int udf_process_sequence(
 			}
 			break;
 		case TAG_IDENT_TD: /* ISO 13346 3/10.9 */
+			if (++indirections > UDF_MAX_TD_NESTING) {
+				udf_err(sb, "too many TDs (max %u supported)\n", UDF_MAX_TD_NESTING);
+				brelse(bh);
+				return -EIO;
+			}
+
 			vds[VDS_POS_TERMINATING_DESC].block = block;
 			if (next_e) {
 				block = next_s;
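
For readers unfamiliar with the bug class: the hunk above bounds a loop that follows on-disk Terminating Descriptor redirections, so a crafted or corrupted image cannot keep udf_process_sequence() redirecting forever. The following is a minimal userspace sketch of the same pattern, not the driver code itself; every identifier in it (struct td_record, follow_td_chain, MAX_TD_NESTING, the blocks array) is invented for illustration.

/*
 * Standalone sketch (not the kernel code) of the bounded-redirection pattern:
 * when a loop follows on-disk "redirection" records, cap the number of
 * redirections so corrupted media cannot make the loop run forever.
 */
#include <errno.h>
#include <stdio.h>

#define MAX_TD_NESTING 64u      /* arbitrary cap, like the patch's UDF_MAX_TD_NESTING */
#define NUM_BLOCKS     8

struct td_record {
	int is_terminating;     /* stands in for a Terminating Descriptor */
	int next_block;         /* block it redirects to */
};

/* Follow TD redirections starting at 'block'; fail instead of looping forever. */
static int follow_td_chain(const struct td_record *blocks, int block)
{
	unsigned int indirections = 0;

	while (blocks[block].is_terminating) {
		if (++indirections > MAX_TD_NESTING) {
			fprintf(stderr, "too many TDs (max %u supported)\n",
				MAX_TD_NESTING);
			return -EIO;    /* same error the patch returns */
		}
		block = blocks[block].next_block;
	}
	return block;                   /* first non-TD block reached */
}

int main(void)
{
	/* Block 0 redirects to 1 and 1 back to 0: a corrupted-media cycle. */
	struct td_record blocks[NUM_BLOCKS] = {
		[0] = { .is_terminating = 1, .next_block = 1 },
		[1] = { .is_terminating = 1, .next_block = 0 },
	};

	int ret = follow_td_chain(blocks, 0);
	printf("follow_td_chain() returned %d\n", ret);  /* -EIO, not a hang */
	return 0;
}

The counter starts at zero and only trips once the limit is exceeded, so chains within the limit (for example a legitimately rewritten descriptor on write-once media) are unaffected; only a runaway chain turns into -EIO.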