Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mctp: perform route lookups under a RCU read-side lock

Our current route lookups (mctp_route_lookup and mctp_route_lookup_null)
traverse the net's route list without the RCU read lock held. This means
the route lookup is subject to preemption, resulting in a potential
grace period expiry, and so an eventual kfree() while we still have the
route pointer.
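The failure mode can be sketched as the following interleaving (illustrative only; the deletion-side helpers are simplified):

```c
/* sendmsg() lookup path                       route deletion path
 * (no rcu_read_lock held)
 *
 * tmp = entry from net->mctp.routes
 *                                             list_del_rcu(&tmp->list);
 *   <lookup preempted; with no RCU            synchronize_rcu();
 *    reader registered, the grace                // sees no readers, returns
 *    period can end here>                     kfree(tmp);
 *
 * mctp_rt_match_eid(tmp, dnet, daddr);
 *   // use-after-free: tmp was freed above
 */
```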

Add the proper read-side critical section locks around the route
lookups, preventing preemption and a possible parallel kfree().

The remaining net->mctp.routes accesses are already performed under
rcu_read_lock, or are protected by the RTNL for updates.
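On the update side, the pattern this relies on can be sketched as follows (illustrative rather than the exact net/mctp/route.c code; the release helper here just drops the list's reference):

```c
/* Illustrative route-removal path: updates run under the RTNL, the
 * route is unpublished with list_del_rcu(), and the memory is only
 * reclaimed once the last reference (including any taken by lookups
 * via refcount_inc_not_zero()) is dropped.
 */
static void route_remove_sketch(struct mctp_route *rt)
{
	ASSERT_RTNL();			/* updates are RTNL-protected */

	list_del_rcu(&rt->list);	/* unpublish from net->mctp.routes */

	/* Drop the reference held by the list; concurrent lookups that
	 * already took their own reference keep the route alive until
	 * they release it.
	 */
	mctp_route_release(rt);
}
```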

Based on an analysis from Sili Luo <rootlab@huawei.com>, where
introducing a delay in the route lookup could cause a UAF on
simultaneous sendmsg() and route deletion.

Reported-by: Sili Luo <rootlab@huawei.com>
Fixes: 889b7da23abf ("mctp: Add initial routing framework")
Cc: stable@vger.kernel.org
Signed-off-by: Jeremy Kerr <jk@codeconstruct.com.au>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/29c4b0e67dc1bf3571df3982de87df90cae9b631.1696837310.git.jk@codeconstruct.com.au
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

authored by Jeremy Kerr and committed by Jakub Kicinski
5093bbfc 8527ca77

+16 -6
net/mctp/route.c
···
 {
 	struct mctp_route *tmp, *rt = NULL;
 
+	rcu_read_lock();
+
 	list_for_each_entry_rcu(tmp, &net->mctp.routes, list) {
 		/* TODO: add metrics */
 		if (mctp_rt_match_eid(tmp, dnet, daddr)) {
···
 		}
 	}
 
+	rcu_read_unlock();
+
 	return rt;
 }
 
 static struct mctp_route *mctp_route_lookup_null(struct net *net,
 						 struct net_device *dev)
 {
-	struct mctp_route *rt;
+	struct mctp_route *tmp, *rt = NULL;
 
-	list_for_each_entry_rcu(rt, &net->mctp.routes, list) {
-		if (rt->dev->dev == dev && rt->type == RTN_LOCAL &&
-		    refcount_inc_not_zero(&rt->refs))
-			return rt;
+	rcu_read_lock();
+
+	list_for_each_entry_rcu(tmp, &net->mctp.routes, list) {
+		if (tmp->dev->dev == dev && tmp->type == RTN_LOCAL &&
+		    refcount_inc_not_zero(&tmp->refs)) {
+			rt = tmp;
+			break;
+		}
 	}
 
-	return NULL;
+	rcu_read_unlock();
+
+	return rt;
 }
 
 static int mctp_do_fragment_route(struct mctp_route *rt, struct sk_buff *skb,