| In the Linux kernel, the following vulnerability has been resolved:
apparmor: fix race between freeing data and fs accessing it
AppArmor was putting its reference to the i_private data after removing
the original entry from the file system. However, the inode can and does
live beyond that point, and it is possible that some of the fs callback
functions will be invoked after the reference has been put, which results
in a race between freeing the data and accessing it through the fs.
While the rawdata/loaddata is the most likely candidate to lose the race,
as it has the fewest references, a properly crafted sequence might be able
to trigger the race for the other types stored in i_private.
Fix this by moving the put of the i_private referenced data to the correct
place, which is during inode eviction. |
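A minimal sketch of the fix's shape, with hypothetical names
(aafs_evict_inode() and aa_put_loaddata() are illustrative, not the exact
upstream symbols): the i_private reference is put from the filesystem's
evict_inode callback, by which point no fs callback can still reach the
inode.

    /* Illustrative sketch only; names are hypothetical. */
    static void aafs_evict_inode(struct inode *inode)
    {
            /* Standard eviction bookkeeping. */
            truncate_inode_pages_final(&inode->i_data);
            clear_inode(inode);

            /*
             * Put the i_private reference here, at eviction, rather
             * than when the entry is removed from the file system.
             */
            if (inode->i_private)
                    aa_put_loaddata(inode->i_private);
    }

    static const struct super_operations aafs_sops = {
            .evict_inode = aafs_evict_inode,
    };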
| In the Linux kernel, the following vulnerability has been resolved:
apparmor: fix race on rawdata dereference
There is a race condition that leads to a use-after-free situation:
because the rawdata inodes are not refcounted, an attacker can start
open()ing one of the rawdata files, and at the same time remove the
last reference to this rawdata (by removing the corresponding profile,
for example), which frees its struct aa_loaddata; as a result, when
seq_rawdata_open() is reached, i_private is a dangling pointer and
freed memory is accessed.
The rawdata inodes weren't refcounted to avoid a circular refcount and
were supposed to be held by the profile rawdata reference. However,
during profile removal there is a window where the vfs and profile
destruction race, resulting in the use-after-free.
Fix this by moving to a double refcount scheme, where the profile
refcount on the rawdata is used to break the circular dependency,
allowing the rawdata to be freed once all inode references to it
are put. |
| In the Linux kernel, the following vulnerability has been resolved:
apparmor: Fix double free of ns_name in aa_replace_profiles()
If ns_name is NULL after
1071 error = aa_unpack(udata, &lh, &ns_name);
and if ent->ns_name contains an ns_name in
1089 } else if (ent->ns_name) {
then ns_name is assigned the ent->ns_name
1095 ns_name = ent->ns_name;
However, ent->ns_name is freed at
1262 aa_load_ent_free(ent);
and then freed again via ns_name at
1270 kfree(ns_name);
Fix this by NULLing out ent->ns_name after it is transferred to ns_name. |
| In the Linux kernel, the following vulnerability has been resolved:
apparmor: fix missing bounds check on DEFAULT table in verify_dfa()
The verify_dfa() function only checks DEFAULT_TABLE bounds when the state
is not differentially encoded.
When the verification loop traverses the differential encoding chain,
it reads k = DEFAULT_TABLE[j] and uses k as an array index without
validation. A malformed DFA with DEFAULT_TABLE[j] >= state_count
therefore causes both out-of-bounds reads and writes.
[ 57.179855] ==================================================================
[ 57.180549] BUG: KASAN: slab-out-of-bounds in verify_dfa+0x59a/0x660
[ 57.180904] Read of size 4 at addr ffff888100eadec4 by task su/993
[ 57.181554] CPU: 1 UID: 0 PID: 993 Comm: su Not tainted 6.19.0-rc7-next-20260127 #1 PREEMPT(lazy)
[ 57.181558] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 57.181563] Call Trace:
[ 57.181572] <TASK>
[ 57.181577] dump_stack_lvl+0x5e/0x80
[ 57.181596] print_report+0xc8/0x270
[ 57.181605] ? verify_dfa+0x59a/0x660
[ 57.181608] kasan_report+0x118/0x150
[ 57.181620] ? verify_dfa+0x59a/0x660
[ 57.181623] verify_dfa+0x59a/0x660
[ 57.181627] aa_dfa_unpack+0x1610/0x1740
[ 57.181629] ? __kmalloc_cache_noprof+0x1d0/0x470
[ 57.181640] unpack_pdb+0x86d/0x46b0
[ 57.181647] ? srso_alias_return_thunk+0x5/0xfbef5
[ 57.181653] ? srso_alias_return_thunk+0x5/0xfbef5
[ 57.181656] ? aa_unpack_nameX+0x1a8/0x300
[ 57.181659] aa_unpack+0x20b0/0x4c30
[ 57.181662] ? srso_alias_return_thunk+0x5/0xfbef5
[ 57.181664] ? stack_depot_save_flags+0x33/0x700
[ 57.181681] ? kasan_save_track+0x4f/0x80
[ 57.181683] ? kasan_save_track+0x3e/0x80
[ 57.181686] ? __kasan_kmalloc+0x93/0xb0
[ 57.181688] ? __kvmalloc_node_noprof+0x44a/0x780
[ 57.181693] ? aa_simple_write_to_buffer+0x54/0x130
[ 57.181697] ? policy_update+0x154/0x330
[ 57.181704] aa_replace_profiles+0x15a/0x1dd0
[ 57.181707] ? srso_alias_return_thunk+0x5/0xfbef5
[ 57.181710] ? __kvmalloc_node_noprof+0x44a/0x780
[ 57.181712] ? aa_loaddata_alloc+0x77/0x140
[ 57.181715] ? srso_alias_return_thunk+0x5/0xfbef5
[ 57.181717] ? _copy_from_user+0x2a/0x70
[ 57.181730] policy_update+0x17a/0x330
[ 57.181733] profile_replace+0x153/0x1a0
[ 57.181735] ? rw_verify_area+0x93/0x2d0
[ 57.181740] vfs_write+0x235/0xab0
[ 57.181745] ksys_write+0xb0/0x170
[ 57.181748] do_syscall_64+0x8e/0x660
[ 57.181762] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 57.181765] RIP: 0033:0x7f6192792eb2
Remove the MATCH_FLAG_DIFF_ENCODE condition to validate all DEFAULT_TABLE
entries unconditionally. |
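A hedged sketch of the resulting check; the table layout and names
(default_table, state_count) are illustrative stand-ins for the kernel's
accessors, not the verify_dfa() code verbatim.

    static bool verify_default_table(const u32 *default_table,
                                     size_t table_len, size_t state_count)
    {
            size_t j;

            /*
             * Validate every entry unconditionally. The bug gated this
             * on the state not being differentially encoded, leaving
             * entries reached through the diff-encode chain unchecked.
             */
            for (j = 0; j < table_len; j++) {
                    if (default_table[j] >= state_count)
                            return false;   /* malformed DFA */
            }
            return true;
    }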
| In the Linux kernel, the following vulnerability has been resolved:
apparmor: fix side-effect bug in match_char() macro usage
The match_char() macro evaluates its character parameter multiple
times when traversing differential encoding chains. When invoked
with *str++, the string pointer advances on each iteration of the
inner do-while loop, causing the DFA to check different characters
at each iteration and therefore skip input characters.
This results in out-of-bounds reads when the pointer advances past
the input buffer boundary.
[ 94.984676] ==================================================================
[ 94.985301] BUG: KASAN: slab-out-of-bounds in aa_dfa_match+0x5ae/0x760
[ 94.985655] Read of size 1 at addr ffff888100342000 by task file/976
[ 94.986319] CPU: 7 UID: 1000 PID: 976 Comm: file Not tainted 6.19.0-rc7-next-20260127 #1 PREEMPT(lazy)
[ 94.986322] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 94.986329] Call Trace:
[ 94.986341] <TASK>
[ 94.986347] dump_stack_lvl+0x5e/0x80
[ 94.986374] print_report+0xc8/0x270
[ 94.986384] ? aa_dfa_match+0x5ae/0x760
[ 94.986388] kasan_report+0x118/0x150
[ 94.986401] ? aa_dfa_match+0x5ae/0x760
[ 94.986405] aa_dfa_match+0x5ae/0x760
[ 94.986408] __aa_path_perm+0x131/0x400
[ 94.986418] aa_path_perm+0x219/0x2f0
[ 94.986424] apparmor_file_open+0x345/0x570
[ 94.986431] security_file_open+0x5c/0x140
[ 94.986442] do_dentry_open+0x2f6/0x1120
[ 94.986450] vfs_open+0x38/0x2b0
[ 94.986453] ? may_open+0x1e2/0x2b0
[ 94.986466] path_openat+0x231b/0x2b30
[ 94.986469] ? __x64_sys_openat+0xf8/0x130
[ 94.986477] do_file_open+0x19d/0x360
[ 94.986487] do_sys_openat2+0x98/0x100
[ 94.986491] __x64_sys_openat+0xf8/0x130
[ 94.986499] do_syscall_64+0x8e/0x660
[ 94.986515] ? count_memcg_events+0x15f/0x3c0
[ 94.986526] ? srso_alias_return_thunk+0x5/0xfbef5
[ 94.986540] ? handle_mm_fault+0x1639/0x1ef0
[ 94.986551] ? vma_start_read+0xf0/0x320
[ 94.986558] ? srso_alias_return_thunk+0x5/0xfbef5
[ 94.986561] ? srso_alias_return_thunk+0x5/0xfbef5
[ 94.986563] ? fpregs_assert_state_consistent+0x50/0xe0
[ 94.986572] ? srso_alias_return_thunk+0x5/0xfbef5
[ 94.986574] ? arch_exit_to_user_mode_prepare+0x9/0xb0
[ 94.986587] ? srso_alias_return_thunk+0x5/0xfbef5
[ 94.986588] ? irqentry_exit+0x3c/0x590
[ 94.986595] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 94.986597] RIP: 0033:0x7fda4a79c3ea
Fix by extracting the character value before invoking match_char,
ensuring a single evaluation per outer-loop iteration. |
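The underlying pitfall is classic macro multiple evaluation. The
self-contained toy below is not the AppArmor macro; it only demonstrates
why passing *str++ into a macro whose body re-evaluates its argument
skips input characters, while hoisting the character into a local
evaluates it exactly once.

    #include <stdio.h>

    /* Toy stand-in for a macro that evaluates its argument repeatedly. */
    #define MATCH_CHAR(state, ch)                   \
            do {                                    \
                    while ((state) % 3)             \
                            (state) += (ch);        \
            } while (0)

    int main(void)
    {
            const char *str = "abc";
            int state = 1;

            /* Buggy: MATCH_CHAR(state, *str++) would advance the
             * pointer once per INNER iteration of the do-while. */

            /* Fixed: evaluate the character once per OUTER iteration. */
            while (*str) {
                    int c = *str++;         /* single evaluation */
                    MATCH_CHAR(state, c);
            }
            printf("state=%d\n", state);
            return 0;
    }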
| In the Linux kernel, the following vulnerability has been resolved:
KVM: x86/mmu: Only WARN in direct MMUs when overwriting shadow-present SPTE
Adjust KVM's sanity check against overwriting a shadow-present SPTE with
another SPTE with a different target PFN to only apply to direct MMUs,
i.e. only to MMUs without shadowed gPTEs. While it's impossible for KVM
to overwrite a shadow-present SPTE in response to a guest write, writes
from outside the scope of KVM, e.g. from host userspace, aren't detected
by KVM's write tracking and so can break KVM's shadow paging rules.
------------[ cut here ]------------
pfn != spte_to_pfn(*sptep)
WARNING: arch/x86/kvm/mmu/mmu.c:3069 at mmu_set_spte+0x1e4/0x440 [kvm], CPU#0: vmx_ept_stale_r/872
Modules linked in: kvm_intel kvm irqbypass
CPU: 0 UID: 1000 PID: 872 Comm: vmx_ept_stale_r Not tainted 7.0.0-rc2-eafebd2d2ab0-sink-vm #319 PREEMPT
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
RIP: 0010:mmu_set_spte+0x1e4/0x440 [kvm]
Call Trace:
<TASK>
ept_page_fault+0x535/0x7f0 [kvm]
kvm_mmu_do_page_fault+0xee/0x1f0 [kvm]
kvm_mmu_page_fault+0x8d/0x620 [kvm]
vmx_handle_exit+0x18c/0x5a0 [kvm_intel]
kvm_arch_vcpu_ioctl_run+0xc55/0x1c20 [kvm]
kvm_vcpu_ioctl+0x2d5/0x980 [kvm]
__x64_sys_ioctl+0x8a/0xd0
do_syscall_64+0xb5/0x730
entry_SYSCALL_64_after_hwframe+0x4b/0x53
</TASK>
---[ end trace 0000000000000000 ]--- |
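A hedged sketch of the narrowed assertion; is_shadow_present_pte(),
spte_to_pfn() and sp->role.direct exist in the KVM MMU, but the condition
below approximates the fix rather than quoting it.

    /*
     * Only assert PFN stability for direct MMUs: with shadowed gPTEs,
     * writes from outside KVM (e.g. host userspace) can legally change
     * the translation without being caught by write tracking.
     */
    if (sp->role.direct)
            WARN_ON_ONCE(is_shadow_present_pte(*sptep) &&
                         pfn != spte_to_pfn(*sptep));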
| In the Linux kernel, the following vulnerability has been resolved:
rust_binder: call set_notification_done() without proc lock
Consider the following sequence of events on a death listener:
1. The remote process dies and sends a BR_DEAD_BINDER message.
2. The local process invokes the BC_CLEAR_DEATH_NOTIFICATION command.
3. The local process then invokes the BC_DEAD_BINDER_DONE.
Then, the kernel will reply to the BC_DEAD_BINDER_DONE command with a
BR_CLEAR_DEATH_NOTIFICATION_DONE reply using push_work_if_looper().
However, this can result in a deadlock if the current thread is not a
looper. This is because dead_binder_done() still holds the proc lock
during set_notification_done(), which calls push_work_if_looper().
Normally, push_work_if_looper() takes the thread lock, which is fine to
take under the proc lock. But if the current thread is not a looper,
then it falls back to delivering the reply to the process work queue,
which involves taking the proc lock. Since the proc lock is already
held, this is a deadlock.
Fix this by releasing the proc lock during set_notification_done(). It
was not intentional that it was held during that function to begin with.
I don't think this ever happens in Android because BC_DEAD_BINDER_DONE
is only invoked in response to BR_DEAD_BINDER messages, and the kernel
always delivers BR_DEAD_BINDER to a looper. So there's no scenario where
Android userspace will call BC_DEAD_BINDER_DONE on a non-looper thread. |
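The deadlock shape is generic enough to sketch in C (the driver itself is
Rust; the names here are illustrative): a function must not be called
with a lock held if its fallback path re-acquires the same lock.

    #include <pthread.h>

    static pthread_mutex_t proc_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Fallback delivery to the process work queue needs the proc lock. */
    static void push_work_fallback(void)
    {
            pthread_mutex_lock(&proc_lock);
            /* ... enqueue the BR_CLEAR_DEATH_NOTIFICATION_DONE work ... */
            pthread_mutex_unlock(&proc_lock);
    }

    static void dead_binder_done(void)
    {
            pthread_mutex_lock(&proc_lock);
            /* ... bookkeeping that genuinely needs the proc lock ... */
            pthread_mutex_unlock(&proc_lock);

            /* The fix: call this only after dropping the lock; the old
             * code reached push_work_fallback() with proc_lock held. */
            push_work_fallback();
    }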
| In the Linux kernel, the following vulnerability has been resolved:
nf_tables: nft_dynset: fix possible stateful expression memleak in error path
If cloning the second stateful expression in the element via GFP_ATOMIC
fails, then the first stateful expression remains in place without being
released.
unreferenced object (percpu) 0x607b97e9cab8 (size 16):
comm "softirq", pid 0, jiffies 4294931867
hex dump (first 16 bytes on cpu 3):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
backtrace (crc 0):
pcpu_alloc_noprof+0x453/0xd80
nft_counter_clone+0x9c/0x190 [nf_tables]
nft_expr_clone+0x8f/0x1b0 [nf_tables]
nft_dynset_new+0x2cb/0x5f0 [nf_tables]
nft_rhash_update+0x236/0x11c0 [nf_tables]
nft_dynset_eval+0x11f/0x670 [nf_tables]
nft_do_chain+0x253/0x1700 [nf_tables]
nft_do_chain_ipv4+0x18d/0x270 [nf_tables]
nf_hook_slow+0xaa/0x1e0
ip_local_deliver+0x209/0x330 |
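The leak and its fix follow a generic clone-then-unwind pattern; the
sketch below is a self-contained illustration, not the nft_dynset code.

    #include <stdlib.h>

    struct expr { int data; };

    static struct expr *expr_clone(const struct expr *src)
    {
            struct expr *e = malloc(sizeof(*e));    /* may fail */

            if (e)
                    *e = *src;
            return e;
    }

    /* Clone n expressions; on failure, release the clones already made. */
    static int clone_exprs(struct expr **dst, struct expr *const *src, int n)
    {
            int i;

            for (i = 0; i < n; i++) {
                    dst[i] = expr_clone(src[i]);
                    if (!dst[i])
                            goto err;
            }
            return 0;
    err:
            /* The fix: unwind the earlier clones instead of leaking them. */
            while (--i >= 0) {
                    free(dst[i]);
                    dst[i] = NULL;
            }
            return -1;
    }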
| In the Linux kernel, the following vulnerability has been resolved:
bridge: cfm: Fix race condition in peer_mep deletion
When a peer MEP is being deleted, cancel_delayed_work_sync() is called
on ccm_rx_dwork before freeing. However, br_cfm_frame_rx() runs in
softirq context under rcu_read_lock (without RTNL) and can re-schedule
ccm_rx_dwork via ccm_rx_timer_start() between cancel_delayed_work_sync()
returning and kfree_rcu() being called.
The following is a simple race scenario:
cpu0 cpu1
mep_delete_implementation()
cancel_delayed_work_sync(ccm_rx_dwork);
br_cfm_frame_rx()
// peer_mep still in hlist
if (peer_mep->ccm_defect)
ccm_rx_timer_start()
queue_delayed_work(ccm_rx_dwork)
hlist_del_rcu(&peer_mep->head);
kfree_rcu(peer_mep, rcu);
ccm_rx_work_expired()
// on freed peer_mep
To prevent this, cancel_delayed_work_sync() is replaced with
disable_delayed_work_sync() in both peer MEP deletion paths, so
that subsequent queue_delayed_work() calls from br_cfm_frame_rx()
are silently rejected.
The cc_peer_disable() helper retains cancel_delayed_work_sync()
because it is also used for the CC enable/disable toggle path where
the work must remain re-schedulable. |
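For context: cancel_delayed_work_sync() waits for the work but leaves it
re-queueable, whereas disable_delayed_work_sync() also marks it disabled
so later queue_delayed_work() calls become no-ops. A hedged sketch of the
deletion path (the structure is approximated, not quoted):

    static void peer_mep_delete(struct br_cfm_peer_mep *peer_mep)
    {
            /*
             * Flush the work AND reject any later queue_delayed_work()
             * from the softirq RX path, so the work cannot be re-armed
             * on an object that is about to be freed.
             */
            disable_delayed_work_sync(&peer_mep->ccm_rx_dwork);

            hlist_del_rcu(&peer_mep->head);
            kfree_rcu(peer_mep, rcu);
    }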
| In the Linux kernel, the following vulnerability has been resolved:
netfilter: nf_tables: release flowtable after rcu grace period on error
Call synchronize_rcu() after unregistering the hooks from error path,
since a hook that already refers to this flowtable can be already
registered, exposing this flowtable to packet path and nfnetlink_hook
control plane.
This error path is rare: it should only be reached by hitting the maximum
number of hooks or by failing to set up hardware offload, so just call
synchronize_rcu().
There is a check for device hooks already used by a different flowtable
that can result in EEXIST at this late stage. The hook parser could be
updated to perform this check earlier so that this error path becomes
even more rarely exercised.
Uncovered by KASAN, reported as a use-after-free from the nfnetlink_hook
path when dumping hooks. |
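A hedged sketch of the error-path ordering (helper names approximate
nf_tables; treat this as illustrative):

    static void flowtable_hooks_err(struct net *net,
                                    struct nft_flowtable *flowtable)
    {
            nft_unregister_flowtable_net_hooks(net, &flowtable->hook_list);
            /*
             * Hooks already registered may have exposed this flowtable
             * to the packet path and to nfnetlink_hook dumps; wait out
             * the RCU grace period before the memory can be released.
             */
            synchronize_rcu();
    }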
| In the Linux kernel, the following vulnerability has been resolved:
ice: Fix memory leak in ice_set_ringparam()
In ice_set_ringparam, tx_rings and xdp_rings are allocated before
rx_rings. If the allocation of rx_rings fails, the code jumps to
the done label, leaking both tx_rings and xdp_rings. Furthermore, if
the setup of an individual Rx ring fails during the loop, the code jumps
to the free_tx label, which releases tx_rings but leaks xdp_rings.
Fix this by introducing a free_xdp label and updating the error paths to
ensure both xdp_rings and tx_rings are properly freed if rx_rings
allocation or setup fails.
Compile tested only. Issue found using a prototype static analysis tool
and code review. |
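The fix is a standard goto-unwind ladder; in the sketch below the types
and helper names are hypothetical, and only the label structure reflects
the description above.

    static int set_ringparam_sketch(struct ice_vsi *vsi)
    {
            struct ice_ring *tx_rings, *xdp_rings, *rx_rings;
            int err = -ENOMEM;

            tx_rings = alloc_tx_rings(vsi);
            if (!tx_rings)
                    goto done;

            xdp_rings = alloc_xdp_rings(vsi);
            if (!xdp_rings)
                    goto free_tx;

            rx_rings = alloc_rx_rings(vsi);
            if (!rx_rings)
                    goto free_xdp;  /* was "goto done": leaked tx and xdp */

            return swap_in_rings(vsi, tx_rings, xdp_rings, rx_rings);

    free_xdp:
            free_xdp_rings(xdp_rings);      /* new label added by the fix */
    free_tx:
            free_tx_rings(tx_rings);
    done:
            return err;
    }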
| In the Linux kernel, the following vulnerability has been resolved:
Squashfs: check metadata block offset is within range
Syzkaller reports a "general protection fault in squashfs_copy_data".
This is ultimately caused by a corrupted index look-up table, which
produces a negative metadata block offset.
This is subsequently passed to squashfs_copy_data (via
squashfs_read_metadata) where the negative offset causes an out of bounds
access.
The fix is to check that the offset is within range in
squashfs_read_metadata. This will trap this and other cases. |
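A hedged sketch of the added check; SQUASHFS_METADATA_SIZE is the real
upper bound on a metadata block, but the surrounding code shape is
assumed, not quoted.

    /* In squashfs_read_metadata(), before the offset is used: */
    if (*offset < 0 || *offset >= SQUASHFS_METADATA_SIZE)
            return -EIO;    /* corrupted table: offset out of range */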
| In the Linux kernel, the following vulnerability has been resolved:
pinctrl: cirrus: cs42l43: Fix double-put in cs42l43_pin_probe()
devm_add_action_or_reset() already invokes the action on failure,
so the explicit put causes a double-put. |
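For reference, the devm_add_action_or_reset() contract that makes the
explicit put wrong (the action and argument names here are hypothetical):

    /*
     * devm_add_action_or_reset() invokes the action itself when it
     * fails to register it, so the error path must not repeat the put.
     */
    ret = devm_add_action_or_reset(dev, cs42l43_pm_put_action, priv);
    if (ret)
            return ret;     /* no manual put here: that double-puts */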
| In the Linux kernel, the following vulnerability has been resolved:
gve: fix incorrect buffer cleanup in gve_tx_clean_pending_packets for QPL
In DQ-QPL mode, gve_tx_clean_pending_packets() incorrectly uses the RDA
buffer cleanup path. It iterates num_bufs times and attempts to unmap
entries in the dma array.
This leads to two issues:
1. The dma array shares storage with tx_qpl_buf_ids (union).
Interpreting buffer IDs as DMA addresses results in attempting to
unmap incorrect memory locations.
2. num_bufs in QPL mode (counting 2K chunks) can significantly exceed
the size of the dma array, causing out-of-bounds access warnings
(trace below is how we noticed this issue).
UBSAN: array-index-out-of-bounds in
drivers/net/ethernet/google/gve/gve_tx_dqo.c:178:5 index 18 is out of
range for type 'dma_addr_t[18]' (aka 'unsigned long long[18]')
Workqueue: gve gve_service_task [gve]
Call Trace:
<TASK>
dump_stack_lvl+0x33/0xa0
__ubsan_handle_out_of_bounds+0xdc/0x110
gve_tx_stop_ring_dqo+0x182/0x200 [gve]
gve_close+0x1be/0x450 [gve]
gve_reset+0x99/0x120 [gve]
gve_service_task+0x61/0x100 [gve]
process_scheduled_works+0x1e9/0x380
Fix this by properly checking for QPL mode and delegating to
gve_free_tx_qpl_bufs() to reclaim the buffers. |
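A hedged sketch of the dispatch the fix introduces; gve_free_tx_qpl_bufs()
is named in the entry, while the surrounding field and helper names only
approximate the driver.

    if (tx->dqo.qpl) {
            /*
             * QPL mode: the union holds tx_qpl_buf_ids, not DMA
             * addresses, and num_bufs counts 2K chunks that can
             * exceed the size of the dma array.
             */
            gve_free_tx_qpl_bufs(tx, pending_packet);
    } else {
            /* RDA mode: the union really holds DMA addresses. */
            gve_unmap_packet(tx->dev, pending_packet);
    }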
| In the Linux kernel, the following vulnerability has been resolved:
netfilter: nf_tables: clone set on flush only
Syzbot with fault injection triggered a failing memory allocation with
GFP_KERNEL which results in a WARN splat:
iter.err
WARNING: net/netfilter/nf_tables_api.c:845 at nft_map_deactivate+0x34e/0x3c0 net/netfilter/nf_tables_api.c:845, CPU#0: syz.0.17/5992
Modules linked in:
CPU: 0 UID: 0 PID: 5992 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
RIP: 0010:nft_map_deactivate+0x34e/0x3c0 net/netfilter/nf_tables_api.c:845
Code: 8b 05 86 5a 4e 09 48 3b 84 24 a0 00 00 00 75 62 48 8d 65 d8 5b 41 5c 41 5d 41 5e 41 5f 5d c3 cc cc cc cc cc e8 63 6d fa f7 90 <0f> 0b 90 43 80 7c 35 00 00 0f 85 23 fe ff ff e9 26 fe ff ff 89 d9
RSP: 0018:ffffc900045af780 EFLAGS: 00010293
RAX: ffffffff89ca45bd RBX: 00000000fffffff4 RCX: ffff888028111e40
RDX: 0000000000000000 RSI: 00000000fffffff4 RDI: 0000000000000000
RBP: ffffc900045af870 R08: 0000000000400dc0 R09: 00000000ffffffff
R10: dffffc0000000000 R11: fffffbfff1d141db R12: ffffc900045af7e0
R13: 1ffff920008b5f24 R14: dffffc0000000000 R15: ffffc900045af920
FS: 000055557a6a5500(0000) GS:ffff888125496000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fb5ea271fc0 CR3: 000000003269e000 CR4: 00000000003526f0
Call Trace:
<TASK>
__nft_release_table+0xceb/0x11f0 net/netfilter/nf_tables_api.c:12115
nft_rcv_nl_event+0xc25/0xdb0 net/netfilter/nf_tables_api.c:12187
notifier_call_chain+0x19d/0x3a0 kernel/notifier.c:85
blocking_notifier_call_chain+0x6a/0x90 kernel/notifier.c:380
netlink_release+0x123b/0x1ad0 net/netlink/af_netlink.c:761
__sock_release net/socket.c:662 [inline]
sock_close+0xc3/0x240 net/socket.c:1455
Restrict set clone to the flush set command in the preparation phase.
Add NFT_ITER_UPDATE_CLONE and use it for this purpose, update the rbtree
and pipapo backends to only clone the set when this iteration type is
used.
As for the existing NFT_ITER_UPDATE type, update the pipapo backend to
use the existing set clone if available, otherwise use the existing set
representation. After this update, there is no need to clone a set that
is being deleted; this includes bound anonymous sets.
An alternative approach to NFT_ITER_UPDATE_CLONE is to add a .clone
interface and call it from the flush set path. |
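A hedged sketch of how a backend might dispatch on the new iteration
type; the pipapo clone/match fields exist, but this control flow and the
pipapo_maybe_clone() helper are assumptions based on the description, not
the patch itself.

    switch (iter->type) {
    case NFT_ITER_UPDATE_CLONE:
            /* Flush: operate on a clone so deactivation is transactional. */
            m = pipapo_maybe_clone(set);
            break;
    case NFT_ITER_UPDATE:
            /* Use an existing clone if one is around, else the live set. */
            m = priv->clone ? priv->clone : rcu_dereference(priv->match);
            break;
    }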
| In the Linux kernel, the following vulnerability has been resolved:
RDMA/ionic: Fix kernel stack leak in ionic_create_cq()
struct ionic_cq_resp resp {
__u32 cqid[2]; // offset 0 - PARTIALLY SET (see below)
__u8 udma_mask; // offset 8 - SET (resp.udma_mask = vcq->udma_mask)
__u8 rsvd[7]; // offset 9 - NEVER SET <- LEAK
};
rsvd[7]: 7 bytes of stack memory leaked unconditionally.
cqid[2]: The loop at line 1256 iterates over udma_idx but skips indices
where !(vcq->udma_mask & BIT(udma_idx)). The array has 2 entries but
udma_count could be 1, meaning cqid[1] might never be written via
ionic_create_cq_common(). If udma_mask only has bit 0 set, cqid[1] (4
bytes) is also leaked, for a potential total of 11 leaked bytes. |
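The usual fix for this class of leak is zero-initialization; a sketch of
the pattern, which the entry implies but does not quote:

    struct ionic_cq_resp resp = {}; /* zero rsvd[] and both cqid slots */

    resp.udma_mask = vcq->udma_mask;
    for (udma_idx = 0; udma_idx < udma_count; udma_idx++) {
            if (!(vcq->udma_mask & BIT(udma_idx)))
                    continue;       /* cqid[udma_idx] stays zero */
            /* ionic_create_cq_common() fills resp.cqid[udma_idx] */
    }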
| In the Linux kernel, the following vulnerability has been resolved:
bpf, arm64: Force 8-byte alignment for JIT buffer to prevent atomic tearing
struct bpf_plt contains a u64 target field. Currently, the BPF JIT
allocator requests an alignment of 4 bytes (sizeof(u32)) for the JIT
buffer.
Because the base address of the JIT buffer can be 4-byte aligned (e.g.,
ending in 0x4 or 0xc), the relative padding logic in build_plt() fails
to ensure that target lands on an 8-byte boundary.
This leads to two issues:
1. UBSAN reports misaligned-access warnings when dereferencing the
structure.
2. More critically, target is updated concurrently via WRITE_ONCE() in
bpf_arch_text_poke() while the JIT'd code executes ldr. On arm64,
64-bit loads/stores are only guaranteed to be single-copy atomic if
they are 64-bit aligned. A misaligned target risks a torn read,
causing the JIT to jump to a corrupted address.
Fix this by increasing the allocation alignment requirement to 8 bytes
(sizeof(u64)) in bpf_jit_binary_pack_alloc(). This anchors the base of
the JIT buffer to an 8-byte boundary, allowing the relative padding math
in build_plt() to correctly align the target field. |
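A hedged sketch of the call-site change; the argument order only
approximates the arm64 JIT's use of bpf_jit_binary_pack_alloc().

    /*
     * Raising the alignment from sizeof(u32) to sizeof(u64) anchors
     * the buffer base so build_plt()'s padding puts `target` on an
     * 8-byte boundary, keeping WRITE_ONCE()/ldr single-copy atomic.
     */
    ro_header = bpf_jit_binary_pack_alloc(image_size, &ro_image_ptr,
                                          sizeof(u64), /* was sizeof(u32) */
                                          &header, &image_ptr,
                                          jit_fill_hole);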
| In the Linux kernel, the following vulnerability has been resolved:
HID: Add HID_CLAIMED_INPUT guards in raw_event callbacks missing them
In commit 2ff5baa9b527 ("HID: appleir: Fix potential NULL dereference at
raw event handle"), we handle the fact that raw event callbacks
can fire even for a HID device that has not been "claimed", causing a
crash if a broken device is connected to the system.
Fix up the remaining in-tree HID drivers that forgot to add this same
check to resolve the same issue. |
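The guard in question is a one-line check at the top of each raw_event
callback; hdev->claimed and HID_CLAIMED_INPUT are the real fields, while
the driver name below is a placeholder.

    static int foo_raw_event(struct hid_device *hdev,
                             struct hid_report *report, u8 *data, int size)
    {
            /* Ignore events for a device whose input was never claimed. */
            if (!(hdev->claimed & HID_CLAIMED_INPUT))
                    return 0;

            /* ... normal handling that touches hid input state ... */
            return 0;
    }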
| In the Linux kernel, the following vulnerability has been resolved:
net: bridge: fix nd_tbl NULL dereference when IPv6 is disabled
When booting with the 'ipv6.disable=1' parameter, the nd_tbl is never
initialized because inet6_init() exits before calling ndisc_init(),
which initializes it. Then, if neigh_suppress is enabled and an ICMPv6
Neighbor Discovery packet reaches the bridge, br_do_suppress_nd() will
dereference ipv6_stub->nd_tbl which is NULL, passing it to
neigh_lookup(). This causes a kernel NULL pointer dereference.
BUG: kernel NULL pointer dereference, address: 0000000000000268
Oops: 0000 [#1] PREEMPT SMP NOPTI
[...]
RIP: 0010:neigh_lookup+0x16/0xe0
[...]
Call Trace:
<IRQ>
? neigh_lookup+0x16/0xe0
br_do_suppress_nd+0x160/0x290 [bridge]
br_handle_frame_finish+0x500/0x620 [bridge]
br_handle_frame+0x353/0x440 [bridge]
__netif_receive_skb_core.constprop.0+0x298/0x1110
__netif_receive_skb_one_core+0x3d/0xa0
process_backlog+0xa0/0x140
__napi_poll+0x2c/0x170
net_rx_action+0x2c4/0x3a0
handle_softirqs+0xd0/0x270
do_softirq+0x3f/0x60
Fix this by replacing the IS_ENABLED(CONFIG_IPV6) check with
ipv6_mod_enabled() in the callers. In essence, this disables NS/NA
suppression when IPv6 is disabled. |
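A hedged sketch of the call-site change (the surrounding condition is
approximated): ipv6_mod_enabled() is a runtime check, so it sees
'ipv6.disable=1', which a compile-time IS_ENABLED(CONFIG_IPV6) cannot.

    /* was: if (IS_ENABLED(CONFIG_IPV6) && ...) */
    if (ipv6_mod_enabled() &&
        br_opt_get(br, BROPT_NEIGH_SUPPRESS_ENABLED))
            br_do_suppress_nd(skb, br, vid, p, msg);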
| In the Linux kernel, the following vulnerability has been resolved:
tracing: Fix WARN_ON in tracing_buffers_mmap_close
When a process forks, the child process copies the parent's VMAs but the
user_mapped reference count is not incremented. As a result, when both the
parent and child processes exit, tracing_buffers_mmap_close() is called
twice. On the second call, user_mapped is already 0, causing the function to
return -ENODEV and triggering a WARN_ON.
Normally, this isn't an issue as the memory is mapped with VM_DONTCOPY set.
But this is only a hint, and the application can call
madvise(MADV_DOFORK), which clears the VM_DONTCOPY flag. When the
application does that, it can trigger this issue on fork.
Fix it by incrementing the user_mapped reference count without re-mapping
the pages in the VMA's open callback. |
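A hedged sketch of the VMA open callback the fix adds; the kernel invokes
vm_ops->open whenever a VMA is duplicated (e.g., on fork), and the
reference-taking helper named here is hypothetical.

    static void tracing_buffers_mmap_open(struct vm_area_struct *vma)
    {
            /*
             * The child inherits the VMA without the pages being
             * remapped; take a user_mapped reference so the parent's
             * and the child's close callbacks each have one to drop.
             */
            get_user_mapped_ref(vma->vm_file->private_data);
    }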