In the Linux kernel, the following vulnerability has been resolved:
net: bonding: Fix nd_tbl NULL dereference when IPv6 is disabled
When booting with the 'ipv6.disable=1' parameter, the nd_tbl is never
initialized, because inet6_init() exits before calling ndisc_init(),
which would initialize it. If bonding ARP/NS validation is enabled, an IPv6
NS/NA packet received on a slave can reach bond_validate_na(), which
calls bond_has_this_ip6(). That path calls ipv6_chk_addr() and can
crash in __ipv6_chk_addr_and_flags().
BUG: kernel NULL pointer dereference, address: 00000000000005d8
Oops: Oops: 0000 [#1] SMP NOPTI
RIP: 0010:__ipv6_chk_addr_and_flags+0x69/0x170
Call Trace:
<IRQ>
ipv6_chk_addr+0x1f/0x30
bond_validate_na+0x12e/0x1d0 [bonding]
? __pfx_bond_handle_frame+0x10/0x10 [bonding]
bond_rcv_validate+0x1a0/0x450 [bonding]
bond_handle_frame+0x5e/0x290 [bonding]
? srso_alias_return_thunk+0x5/0xfbef5
__netif_receive_skb_core.constprop.0+0x3e8/0xe50
? srso_alias_return_thunk+0x5/0xfbef5
? update_cfs_rq_load_avg+0x1a/0x240
? srso_alias_return_thunk+0x5/0xfbef5
? __enqueue_entity+0x5e/0x240
__netif_receive_skb_one_core+0x39/0xa0
process_backlog+0x9c/0x150
__napi_poll+0x30/0x200
? srso_alias_return_thunk+0x5/0xfbef5
net_rx_action+0x338/0x3b0
handle_softirqs+0xc9/0x2a0
do_softirq+0x42/0x60
</IRQ>
<TASK>
__local_bh_enable_ip+0x62/0x70
__dev_queue_xmit+0x2d3/0x1000
? srso_alias_return_thunk+0x5/0xfbef5
? srso_alias_return_thunk+0x5/0xfbef5
? packet_parse_headers+0x10a/0x1a0
packet_sendmsg+0x10da/0x1700
? kick_pool+0x5f/0x140
? srso_alias_return_thunk+0x5/0xfbef5
? __queue_work+0x12d/0x4f0
__sys_sendto+0x1f3/0x220
__x64_sys_sendto+0x24/0x30
do_syscall_64+0x101/0xf80
? exc_page_fault+0x6e/0x170
? srso_alias_return_thunk+0x5/0xfbef5
entry_SYSCALL_64_after_hwframe+0x77/0x7f
</TASK>
Fix this by checking ipv6_mod_enabled() before dispatching IPv6 packets to
bond_na_rcv(). If IPv6 is disabled, return early from bond_rcv_validate()
and avoid the path to ipv6_chk_addr().
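
A minimal C sketch of the guard described above, assuming the dispatch
happens in bond_rcv_validate() as the commit text says; the surrounding
code and the exact return value are abbreviated/assumed:

    /* In bond_rcv_validate(), before handing IPv6 NS/NA packets to
     * bond_na_rcv(); the code around this hunk is assumed. */
    if (skb->protocol == htons(ETH_P_IPV6)) {
        /* With ipv6.disable=1, ndisc_init() never ran and nd_tbl is
         * uninitialized, so skip NS/NA validation entirely. */
        if (!ipv6_mod_enabled())
            return RX_HANDLER_ANOTHER;
        return bond_na_rcv(skb, bond, slave);
    }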

In the Linux kernel, the following vulnerability has been resolved:
sched_ext: Remove redundant css_put() in scx_cgroup_init()
The iterator css_for_each_descendant_pre() walks the cgroup hierarchy
under cgroup_lock(). It does not increment the reference counts on
yielded css structs.
According to the cgroup documentation, css_put() should only be used
to release a reference obtained via css_get() or css_tryget_online().
Since the iterator does not use either of these to acquire a reference,
calling css_put() in the error path of scx_cgroup_init() causes a
refcount underflow.
Remove the unbalanced css_put() to prevent a potential use-after-free
(UAF) vulnerability.
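
A sketch of the pattern at issue, with the iteration body abbreviated
and all names other than css_for_each_descendant_pre()/css_put()
assumed:

    /* Walks the hierarchy under cgroup_lock(); the iterator does NOT
     * take a reference on each yielded css. */
    css_for_each_descendant_pre(css, &root_css) {
        ret = init_one(css);            /* body abbreviated/assumed */
        if (ret) {
            /* Buggy error path: css_put(css) here would drop a
             * reference that was never taken, underflowing the
             * refcount and enabling a use-after-free. The fix simply
             * removes the css_put(). */
            return ret;
        }
    }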

In the Linux kernel, the following vulnerability has been resolved:
ALSA: pcm: fix use-after-free on linked stream runtime in snd_pcm_drain()
In the drain loop, the local variable 'runtime' is reassigned to a
linked stream's runtime (runtime = s->runtime at line 2157). After
releasing the stream lock at line 2169, the code accesses
runtime->no_period_wakeup, runtime->rate, and runtime->buffer_size
(lines 2170-2178) — all referencing the linked stream's runtime without
any lock or refcount protecting its lifetime.
A concurrent close() on the linked stream's fd triggers
snd_pcm_release_substream() → snd_pcm_drop() → pcm_release_private()
→ snd_pcm_unlink() → snd_pcm_detach_substream() → kfree(runtime).
No synchronization prevents kfree(runtime) from completing while the
drain path dereferences the stale pointer.
Fix by caching the needed runtime fields (no_period_wakeup, rate,
buffer_size) into local variables while still holding the stream lock,
and using the cached values after the lock is released.
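
A sketch of the caching described above; declarations and the
surrounding drain loop are abbreviated, and the types follow
struct snd_pcm_runtime as I understand it:

    /* While the stream lock is still held, the linked runtime cannot
     * be freed, so snapshot what the wait calculation needs. */
    runtime = s->runtime;
    no_period_wakeup = runtime->no_period_wakeup;
    rate = runtime->rate;
    buffer_size = runtime->buffer_size;
    snd_pcm_stream_unlock_irq(substream);

    /* From here on, only the cached copies are used; a concurrent
     * close() that kfree()s the linked runtime no longer causes a
     * use-after-free. */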

In the Linux kernel, the following vulnerability has been resolved:
rust_binder: check ownership before using vma
When installing missing pages (or zapping them), Rust Binder will look
up the vma in the mm by address, and then call vm_insert_page (or
zap_page_range_single). However, if the vma is closed and replaced with
a different vma at the same address, this can lead to Rust Binder
installing pages into the wrong vma.
By installing the page into a writable vma, it becomes possible to write
to your own binder pages, which are normally read-only. Although you're
not supposed to be able to write to those pages, the intent behind the
design of Rust Binder is that even if you get that ability, it should not
lead to anything bad. Unfortunately, due to another bug, that is not the
case.
To fix this, store a pointer in vm_private_data and check that the vma
returned by vma_lookup() has the right vm_ops and vm_private_data before
trying to use the vma. This should ensure that Rust Binder will refuse
to interact with any other VMA. The plan is to introduce more vma
abstractions to avoid this unsafe access to vm_ops and vm_private_data,
but for now let's start with the simplest possible fix.
C Binder performs the same check in a slightly different way: it
provides a vm_ops->close that sets a boolean to true, then checks that
boolean after calling vma_lookup(), but this is more fragile
than the solution in this patch. (We probably still want to do both, but
the vm_ops->close callback will be added later as part of the follow-up
vma API changes.)
It's still possible to remap the vma so that pages appear in the right
vma but at the wrong offset; this is a separate issue and will be
fixed when Rust Binder gets a vm_ops->close callback.
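
The fix itself is in Rust; what follows is a C-flavoured sketch of the
ownership check it performs. binder_vm_ops, proc, and the error code
are illustrative, not taken from the patch:

    struct vm_area_struct *vma = vma_lookup(mm, addr);

    /* Refuse any vma that is not the one Binder originally set up:
     * it must carry our vm_ops and the vm_private_data pointer we
     * stored at mmap time. */
    if (!vma || vma->vm_ops != &binder_vm_ops ||
        vma->vm_private_data != proc)
        return -ESRCH;              /* illustrative error code */

    vm_insert_page(vma, addr, page);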

In the Linux kernel, the following vulnerability has been resolved:
scsi: qla2xxx: Completely fix fcport double free
In qla24xx_els_dcmd_iocb(), sp->free is set to qla2x00_els_dcmd_sp_free().
When an error happens, this function is called by qla2x00_sp_release()
when kref_put() drops the first and last reference.
qla2x00_els_dcmd_sp_free() frees the fcport by calling qla2x00_free_fcport().
Freeing it one more time after kref_put() results in a double free.
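
A sketch of the double free being removed; the error-path details are
abbreviated, and field names are taken from the commit text where
given, otherwise assumed:

    sp->free = qla2x00_els_dcmd_sp_free;
    /* ... on error ... */
    kref_put(&sp->cmd_kref, qla2x00_sp_release);
            /* last ref: qla2x00_sp_release() -> sp->free() already
             * calls qla2x00_free_fcport(fcport) */
    qla2x00_free_fcport(fcport);   /* double free -- deleted by the fix */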

In the Linux kernel, the following vulnerability has been resolved:
ceph: add a bunch of missing ceph_path_info initializers
ceph_mdsc_build_path() must be called with a zero-initialized
ceph_path_info parameter, or else the following
ceph_mdsc_free_path_info() may crash.
Example crash (on Linux 6.18.12):
virt_to_cache: Object is not a Slab page!
WARNING: CPU: 184 PID: 2871736 at mm/slub.c:6732 kmem_cache_free+0x316/0x400
[...]
Call Trace:
[...]
ceph_open+0x13d/0x3e0
do_dentry_open+0x134/0x480
vfs_open+0x2a/0xe0
path_openat+0x9a3/0x1160
[...]
cache_from_obj: Wrong slab cache. names_cache but object is from ceph_inode_info
WARNING: CPU: 184 PID: 2871736 at mm/slub.c:6746 kmem_cache_free+0x2dd/0x400
[...]
kernel BUG at mm/slub.c:634!
Oops: invalid opcode: 0000 [#1] SMP NOPTI
RIP: 0010:__slab_free+0x1a4/0x350
Some of the ceph_mdsc_build_path() callers had initializers, but
others did not, even though they were all added by commit 15f519e9f883
("ceph: fix race condition validating r_parent before applying state").
The ones without an initializer are susceptible to random crashes. (I can
imagine it could even be possible to exploit this bug to elevate
privileges.)
Unfortunately, these Ceph functions are undocumented and their semantics
can only be derived from the code. I see that ceph_mdsc_build_path()
initializes the structure only on success, but not on error.
Calling ceph_mdsc_free_path_info() after a failed
ceph_mdsc_build_path() call does not even make sense, but that's what
all callers do, and for it to be safe, the structure must be
zero-initialized. The least intrusive approach to fix this is
therefore to add initializers everywhere.
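
A sketch of the pattern every caller now needs; the
ceph_mdsc_build_path() argument list is abbreviated:

    struct ceph_path_info path_info = { 0 };   /* added initializer */

    err = ceph_mdsc_build_path(/* ... */ &path_info /* ... */);
    /* ... */
    /* Safe even when ceph_mdsc_build_path() failed: freeing the
     * NULL/zero fields of a zeroed struct is harmless, freeing
     * uninitialized stack garbage is not. */
    ceph_mdsc_free_path_info(&path_info);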

In the Linux kernel, the following vulnerability has been resolved:
libceph: Fix potential out-of-bounds access in ceph_handle_auth_reply()
This patch fixes an out-of-bounds access in ceph_handle_auth_reply()
that can be triggered by a message of type CEPH_MSG_AUTH_REPLY. In
ceph_handle_auth_reply(), the value of the payload_len field of such a
message is stored in a variable of type int. A value greater than
INT_MAX leads to an integer overflow and is interpreted as a negative
value. This leads to decrementing the pointer address by this value and
subsequently accessing it because ceph_decode_need() only checks that
the memory access does not exceed the end address of the allocation.
This patch fixes the issue by changing the data type of payload_len to
u32. Additionally, the data type of result_msg_len is changed to u32,
as it is also a variable holding a non-negative length.
Also, an additional layer of sanity checks is introduced, ensuring that
directly after reading it from the message, payload_len and
result_msg_len are not greater than the overall segment length.
BUG: KASAN: slab-out-of-bounds in ceph_handle_auth_reply+0x642/0x7a0 [libceph]
Read of size 4 at addr ffff88811404df14 by task kworker/20:1/262
CPU: 20 UID: 0 PID: 262 Comm: kworker/20:1 Not tainted 6.19.2 #5 PREEMPT(voluntary)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Workqueue: ceph-msgr ceph_con_workfn [libceph]
Call Trace:
<TASK>
dump_stack_lvl+0x76/0xa0
print_report+0xd1/0x620
? __pfx__raw_spin_lock_irqsave+0x10/0x10
? kasan_complete_mode_report_info+0x72/0x210
kasan_report+0xe7/0x130
? ceph_handle_auth_reply+0x642/0x7a0 [libceph]
? ceph_handle_auth_reply+0x642/0x7a0 [libceph]
__asan_report_load_n_noabort+0xf/0x20
ceph_handle_auth_reply+0x642/0x7a0 [libceph]
mon_dispatch+0x973/0x23d0 [libceph]
? apparmor_socket_recvmsg+0x6b/0xa0
? __pfx_mon_dispatch+0x10/0x10 [libceph]
? __kasan_check_write+0x14/0x30
? mutex_unlock+0x7f/0xd0
? __pfx_mutex_unlock+0x10/0x10
? __pfx_do_recvmsg+0x10/0x10 [libceph]
ceph_con_process_message+0x1f1/0x650 [libceph]
process_message+0x1e/0x450 [libceph]
ceph_con_v2_try_read+0x2e48/0x6c80 [libceph]
? __pfx_ceph_con_v2_try_read+0x10/0x10 [libceph]
? save_fpregs_to_fpstate+0xb0/0x230
? raw_spin_rq_unlock+0x17/0xa0
? finish_task_switch.isra.0+0x13b/0x760
? __switch_to+0x385/0xda0
? __kasan_check_write+0x14/0x30
? mutex_lock+0x8d/0xe0
? __pfx_mutex_lock+0x10/0x10
ceph_con_workfn+0x248/0x10c0 [libceph]
process_one_work+0x629/0xf80
? __kasan_check_write+0x14/0x30
worker_thread+0x87f/0x1570
? __pfx__raw_spin_lock_irqsave+0x10/0x10
? __pfx_try_to_wake_up+0x10/0x10
? kasan_print_address_stack_frame+0x1f7/0x280
? __pfx_worker_thread+0x10/0x10
kthread+0x396/0x830
? __pfx__raw_spin_lock_irq+0x10/0x10
? __pfx_kthread+0x10/0x10
? __kasan_check_write+0x14/0x30
? recalc_sigpending+0x180/0x210
? __pfx_kthread+0x10/0x10
ret_from_fork+0x3f7/0x610
? __pfx_ret_from_fork+0x10/0x10
? __switch_to+0x385/0xda0
? __pfx_kthread+0x10/0x10
ret_from_fork_asm+0x1a/0x30
</TASK>
[ idryomov: replace if statements with ceph_decode_need() for
payload_len and result_msg_len ]
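
A sketch of the hardened decode, following the commit text and the
idryomov note; ceph_decode_32_safe() and ceph_decode_need() are the
existing libceph decode helpers, and the surrounding code is assumed:

    u32 payload_len;    /* was: int -- values > INT_MAX went negative */

    ceph_decode_32_safe(&p, end, payload_len, bad);
    /* Added sanity check: the payload must fit within the segment
     * before anything dereferences p + payload_len. */
    ceph_decode_need(&p, end, payload_len, bad);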

In the Linux kernel, the following vulnerability has been resolved:
libceph: prevent potential out-of-bounds reads in process_message_header()
If the message frame is (maliciously) corrupted in a way that the
length of the control segment ends up being less than the size of the
message header or a different frame is made to look like a message
frame, out-of-bounds reads may ensue in process_message_header().
Perform an explicit bounds check before decoding the message header.
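
A sketch of the added check; apart from process_message_header() and
struct ceph_msg_header, the names and signature are assumed:

    static int process_message_header(struct ceph_connection *con,
                                      void *p, void *end)
    {
        /* Added: a corrupted or non-message frame may carry a control
         * segment shorter than a message header -- reject it before
         * decoding fields out of bounds. */
        if (end - p < (ptrdiff_t)sizeof(struct ceph_msg_header))
            return -EINVAL;
        /* ... decode the header as before ... */
    }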

In the Linux kernel, the following vulnerability has been resolved:
libceph: Use u32 for non-negative values in ceph_monmap_decode()
This patch fixes unnecessary implicit conversions that change signedness
of blob_len and num_mon in ceph_monmap_decode().
Currently blob_len and num_mon are (signed) int variables. They are used
to hold values that are always non-negative and get assigned in
ceph_decode_32_safe(), which is meant to assign u32 values. Both
variables are subsequently used as unsigned values, and the value of
num_mon is further assigned to monmap->num_mon, which is of type u32.
Therefore, both variables should be of type u32. This is especially
relevant for num_mon. If the value read from the incoming message is
very large, it is interpreted as a negative value, and the check for
num_mon > CEPH_MAX_MON does not catch it. This leads to the attempt to
allocate a very large chunk of memory for monmap, which will most likely
fail. In this case, an unnecessary attempt to allocate memory is
performed, and -ENOMEM is returned instead of -EINVAL.
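
A sketch of the change in ceph_monmap_decode(), with the decode flow
abbreviated:

    u32 num_mon;    /* was: int -- a huge on-wire value went negative
                     * and slipped past the CEPH_MAX_MON check */

    ceph_decode_32_safe(&p, end, num_mon, bad);
    if (num_mon > CEPH_MAX_MON)   /* now also catches values > INT_MAX */
        goto bad;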

In the Linux kernel, the following vulnerability has been resolved:
nsfs: tighten permission checks for ns iteration ioctls
Even privileged services should not necessarily be able to see other
privileged services' namespaces, so that they cannot leak information
to each other. Use the may_see_all_namespaces() helper, which
centralizes this policy until the nstree adapts.
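
A sketch of the check shape; the commit text names only the helper, so
the call site and any arguments are assumed. The identical
handle-opening fix below uses the same helper:

    /* In the ns iteration ioctl handler: */
    if (!may_see_all_namespaces())   /* signature assumed */
        return -EPERM;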

In the Linux kernel, the following vulnerability has been resolved:
kthread: consolidate kthread exit paths to prevent use-after-free
Guillaume reported crashes via corrupted RCU callback function pointers
during KUnit testing. The crash was traced back to the pidfs rhashtable
conversion which replaced the 24-byte rb_node with an 8-byte rhash_head
in struct pid, shrinking it from 160 to 144 bytes.
struct kthread (without CONFIG_BLK_CGROUP) is also 144 bytes. With
CONFIG_SLAB_MERGE_DEFAULT and SLAB_HWCACHE_ALIGN, both structs round up
to 192 bytes and share the same slab cache. struct pid.rcu.func and
struct kthread.affinity_node both sit at offset 0x78.
When a kthread exits via make_task_dead() it bypasses kthread_exit() and
misses the affinity_node cleanup. free_kthread_struct() frees the memory
while the node is still linked into the global kthread_affinity_list. A
subsequent list_del() by another kthread writes through dangling list
pointers into the freed and reused memory, corrupting the pid's
rcu.func pointer.
Instead of patching free_kthread_struct() to handle the missed cleanup,
consolidate all kthread exit paths. Turn kthread_exit() into a macro
that calls do_exit() and add kthread_do_exit() which is called from
do_exit() for any task with PF_KTHREAD set. This guarantees that
kthread-specific cleanup always happens regardless of the exit path -
make_task_dead(), direct do_exit(), or kthread_exit().
Replace __to_kthread() with a new tsk_is_kthread() accessor in the
public header. Export do_exit() since module code using the
kthread_exit() macro now needs it directly.
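
A sketch of the consolidation shape described above (fragments, not a
complete patch; the kthread_do_exit() internals are assumed):

    /* kthread_exit() becomes a macro, so every kthread death funnels
     * through do_exit(): */
    #define kthread_exit(code) do_exit(code)

    /* ...and do_exit() gains the kthread-specific teardown: */
    if (current->flags & PF_KTHREAD)
        kthread_do_exit();   /* unlinks affinity_node from
                              * kthread_affinity_list, etc. */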

In the Linux kernel, the following vulnerability has been resolved:
nsfs: tighten permission checks for handle opening
Even privileged services should not necessarily be able to see other
privileged services' namespaces, so that they cannot leak information
to each other. Use the may_see_all_namespaces() helper, which
centralizes this policy until the nstree adapts.

In the Linux kernel, the following vulnerability has been resolved:
net: Fix rcu_tasks stall in threaded busypoll
While debugging a NIC driver, I noticed that with threaded busypoll
enabled, bpftrace hangs when starting up. dmesg showed:
rcu_tasks_wait_gp: rcu_tasks grace period number 85 (since boot) is 10658 jiffies old.
rcu_tasks_wait_gp: rcu_tasks grace period number 85 (since boot) is 40793 jiffies old.
rcu_tasks_wait_gp: rcu_tasks grace period number 85 (since boot) is 131273 jiffies old.
rcu_tasks_wait_gp: rcu_tasks grace period number 85 (since boot) is 402058 jiffies old.
INFO: rcu_tasks detected stalls on tasks:
00000000769f52cd: .N nvcsw: 2/2 holdout: 1 idle_cpu: -1/64
task:napi/eth2-8265 state:R running task stack:0 pid:48300 tgid:48300 ppid:2 task_flags:0x208040 flags:0x00004000
Call Trace:
<TASK>
? napi_threaded_poll_loop+0x27c/0x2c0
? __pfx_napi_threaded_poll+0x10/0x10
? napi_threaded_poll+0x26/0x80
? kthread+0xfa/0x240
? __pfx_kthread+0x10/0x10
? ret_from_fork+0x31/0x50
? __pfx_kthread+0x10/0x10
? ret_from_fork_asm+0x1a/0x30
</TASK>
The cause is that in threaded busypoll, the main loop is in
napi_threaded_poll rather than napi_threaded_poll_loop, and the
latter rarely iterates more than once per call. For
rcu_softirq_qs_periodic inside napi_threaded_poll_loop to report a
quiescent state, last_qs must be at least 100ms behind, which can't
happen because napi_threaded_poll_loop rarely iterates in threaded
busypoll, and each time it is called, last_qs is reset to the latest
jiffies.
This patch changes things so that in threaded busypoll, last_qs is
kept in the outer napi_threaded_poll, and busy_poll_last_qs being
non-NULL indicates that napi_threaded_poll_loop was called for
busypoll. This way, last_qs is no longer reset to the latest jiffies
on each invocation of napi_threaded_poll_loop.

In the Linux kernel, the following vulnerability has been resolved:
net/tcp-ao: Fix MAC comparison to be constant-time
To prevent timing attacks, MACs need to be compared in constant
time. Use the appropriate helper function for this.

In the Linux kernel, the following vulnerability has been resolved:
net/tcp-md5: Fix MAC comparison to be constant-time
To prevent timing attacks, MACs need to be compared in constant
time. Use the appropriate helper function for this.
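
For both this TCP-MD5 fix and the TCP-AO fix above, the kernel's
constant-time comparison helper is crypto_memneq() (declared in
<crypto/utils.h> in current kernels). A sketch of the changed check,
with the 16-byte length matching an MD5 digest and the variable names
assumed from the MD5 receive path:

    /* was: if (memcmp(hash_location, newhash, 16) != 0) */
    if (crypto_memneq(hash_location, newhash, 16))
        goto drop;   /* reject without leaking, via timing, how many
                      * leading bytes of the MAC matched */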

In the Linux kernel, the following vulnerability has been resolved:
ksmbd: fix use-after-free in smb_lazy_parent_lease_break_close()
The opinfo pointer obtained via rcu_dereference(fp->f_opinfo) is being
accessed after rcu_read_unlock() has been called. This creates a
race condition where the memory could be freed by a concurrent
writer between the unlock and the subsequent pointer dereferences
(opinfo->is_lease, etc.), leading to a use-after-free.
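
One safe shape for this kind of fix, using the reference-taking
pattern the call_rcu() entry below describes for opinfo_get(); the
field and helper names are assumed:

    rcu_read_lock();
    opinfo = rcu_dereference(fp->f_opinfo);
    /* Pin the object before leaving the RCU section; a zero refcount
     * means it is already being torn down. */
    if (opinfo && !atomic_inc_not_zero(&opinfo->refcount))
        opinfo = NULL;
    rcu_read_unlock();

    if (opinfo) {
        /* opinfo->is_lease etc. are now safe to dereference */
        /* ... */
        opinfo_put(opinfo);   /* name assumed */
    }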

In the Linux kernel, the following vulnerability has been resolved:
ksmbd: Don't log keys in SMB3 signing and encryption key generation
When KSMBD_DEBUG_AUTH logging is enabled, generate_smb3signingkey() and
generate_smb3encryptionkey() log the session, signing, encryption, and
decryption key bytes. Remove the logs to avoid exposing credentials.

In the Linux kernel, the following vulnerability has been resolved:
ksmbd: fix use-after-free by using call_rcu() for oplock_info
ksmbd currently frees oplock_info immediately using kfree(), even
though it is accessed under RCU read-side critical sections in places
like opinfo_get() and proc_show_files().
Since there is no RCU grace period delay between nullifying the pointer
and freeing the memory, a reader can still access the oplock_info
structure after it has been freed. This can lead to a use-after-free,
especially in opinfo_get(), where atomic_inc_not_zero() is called on
already freed memory.
Fix this by switching to deferred freeing using call_rcu().
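
A sketch of the deferred free, assuming struct oplock_info gained (or
already had) an rcu_head member:

    static void opinfo_free_rcu(struct rcu_head *rcu)
    {
        struct oplock_info *opinfo =
            container_of(rcu, struct oplock_info, rcu_head);

        kfree(opinfo);
    }

    /* was: kfree(opinfo); -- the free now waits out a grace period so
     * RCU readers in opinfo_get()/proc_show_files() can finish first */
    call_rcu(&opinfo->rcu_head, opinfo_free_rcu);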

In the Linux kernel, the following vulnerability has been resolved:
net: nexthop: fix percpu use-after-free in remove_nh_grp_entry
When removing a nexthop from a group, remove_nh_grp_entry() publishes
the new group via rcu_assign_pointer() then immediately frees the
removed entry's percpu stats with free_percpu(). However, the
synchronize_net() grace period in the caller remove_nexthop_from_groups()
runs after the free. RCU readers that entered before the publish still
see the old group and can dereference the freed stats via
nh_grp_entry_stats_inc() -> get_cpu_ptr(nhge->stats), causing a
use-after-free on percpu memory.
Fix by deferring the free_percpu() until after synchronize_net() in the
caller. Removed entries are chained via nh_list onto a local deferred
free list. After the grace period completes and all RCU readers have
finished, the percpu stats are safely freed.
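
A sketch of the deferral described above; nh_list and stats come from
the commit text, the rest is abbreviated:

    LIST_HEAD(deferred_free);

    /* remove_nh_grp_entry(): after publishing the new group via
     * rcu_assign_pointer(), chain the removed entry instead of
     * calling free_percpu() immediately. */
    list_add(&nhge->nh_list, &deferred_free);

    /* remove_nexthop_from_groups(): once the grace period has elapsed,
     * no reader can still hold the old group, so the percpu stats are
     * safe to free. */
    synchronize_net();
    list_for_each_entry_safe(nhge, tmp, &deferred_free, nh_list)
        free_percpu(nhge->stats);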

In the Linux kernel, the following vulnerability has been resolved:
net: ncsi: fix skb leak in error paths
Early return paths in NCSI RX and AEN handlers fail to release
the received skb, resulting in a memory leak.
Specifically, ncsi_aen_handler() returns on invalid AEN packets
without consuming the skb. Similarly, ncsi_rcv_rsp() exits early
when failing to resolve the NCSI device, response handler, or
request, leaving the skb unfreed.
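
A sketch of one of the error paths after the fix; ncsi_aen_handler()
is named in the commit text, but the validation condition is
abbreviated and its name assumed:

    /* In ncsi_aen_handler() / ncsi_rcv_rsp(), each early return now
     * releases the skb first: */
    if (pkt_invalid) {           /* condition name assumed */
        kfree_skb(skb);          /* was: leaked on this path */
        return -EINVAL;
    }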