| n8n is an open source workflow automation platform. Prior to versions 1.123.32, 2.17.4, and 2.18.1, the fix for GHSA-f3f2-mcxc-pwjx did not cover the Snowflake node or the legacy MySQL v1 node. Both nodes construct SQL queries by directly interpolating user-controlled table names, column names, and update keys into query strings without identifier escaping, enabling SQL injection against the connected database. This issue has been patched in versions 1.123.32, 2.17.4, and 2.18.1. |
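A minimal sketch of the underlying hazard, in C rather than n8n's TypeScript: SQL placeholders bind only values, never table or column names, so identifiers that reach the query text must be validated or escaped. The allowlist check below is illustrative, not n8n's actual fix:

    #include <ctype.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative allowlist: accept only [A-Za-z0-9_] identifiers. */
    static bool is_safe_identifier(const char *id)
    {
        if (*id == '\0')
            return false;
        for (const char *p = id; *p; p++)
            if (!isalnum((unsigned char)*p) && *p != '_')
                return false;
        return true;
    }

    int main(void)
    {
        const char *table = "users; DROP TABLE users; --"; /* attacker-controlled */
        char query[256];

        /* Vulnerable pattern: the identifier lands verbatim in the query text. */
        snprintf(query, sizeof(query), "SELECT * FROM %s", table);
        printf("unsafe query: %s\n", query);

        /* Safer pattern: reject (or properly escape) the identifier first. */
        if (!is_safe_identifier(table)) {
            fprintf(stderr, "rejected unsafe identifier\n");
            return 1;
        }
        return 0;
    }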
| n8n is an open source workflow automation platform. Prior to versions 1.123.32, 2.17.4, and 2.18.1, the MCP OAuth client registration endpoint accepted unauthenticated requests and stored client data without adequate resource controls. An unauthenticated remote attacker could exhaust server memory resources by sending large registration payloads, rendering the n8n instance unavailable. The MCP enable/disable toggle gates MCP access but did not restrict client registrations, meaning the endpoint is reachable regardless of whether MCP access is enabled on the instance. This issue has been patched in versions 1.123.32, 2.17.4, and 2.18.1. |
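A minimal C sketch of the missing resource control described above: cap how much of an untrusted registration payload the server will buffer. MAX_REGISTRATION_BYTES and accept_registration() are hypothetical names used only for illustration:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_REGISTRATION_BYTES (64 * 1024) /* illustrative limit */

    /* Returns a heap copy of the payload, or NULL if it exceeds the cap. */
    static char *accept_registration(const char *payload, size_t len)
    {
        if (len > MAX_REGISTRATION_BYTES)
            return NULL; /* refuse rather than let stored data grow unbounded */
        char *copy = malloc(len + 1);
        if (!copy)
            return NULL;
        memcpy(copy, payload, len);
        copy[len] = '\0';
        return copy;
    }

    int main(void)
    {
        const char *body = "{\"client_name\":\"example\"}";
        char *stored = accept_registration(body, strlen(body));
        printf("%s\n", stored ? "stored" : "rejected");
        free(stored);
        return 0;
    }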
| n8n is an open source workflow automation platform. Prior to versions 1.123.32, 2.17.4, and 2.18.1, an authenticated user with permission to create or modify workflows could achieve global prototype pollution via the XML node, leading to remote code execution when combined with other nodes that exploit the polluted prototype. This issue has been patched in versions 1.123.32, 2.17.4, and 2.18.1. |
| In the Linux kernel, the following vulnerability has been resolved:
drm/vc4: platform_get_irq_byname() returns an int
platform_get_irq_byname() will return a negative value if an error
happens, so it should be checked and not just passed directly into
devm_request_threaded_irq() hoping all will be ok. |
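A minimal sketch of the corrected pattern (not the actual vc4 diff), assuming a generic driver probe context; foo_probe() and foo_irq_thread() are hypothetical names:

    #include <linux/interrupt.h>
    #include <linux/platform_device.h>

    /* Placeholder threaded handler, for illustration only. */
    static irqreturn_t foo_irq_thread(int irq, void *data)
    {
        return IRQ_HANDLED;
    }

    static int foo_probe(struct platform_device *pdev)
    {
        int irq;

        irq = platform_get_irq_byname(pdev, "hdmi");
        if (irq < 0)
            return irq;  /* propagate the error instead of passing it on */

        return devm_request_threaded_irq(&pdev->dev, irq, NULL,
                                         foo_irq_thread, IRQF_ONESHOT,
                                         dev_name(&pdev->dev), pdev);
    }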
| In the Linux kernel, the following vulnerability has been resolved:
most: core: fix resource leak in most_register_interface error paths
The function most_register_interface() did not correctly release resources
if it failed early (before registering the device). In these cases, it
returned an error code immediately, leaking the memory allocated for the
interface.
Fix this by initializing the device early via device_initialize() and
calling put_device() on all error paths.
most_register_interface() is expected to call put_device() on
error, which frees the resources allocated by the caller.
put_device() invokes either release_mdev() or dim2_release(),
depending on the caller.
Switch to using device_add() instead of device_register() to handle
the split initialization. |
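A condensed sketch of the split-initialization pattern described above, not the actual most_register_interface() code; some_early_setup() is a stand-in for pre-registration work that can fail:

    #include <linux/device.h>

    /* Stand-in for setup work that can fail before registration. */
    static int some_early_setup(struct device *dev)
    {
        return 0;
    }

    static int register_iface_sketch(struct device *dev)
    {
        int ret;

        device_initialize(dev);     /* refcount live: put_device() now safe */

        ret = some_early_setup(dev);
        if (ret)
            goto err;

        ret = device_add(dev);      /* second half of device_register() */
        if (ret)
            goto err;

        return 0;

    err:
        put_device(dev);            /* ->release callback frees the interface */
        return ret;
    }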
| In the Linux kernel, the following vulnerability has been resolved:
net/mlx5e: Fix "scheduling while atomic" in IPsec MAC address query
Fix a "scheduling while atomic" bug in mlx5e_ipsec_init_macs() by
replacing mlx5_query_mac_address() with ether_addr_copy() to get the
local MAC address directly from netdev->dev_addr.
The issue occurs because mlx5_query_mac_address() queries the hardware
which involves mlx5_cmd_exec() that can sleep, but it is called from
the mlx5e_ipsec_handle_event workqueue which runs in atomic context.
The MAC address is already available in netdev->dev_addr, so no need
to query hardware. This avoids the sleeping call and resolves the bug.
Call trace:
BUG: scheduling while atomic: kworker/u112:2/69344/0x00000200
__schedule+0x7ab/0xa20
schedule+0x1c/0xb0
schedule_timeout+0x6e/0xf0
__wait_for_common+0x91/0x1b0
cmd_exec+0xa85/0xff0 [mlx5_core]
mlx5_cmd_exec+0x1f/0x50 [mlx5_core]
mlx5_query_nic_vport_mac_address+0x7b/0xd0 [mlx5_core]
mlx5_query_mac_address+0x19/0x30 [mlx5_core]
mlx5e_ipsec_init_macs+0xc1/0x720 [mlx5_core]
mlx5e_ipsec_build_accel_xfrm_attrs+0x422/0x670 [mlx5_core]
mlx5e_ipsec_handle_event+0x2b9/0x460 [mlx5_core]
process_one_work+0x178/0x2e0
worker_thread+0x2ea/0x430 |
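The shape of the fix, sketched with simplified names (not the exact mlx5 diff):

    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>

    static void ipsec_init_macs_sketch(struct net_device *netdev, u8 *mac)
    {
        /* Before: mlx5_query_mac_address() -> mlx5_cmd_exec(), which can
         * sleep and must not run in this atomic workqueue context. */
        ether_addr_copy(mac, netdev->dev_addr);  /* no firmware round trip */
    }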
| n8n is an open source workflow automation platform. Prior to versions 1.123.32, 2.17.4, and 2.18.1, a flaw in the xml2js library used to parse XML request bodies in n8n's webhook handler allowed prototype pollution via a crafted XML payload. An authenticated user with permission to create or modify workflows could exploit this to pollute the JavaScript object prototype and, by chaining the pollution with the Git node's SSH operations, achieve remote code execution on the n8n host. This issue has been patched in versions 1.123.32, 2.17.4, and 2.18.1. |
| gopls by default communicates via a pipe. However, the -port and -listen flags are supported as a means of debugging.
If -listen is given a value without an explicit host (e.g. :8080), or -port is used, gopls will listen on 0.0.0.0.
As a result, users might inadvertently cause gopls to bind to 0.0.0.0.
This can allow a malicious party on the same network to execute arbitrary code via gopls. |
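The gopls behavior itself is Go, but the underlying socket semantics can be sketched with POSIX sockets in C; the port and variable names here are illustrative:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8080);

        /* Risky: the wildcard address, like "-listen :8080", is reachable
         * from the whole network. */
        /* addr.sin_addr.s_addr = htonl(INADDR_ANY); */

        /* Safer for a debug endpoint: loopback only. */
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            return 1;
        listen(fd, 16);
        close(fd);
        return 0;
    }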
| The bson_validate function may return early on certain inputs and incorrectly report success. This behavior could cause validation of BSON data to be skipped, allowing malformed data or invalid UTF-8 sequences to bypass validation and be processed incorrectly. The issue may affect applications that rely on this function to validate untrusted BSON data before further processing. This issue affects MongoDB C Driver versions prior to 1.30.5, and MongoDB C Driver versions 2.0.0 and 2.0.1. |
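A minimal libbson sketch of the caller-side pattern the advisory concerns, assuming libbson's documented bson_new_from_data() and bson_validate() API; the fix is to upgrade the driver, not to change caller code:

    #include <bson/bson.h>
    #include <stdio.h>

    int main(void)
    {
        /* A trivially valid empty document: int32 length 5 + trailing 0. */
        const uint8_t data[] = { 5, 0, 0, 0, 0 };
        size_t err_offset = 0;

        bson_t *doc = bson_new_from_data(data, sizeof(data));
        if (!doc) {
            fprintf(stderr, "not parseable as BSON\n");
            return 1;
        }

        /* Check UTF-8 validity of all string elements in the document. */
        if (!bson_validate(doc, BSON_VALIDATE_UTF8, &err_offset))
            fprintf(stderr, "invalid BSON at offset %zu\n", err_offset);
        else
            printf("validated\n");

        bson_destroy(doc);
        return 0;
    }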
| The fix for CVE-2025-68161 (https://logging.apache.org/security.html#CVE-2025-68161) was incomplete: it addressed hostname verification only when enabled via the log4j2.sslVerifyHostName (https://logging.apache.org/log4j/2.x/manual/systemproperties.html#log4j2.sslVerifyHostName) system property, but not when configured through the verifyHostName (https://logging.apache.org/log4j/2.x/manual/appenders/network.html#SslConfiguration-attr-verifyHostName) attribute of the <Ssl> element.
Although the verifyHostName configuration attribute was introduced in Log4j Core 2.12.0, it was silently ignored in all versions through 2.25.3, leaving TLS connections vulnerable to interception regardless of the configured value.
A network-based attacker may be able to perform a man-in-the-middle attack when all of the following conditions are met:
* An SMTP, Socket, or Syslog appender is in use.
* TLS is configured via a nested <Ssl> element.
* The attacker can present a certificate issued by a CA trusted by the appender's configured trust store, or by the default Java trust store if none is configured.
This issue does not affect users of the HTTP appender, which uses a separate verifyHostname (https://logging.apache.org/log4j/2.x/manual/appenders/network.html#HttpAppender-attr-verifyHostName) attribute that was not subject to this bug and verifies host names by default.
Users are advised to upgrade to Apache Log4j Core 2.25.4, which corrects this issue. |
| A flaw was found in ansible-collection-community-general. This vulnerability allows information exposure of sensitive credentials, specifically plaintext passwords, via verbose output when running Ansible in debug mode. Attackers with access to the logs could retrieve these secrets and potentially compromise Keycloak accounts or administrative access. |
| A flaw was found in the Keycloak identity and access management system when Fine-Grained Admin Permissions (FGAPv2) are enabled. An administrative user with the manage-users role can escalate their privileges to realm-admin due to improper privilege enforcement. This vulnerability allows unauthorized elevation of access rights, compromising the intended separation of administrative duties and posing a security risk to the realm. |
| A flaw was found in Keycloak. When an authenticated attacker attempts to merge accounts with another existing account during an identity provider (IdP) login, the attacker will subsequently be prompted to "review profile" information. This vulnerability allows the attacker to modify their email address to match that of a victim's account, triggering a verification email sent to the victim's email address. The attacker's email address is not present in the verification email content, making it a potential phishing opportunity. If the victim clicks the verification link, the attacker can gain access to the victim's account. |
| A flaw was found in Keycloak. When the configuration uses JWT tokens for authentication, the tokens are cached until expiration. If a client uses JWT tokens with an excessively long expiration time, for example, 24 or 48 hours, the cache can grow indefinitely, leading to an OutOfMemoryError. This issue could result in a denial of service condition, preventing legitimate users from accessing the system. |
| A flaw was found in the Keycloak organization feature, which allows the incorrect assignment of an organization to a user if their username or email matches the organization’s domain pattern. This issue occurs at the mapper level, leading to misrepresentation in tokens. If an application relies on these claims for authorization, it may incorrectly assume a user belongs to an organization they are not a member of, potentially granting unauthorized access or privileges. |
| A vulnerability was found in the Keycloak-services package. If untrusted data is passed to the SearchQueryUtils method, it could lead to a denial of service (DoS) scenario by exhausting system resources due to regex complexity. |
| In the Linux kernel, the following vulnerability has been resolved:
dcache: Limit the minimal number of buckets to two
There is an OOB read problem on dentry_hashtable when user sets
'dhash_entries=1':
BUG: unable to handle page fault for address: ffff888b30b774b0
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
Oops: Oops: 0000 [#1] SMP PTI
RIP: 0010:__d_lookup+0x56/0x120
Call Trace:
d_lookup.cold+0x16/0x5d
lookup_dcache+0x27/0xf0
lookup_one_qstr_excl+0x2a/0x180
start_dirop+0x55/0xa0
simple_start_creating+0x8d/0xa0
debugfs_start_creating+0x8c/0x180
debugfs_create_dir+0x1d/0x1c0
pinctrl_init+0x6d/0x140
do_one_initcall+0x6d/0x3d0
kernel_init_freeable+0x39f/0x460
kernel_init+0x2a/0x260
There will be only one bucket in dentry_hashtable when dhash_entries is
set to one, and d_hash_shift is calculated as 32 by dcache_init(). Then,
the following process will access more than one bucket (whose memory
region is not allocated) in dentry_hashtable:
d_lookup
b = d_hash(hash)
dentry_hashtable + ((u32)hashlen >> d_hash_shift)
// The C standard defines the behavior of right shift amounts
// exceeding the bit width of the operand as undefined. The
// result of '(u32)hashlen >> d_hash_shift' becomes 'hashlen',
// so 'b' will point to an unallocated memory region.
hlist_bl_for_each_entry_rcu(b)
hlist_bl_first_rcu(head)
h->first // read OOB!
Fix it by limiting the minimal number of dentry_hashtable buckets to two,
so that 'd_hash_shift' won't exceed the bit width of type u32. |
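A userspace illustration of the shift problem, assuming an x86 target where the hardware masks the shift count to its low 5 bits (so "x >> 32" typically behaves as "x >> 0"):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t hashlen = 0xdeadbeef;
        volatile unsigned int shift = 32;   /* defeat constant folding */

        /* Undefined behavior: the shift count equals the bit width of
         * uint32_t. On x86 this commonly yields hashlen unchanged rather
         * than 0, which is how d_hash() produced an out-of-bounds bucket
         * index once d_hash_shift reached 32. */
        uint32_t bucket = hashlen >> shift;
        printf("0x%x >> %u = 0x%x (would be 0 if the shift were defined)\n",
               hashlen, shift, bucket);
        return 0;
    }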
| In the Linux kernel, the following vulnerability has been resolved:
x86-64: rename misleadingly named '__copy_user_nocache()' function
This function was a masterclass in bad naming, for various historical
reasons.
It claimed to be a non-cached user copy. It is literally _neither_ of
those things. It's a specialty memory copy routine that uses
non-temporal stores for the destination (but not the source), and that
does exception handling for both source and destination accesses.
Also note that while it works for unaligned targets, any unaligned parts
(whether at beginning or end) will not use non-temporal stores, since
only words and quadwords can be non-temporal on x86.
The exception handling means that it _can_ be used for user space
accesses, but not on its own - it needs all the normal "start user space
access" logic around it.
But typically the user space access would be the source, not the
non-temporal destination. That was the original intention of this,
where the destination was some fragile persistent memory target that
needed non-temporal stores in order to catch machine check exceptions
synchronously and deal with them gracefully.
Thus that non-descriptive name: one use case was to copy from user space
into a non-cached kernel buffer. However, the existing users are a mix
of that intended use-case, and a couple of random drivers that just did
this as a performance tweak.
Some of those random drivers then actively misused the user copying
version (with STAC/CLAC and all) to do kernel copies without ever even
caring about the exception handling, _just_ for the non-temporal
destination.
Rename it as a first small step to actually make it halfway sane, and
change the prototype to be more normal: it doesn't take a user pointer
unless the caller has done the proper conversion, and the argument size
is the full size_t (it still won't actually copy more than 4GB in one
go, but there's also no reason to silently truncate the size argument in
the caller).
Finally, use this now sanely named function in the NTB code, which
mis-used a user copy version (with STAC/CLAC and all) of this interface
despite it not actually being a user copy at all. |
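A userspace illustration of the granularity constraint mentioned above, assuming an x86-64 compiler with SSE2 intrinsics; the quadword non-temporal store (movnti) is exposed as _mm_stream_si64():

    #include <emmintrin.h>  /* SSE2: _mm_stream_si64, _mm_sfence (x86-64) */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* 8-byte alignment is required for the quadword streaming store;
         * unaligned head/tail bytes of a copy must use ordinary cached
         * stores, exactly as the commit message notes. */
        _Alignas(8) long long dst = 0;

        _mm_stream_si64(&dst, 0x1122334455667788LL); /* bypasses the cache */
        _mm_sfence();   /* order the weakly-ordered streaming store */

        printf("dst = 0x%llx\n", (unsigned long long)dst);
        return 0;
    }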
| In the Linux kernel, the following vulnerability has been resolved:
media: ipu6: Fix RPM reference leak in probe error paths
Several error paths in ipu6_pci_probe() were jumping directly to
out_ipu6_bus_del_devices without releasing the runtime PM reference.
Add pm_runtime_put_sync() before cleaning up other resources. |
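A generic sketch of the runtime-PM balancing rule at issue (not the ipu6 diff); do_hw_setup() is a stand-in for a setup step that can fail mid-probe:

    #include <linux/pm_runtime.h>

    /* Stand-in for a setup step that can fail mid-probe. */
    static int do_hw_setup(struct device *dev)
    {
        return 0;
    }

    static int probe_sketch(struct device *dev)
    {
        int ret;

        ret = pm_runtime_get_sync(dev);
        if (ret < 0) {
            pm_runtime_put_noidle(dev); /* count rises even on failure */
            return ret;
        }

        ret = do_hw_setup(dev);
        if (ret) {
            pm_runtime_put_sync(dev);   /* the put the error paths skipped */
            return ret;
        }

        pm_runtime_put_sync(dev);
        return 0;
    }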
| In the Linux kernel, the following vulnerability has been resolved:
dm mpath: Add missing dm_put_device when failing to get scsi dh name
When commit fd81bc5cca8f ("scsi: device_handler: Return error pointer in
scsi_dh_attached_handler_name()") added code to fail parsing the path if
scsi_dh_attached_handler_name() failed with -ENOMEM, it didn't clean up
the reference to the path device that had just been taken. Fix this, and
streamline the error paths of parse_path() a little. |
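A condensed sketch of the fix described above, not the exact dm-mpath diff; handler_name_stub() stands in for scsi_dh_attached_handler_name():

    #include <linux/device-mapper.h>
    #include <linux/err.h>

    /* Stand-in for scsi_dh_attached_handler_name(), which can return an
     * error pointer such as ERR_PTR(-ENOMEM). */
    static const char *handler_name_stub(struct dm_dev *dev)
    {
        return ERR_PTR(-ENOMEM);
    }

    static int parse_path_sketch(struct dm_target *ti, const char *path,
                                 struct dm_dev **dev)
    {
        const char *name;
        int r;

        r = dm_get_device(ti, path, dm_table_get_mode(ti->table), dev);
        if (r)
            return r;

        name = handler_name_stub(*dev);
        if (IS_ERR(name)) {
            dm_put_device(ti, *dev);   /* the previously missing release */
            return PTR_ERR(name);
        }

        return 0;
    }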