Search Results (351246 CVEs found)
**CVE-2026-4527** · Vendor: Gitlab · Product: Gitlab · Updated: 2026-05-15 · CVSS v3.1: 6.5 (Medium)

GitLab has remediated an issue in GitLab CE/EE affecting all versions from 11.10 before 18.9.7, 18.10 before 18.10.6, and 18.11 before 18.11.3 that could have allowed an unauthenticated user to create unauthorized Jira subscriptions for a targeted user's namespace via a specially crafted link due to missing CSRF protection.
**CVE-2026-6335** · Vendor: Gitlab · Product: Gitlab · Updated: 2026-05-15 · CVSS v3.1: 5.4 (Medium)

GitLab has remediated an issue in GitLab CE/EE affecting all versions from 18.11 before 18.11.3 that under certain conditions could have allowed an authenticated user to execute arbitrary code in another user's browser session due to improper sanitization.
**CVE-2026-43306** · Vendor: Linux · Product: Linux Kernel · Updated: 2026-05-15 · CVSS v3.1: 5.5 (Medium)

In the Linux kernel, the following vulnerability has been resolved: bpf: crypto: Use the correct destructor kfunc type. With CONFIG_CFI enabled, the kernel strictly enforces that indirect function calls use a function pointer type that matches the target function. I ran into the following type mismatch when running BPF self-tests:

```
CFI failure at bpf_obj_free_fields+0x190/0x238 (target: bpf_crypto_ctx_release+0x0/0x94; expected type: 0xa488ebfc)
Internal error: Oops - CFI: 00000000f2008228 [#1] SMP
...
```

As bpf_crypto_ctx_release() is also used in BPF programs and using a void pointer as the argument would make the verifier unhappy, add a simple stub function with the correct type and register it as the destructor kfunc instead.
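As a loose illustration of the "typed stub" pattern the description refers to, the following self-contained C sketch uses made-up names (struct ctx, ctx_release, ctx_release_dtor, dtor_fn); it is not the kernel code, only a picture of why forwarding through a stub whose prototype matches the indirect call site keeps a type-checking indirect call happy.

```c
#include <stdio.h>

struct ctx { int refcnt; };

/* The "real" release helper takes the concrete context type (hypothetical). */
static void ctx_release(struct ctx *c)
{
    c->refcnt--;
}

/* Destructors are invoked indirectly through this pointer type. */
typedef void (*dtor_fn)(void *obj);

/*
 * Stub whose prototype matches the indirect call site; it just forwards to
 * the typed helper. Under a CFI scheme that compares function types at the
 * call site, registering ctx_release directly (via a cast) would trip the
 * check, while this stub does not.
 */
static void ctx_release_dtor(void *obj)
{
    ctx_release(obj);
}

int main(void)
{
    struct ctx c = { .refcnt = 1 };
    dtor_fn dtor = ctx_release_dtor;

    dtor(&c);
    printf("refcnt = %d\n", c.refcnt);
    return 0;
}
```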
**CVE-2026-43307** · Vendor: Linux · Product: Linux Kernel · Updated: 2026-05-15 · CVSS v3.1: 7.8 (High)

In the Linux kernel, the following vulnerability has been resolved: iio: accel: adxl380: Avoid reading more entries than present in FIFO. The interrupt handler reads FIFO entries in batches of N samples, where N is the number of scan elements that have been enabled. However, the sensor fills the FIFO one sample at a time, even when more than one channel is enabled. Therefore, the number of entries reported by the FIFO status registers may not be a multiple of N; if it is not, the number of entries read from the FIFO may exceed the number of entries actually present. To fix this, round down the number of FIFO entries read from the status registers so that it is always a multiple of N.
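The rounding the fix describes amounts to truncating the reported entry count to a multiple of the enabled channel count. A minimal sketch with illustrative names (fifo_entries_to_read is hypothetical, not the adxl380 driver function):

```c
#include <stdio.h>

/*
 * Round the entry count reported by the FIFO status registers down to a
 * multiple of the number of enabled scan elements, so a batched read never
 * pulls more entries than the FIFO actually holds.
 */
static unsigned int fifo_entries_to_read(unsigned int reported,
                                         unsigned int enabled_channels)
{
    if (enabled_channels == 0)
        return 0;
    return reported - (reported % enabled_channels);
}

int main(void)
{
    /* 7 entries reported with 3 channels enabled: read only 6 of them. */
    printf("%u\n", fifo_entries_to_read(7, 3));
    return 0;
}
```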
**CVE-2026-43339** · Vendor: Linux · Product: Linux Kernel · Updated: 2026-05-15 · CVSS v3.1: 7.8 (High)

In the Linux kernel, the following vulnerability has been resolved: ipv6: prevent possible UaF in addrconf_permanent_addr(). The mentioned helper tries to warn the user about an exceptional condition, but the message is delivered too late, accessing the ipv6 data after its possible deletion. Reorder the statements to avoid the possible UaF; while at it, place the warning outside the idev->lock as it needs no protection.
**CVE-2026-43340** · Vendor: Linux · Product: Linux Kernel · Updated: 2026-05-15 · CVSS v3.1: 5.5 (Medium)

In the Linux kernel, the following vulnerability has been resolved: comedi: Reinit dev->spinlock between attachments to low-level drivers. `struct comedi_device` is the main controlling structure for a COMEDI device created by the COMEDI subsystem. It contains a member `spinlock` containing a spin-lock that is initialized by the COMEDI subsystem, but is reserved for use by a low-level driver attached to the COMEDI device (at least since commit 25436dc9d84f ("Staging: comedi: remove RT code")). Some COMEDI devices (those created on initialization of the COMEDI subsystem when the "comedi.comedi_num_legacy_minors" parameter is non-zero) can be attached to different low-level drivers over their lifetime using the `COMEDI_DEVCONFIG` ioctl command. This can result in inconsistent lock states being reported when there is a mismatch in the spin-lock locking levels used by each low-level driver to which the COMEDI device has been attached. Fix it by reinitializing `dev->spinlock` before calling the low-level driver's `attach` function pointer if `CONFIG_LOCKDEP` is enabled.
**CVE-2026-43341** · Vendor: Linux · Product: Linux Kernel · Updated: 2026-05-15 · CVSS v3.1: 9.8 (Critical)

In the Linux kernel, the following vulnerability has been resolved: net/ipv6: ioam6: prevent schema length wraparound in trace fill. ioam6_fill_trace_data() stores the schema contribution to the trace length in a u8. With bit 22 enabled and the largest schema payload, sclen becomes 1 + 1020 / 4, wraps from 256 to 0, and bypasses the remaining-space check. __ioam6_fill_trace_data() then positions the write cursor without reserving the schema area but still copies the 4-byte schema header and the full schema payload, overrunning the trace buffer. Keep sclen in an unsigned int so the remaining-space check and the write cursor calculation both see the full schema length.
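The wraparound is plain 8-bit arithmetic and can be reproduced outside the kernel; the snippet below only mirrors the numbers quoted above and is not the ioam6 code:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /*
     * Largest schema payload: the schema contributes 1 + 1020 / 4 = 256
     * 4-octet units. Stored in a u8 this wraps to 0, so a remaining-space
     * check of the form "needed + sclen > available" is bypassed.
     */
    uint8_t  sclen_u8   = 1 + 1020 / 4; /* wraps: prints 0 */
    unsigned sclen_uint = 1 + 1020 / 4; /* intended: prints 256 */

    printf("u8 sclen:       %u\n", sclen_u8);
    printf("unsigned sclen: %u\n", sclen_uint);
    return 0;
}
```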
**CVE-2026-43907** · Vendors: Academysoftwarefoundation, Openimageio · Product: Openimageio · Updated: 2026-05-15 · CVSS v3.1: 8.3 (High)

OpenImageIO is a toolset for reading, writing, and manipulating image files of any image file format relevant to VFX / animation. Prior to 3.0.18.0 and 3.1.13.0, a signed integer overflow in QueryRGBBufferSizeInternal() in DPXColorConverter.cpp leads to a heap-based out-of-bounds write when processing crafted DPX image files. The function computes buffer sizes using 32-bit signed integer arithmetic with negative multipliers (e.g., pixels * -3 * bytes for kCbYCr descriptors and pixels * -4 * bytes for kABGR descriptors), where a negative result is used as an in-band signal that no separate buffer is needed. When the pixel count is sufficiently large, the multiplication overflows below INT_MIN and wraps to a small positive value. The caller in dpxinput.cpp interprets this positive value as a required buffer size, allocates an undersized heap buffer via m_decodebuf.resize(), and then writes the full image data into it via fread, resulting in a heap buffer overflow. An attacker can exploit this by crafting a DPX file that triggers the overflow, causing a denial of service (crash) or potentially arbitrary code execution through heap corruption in any application that reads pixel data using OpenImageIO. This vulnerability is fixed in 3.0.18.0 and 3.1.13.0.
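The overflow path can be pictured with a small standalone program. The pixel count below is chosen purely to make the 32-bit wraparound land on a small positive number and is not taken from the advisory, and the computation is done in 64 bits first because 32-bit signed overflow is undefined behaviour in C/C++:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical attacker-controlled pixel count from a DPX header. */
    int64_t pixels = 1431655732;
    int64_t bytes  = 1;

    /* pixels * -3 * bytes is meant to stay negative as the "no separate
     * buffer needed" signal, but it falls below INT32_MIN... */
    int64_t wide = pixels * -3 * bytes;

    /* ...so 32-bit arithmetic (on the usual two's-complement targets)
     * yields a small positive value that the caller then treats as a
     * valid, and far too small, buffer size. */
    int32_t wrapped = (int32_t)wide;

    printf("64-bit result: %lld\n", (long long)wide); /* -4294967196 */
    printf("wrapped 32-bit result: %d\n", wrapped);   /* 100 */
    return 0;
}
```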
**CVE-2026-43905** · Vendors: Academysoftwarefoundation, Openimageio · Product: Openimageio · Updated: 2026-05-15 · CVSS v3.1: 7.8 (High)

OpenImageIO is a toolset for reading, writing, and manipulating image files of any image file format relevant to VFX / animation. Prior to 3.0.18.0 and 3.1.13.0, jpeg2000input.cpp:395 computes the buffer size as const int bufsize = w * h * ch * buffer_bpp using signed 32-bit arithmetic. When the product exceeds INT_MAX, the result wraps to 0 or a small value. m_buf.resize() allocates an undersized buffer, and subsequent pixel write loops cause a heap overflow. The issue is conditional on the USE_OPENJPH build flag. This vulnerability is fixed in 3.0.18.0 and 3.1.13.0.
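A conventional way to keep such a product from wrapping is to validate the factors and do the multiplication in 64 bits before comparing against the 32-bit limit. The sketch below illustrates that guard; it is not the actual OpenImageIO patch, and checked_bufsize is a made-up name:

```c
#include <stdint.h>
#include <stdio.h>

/* Reject any w * h * ch * bpp product that would not fit in a 32-bit int. */
static int checked_bufsize(int w, int h, int ch, int bpp, int *out)
{
    uint64_t size;

    if (w <= 0 || h <= 0 || ch <= 0 || bpp <= 0)
        return -1;

    size = (uint64_t)w * (uint64_t)h * (uint64_t)ch * (uint64_t)bpp;
    if (size > INT32_MAX)
        return -1; /* would have wrapped in 32-bit signed arithmetic */

    *out = (int)size;
    return 0;
}

int main(void)
{
    int bufsize;

    /* 65536 x 65536 pixels, 4 channels, 2 bytes per channel: overflows. */
    if (checked_bufsize(65536, 65536, 4, 2, &bufsize) != 0)
        printf("rejected oversized image\n");
    return 0;
}
```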
**CVE-2026-43903** · Vendors: Academysoftwarefoundation, Openimageio · Product: Openimageio · Updated: 2026-05-15 · CVSS v3.1: 7.8 (High)

OpenImageIO is a toolset for reading, writing, and manipulating image files of any image file format relevant to VFX / animation. Prior to 3.0.18.0 and 3.1.13.0, sgiinput.cpp:265,274 use OIIO_DASSERT for bounds checking in the RLE decode loop. In release builds, OIIO_DASSERT compiles to ((void)sizeof(x)) (dassert.h:210), making all bounds checks no-ops. A crafted .sgi file with an RLE count exceeding the scanline width causes a heap buffer overflow and crash. This vulnerability is fixed in 3.0.18.0 and 3.1.13.0.
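The underlying hazard is the same as guarding a write with a debug-only assertion: with assertions compiled out, the bounds check disappears entirely. A generic, self-contained illustration (not the OpenImageIO sources; rle_fill is a made-up helper):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Decode-style helper that relies on a debug-only assertion for bounds
 * checking. When NDEBUG is defined (a typical release build), assert()
 * expands to nothing, so an RLE count larger than the destination scanline
 * would be written out of bounds.
 */
static void rle_fill(unsigned char *dst, size_t dst_len,
                     size_t offset, size_t count, unsigned char value)
{
    assert(count <= dst_len && offset <= dst_len - count); /* gone in release */
    memset(dst + offset, value, count);
}

int main(void)
{
    unsigned char scanline[64];

    /* In-bounds use; a hostile file would supply count > sizeof(scanline). */
    rle_fill(scanline, sizeof(scanline), 0, sizeof(scanline), 0xFF);
    printf("first byte: 0x%02X\n", scanline[0]);
    return 0;
}
```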
**CVE-2026-43906** · Vendor: Openimageio · Product: Openimageio · Updated: 2026-05-15 · CVSS v3.1: 7.8 (High)

OpenImageIO is a toolset for reading, writing, and manipulating image files of any image file format relevant to VFX / animation. Prior to 3.0.18.0 and 3.1.13.0, a heap-based buffer overflow in the HEIF decoder of OpenImageIO allows out-of-bounds writes via crafted images due to a subimage metadata mismatch, leading to memory corruption and potential code execution. This vulnerability is fixed in 3.0.18.0 and 3.1.13.0.
**CVE-2026-42295** · Vendor: Argoproj · Products: Argo-workflows, Argo Workflows · Updated: 2026-05-15 · CVSS v3.1: 4.9 (Medium)

Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. From version 4.0.0 up to but not including 4.0.5, the workflow executor logs all artifact repository credentials (S3 access keys, secret keys, GCS service account keys, Azure account keys, Git passwords, etc.) in plaintext during artifact operations. Any user with read access to workflow pod logs can extract these credentials. This issue has been patched in version 4.0.5.
**CVE-2026-42296** · Vendor: Argoproj · Products: Argo-workflows, Argo Workflows · Updated: 2026-05-15 · CVSS v3.1: 8.1 (High)

Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Prior to versions 3.7.14 and 4.0.5, a user with create Workflow permission can bypass templateReferencing: Strict to gain host network access, switch service accounts, override the pod security context, add tolerations to schedule on control-plane nodes, or enable service account token mounting. This defeats the stated purpose of the feature. The practical impact depends on what Kubernetes-level controls are in place: clusters with PodSecurity admission or OPA/Gatekeeper would independently block some of these settings (such as hostNetwork), while clusters that rely on Argo's Strict mode as the primary enforcement layer are fully exposed. This issue has been patched in versions 3.7.14 and 4.0.5.
**CVE-2026-43352** · Vendor: Linux · Product: Linux Kernel · Updated: 2026-05-15 · CVSS v3.1: 7.8 (High)

In the Linux kernel, the following vulnerability has been resolved: i3c: mipi-i3c-hci: Correct RING_CTRL_ABORT handling in DMA dequeue. The logic used to abort the DMA ring contains several flaws:

1. The driver unconditionally issues a ring abort even when the ring has already stopped.
2. The completion used to wait for abort completion is never re-initialized, resulting in incorrect wait behavior.
3. The abort sequence unintentionally clears RING_CTRL_ENABLE, which resets hardware ring pointers and disrupts the controller state.
4. If the ring is already stopped, the abort operation should be considered successful without attempting further action.

Fix the abort handling by checking whether the ring is running before issuing an abort, re-initializing the completion when needed, ensuring that RING_CTRL_ENABLE remains asserted during abort, and treating an already stopped ring as a successful condition.
**CVE-2026-43351** · Vendor: Linux · Product: Linux Kernel · Updated: 2026-05-15 · CVSS v3.1: 5.5 (Medium)

In the Linux kernel, the following vulnerability has been resolved: KVM: arm64: Eagerly init vgic dist/redist on vgic creation. If vgic_allocate_private_irqs_locked() fails for any odd reason, we exit kvm_vgic_create() early, leaving dist->rd_regions uninitialised. kvm_vgic_dist_destroy() then comes along and walks into the weeds trying to free the RDs. Got to love this stuff. Solve it by moving all the static initialisation early, and make sure that if we fail halfway, we're in a reasonable shape to perform the rest of the teardown. While at it, reset the vgic model on failure, just in case...
**CVE-2026-43315** · Vendor: Linux · Product: Linux Kernel · Updated: 2026-05-15 · CVSS v3.1: 5.5 (Medium)

In the Linux kernel, the following vulnerability has been resolved: KVM: nSVM: Remove a user-triggerable WARN on nested_svm_load_cr3() succeeding. Drop the WARN in svm_set_nested_state() on nested_svm_load_cr3() failing as it is trivially easy to trigger from userspace by modifying CPUID after loading CR3. E.g. modifying the state restoration selftest like so:

```
--- tools/testing/selftests/kvm/x86/state_test.c
+++ tools/testing/selftests/kvm/x86/state_test.c
@@ -280,7 +280,16 @@ int main(int argc, char *argv[])
 	/* Restore state in a new VM. */
 	vcpu = vm_recreate_with_one_vcpu(vm);
-	vcpu_load_state(vcpu, state);
+
+	if (stage == 4) {
+		state->sregs.cr3 = BIT(44);
+		vcpu_load_state(vcpu, state);
+
+		vcpu_set_cpuid_property(vcpu, X86_PROPERTY_MAX_PHY_ADDR, 36);
+		__vcpu_nested_state_set(vcpu, &state->nested);
+	} else {
+		vcpu_load_state(vcpu, state);
+	}
 	/*
 	 * Restore XSAVE state in a dummy vCPU, first without doing
```

generates:

```
WARNING: CPU: 30 PID: 938 at arch/x86/kvm/svm/nested.c:1877 svm_set_nested_state+0x34a/0x360 [kvm_amd]
Modules linked in: kvm_amd kvm irqbypass [last unloaded: kvm]
CPU: 30 UID: 1000 PID: 938 Comm: state_test Tainted: G W 6.18.0-rc7-58e10b63777d-next-vm
Tainted: [W]=WARN
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
RIP: 0010:svm_set_nested_state+0x34a/0x360 [kvm_amd]
Call Trace:
 <TASK>
 kvm_arch_vcpu_ioctl+0xf33/0x1700 [kvm]
 kvm_vcpu_ioctl+0x4e6/0x8f0 [kvm]
 __x64_sys_ioctl+0x8f/0xd0
 do_syscall_64+0x61/0xad0
 entry_SYSCALL_64_after_hwframe+0x4b/0x53
```

Simply delete the WARN instead of trying to prevent userspace from shoving "illegal" state into CR3. For better or worse, KVM's ABI allows userspace to set CPUID after SREGS, and vice versa, and KVM is very permissive when it comes to guest CPUID. I.e. attempting to enforce the virtual CPU model when setting CPUID could break userspace. Given that the WARN doesn't provide any meaningful protection for KVM or benefit for userspace, simply drop it even though the odds of breaking userspace are minuscule. Opportunistically delete a spurious newline.
**CVE-2026-31221** · Vendor: Lightningai · Product: Pytorch Lightning · Updated: 2026-05-15 · CVSS v3.1: 8.8 (High)

PyTorch-Lightning versions 2.6.0 and earlier contain an insecure deserialization vulnerability (CWE-502) in the checkpoint loading mechanism. The LightningModule.load_from_checkpoint() method, which is commonly used to load saved model states, internally calls torch.load() without setting the security-restrictive weights_only=True parameter. This default behavior allows the deserialization of arbitrary Python objects via the Pickle module. A remote attacker can exploit this by providing a maliciously crafted checkpoint file, leading to arbitrary code execution on the victim's system when the file is loaded.
**CVE-2026-23695** · Vendor: Cockpit-hq · Product: Cockpit · Updated: 2026-05-15 · CVSS v3.1: 5.4 (Medium)

Cockpit CMS through version 2.14.0 (patched in commit 72a83fc) contains a stored cross-site scripting vulnerability in the Set field type's Display template option: the template string is processed by the $interpolate function using new Function() and rendered via Vue's v-html directive without sanitization. An attacker with content/:models/manage permission can inject arbitrary JavaScript into the Display template, which executes in the browser of any user viewing the collection items list.
**CVE-2026-43308** · Vendor: Linux · Product: Linux Kernel · Updated: 2026-05-15 · CVSS v3.1: 5.5 (Medium)

In the Linux kernel, the following vulnerability has been resolved: btrfs: don't BUG() on unexpected delayed ref type in run_one_delayed_ref(). There is no need to BUG(); we can just return an error and log an error message.
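The "return an error instead of BUG()" pattern is easy to show in miniature; the snippet below is only illustrative (the names and the -EINVAL choice are assumptions, not the btrfs patch):

```c
#include <errno.h>
#include <stdio.h>

/* Handle known ref types; report and fail gracefully on anything else
 * instead of halting the machine with BUG(). */
static int run_one_ref(int ref_type)
{
    switch (ref_type) {
    case 0: /* ...handle the expected types... */
    case 1:
        return 0;
    default:
        fprintf(stderr, "unexpected delayed ref type %d\n", ref_type);
        return -EINVAL; /* previously: BUG() */
    }
}

int main(void)
{
    return run_one_ref(42) == 0 ? 0 : 1;
}
```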
**CVE-2026-43348** · Vendor: Linux · Product: Linux Kernel · Updated: 2026-05-15 · CVSS v3.1: 5.5 (Medium)

In the Linux kernel, the following vulnerability has been resolved: mshv_vtl: Fix vmemmap_shift exceeding MAX_FOLIO_ORDER. When registering VTL0 memory via MSHV_ADD_VTL0_MEMORY, the kernel computes pgmap->vmemmap_shift as the number of trailing zeros in the OR of start_pfn and last_pfn, intending to use the largest compound page order both endpoints are aligned to. However, this value is not clamped to MAX_FOLIO_ORDER, so a sufficiently aligned range (e.g. physical range [0x800000000000, 0x800080000000), corresponding to start_pfn=0x800000000 with 35 trailing zeros) can produce a shift larger than what memremap_pages() accepts, triggering a WARN and returning -EINVAL: "WARNING: ... memremap_pages+0x512/0x650 requested folio size unsupported". The MAX_FOLIO_ORDER check was added by commit 646b67d57589 ("mm/memremap: reject unreasonable folio/compound page sizes in memremap_pages()"). Fix this by clamping vmemmap_shift to MAX_FOLIO_ORDER so we always request the largest order the kernel supports in those cases, rather than an out-of-range value. Also fix the error path to propagate the actual error code from devm_memremap_pages() instead of hard-coding -EFAULT, which was masking the real -EINVAL return.
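The clamp itself is a one-liner once the alignment-derived shift is known. The sketch below mirrors the described computation with a placeholder limit (MAX_ORDER_PLACEHOLDER stands in for the kernel's MAX_FOLIO_ORDER, whose value is configuration-dependent) and uses the GCC/Clang __builtin_ctzll builtin; it is an illustration, not the mshv_vtl code:

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_ORDER_PLACEHOLDER 18u /* stand-in for MAX_FOLIO_ORDER */

/* Largest compound-page order both endpoints of the pfn range are aligned
 * to, clamped so it never exceeds what memremap_pages() will accept. */
static unsigned int vmemmap_shift_for(uint64_t start_pfn, uint64_t last_pfn)
{
    uint64_t merged = start_pfn | last_pfn;
    unsigned int shift;

    if (merged == 0)
        return 0;

    shift = (unsigned int)__builtin_ctzll(merged);
    return shift > MAX_ORDER_PLACEHOLDER ? MAX_ORDER_PLACEHOLDER : shift;
}

int main(void)
{
    /* The highly aligned pfn range from the description: without the clamp
     * the derived shift would exceed the placeholder maximum used here. */
    printf("shift = %u\n", vmemmap_shift_for(0x800000000ULL, 0x800080000ULL));
    return 0;
}
```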