commit 1b8629e7c9b52079a6471973a1e2e14012b885e9 Author: Greg Kroah-Hartman Date: Sun Nov 10 11:21:39 2019 +0100 Linux 4.4.200 commit bb756a3f302cb7a813b7db30dc96c1bfe6335f26 Author: zhangyi (F) Date: Wed Nov 6 17:43:52 2019 +0800 fs/dcache: move security_d_instantiate() behind attaching dentry to inode During the backport of 1e2e547a93a "do d_instantiate/unlock_new_inode combinations safely", there was an error in the sequence of attaching the dentry to the inode and calling security_d_instantiate(). Before commit ce23e640133 "->getxattr(): pass dentry and inode as separate arguments" and b96809173e9 "security_d_instantiate(): move to the point prior to attaching dentry to inode", security_d_instantiate() should be called behind __d_instantiate(), otherwise it will trigger the problem below when CONFIG_SECURITY_SMACK was enabled on ext4, because d_inode(dentry) used by ->getxattr() is NULL before __d_instantiate() instantiates the inode. [ 31.858026] BUG: unable to handle kernel paging request at ffffffffffffff70 ... [ 31.882024] Call Trace: [ 31.882378] [] ext4_xattr_get+0x8c/0x3e0 [ 31.883195] [] ext4_xattr_security_get+0x24/0x40 [ 31.884086] [] generic_getxattr+0x5b/0x90 [ 31.884907] [] smk_fetch+0xb4/0x150 [ 31.885634] [] smack_d_instantiate+0x1c2/0x550 [ 31.886508] [] security_d_instantiate+0x3a/0x80 [ 31.887389] [] d_instantiate_new+0x36/0x130 [ 31.888223] [] ext4_mkdir+0x4af/0x6a0 [ 31.888928] [] vfs_mkdir+0x100/0x280 [ 31.889536] [] SyS_mkdir+0xb6/0x170 [ 31.890255] [] ? trace_do_page_fault+0x95/0x2b0 [ 31.891134] [] entry_SYSCALL_64_fastpath+0x18/0x73 Cc: # 3.16, 4.4 Signed-off-by: zhangyi (F) Signed-off-by: Greg Kroah-Hartman commit 87fb5df321742e5d58156f9d06646e7ad48798ab Author: Petr Vorel Date: Fri Nov 8 16:51:16 2019 +0100 alarmtimer: Change remaining ENOTSUPP to EOPNOTSUPP Fix backport of commit f18ddc13af981ce3c7b7f26925f099e7c6929aba upstream. Update backport to change ENOTSUPP to EOPNOTSUPP in alarm_timer_{del,set}(), which were removed in f2c45807d3992fe0f173f34af9c347d907c31686 in v4.13-rc1. Fixes: c22df8ea7c5831d6fdca2f6f136f0d32d7064ff9 Signed-off-by: Petr Vorel Acked-by: Thadeu Lima de Souza Cascardo Signed-off-by: Greg Kroah-Hartman commit 9370e15d54f1c7eb208e2506dcf4be8560caca63 Author: Russell King Date: Fri Nov 8 13:35:54 2019 +0100 ARM: fix the cockup in the previous patch Commit d6951f582cc50ba0ad22ef46b599740966599b14 upstream. The intention in the previous patch was to only place the processor tables in the .rodata section if big.Little was being built and we wanted the branch target hardening, but instead (due to the way it was tested) it ended up always placing the tables into the .rodata section. Although harmless, let's correct this anyway. Fixes: 3a4d0c2172bc ("ARM: ensure that processor vtables is not lost after boot") Signed-off-by: Russell King Signed-off-by: David A. Long Reviewed-by: Julien Thierry Signed-off-by: Sasha Levin Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 52117da87994ff1c428c68d9a493c16168969065 Author: Russell King Date: Fri Nov 8 13:35:53 2019 +0100 ARM: ensure that processor vtables is not lost after boot Commit 3a4d0c2172bcf15b7a3d9d498b2b355f9864286b upstream. Marek Szyprowski reported problems with CPU hotplug in current kernels. This was tracked down to the processor vtables being located in an init section, and therefore discarded after kernel boot, despite being required after boot to properly initialise the non-boot CPUs. Arrange for these tables to end up in .rodata when required.
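As a hedged, purely illustrative C sketch of the idea (the real vtables are emitted from assembly in proc-macros.S; the names below are made up): a table kept in an __init section is freed by free_initmem(), so a CPU brought online later would dereference freed memory, whereas a const table lands in .rodata and survives boot.

    struct example_proc_vtable {
            void (*proc_init)(void);        /* per-CPU setup callback */
            void (*switch_mm)(void);        /* context-switch callback */
    };

    /* const => placed in .rodata, still present when secondary or
     * resuming CPUs look up their processor type after free_initmem() */
    static const struct example_proc_vtable example_v7_vtable = {
            .proc_init = 0,                 /* real callbacks in practice */
            .switch_mm = 0,
    };
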
Reported-by: Marek Szyprowski Tested-by: Krzysztof Kozlowski Fixes: 383fb3ee8024 ("ARM: spectre-v2: per-CPU vtables to work around big.Little systems") Signed-off-by: Russell King Signed-off-by: David A. Long Reviewed-by: Julien Thierry Signed-off-by: Sasha Levin Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 7e1992962129e932d49c2f701284d43ee92be14e Author: Russell King Date: Fri Nov 8 13:35:52 2019 +0100 ARM: spectre-v2: per-CPU vtables to work around big.Little systems Commit 383fb3ee8024d596f488d2dbaf45e572897acbdb upstream. In big.Little systems, some CPUs require the Spectre workarounds in paths such as the context switch, but other CPUs do not. In order to handle these differences, we need per-CPU vtables. We are unable to use the kernel's per-CPU variables to support this as per-CPU is not initialised at times when we need access to the vtables, so we have to use an array indexed by logical CPU number. We use an array-of-pointers to avoid having function pointers in the kernel's read/write .data section. Note: Added include of linux/slab.h in arch/arm/smp.c. Reviewed-by: Julien Thierry Signed-off-by: Russell King Signed-off-by: David A. Long Reviewed-by: Julien Thierry Signed-off-by: Sasha Levin Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 4628d245b5f56105f1238463a9aea9772ac1c79c Author: Russell King Date: Fri Nov 8 13:35:51 2019 +0100 ARM: add PROC_VTABLE and PROC_TABLE macros Commit e209950fdd065d2cc46e6338e47e52841b830cba upstream. Allow the way we access members of the processor vtable to be changed at compile time. We will need to move to per-CPU vtables to fix the Spectre variant 2 issues on big.Little systems. However, we have a couple of calls that do not need the vtable treatment, and indeed cause a kernel warning due to the (later) use of smp_processor_id(), so also introduce the PROC_TABLE macro for these which always use CPU 0's function pointers. Reviewed-by: Julien Thierry Signed-off-by: Russell King Signed-off-by: David A. Long Reviewed-by: Julien Thierry Signed-off-by: Sasha Levin Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit afa68159e492b3e7087697017c315cab9fb12527 Author: Russell King Date: Fri Nov 8 13:35:50 2019 +0100 ARM: clean up per-processor check_bugs method call Commit 945aceb1db8885d3a35790cf2e810f681db52756 upstream. Call the per-processor type check_bugs() method in the same way as we do other per-processor functions - move the "processor." detail into proc-fns.h. Reviewed-by: Julien Thierry Signed-off-by: Russell King Signed-off-by: David A. Long Reviewed-by: Julien Thierry Signed-off-by: Sasha Levin Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 0350343ee6fdb253b901e30c7f23fcd17c64d342 Author: Russell King Date: Fri Nov 8 13:35:49 2019 +0100 ARM: split out processor lookup Commit 65987a8553061515b5851b472081aedb9837a391 upstream. Split out the lookup of the processor type and associated error handling from the rest of setup_processor() - we will need to use this in the secondary CPU bringup path for big.Little Spectre variant 2 mitigation. Reviewed-by: Julien Thierry Signed-off-by: Russell King Signed-off-by: David A. Long Reviewed-by: Julien Thierry Signed-off-by: Sasha Levin Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 14899ca00b30ad8237e3f8cc2d049c5b200e9dcf Author: Russell King Date: Fri Nov 8 13:35:48 2019 +0100 ARM: make lookup_processor_type() non-__init Commit 899a42f836678a595f7d2bc36a5a0c2b03d08cbc upstream. 
Move lookup_processor_type() out of the __init section so it is callable from (eg) the secondary startup code during hotplug. Reviewed-by: Julien Thierry Signed-off-by: Russell King Signed-off-by: David A. Long Reviewed-by: Julien Thierry Signed-off-by: Sasha Levin Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 2e9ae9a66fbb16a8763ae1a8856f06c31e1f63f5 Author: Julien Thierry Date: Fri Nov 8 13:35:47 2019 +0100 ARM: 8810/1: vfp: Fix wrong assignement to ufp_exc Commit 5df7a99bdd0de4a0480320264c44c04543c29d5a upstream. In vfp_preserve_user_clear_hwstate, ufp_exc->fpinst2 gets assigned to itself. It should actually be hwstate->fpinst2 that gets assigned to the ufp_exc field. Fixes commit 3aa2df6ec2ca6bc143a65351cca4266d03a8bc41 ("ARM: 8791/1: vfp: use __copy_to_user() when saving VFP state"). Reported-by: David Binderman Signed-off-by: Julien Thierry Signed-off-by: Russell King Signed-off-by: David A. Long Reviewed-by: Julien Thierry Signed-off-by: Sasha Levin Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 56e635e9c9ba77f91523949037b2a13c3e99f67c Author: Julien Thierry Date: Fri Nov 8 13:35:46 2019 +0100 ARM: 8796/1: spectre-v1,v1.1: provide helpers for address sanitization Commit afaf6838f4bc896a711180b702b388b8cfa638fc upstream. Introduce C and asm helpers to sanitize user address, taking the address range they target into account. Use asm helper for existing sanitization in __copy_from_user(). Signed-off-by: Julien Thierry Signed-off-by: Russell King Signed-off-by: David A. Long Reviewed-by: Julien Thierry Signed-off-by: Sasha Levin Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 6bb42e6d14004f6535b404b377db9b0e14c770dd Author: Julien Thierry Date: Fri Nov 8 13:35:45 2019 +0100 ARM: 8795/1: spectre-v1.1: use put_user() for __put_user() Commit e3aa6243434fd9a82e84bb79ab1abd14f2d9a5a7 upstream. When Spectre mitigation is required, __put_user() needs to include check_uaccess. This is already the case for put_user(), so just make __put_user() an alias of put_user(). Signed-off-by: Julien Thierry Signed-off-by: Russell King Signed-off-by: David A. Long Reviewed-by: Julien Thierry Signed-off-by: Sasha Levin Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit f8c5a575c5dd82d9303cc091782fae8895ff0ff3 Author: Julien Thierry Date: Fri Nov 8 13:35:44 2019 +0100 ARM: 8794/1: uaccess: Prevent speculative use of the current addr_limit Commit 621afc677465db231662ed126ae1f355bf8eac47 upstream. A mispredicted conditional call to set_fs could result in the wrong addr_limit being forwarded under speculation to a subsequent access_ok check, potentially forming part of a spectre-v1 attack using uaccess routines. This patch prevents this forwarding from taking place, by putting heavy barriers in set_fs after writing the addr_limit. Porting commit c2f0ad4fc089cff8 ("arm64: uaccess: Prevent speculative use of the current addr_limit"). Signed-off-by: Julien Thierry Signed-off-by: Russell King Signed-off-by: David A. Long Reviewed-by: Julien Thierry Signed-off-by: Sasha Levin Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 796e5eb9683bd2abf64f9e60dae8c31b2a5bd07b Author: Julien Thierry Date: Fri Nov 8 13:35:43 2019 +0100 ARM: 8793/1: signal: replace __put_user_error with __put_user Commit 18ea66bd6e7a95bdc598223d72757190916af28b upstream. With Spectre-v1.1 mitigations, __put_user_error is pointless. In an attempt to remove it, replace its references in frame setups with __put_user.
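A hedged before/after illustration of that substitution (the frame member name is invented; this is not the exact signal.c hunk):

    /*
     *   before: __put_user_error(regs->ARM_r0, &frame->example_member, err);
     *   after:  err |= __put_user(regs->ARM_r0, &frame->example_member);
     *
     * __put_user() returns -EFAULT on failure, so folding its return value
     * into 'err' preserves the old error reporting while going through the
     * fully checked, Spectre-hardened accessor.
     */
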
Signed-off-by: Julien Thierry Signed-off-by: Russell King Signed-off-by: David A. Long Reviewed-by: Julien Thierry Signed-off-by: Sasha Levin Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit b905a10fd986bb69e4f47a557f1ebd36c4f85ec2 Author: Julien Thierry Date: Fri Nov 8 13:35:42 2019 +0100 ARM: 8792/1: oabi-compat: copy oabi events using __copy_to_user() Commit 319508902600c2688e057750148487996396e9ca upstream. Copy events to user using __copy_to_user() rather than copying members individually with __put_user_error(). This has the benefit of disabling/enabling PAN once per event instead of once per event member. Signed-off-by: Julien Thierry Signed-off-by: Russell King Signed-off-by: David A. Long Reviewed-by: Julien Thierry Signed-off-by: Sasha Levin Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 8d81dfc0500780f30a512e58afa652617dadf368 Author: Julien Thierry Date: Fri Nov 8 13:35:41 2019 +0100 ARM: 8791/1: vfp: use __copy_to_user() when saving VFP state Commit 3aa2df6ec2ca6bc143a65351cca4266d03a8bc41 upstream. Use __copy_to_user() rather than __put_user_error() for individual members when saving VFP state. This has the benefit of disabling/enabling PAN once per copied struct instead of once per write. Signed-off-by: Julien Thierry Signed-off-by: Russell King Signed-off-by: David A. Long Reviewed-by: Julien Thierry Signed-off-by: Sasha Levin Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 97af4d1a2fee66a83c9a21b91ac24715787dd3ea Author: Julien Thierry Date: Fri Nov 8 13:35:40 2019 +0100 ARM: 8789/1: signal: copy registers using __copy_to_user() Commit 5ca451cf6ed04443774bbb7ee45332dafa42e99f upstream. When saving the ARM integer registers, use __copy_to_user() to copy them into the user signal frame, rather than __put_user_error(). This has the benefit of disabling/enabling PAN once for the whole copy instead of once per write. Signed-off-by: Julien Thierry Signed-off-by: Russell King Signed-off-by: David A. Long Reviewed-by: Julien Thierry Signed-off-by: Sasha Levin Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit e0a0a5eede1797251429afe60304bc6aa6c12d87 Author: Russell King Date: Fri Nov 8 13:35:39 2019 +0100 ARM: spectre-v1: mitigate user accesses Commit a3c0f84765bb429ba0fd23de1c57b5e1591c9389 upstream. Spectre variant 1 attacks are about this sequence of pseudo-code: index = load(user-manipulated pointer); access(base + index * stride); In order for the cache side-channel to work, the access() must be made to memory where userspace can detect whether cache lines have been loaded. On 32-bit ARM, this must be either user accessible memory, or a kernel mapping of that same user accessible memory. The problem occurs when the load() speculatively loads privileged data, and the subsequent access() is made to user accessible memory. Any load() which makes use of a user-manipulated pointer is a potential problem if the data it has loaded is used in a subsequent access. This also applies for the access() if the data loaded by that access is used by a subsequent access. Harden the get_user() accessors against Spectre attacks by forcing out of bounds addresses to a NULL pointer. This prevents get_user() being used as the load() step above. As a side effect, put_user() will also be affected even though it isn't implicated. Also harden copy_from_user() by redoing the bounds check within the arm_copy_from_user() code, and NULLing the pointer if out of bounds.
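The core idea, as a hedged C sketch (the helper name is invented; the in-tree code does this in assembly, builds the mask without the conditional branch shown here, and follows it with a csdb barrier):

    #include <stddef.h>

    /* Illustrative only: clamp a user pointer so that an out-of-range
     * address becomes NULL before it can be used as the load() step of
     * a Spectre variant 1 gadget. */
    static inline void *sanitize_user_ptr(void *ptr, size_t size,
                                          unsigned long user_limit)
    {
            unsigned long addr = (unsigned long)ptr;
            unsigned long mask = (size <= user_limit && addr <= user_limit - size)
                                    ? ~0UL : 0UL;   /* all-ones when in range */

            return (void *)(addr & mask);           /* NULL when out of range */
    }
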
Acked-by: Mark Rutland Signed-off-by: Russell King Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 3ff229a261ff35f874497bed74307634748edf77 Author: Russell King Date: Fri Nov 8 13:35:38 2019 +0100 ARM: spectre-v1: use get_user() for __get_user() Commit b1cd0a14806321721aae45f5446ed83a3647c914 upstream. Fixing __get_user() for spectre variant 1 is not sane: we would have to add address space bounds checking in order to validate that the location should be accessed, and then zero the address if found to be invalid. Since __get_user() is supposed to avoid the bounds check, and this is exactly what get_user() does, there's no point having two different implementations that are doing the same thing. So, when the Spectre workarounds are required, make __get_user() an alias of get_user(). Acked-by: Mark Rutland Signed-off-by: Russell King Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 5156abdf3eb8cd9fa74ce852edda547f98f6c8f0 Author: Russell King Date: Fri Nov 8 13:35:37 2019 +0100 ARM: use __inttype() in get_user() Commit d09fbb327d670737ab40fd8bbb0765ae06b8b739 upstream. Borrow the x86 implementation of __inttype() to use in get_user() to select an integer type suitable to temporarily hold the result value. This is necessary to avoid propagating the volatile nature of the result argument, which can cause the following warning: lib/iov_iter.c:413:5: warning: optimization may eliminate reads and/or writes to register variables [-Wvolatile-register-var] Acked-by: Mark Rutland Signed-off-by: Russell King Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 0253fb2101640613a0e6d080fb021cad61fa642f Author: Russell King Date: Fri Nov 8 13:35:36 2019 +0100 ARM: oabi-compat: copy semops using __copy_from_user() Commit 8c8484a1c18e3231648f5ba7cc5ffb7fd70b3ca4 upstream. __get_user_error() is used as a fast accessor to make copying structure members as efficient as possible. However, with software PAN and the recent Spectre variant 1, the efficiency is reduced as these are no longer fast accessors. In the case of software PAN, it has to switch the domain register around each access, and with Spectre variant 1, it would have to repeat the access_ok() check for each access. Rather than using __get_user_error() to copy each semops element member, copy each semops element in full using __copy_from_user(). Acked-by: Mark Rutland Signed-off-by: Russell King Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 318625c2b8522da85b7da326127a4bf897931b4f Author: Russell King Date: Fri Nov 8 13:35:35 2019 +0100 ARM: vfp: use __copy_from_user() when restoring VFP state Commit 42019fc50dfadb219f9e6ddf4c354f3837057d80 upstream. __get_user_error() is used as a fast accessor to make copying structure members in the signal handling path as efficient as possible. However, with software PAN and the recent Spectre variant 1, the efficiency is reduced as these are no longer fast accessors. In the case of software PAN, it has to switch the domain register around each access, and with Spectre variant 1, it would have to repeat the access_ok() check for each access. Use __copy_from_user() rather than __get_user_err() for individual members when restoring VFP state. 
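A hedged sketch of the pattern change (struct layout simplified; not the exact vfp hunks):

    /*
     *   before: __get_user_error(hwstate->fpexc,   &ufp_exc->fpexc,   err);
     *           __get_user_error(hwstate->fpinst,  &ufp_exc->fpinst,  err);
     *           __get_user_error(hwstate->fpinst2, &ufp_exc->fpinst2, err);
     *
     *   after:  err |= __copy_from_user(hwstate, ufp_exc, sizeof(*ufp_exc));
     *
     * One bulk copy means PAN is disabled/enabled and the user range is
     * checked once, instead of once per member.
     */
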
Acked-by: Mark Rutland Signed-off-by: Russell King Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit d9230bf9127df2956fd45ae198ab9bd137feaa9e Author: Russell King Date: Fri Nov 8 13:35:34 2019 +0100 ARM: signal: copy registers using __copy_from_user() Commit c32cd419d6650e42b9cdebb83c672ec945e6bd7e upstream. __get_user_error() is used as a fast accessor to make copying structure members in the signal handling path as efficient as possible. However, with software PAN and the recent Spectre variant 1, the efficiency is reduced as these are no longer fast accessors. In the case of software PAN, it has to switch the domain register around each access, and with Spectre variant 1, it would have to repeat the access_ok() check for each access. It becomes much more efficient to use __copy_from_user() instead, so let's use this for the ARM integer registers. Acked-by: Mark Rutland Signed-off-by: Russell King Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit e4f30ae336f67f32dbd12072039654dc46e6f1a7 Author: Russell King Date: Fri Nov 8 13:35:33 2019 +0100 ARM: spectre-v1: fix syscall entry Commit 10573ae547c85b2c61417ff1a106cffbfceada35 upstream. Prevent speculation at the syscall table decoding by clamping the index used to zero on invalid system call numbers, and using the csdb speculative barrier. Signed-off-by: Russell King Acked-by: Mark Rutland Boot-tested-by: Tony Lindgren Reviewed-by: Tony Lindgren Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 36250826443658d511639a9b94afae6af864c08d Author: Russell King Date: Fri Nov 8 13:35:32 2019 +0100 ARM: spectre-v1: add array_index_mask_nospec() implementation Commit 1d4238c56f9816ce0f9c8dbe42d7f2ad81cb6613 upstream. Add an implementation of the array_index_mask_nospec() function for mitigating Spectre variant 1 throughout the kernel. Signed-off-by: Russell King Acked-by: Mark Rutland Boot-tested-by: Tony Lindgren Reviewed-by: Tony Lindgren Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 8143a0f9c06359d9746a4c425d077a8ecf528ddb Author: Russell King Date: Fri Nov 8 13:35:31 2019 +0100 ARM: spectre-v1: add speculation barrier (csdb) macros Commit a78d156587931a2c3b354534aa772febf6c9e855 upstream. Add assembly and C macros for the new CSDB instruction. Signed-off-by: Russell King Acked-by: Mark Rutland Boot-tested-by: Tony Lindgren Reviewed-by: Tony Lindgren Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 0df031995c6572a0f00589f5c615ef7bca0b0c43 Author: Russell King Date: Fri Nov 8 13:35:30 2019 +0100 ARM: spectre-v2: warn about incorrect context switching functions Commit c44f366ea7c85e1be27d08f2f0880f4120698125 upstream. Warn at error level if the context switching function is not what we are expecting. This can happen with big.Little systems, which we currently do not support. Signed-off-by: Russell King Boot-tested-by: Tony Lindgren Reviewed-by: Tony Lindgren Acked-by: Marc Zyngier Signed-off-by: David A. 
Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 81482138c8a522b179318f830715ec0c354d0ef1 Author: Russell King Date: Fri Nov 8 13:35:29 2019 +0100 ARM: spectre-v2: add firmware based hardening Commit 10115105cb3aa17b5da1cb726ae8dd5f6854bd93 upstream. Add firmware based hardening for cores that require more complex handling in firmware. Signed-off-by: Russell King Boot-tested-by: Tony Lindgren Reviewed-by: Tony Lindgren Reviewed-by: Marc Zyngier Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit a645847f4435a38e242609b76e31b78ef0d4cf92 Author: Russell King Date: Fri Nov 8 13:35:28 2019 +0100 ARM: spectre-v2: harden user aborts in kernel space Commit f5fe12b1eaee220ce62ff9afb8b90929c396595f upstream. In order to prevent aliasing attacks on the branch predictor, invalidate the BTB or instruction cache on CPUs that are known to be affected when taking an abort on an address that is outside of a user task limit: Cortex A8, A9, A12, A17, A73, A75: flush BTB. Cortex A15, Brahma B15: invalidate icache. If the IBE bit is not set, then there is little point to enabling the workaround. Signed-off-by: Russell King Boot-tested-by: Tony Lindgren Reviewed-by: Tony Lindgren Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 4d027736deefe93443f86a3c26c810314e507b5b Author: Russell King Date: Fri Nov 8 13:35:27 2019 +0100 ARM: spectre-v2: add Cortex A8 and A15 validation of the IBE bit Commit e388b80288aade31135aca23d32eee93dd106795 upstream. When the branch predictor hardening is enabled, firmware must have set the IBE bit in the auxiliary control register. If this bit has not been set, the Spectre workarounds will not be functional. Add validation that this bit is set, and print a warning at alert level if this is not the case. Signed-off-by: Russell King Reviewed-by: Florian Fainelli Boot-tested-by: Tony Lindgren Reviewed-by: Tony Lindgren Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 521bb23af00c153223ec16032157df94dbff5717 Author: Russell King Date: Fri Nov 8 13:35:26 2019 +0100 ARM: spectre-v2: harden branch predictor on context switches Commit 06c23f5ffe7ad45b908d0fff604dae08a7e334b9 upstream. Required manual merge of arch/arm/mm/proc-v7.S. Harden the branch predictor against Spectre v2 attacks on context switches for ARMv7 and later CPUs. We do this by: Cortex A9, A12, A17, A73, A75: invalidating the BTB. Cortex A15, Brahma B15: invalidating the instruction cache. Cortex A57 and Cortex A72 are not addressed in this patch. Cortex R7 and Cortex R8 are also not addressed as we do not enforce memory protection on these cores. Signed-off-by: Russell King Boot-tested-by: Tony Lindgren Reviewed-by: Tony Lindgren Acked-by: Marc Zyngier Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 22b1077759ad1f6de0864b589455550bd872e561 Author: Russell King Date: Fri Nov 8 13:35:25 2019 +0100 ARM: spectre: add Kconfig symbol for CPUs vulnerable to Spectre Commit c58d237d0852a57fde9bc2c310972e8f4e3d155d upstream. Add a Kconfig symbol for CPUs which are vulnerable to the Spectre attacks.
Signed-off-by: Russell King Reviewed-by: Florian Fainelli Boot-tested-by: Tony Lindgren Reviewed-by: Tony Lindgren Acked-by: Marc Zyngier Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 564f56594e9f254b20298f62557f6933b27241d0 Author: Russell King Date: Fri Nov 8 13:35:24 2019 +0100 ARM: bugs: add support for per-processor bug checking Commit 9d3a04925deeabb97c8e26d940b501a2873e8af3 upstream. Add support for per-processor bug checking - each processor function descriptor gains a function pointer for this check, which must not be an __init function. If non-NULL, this will be called whenever a CPU enters the kernel via which ever path (boot CPU, secondary CPU startup, CPU resuming, etc.) This allows processor specific bug checks to validate that workaround bits are properly enabled by firmware via all entry paths to the kernel. Signed-off-by: Russell King Reviewed-by: Florian Fainelli Boot-tested-by: Tony Lindgren Reviewed-by: Tony Lindgren Acked-by: Marc Zyngier Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 71dd24046bfd93ae3d642fbb8b7e8d431a2a10dd Author: Russell King Date: Fri Nov 8 13:35:23 2019 +0100 ARM: bugs: hook processor bug checking into SMP and suspend paths Commit 26602161b5ba795928a5a719fe1d5d9f2ab5c3ef upstream. Check for CPU bugs when secondary processors are being brought online, and also when CPUs are resuming from a low power mode. This gives an opportunity to check that processor specific bug workarounds are correctly enabled for all paths that a CPU re-enters the kernel. Signed-off-by: Russell King Reviewed-by: Florian Fainelli Boot-tested-by: Tony Lindgren Reviewed-by: Tony Lindgren Acked-by: Marc Zyngier Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 1797ff1ac57c45a6c0145b0d0ce16a1c2284a077 Author: Russell King Date: Fri Nov 8 13:35:22 2019 +0100 ARM: bugs: prepare processor bug infrastructure Commit a5b9177f69329314721aa7022b7e69dab23fa1f0 upstream. Prepare the processor bug infrastructure so that it can be expanded to check for per-processor bugs. Signed-off-by: Russell King Reviewed-by: Florian Fainelli Boot-tested-by: Tony Lindgren Reviewed-by: Tony Lindgren Acked-by: Marc Zyngier Signed-off-by: David A. Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 6ba73321bb93ebc005b663c375618b1a4ee60e02 Author: Russell King Date: Fri Nov 8 13:35:21 2019 +0100 ARM: add more CPU part numbers for Cortex and Brahma B15 CPUs Commit f5683e76f35b4ec5891031b6a29036efe0a1ff84 upstream. Add CPU part numbers for Cortex A53, A57, A72, A73, A75 and the Broadcom Brahma B15 CPU. Signed-off-by: Russell King Acked-by: Florian Fainelli Boot-tested-by: Tony Lindgren Reviewed-by: Tony Lindgren Acked-by: Marc Zyngier Signed-off-by: David A. 
Long Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 8ca266df65a4b7e255e0f18a79ce157e87ede274 Author: Marc Zyngier Date: Fri Nov 8 13:35:20 2019 +0100 arm/arm64: smccc-1.1: Handle function result as parameters [ Upstream commit 755a8bf5579d22eb5636685c516d8dede799e27b ] If someone has the silly idea to write something along those lines: extern u64 foo(void); void bar(struct arm_smccc_res *res) { arm_smccc_1_1_smc(0xbad, foo(), res); } they are in for a surprise, as this gets compiled as: 0000000000000588 : 588: a9be7bfd stp x29, x30, [sp, #-32]! 58c: 910003fd mov x29, sp 590: f9000bf3 str x19, [sp, #16] 594: aa0003f3 mov x19, x0 598: aa1e03e0 mov x0, x30 59c: 94000000 bl 0 <_mcount> 5a0: 94000000 bl 0 5a4: aa0003e1 mov x1, x0 5a8: d4000003 smc #0x0 5ac: b4000073 cbz x19, 5b8 5b0: a9000660 stp x0, x1, [x19] 5b4: a9010e62 stp x2, x3, [x19, #16] 5b8: f9400bf3 ldr x19, [sp, #16] 5bc: a8c27bfd ldp x29, x30, [sp], #32 5c0: d65f03c0 ret 5c4: d503201f nop The call to foo "overwrites" the x0 register for the return value, and we end up calling the wrong secure service. A solution is to evaluate all the parameters before assigning anything to specific registers, leading to the expected result: 0000000000000588 : 588: a9be7bfd stp x29, x30, [sp, #-32]! 58c: 910003fd mov x29, sp 590: f9000bf3 str x19, [sp, #16] 594: aa0003f3 mov x19, x0 598: aa1e03e0 mov x0, x30 59c: 94000000 bl 0 <_mcount> 5a0: 94000000 bl 0 5a4: aa0003e1 mov x1, x0 5a8: d28175a0 mov x0, #0xbad 5ac: d4000003 smc #0x0 5b0: b4000073 cbz x19, 5bc 5b4: a9000660 stp x0, x1, [x19] 5b8: a9010e62 stp x2, x3, [x19, #16] 5bc: f9400bf3 ldr x19, [sp, #16] 5c0: a8c27bfd ldp x29, x30, [sp], #32 5c4: d65f03c0 ret Reported-by: Julien Grall Signed-off-by: Marc Zyngier Signed-off-by: Will Deacon Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 1b8430525195acc007155728bcb1e4eb0f5ded44 Author: Marc Zyngier Date: Fri Nov 8 13:35:19 2019 +0100 arm/arm64: smccc-1.1: Make return values unsigned long [ Upstream commit 1d8f574708a3fb6f18c85486d0c5217df893c0cf ] An unfortunate consequence of having a strong typing for the input values to the SMC call is that it also affects the type of the return values, limiting r0 to 32 bits and r{1,2,3} to whatever was passed as an input. Let's turn everything into "unsigned long", which satisfies the requirements of both architectures, and allows for the full range of return values. Reported-by: Julien Grall Signed-off-by: Marc Zyngier Signed-off-by: Will Deacon Signed-off-by: Sasha Levin Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 0d4e2980b38b86ebfb864b402bce290d7575ba64 Author: Marc Zyngier Date: Fri Nov 8 13:35:18 2019 +0100 arm/arm64: smccc: Add SMCCC-specific return codes commit eff0e9e1078ea7dc1d794dc50e31baef984c46d7 upstream. We've so far used the PSCI return codes for SMCCC because they were extremely similar. But with the new ARM DEN 0070A specification, "NOT_REQUIRED" (-2) is clashing with PSCI's "PSCI_RET_INVALID_PARAMS". Let's bite the bullet and add SMCCC specific return codes. Users can be repainted as and when required. 
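For orientation, a hedged sketch of the kind of definitions this adds (names and values as recalled from include/linux/arm-smccc.h; treat them as an assumption rather than a quote):

    #define SMCCC_RET_SUCCESS            0
    #define SMCCC_RET_NOT_SUPPORTED     -1
    #define SMCCC_RET_NOT_REQUIRED      -2   /* the value that clashes with
                                                PSCI_RET_INVALID_PARAMS when
                                                PSCI return codes are reused */
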
Acked-by: Will Deacon Reviewed-by: Mark Rutland Signed-off-by: Marc Zyngier Signed-off-by: Catalin Marinas Signed-off-by: Marc Zyngier Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit deb97e65fa574ff8d5ed6b36a267c205c1bce618 Author: Marc Zyngier Date: Fri Nov 8 13:35:17 2019 +0100 arm/arm64: smccc: Implement SMCCC v1.1 inline primitive commit f2d3b2e8759a5833df6f022e42df2d581e6d843c upstream. One of the major improvements of SMCCC v1.1 is that it only clobbers the first 4 registers, both on 32 and 64bit. This means that it becomes very easy to provide an inline version of the SMC call primitive, and avoid performing a function call to stash the registers that would otherwise be clobbered by SMCCC v1.0. Reviewed-by: Robin Murphy Tested-by: Ard Biesheuvel Signed-off-by: Marc Zyngier Signed-off-by: Catalin Marinas Signed-off-by: Mark Rutland [v4.9 backport] Tested-by: Greg Hackmann Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 36329b1d312e84a7a1e70f757a8993a3277e67ba Author: Marc Zyngier Date: Fri Nov 8 13:35:16 2019 +0100 arm/arm64: smccc: Make function identifiers an unsigned quantity commit ded4c39e93f3b72968fdb79baba27f3b83dad34c upstream. Function identifiers are a 32bit, unsigned quantity. But we never tell the compiler so, resulting in the following: 4ac: b26187e0 mov x0, #0xffffffff80000001 We thus rely on the firmware narrowing it for us, which is not always a reasonable expectation. Cc: stable@vger.kernel.org Reported-by: Ard Biesheuvel Acked-by: Ard Biesheuvel Reviewed-by: Robin Murphy Tested-by: Ard Biesheuvel Signed-off-by: Marc Zyngier Signed-off-by: Catalin Marinas Signed-off-by: Mark Rutland [v4.9 backport] Tested-by: Greg Hackmann Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 2217fa9f88753f455f948e89a21f7b5e882809e1 Author: Marc Zyngier Date: Fri Nov 8 13:35:15 2019 +0100 firmware/psci: Expose SMCCC version through psci_ops commit e78eef554a912ef6c1e0bbf97619dafbeae3339f upstream. Since PSCI 1.0 allows the SMCCC version to be (indirectly) probed, let's do that at boot time, and expose the version of the calling convention as part of the psci_ops structure. Acked-by: Lorenzo Pieralisi Reviewed-by: Robin Murphy Tested-by: Ard Biesheuvel Signed-off-by: Marc Zyngier Signed-off-by: Catalin Marinas Signed-off-by: Mark Rutland [v4.9 backport] Tested-by: Greg Hackmann Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 0a65b836f2bf5bcd46afd62a9b322065a68afdf7 Author: Marc Zyngier Date: Fri Nov 8 13:35:14 2019 +0100 firmware/psci: Expose PSCI conduit commit 09a8d6d48499f93e2abde691f5800081cd858726 upstream. In order to call into the firmware to apply workarounds, it is useful to find out whether we're using HVC or SMC. Let's expose this through the psci_ops. Acked-by: Lorenzo Pieralisi Reviewed-by: Robin Murphy Tested-by: Ard Biesheuvel Signed-off-by: Marc Zyngier Signed-off-by: Catalin Marinas Signed-off-by: Mark Rutland [v4.9 backport] Tested-by: Greg Hackmann Signed-off-by: Greg Kroah-Hartman Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit d10ce9593da9edf141b7bfcf9e8f62d39a523c84 Author: Marc Zyngier Date: Fri Nov 8 13:35:13 2019 +0100 arm64: KVM: Report SMCCC_ARCH_WORKAROUND_1 BP hardening support commit 6167ec5c9145cdf493722dfd80a5d48bafc4a18a upstream. A new feature of SMCCC 1.1 is that it offers firmware-based CPU workarounds.
In particular, SMCCC_ARCH_WORKAROUND_1 provides BP hardening for CVE-2017-5715. If the host has some mitigation for this issue, report that we deal with it using SMCCC_ARCH_WORKAROUND_1, as we apply the host workaround on every guest exit. Tested-by: Ard Biesheuvel Reviewed-by: Christoffer Dall Signed-off-by: Marc Zyngier Signed-off-by: Catalin Marinas [v4.9: account for files moved to virt/ upstream] Signed-off-by: Mark Rutland [v4.9 backport] Tested-by: Greg Hackmann Signed-off-by: Greg Kroah-Hartman [ardb: restrict to include/linux/arm-smccc.h] Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit f13582bcc607252f7309723aa903b4ed857de94b Author: Marc Zyngier Date: Fri Nov 8 13:35:12 2019 +0100 arm/arm64: KVM: Advertise SMCCC v1.1 commit 09e6be12effdb33bf7210c8867bbd213b66a499e upstream. The new SMC Calling Convention (v1.1) allows for a reduced overhead when calling into the firmware, and provides a new feature discovery mechanism. Make it visible to KVM guests. Tested-by: Ard Biesheuvel Reviewed-by: Christoffer Dall Signed-off-by: Marc Zyngier Signed-off-by: Catalin Marinas [v4.9: account for files moved to virt/ upstream] Signed-off-by: Mark Rutland [v4.9 backport] Tested-by: Greg Hackmann Signed-off-by: Greg Kroah-Hartman [ardb: restrict to include/linux/arm-smccc.h, drop KVM bits] Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 3759abdf7aa76d9c826dba5dbd1df4c88d9369ba Author: Vladimir Murzin Date: Fri Nov 8 13:35:11 2019 +0100 ARM: Move system register accessors to asm/cp15.h Commit 4f2546384150e78cad8045e59a9587fabcd9f9fe upstream. Headers linux/irqchip/arm-gic-v3.h and arch/arm/include/asm/kvm_hyp.h are included in virt/kvm/arm/hyp/vgic-v3-sr.c and both define macros called __ACCESS_CP15 and __ACCESS_CP15_64 which obviously creates a conflict. These macros were introduced independently for GIC and KVM and, in fact, do the same thing. As an option we could add prefixes to the KVM and GIC versions of the macros so they won't clash, but it'd introduce code duplication. Alternatively, we could keep the macros in, say, the GIC header and include it in the KVM one (or vice versa), but such a dependency would not look any nicer. So we follow the arm64 way (it handles this via sysreg.h) and move only a single set of macros to asm/cp15.h Cc: Russell King Acked-by: Marc Zyngier Signed-off-by: Vladimir Murzin Signed-off-by: Christoffer Dall Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit a6e65291d418deb9ec3c5022be22a014e80948fb Author: Russell King Date: Fri Nov 8 13:35:10 2019 +0100 ARM: uaccess: remove put_user() code duplication Commit 9f73bd8bb445e0cbe4bcef6d4cfc788f1e184007 upstream. Remove the code duplication between put_user() and __put_user(). The code which selected the implementation based upon the pointer size, and declared the local variable to hold the value to be put, is common to both implementations. Signed-off-by: Russell King Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 8e0a4c1012500d837e8f611fd2c0465ac671c913 Author: Jens Wiklander Date: Fri Nov 8 13:35:09 2019 +0100 ARM: 8481/2: drivers: psci: replace psci firmware calls Commit e679660dbb8347f275fe5d83a5dd59c1fb6c8e63 upstream. Switch to using a generic interface for issuing SMC/HVC based on the ARM SMC Calling Convention. Removes the now unused psci-call.S.
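Roughly, the PSCI invocation then reduces to the hedged sketch below (close to the drivers/firmware/psci.c helper, but not guaranteed to match it byte for byte):

    #include <linux/arm-smccc.h>

    static unsigned long __invoke_psci_fn_smc(unsigned long function_id,
                                              unsigned long arg0,
                                              unsigned long arg1,
                                              unsigned long arg2)
    {
            struct arm_smccc_res res;

            /* unused arguments are passed as 0; results come back in
             * res.a0..a3, and PSCI only cares about a0 */
            arm_smccc_smc(function_id, arg0, arg1, arg2, 0, 0, 0, 0, &res);
            return res.a0;
    }
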
Acked-by: Will Deacon Reviewed-by: Mark Rutland Tested-by: Mark Rutland Acked-by: Lorenzo Pieralisi Tested-by: Lorenzo Pieralisi Signed-off-by: Jens Wiklander Signed-off-by: Russell King Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 596efa35fb66c236abaab4245ffa2a38970978f7 Author: Jens Wiklander Date: Fri Nov 8 13:35:08 2019 +0100 ARM: 8480/2: arm64: add implementation for arm-smccc Commit 14457459f9ca2ff8521686168ea179edc3a56a44 upstream. Adds implementation for arm-smccc and enables CONFIG_HAVE_SMCCC. Acked-by: Will Deacon Signed-off-by: Jens Wiklander Signed-off-by: Russell King Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 91719b66d59590cff71b0b6f18b6bf40ebddf9c8 Author: Jens Wiklander Date: Fri Nov 8 13:35:07 2019 +0100 ARM: 8479/2: add implementation for arm-smccc Commit b329f95d70f3f955093e9a2b18ac1ed3587a8f73 upstream. Adds implementation for arm-smccc and enables CONFIG_HAVE_SMCCC for architectures that may support arm-smccc. It's the responsibility of the caller to know if the SMC instruction is supported by the platform. Reviewed-by: Lars Persson Signed-off-by: Jens Wiklander Signed-off-by: Russell King Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit 36e7c2e687603c81dc47152537bd91cce681f0dc Author: Jens Wiklander Date: Fri Nov 8 13:35:06 2019 +0100 ARM: 8478/2: arm/arm64: add arm-smccc Commit 98dd64f34f47ce19b388d9015f767f48393a81eb upstream. Adds helpers to do SMC and HVC based on ARM SMC Calling Convention. CONFIG_HAVE_ARM_SMCCC is enabled for architectures that may support the SMC or HVC instruction. It's the responsibility of the caller to know if the SMC instruction is supported by the platform. This patch doesn't provide an implementation of the declared functions. Later patches will bring in implementations and set CONFIG_HAVE_ARM_SMCCC for ARM and ARM64 respectively. Reviewed-by: Lorenzo Pieralisi Signed-off-by: Jens Wiklander Signed-off-by: Russell King Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit b4fe083f59a6f9911e00affd42fc42d046f11504 Author: Andrey Ryabinin Date: Fri Nov 8 13:35:05 2019 +0100 ARM: 8051/1: put_user: fix possible data corruption in put_user Commit 537094b64b229bf3ad146042f83e74cf6abe59df upstream. According to the ARM procedure call standard, the r2 register is call-clobbered. So after the result of the x expression is put into r2, any following function call in p may overwrite r2. To fix this, the result of the p expression must be saved to a temporary variable before the x expression is assigned to __r2. Signed-off-by: Andrey Ryabinin Reviewed-by: Nicolas Pitre Cc: stable@vger.kernel.org Signed-off-by: Russell King Signed-off-by: Ard Biesheuvel Signed-off-by: Greg Kroah-Hartman commit e4cea4d4c5d9f5162c62bd1af102a781987489b6 Author: Jeffrey Hugo Date: Thu Oct 17 08:26:06 2019 -0700 dmaengine: qcom: bam_dma: Fix resource leak commit 7667819385457b4aeb5fac94f67f52ab52cc10d5 upstream. bam_dma_terminate_all() will leak resources if any of the transactions are committed to the hardware (present in the desc fifo), and not complete. Since bam_dma_terminate_all() does not cause the hardware to be updated, the hardware will still operate on any previously committed transactions. This can cause memory corruption if the memory for the transaction has been reassigned, and will cause a sync issue between the BAM and its client(s). Fix this by properly updating the hardware in bam_dma_terminate_all().
Fixes: e7c0fe2a5c84 ("dmaengine: add Qualcomm BAM dma driver") Signed-off-by: Jeffrey Hugo Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20191017152606.34120-1-jeffrey.l.hugo@gmail.com Signed-off-by: Vinod Koul Signed-off-by: Greg Kroah-Hartman commit 491cd03f3b44f58f346ec15e2fc958d0def7b5a7 Author: Eric Dumazet Date: Tue Oct 22 07:57:46 2019 -0700 net/flow_dissector: switch to siphash commit 55667441c84fa5e0911a0aac44fb059c15ba6da2 upstream. UDP IPv6 packets auto flowlabels are using a 32bit secret (static u32 hashrnd in net/core/flow_dissector.c) and apply jhash() over fields known by the receivers. Attackers can easily infer the 32bit secret and use this information to identify a device and/or user, since this 32bit secret is only set at boot time. Really, using jhash() to generate cookies sent on the wire is a serious security concern. Trying to change the rol32(hash, 16) in ip6_make_flowlabel() would be a dead end. Trying to periodically change the secret (like in sch_sfq.c) could change paths taken in the network for long lived flows. Let's switch to siphash, as we did in commit df453700e8d8 ("inet: switch IP ID generator to siphash") Using a cryptographically strong pseudo random function will solve this privacy issue and more generally remove other weak points in the stack. Packet schedulers using skb_get_hash_perturb() benefit from this change. Fixes: b56774163f99 ("ipv6: Enable auto flow labels by default") Fixes: 42240901f7c4 ("ipv6: Implement different admin modes for automatic flow labels") Fixes: 67800f9b1f4e ("ipv6: Call skb_get_hash_flowi6 to get skb->hash in ip6_make_flowlabel") Fixes: cb1ce2ef387b ("ipv6: Implement automatic flow label generation on transmit") Signed-off-by: Eric Dumazet Reported-by: Jonathan Berger Reported-by: Amit Klein Reported-by: Benny Pinkas Cc: Tom Herbert Signed-off-by: David S. Miller Signed-off-by: Mahesh Bandewar Signed-off-by: Greg Kroah-Hartman commit 993e400581c3b03bf7817607a8a5e84ea3fc6645 Author: Eric Dumazet Date: Fri Nov 1 10:32:19 2019 -0700 inet: stop leaking jiffies on the wire [ Upstream commit a904a0693c189691eeee64f6c6b188bd7dc244e9 ] Historically linux tried to stick to RFC 791, 1122, 2003 for IPv4 ID field generation. RFC 6864 made clear that no matter how hard we try, we can not ensure unicity of IP ID within maximum lifetime for all datagrams with a given source address/destination address/protocol tuple. Linux uses a per socket inet generator (inet_id), initialized at connection startup with a XOR of 'jiffies' and other fields that appear clear on the wire. Thiemo Nagel pointed that this strategy is a privacy concern as this provides 16 bits of entropy to fingerprint devices. Let's switch to a random starting point, this is just as good as far as RFC 6864 is concerned and does not leak anything critical. Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Eric Dumazet Reported-by: Thiemo Nagel Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman commit 1475db309435e0002c7defa3ebf23922f6820336 Author: Eran Ben Elisha Date: Sun Oct 27 16:39:15 2019 +0200 net/mlx4_core: Dynamically set guaranteed amount of counters per VF [ Upstream commit e19868efea0c103f23b4b7e986fd0a703822111f ] Prior to this patch, the amount of counters guaranteed per VF in the resource tracker was MLX4_VF_COUNTERS_PER_PORT * MLX4_MAX_PORTS. It was set regardless if the VF was single or dual port. This caused several VFs to have no guaranteed counters although the system could satisfy their request. 
The fix is to dynamically guarantee counters, based on each VF specification. Fixes: 9de92c60beaa ("net/mlx4_core: Adjust counter grant policy in the resource tracker") Signed-off-by: Eran Ben Elisha Signed-off-by: Jack Morgenstein Signed-off-by: Tariq Toukan Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman commit a1014da6569786109872df6af0200cdf952043ed Author: Xin Long Date: Tue Oct 29 01:24:32 2019 +0800 vxlan: check tun_info options_len properly [ Upstream commit eadf52cf1852196a1363044dcda22fa5d7f296f7 ] This patch is to improve the tun_info options_len check by dropping the skb when TUNNEL_VXLAN_OPT is set but options_len is less than the size of vxlan_metadata. This can avoid a potential out-of-bounds access on ip_tun_info. Fixes: ee122c79d422 ("vxlan: Flow based tunneling") Signed-off-by: Xin Long Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman commit af26f04e074fe94c4cd7a58edbb919b56b7cba4a Author: Eric Dumazet Date: Wed Oct 23 22:44:52 2019 -0700 net: add READ_ONCE() annotation in __skb_wait_for_more_packets() [ Upstream commit 7c422d0ce97552dde4a97e6290de70ec6efb0fc6 ] __skb_wait_for_more_packets() can be called while other cpus can feed packets to the socket receive queue. KCSAN reported : BUG: KCSAN: data-race in __skb_wait_for_more_packets / __udp_enqueue_schedule_skb write to 0xffff888102e40b58 of 8 bytes by interrupt on cpu 0: __skb_insert include/linux/skbuff.h:1852 [inline] __skb_queue_before include/linux/skbuff.h:1958 [inline] __skb_queue_tail include/linux/skbuff.h:1991 [inline] __udp_enqueue_schedule_skb+0x2d7/0x410 net/ipv4/udp.c:1470 __udp_queue_rcv_skb net/ipv4/udp.c:1940 [inline] udp_queue_rcv_one_skb+0x7bd/0xc70 net/ipv4/udp.c:2057 udp_queue_rcv_skb+0xb5/0x400 net/ipv4/udp.c:2074 udp_unicast_rcv_skb.isra.0+0x7e/0x1c0 net/ipv4/udp.c:2233 __udp4_lib_rcv+0xa44/0x17c0 net/ipv4/udp.c:2300 udp_rcv+0x2b/0x40 net/ipv4/udp.c:2470 ip_protocol_deliver_rcu+0x4d/0x420 net/ipv4/ip_input.c:204 ip_local_deliver_finish+0x110/0x140 net/ipv4/ip_input.c:231 NF_HOOK include/linux/netfilter.h:305 [inline] NF_HOOK include/linux/netfilter.h:299 [inline] ip_local_deliver+0x133/0x210 net/ipv4/ip_input.c:252 dst_input include/net/dst.h:442 [inline] ip_rcv_finish+0x121/0x160 net/ipv4/ip_input.c:413 NF_HOOK include/linux/netfilter.h:305 [inline] NF_HOOK include/linux/netfilter.h:299 [inline] ip_rcv+0x18f/0x1a0 net/ipv4/ip_input.c:523 __netif_receive_skb_one_core+0xa7/0xe0 net/core/dev.c:5010 __netif_receive_skb+0x37/0xf0 net/core/dev.c:5124 process_backlog+0x1d3/0x420 net/core/dev.c:5955 read to 0xffff888102e40b58 of 8 bytes by task 13035 on cpu 1: __skb_wait_for_more_packets+0xfa/0x320 net/core/datagram.c:100 __skb_recv_udp+0x374/0x500 net/ipv4/udp.c:1683 udp_recvmsg+0xe1/0xb10 net/ipv4/udp.c:1712 inet_recvmsg+0xbb/0x250 net/ipv4/af_inet.c:838 sock_recvmsg_nosec+0x5c/0x70 net/socket.c:871 ___sys_recvmsg+0x1a0/0x3e0 net/socket.c:2480 do_recvmmsg+0x19a/0x5c0 net/socket.c:2601 __sys_recvmmsg+0x1ef/0x200 net/socket.c:2680 __do_sys_recvmmsg net/socket.c:2703 [inline] __se_sys_recvmmsg net/socket.c:2696 [inline] __x64_sys_recvmmsg+0x89/0xb0 net/socket.c:2696 do_syscall_64+0xcc/0x370 arch/x86/entry/common.c:290 entry_SYSCALL_64_after_hwframe+0x44/0xa9 Reported by Kernel Concurrency Sanitizer on: CPU: 1 PID: 13035 Comm: syz-executor.3 Not tainted 5.4.0-rc3+ #0 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 Signed-off-by: Eric Dumazet Reported-by: syzbot Signed-off-by: David S.
Miller Signed-off-by: Greg Kroah-Hartman commit 8ac802ed70492190f1d733662dcd753cc786884f Author: zhanglin Date: Sat Oct 26 15:54:16 2019 +0800 net: Zeroing the structure ethtool_wolinfo in ethtool_get_wol() [ Upstream commit 5ff223e86f5addbfae26419cbb5d61d98f6fbf7d ] memset() the structure ethtool_wolinfo that has padded bytes, since the padded bytes have not been zeroed out. Signed-off-by: zhanglin Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman commit 4ef32dfb62cb88d2efbd15e9aa9d148681b6791e Author: Jiangfeng Xiao Date: Mon Oct 28 13:09:46 2019 +0800 net: hisilicon: Fix ping latency when deal with high throughput [ Upstream commit e56bd641ca61beb92b135298d5046905f920b734 ] This is due to an error in over-budget processing. When dealing with high throughput, the used buffers that exceed the budget are not cleaned up. In addition, it takes a lot of cycles to clean up the used buffers, and only then can the buffer where the valid data is located take effect. Signed-off-by: Jiangfeng Xiao Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman commit 2f036ae97f3ddac74d40cdf6306473dc28200138 Author: Tejun Heo Date: Thu Oct 24 13:50:27 2019 -0700 net: fix sk_page_frag() recursion from memory reclaim [ Upstream commit 20eb4f29b60286e0d6dc01d9c260b4bd383c58fb ] sk_page_frag() optimizes skb_frag allocations by using per-task skb_frag cache when it knows it's the only user. The condition is determined by seeing whether the socket allocation mask allows blocking - if the allocation may block, it obviously owns the task's context and ergo exclusively owns current->task_frag. Unfortunately, this misses recursion through memory reclaim path. Please take a look at the following backtrace. [2] RIP: 0010:tcp_sendmsg_locked+0xccf/0xe10 ... tcp_sendmsg+0x27/0x40 sock_sendmsg+0x30/0x40 sock_xmit.isra.24+0xa1/0x170 [nbd] nbd_send_cmd+0x1d2/0x690 [nbd] nbd_queue_rq+0x1b5/0x3b0 [nbd] __blk_mq_try_issue_directly+0x108/0x1b0 blk_mq_request_issue_directly+0xbd/0xe0 blk_mq_try_issue_list_directly+0x41/0xb0 blk_mq_sched_insert_requests+0xa2/0xe0 blk_mq_flush_plug_list+0x205/0x2a0 blk_flush_plug_list+0xc3/0xf0 [1] blk_finish_plug+0x21/0x2e _xfs_buf_ioapply+0x313/0x460 __xfs_buf_submit+0x67/0x220 xfs_buf_read_map+0x113/0x1a0 xfs_trans_read_buf_map+0xbf/0x330 xfs_btree_read_buf_block.constprop.42+0x95/0xd0 xfs_btree_lookup_get_block+0x95/0x170 xfs_btree_lookup+0xcc/0x470 xfs_bmap_del_extent_real+0x254/0x9a0 __xfs_bunmapi+0x45c/0xab0 xfs_bunmapi+0x15/0x30 xfs_itruncate_extents_flags+0xca/0x250 xfs_free_eofblocks+0x181/0x1e0 xfs_fs_destroy_inode+0xa8/0x1b0 destroy_inode+0x38/0x70 dispose_list+0x35/0x50 prune_icache_sb+0x52/0x70 super_cache_scan+0x120/0x1a0 do_shrink_slab+0x120/0x290 shrink_slab+0x216/0x2b0 shrink_node+0x1b6/0x4a0 do_try_to_free_pages+0xc6/0x370 try_to_free_mem_cgroup_pages+0xe3/0x1e0 try_charge+0x29e/0x790 mem_cgroup_charge_skmem+0x6a/0x100 __sk_mem_raise_allocated+0x18e/0x390 __sk_mem_schedule+0x2a/0x40 [0] tcp_sendmsg_locked+0x8eb/0xe10 tcp_sendmsg+0x27/0x40 sock_sendmsg+0x30/0x40 ___sys_sendmsg+0x26d/0x2b0 __sys_sendmsg+0x57/0xa0 do_syscall_64+0x42/0x100 entry_SYSCALL_64_after_hwframe+0x44/0xa9 In [0], tcp_sendmsg_locked() was using current->page_frag when it called sk_wmem_schedule(). It already calculated how many bytes can fit into current->page_frag. Due to memory pressure, sk_wmem_schedule() called into the memory reclaim path, which called into xfs and then the IO issue path.
Because the filesystem in question is backed by nbd, the control goes back into the tcp layer - back into tcp_sendmsg_locked(). nbd sets sk_allocation to (GFP_NOIO | __GFP_MEMALLOC) which makes sense - it's in the process of freeing memory and wants to be able to, e.g., drop clean pages to make forward progress. However, this confused sk_page_frag() called from [2]. Because it only tests whether the allocation allows blocking which it does, it now thinks current->page_frag can be used again although it already was being used in [0]. After [2] used current->page_frag, the offset would be increased by the used amount. When the control returns to [0], current->page_frag's offset is increased and the previously calculated number of bytes now may overrun the end of allocated memory leading to silent memory corruptions. Fix it by adding gfpflags_normal_context() which tests sleepable && !reclaim and using it to determine whether to use current->task_frag. v2: Eric didn't like gfp flags being tested twice. Introduce a new helper gfpflags_normal_context() and combine the two tests. Signed-off-by: Tejun Heo Cc: Josef Bacik Cc: Eric Dumazet Cc: stable@vger.kernel.org Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman commit 888913ed7d7a825686bf255611a50904096abd17 Author: Eric Dumazet Date: Mon Nov 4 07:57:55 2019 -0800 dccp: do not leak jiffies on the wire [ Upstream commit 3d1e5039f5f87a8731202ceca08764ee7cb010d3 ] For some reason I missed the case of DCCP passive flows in my previous patch. Fixes: a904a0693c18 ("inet: stop leaking jiffies on the wire") Signed-off-by: Eric Dumazet Reported-by: Thiemo Nagel Signed-off-by: David S. Miller Signed-off-by: Greg Kroah-Hartman commit ad56882f0cbaa8af92d022d1958c8c3bee56e59c Author: Dave Wysochanski Date: Wed Oct 23 05:02:33 2019 -0400 cifs: Fix cifsInodeInfo lock_sem deadlock when reconnect occurs [ Upstream commit d46b0da7a33dd8c99d969834f682267a45444ab3 ] There's a deadlock that is possible and can easily be seen with a test where multiple readers open/read/close the same file and a disruption occurs causing reconnect. The deadlock is due to a reader thread inside cifs_strict_readv calling down_read and obtaining lock_sem, and then after reconnect inside cifs_reopen_file calling down_read a second time. If in between the two down_read calls, a down_write comes from another process, deadlock occurs. CPU0 CPU1 ---- ---- cifs_strict_readv() down_read(&cifsi->lock_sem); _cifsFileInfo_put OR cifs_new_fileinfo down_write(&cifsi->lock_sem); cifs_reopen_file() down_read(&cifsi->lock_sem); Fix the above by changing all down_write(lock_sem) calls to a down_write_trylock(lock_sem)/msleep() loop, which in turn makes the second down_read call benign since it will never block behind the writer while holding lock_sem. Signed-off-by: Dave Wysochanski Suggested-by: Ronnie Sahlberg Reviewed-by: Ronnie Sahlberg Reviewed-by: Pavel Shilovsky Signed-off-by: Sasha Levin commit 771fd6cdb9f84d34a18a432e490cab88d0ba48cd Author: Jonas Gorski Date: Tue Oct 22 21:11:00 2019 +0200 MIPS: bmips: mark exception vectors as char arrays [ Upstream commit e4f5cb1a9b27c0f94ef4f5a0178a3fde2d3d0e9e ] The vectors span more than one byte, so mark them as arrays.
Fixes the following build error when building with GCC 8.3: In file included from ./include/linux/string.h:19, from ./include/linux/bitmap.h:9, from ./include/linux/cpumask.h:12, from ./arch/mips/include/asm/processor.h:15, from ./arch/mips/include/asm/thread_info.h:16, from ./include/linux/thread_info.h:38, from ./include/asm-generic/preempt.h:5, from ./arch/mips/include/generated/asm/preempt.h:1, from ./include/linux/preempt.h:81, from ./include/linux/spinlock.h:51, from ./include/linux/mmzone.h:8, from ./include/linux/bootmem.h:8, from arch/mips/bcm63xx/prom.c:10: arch/mips/bcm63xx/prom.c: In function 'prom_init': ./arch/mips/include/asm/string.h:162:11: error: '__builtin_memcpy' forming offset [2, 32] is out of the bounds [0, 1] of object 'bmips_smp_movevec' with type 'char' [-Werror=array-bounds] __ret = __builtin_memcpy((dst), (src), __len); \ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ arch/mips/bcm63xx/prom.c:97:3: note: in expansion of macro 'memcpy' memcpy((void *)0xa0000200, &bmips_smp_movevec, 0x20); ^~~~~~ In file included from arch/mips/bcm63xx/prom.c:14: ./arch/mips/include/asm/bmips.h:80:13: note: 'bmips_smp_movevec' declared here extern char bmips_smp_movevec; Fixes: 18a1eef92dcd ("MIPS: BMIPS: Introduce bmips.h") Signed-off-by: Jonas Gorski Reviewed-by: Florian Fainelli Signed-off-by: Paul Burton Cc: linux-mips@vger.kernel.org Cc: Ralf Baechle Cc: James Hogan Signed-off-by: Sasha Levin commit 265c6b8ab54cf46ac4e3c768f2be1489dc13a494 Author: Navid Emamdoost Date: Fri Oct 4 13:58:43 2019 -0500 of: unittest: fix memory leak in unittest_data_add [ Upstream commit e13de8fe0d6a51341671bbe384826d527afe8d44 ] In unittest_data_add, a copy buffer is created via kmemdup. This buffer is leaked if of_fdt_unflatten_tree fails. The release for the unittest_data buffer is added. Fixes: b951f9dc7f25 ("Enabling OF selftest to run without machine's devicetree") Signed-off-by: Navid Emamdoost Reviewed-by: Frank Rowand Signed-off-by: Rob Herring Signed-off-by: Sasha Levin commit 5500494160247be0356bd7235f37093362061125 Author: Bodo Stroesser Date: Mon Oct 14 20:29:04 2019 +0200 scsi: target: core: Do not overwrite CDB byte 1 [ Upstream commit 27e84243cb63601a10e366afe3e2d05bb03c1cb5 ] passthrough_parse_cdb() - used by TCMU and PSCSI - attempts to reset the LUN field of SCSI-2 CDBs (bits 5,6,7 of byte 1). The current code is wrong, as for newer commands that do not have the LUN field it overwrites relevant command bits (e.g. for SECURITY PROTOCOL IN / OUT). We think this code was unnecessary from the beginning or at least it is no longer useful. So we remove it entirely. Link: https://lore.kernel.org/r/12498eab-76fd-eaad-1316-c2827badb76a@ts.fujitsu.com Signed-off-by: Bodo Stroesser Reviewed-by: Bart Van Assche Reviewed-by: Hannes Reinecke Signed-off-by: Martin K. Petersen Signed-off-by: Sasha Levin commit 5952dc84e22d78af29bc8a8026c35f2a9e68b029 Author: Yunfeng Ye Date: Wed Oct 16 16:38:45 2019 +0800 perf kmem: Fix memory leak in compact_gfp_flags() [ Upstream commit 1abecfcaa7bba21c9985e0136fa49836164dd8fd ] The memory @orig_flags is allocated by strdup(); it is freed on the normal path but leaked on the error path. Fix this by adding free(orig_flags) on the error path.
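A hedged, simplified sketch of the leak pattern being fixed (plain C, not the actual builtin-kmem.c code): the strdup()'d buffer must be released on the error return as well, not only on the success path.

    #include <stdlib.h>
    #include <string.h>

    static char *compact_flags_example(const char *flags)
    {
            char *orig_flags = strdup(flags);   /* working copy, must be freed */
            char *compacted;

            if (!orig_flags)
                    return NULL;

            compacted = strdup(orig_flags);     /* stand-in for the real compaction */
            if (!compacted) {
                    free(orig_flags);           /* the previously missing error-path free */
                    return NULL;
            }

            free(orig_flags);
            return compacted;
    }
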
Fixes: 0e11115644b3 ("perf kmem: Print gfp flags in human readable string") Signed-off-by: Yunfeng Ye Cc: Alexander Shishkin Cc: Feilong Lin Cc: Hu Shiyuan Cc: Jiri Olsa Cc: Mark Rutland Cc: Namhyung Kim Cc: Peter Zijlstra Link: http://lore.kernel.org/lkml/f9e9f458-96f3-4a97-a1d5-9feec2420e07@huawei.com Signed-off-by: Arnaldo Carvalho de Melo Signed-off-by: Sasha Levin commit e45c51514417b5cfe298f586b20ea1a859eb68cc Author: Thomas Bogendoerfer Date: Wed Oct 9 17:11:28 2019 +0200 scsi: fix kconfig dependency warning related to 53C700_LE_ON_BE [ Upstream commit 8cbf0c173aa096dda526d1ccd66fc751c31da346 ] When building a kernel with SCSI_SNI_53C710 enabled, Kconfig warns: WARNING: unmet direct dependencies detected for 53C700_LE_ON_BE Depends on [n]: SCSI_LOWLEVEL [=y] && SCSI [=y] && SCSI_LASI700 [=n] Selected by [y]: - SCSI_SNI_53C710 [=y] && SCSI_LOWLEVEL [=y] && SNI_RM [=y] && SCSI [=y] Add the missing depends SCSI_SNI_53C710 to 53C700_LE_ON_BE to fix it. Link: https://lore.kernel.org/r/20191009151128.32411-1-tbogendoerfer@suse.de Signed-off-by: Thomas Bogendoerfer Signed-off-by: Martin K. Petersen Signed-off-by: Sasha Levin commit 880c4664458b6308ba32453ba5c3a823c8d92610 Author: Thomas Bogendoerfer Date: Wed Oct 9 17:11:18 2019 +0200 scsi: sni_53c710: fix compilation error [ Upstream commit 0ee6211408a8e939428f662833c7301394125b80 ] Drop out memory dev_printk() with wrong device pointer argument. [mkp: typo] Link: https://lore.kernel.org/r/20191009151118.32350-1-tbogendoerfer@suse.de Signed-off-by: Thomas Bogendoerfer Signed-off-by: Martin K. Petersen Signed-off-by: Sasha Levin commit 15d24beb35ad2216ca2f97f4027a3e06bbb5860d Author: Russell King Date: Sat Aug 31 17:01:58 2019 +0100 ARM: mm: fix alignment handler faults under memory pressure [ Upstream commit 67e15fa5b487adb9b78a92789eeff2d6ec8f5cee ] When the system has high memory pressure, the page containing the instruction may be paged out. Using probe_kernel_address() means that if the page is swapped out, the resulting page fault will not be handled because page faults are disabled by this function. Use get_user() to read the instruction instead. Reported-by: Jing Xiangfeng Fixes: b255188f90e2 ("ARM: fix scheduling while atomic warning in alignment handling code") Signed-off-by: Russell King Signed-off-by: Sasha Levin commit c8561c78723505aae1e10aaafb8a89e689665210 Author: Adam Ford Date: Fri Aug 16 17:58:12 2019 -0500 ARM: dts: logicpd-torpedo-som: Remove twl_keypad [ Upstream commit 6b512b0ee091edcb8e46218894e4c917d919d3dc ] The TWL4030 used on the Logit PD Torpedo SOM does not have the keypad pins routed. This patch disables the twl_keypad driver to remove some splat during boot: twl4030_keypad 48070000.i2c:twl@48:keypad: missing or malformed property linux,keymap: -22 twl4030_keypad 48070000.i2c:twl@48:keypad: Failed to build keymap twl4030_keypad: probe of 48070000.i2c:twl@48:keypad failed with error -22 Signed-off-by: Adam Ford [tony@atomide.com: removed error time stamps] Signed-off-by: Tony Lindgren Signed-off-by: Sasha Levin commit 04f85d1992c03de5e2deddf2a0fbfadf4eee2eb1 Author: Robin Murphy Date: Wed Oct 2 16:30:37 2019 +0100 ASoc: rockchip: i2s: Fix RPM imbalance [ Upstream commit b1e620e7d32f5aad5353cc3cfc13ed99fea65d3a ] If rockchip_pcm_platform_register() fails, e.g. upon deferring to wait for an absent DMA channel, we return without disabling RPM, which makes subsequent re-probe attempts scream with errors about the unbalanced enable. Don't do that. 
Fixes: ebb75c0bdba2 ("ASoC: rockchip: i2s: Adjust devm usage") Signed-off-by: Robin Murphy Link: https://lore.kernel.org/r/bcb12a849a05437fb18372bc7536c649b94bdf07.1570029862.git.robin.murphy@arm.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin commit 43de4298fb869a5a165b53a52f8e5496bce9aa3b Author: Yizhuo Date: Sun Sep 29 10:09:57 2019 -0700 regulator: pfuze100-regulator: Variable "val" in pfuze100_regulator_probe() could be uninitialized [ Upstream commit 1252b283141f03c3dffd139292c862cae10e174d ] In function pfuze100_regulator_probe(), variable "val" could be initialized if regmap_read() fails. However, "val" is used to decide the control flow later in the if statement, which is potentially unsafe. Signed-off-by: Yizhuo Link: https://lore.kernel.org/r/20190929170957.14775-1-yzhai003@ucr.edu Signed-off-by: Mark Brown Signed-off-by: Sasha Levin commit e5b219fb25c897339dfdd5abee083d57c67e435e Author: Axel Lin Date: Sun Sep 29 17:58:48 2019 +0800 regulator: ti-abb: Fix timeout in ti_abb_wait_txdone/ti_abb_clear_all_txdone [ Upstream commit f64db548799e0330897c3203680c2ee795ade518 ] ti_abb_wait_txdone() may return -ETIMEDOUT when ti_abb_check_txdone() returns true in the latest iteration of the while loop because the timeout value is abb->settling_time + 1. Similarly, ti_abb_clear_all_txdone() may return -ETIMEDOUT when ti_abb_check_txdone() returns false in the latest iteration of the while loop. Fix it. Signed-off-by: Axel Lin Acked-by: Nishanth Menon Link: https://lore.kernel.org/r/20190929095848.21960-1-axel.lin@ingics.com Signed-off-by: Mark Brown Signed-off-by: Sasha Levin commit 5fc9e98768f72b2643f554ae375b8c3b6ba6513d Author: Seth Forshee Date: Wed Jul 17 11:06:26 2019 -0500 kbuild: add -fcf-protection=none when using retpoline flags [ Upstream commit 29be86d7f9cb18df4123f309ac7857570513e8bc ] The gcc -fcf-protection=branch option is not compatible with -mindirect-branch=thunk-extern. The latter is used when CONFIG_RETPOLINE is selected, and this will fail to build with a gcc which has -fcf-protection=branch enabled by default. Adding -fcf-protection=none when building with retpoline enabled prevents such build failures. Signed-off-by: Seth Forshee Signed-off-by: Masahiro Yamada Signed-off-by: Sasha Levin