Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (39 commits)
random: Reorder struct entropy_store to remove padding on 64bits
padata: update API documentation
padata: Remove padata_get_cpumask
crypto: pcrypt - Update pcrypt cpumask according to the padata cpumask notifier
crypto: pcrypt - Rename pcrypt_instance
padata: Pass the padata cpumasks to the cpumask_change_notifier chain
padata: Rearrange set_cpumask functions
padata: Rename padata_alloc functions
crypto: pcrypt - Don't calculate a callback cpu on empty callback cpumask
padata: Check for valid cpumasks
padata: Allocate cpumask dependent resources in any case
padata: Fix cpu index counting
crypto: geode_aes - Convert pci_table entries to PCI_VDEVICE (if PCI_ANY_ID is used)
pcrypt: Added sysfs interface to pcrypt
padata: Added sysfs primitives to padata subsystem
padata: Make two separate cpumasks
padata: update documentation
padata: simplify serialization mechanism
padata: make padata_do_parallel to return zero on success
padata: Handle empty padata cpumasks
...

+1311 -712
+75 -22
Documentation/padata.txt
··· 1 The padata parallel execution mechanism 2 - Last updated for 2.6.34 3 4 Padata is a mechanism by which the kernel can farm work out to be done in 5 parallel on multiple CPUs while retaining the ordering of tasks. It was ··· 13 14 #include <linux/padata.h> 15 16 - struct padata_instance *padata_alloc(const struct cpumask *cpumask, 17 - struct workqueue_struct *wq); 18 19 - The cpumask describes which processors will be used to execute work 20 - submitted to this instance. The workqueue wq is where the work will 21 - actually be done; it should be a multithreaded queue, naturally. 22 23 There are functions for enabling and disabling the instance: 24 25 - void padata_start(struct padata_instance *pinst); 26 void padata_stop(struct padata_instance *pinst); 27 28 - These functions literally do nothing beyond setting or clearing the 29 - "padata_start() was called" flag; if that flag is not set, other functions 30 - will refuse to work. 31 32 The list of CPUs to be used can be adjusted with these functions: 33 34 - int padata_set_cpumask(struct padata_instance *pinst, 35 cpumask_var_t cpumask); 36 - int padata_add_cpu(struct padata_instance *pinst, int cpu); 37 - int padata_remove_cpu(struct padata_instance *pinst, int cpu); 38 39 - Changing the CPU mask has the look of an expensive operation, though, so it 40 - probably should not be done with great frequency. 41 42 Actually submitting work to the padata instance requires the creation of a 43 padata_priv structure: ··· 105 106 This structure will almost certainly be embedded within some larger 107 structure specific to the work to be done. Most its fields are private to 108 - padata, but the structure should be zeroed at initialization time, and the 109 parallel() and serial() functions should be provided. Those functions will 110 be called in the process of getting the work done as we will see 111 momentarily. ··· 118 The pinst and padata structures must be set up as described above; cb_cpu 119 specifies which CPU will be used for the final callback when the work is 120 done; it must be in the current instance's CPU mask. The return value from 121 - padata_do_parallel() is a little strange; zero is an error return 122 - indicating that the caller forgot the padata_start() formalities. -EBUSY 123 - means that somebody, somewhere else is messing with the instance's CPU 124 - mask, while -EINVAL is a complaint about cb_cpu not being in that CPU mask. 125 - If all goes well, this function will return -EINPROGRESS, indicating that 126 - the work is in progress. 127 128 Each task submitted to padata_do_parallel() will, in turn, be passed to 129 exactly one call to the above-mentioned parallel() function, on one CPU, so
··· 1 The padata parallel execution mechanism 2 + Last updated for 2.6.36 3 4 Padata is a mechanism by which the kernel can farm work out to be done in 5 parallel on multiple CPUs while retaining the ordering of tasks. It was ··· 13 14 #include <linux/padata.h> 15 16 + struct padata_instance *padata_alloc(struct workqueue_struct *wq, 17 + const struct cpumask *pcpumask, 18 + const struct cpumask *cbcpumask); 19 20 + The pcpumask describes which processors will be used to execute work 21 + submitted to this instance in parallel. The cbcpumask defines which 22 + processors are allowed to be used as the serialization callback processor. 23 + The workqueue wq is where the work will actually be done; it should be 24 + a multithreaded queue, naturally. 25 + 26 + To allocate a padata instance with the cpu_possible_mask for both 27 + cpumasks, this helper function can be used: 28 + 29 + struct padata_instance *padata_alloc_possible(struct workqueue_struct *wq); 30 + 31 + Note: Padata maintains two kinds of cpumasks internally: the user supplied 32 + cpumasks, submitted by padata_alloc/padata_alloc_possible, and the 'usable' 33 + cpumasks. The usable cpumasks are always the subset of active cpus in the 34 + user supplied cpumasks; these are the cpumasks padata actually uses. So 35 + it is legal to supply a cpumask to padata that contains offline cpus. 36 + Once an offline cpu in the user supplied cpumask comes online, padata 37 + is going to use it. 38 39 There are functions for enabling and disabling the instance: 40 41 + int padata_start(struct padata_instance *pinst); 42 void padata_stop(struct padata_instance *pinst); 43 44 + These functions set or clear the "PADATA_INIT" flag; 45 + if that flag is not set, other functions will refuse to work. 46 + padata_start returns zero on success (flag set) or -EINVAL if the 47 + padata cpumask contains no active cpu (flag not set). 48 + padata_stop clears the flag and blocks until the padata instance 49 + is unused. 50 51 The list of CPUs to be used can be adjusted with these functions: 52 53 + int padata_set_cpumasks(struct padata_instance *pinst, 54 + cpumask_var_t pcpumask, 55 + cpumask_var_t cbcpumask); 56 + int padata_set_cpumask(struct padata_instance *pinst, int cpumask_type, 57 cpumask_var_t cpumask); 58 + int padata_add_cpu(struct padata_instance *pinst, int cpu, int mask); 59 + int padata_remove_cpu(struct padata_instance *pinst, int cpu, int mask); 60 61 + Changing the CPU masks is an expensive operation, though, so it should not be 62 + done with great frequency. 63 + 64 + It's possible to change both cpumasks of a padata instance with 65 + padata_set_cpumasks by specifying the cpumasks for parallel execution (pcpumask) 66 + and for the serial callback function (cbcpumask). padata_set_cpumask is used to 67 + change just one of the cpumasks. Here cpumask_type is one of PADATA_CPU_SERIAL, 68 + PADATA_CPU_PARALLEL and cpumask specifies the new cpumask to use. 69 + To simply add or remove one cpu from a certain cpumask the functions 70 + padata_add_cpu/padata_remove_cpu are used. cpu specifies the cpu to add or 71 + remove and mask is one of PADATA_CPU_SERIAL, PADATA_CPU_PARALLEL. 
72 + 73 + If a user is interested in padata cpumask changes, they can register with 74 + the padata cpumask change notifier: 75 + 76 + int padata_register_cpumask_notifier(struct padata_instance *pinst, 77 + struct notifier_block *nblock); 78 + 79 + To unregister from that notifier: 80 + 81 + int padata_unregister_cpumask_notifier(struct padata_instance *pinst, 82 + struct notifier_block *nblock); 83 + 84 + The padata cpumask change notifier notifies about changes of the usable 85 + cpumasks, i.e. the subset of active cpus in the user supplied cpumask. 86 + 87 + Padata calls the notifier chain with: 88 + 89 + blocking_notifier_call_chain(&pinst->cpumask_change_notifier, 90 + notification_mask, 91 + &pd_new->cpumask); 92 + 93 + Here cpumask_change_notifier is the registered notifier, notification_mask 94 + is one of PADATA_CPU_SERIAL, PADATA_CPU_PARALLEL and cpumask is a pointer 95 + to a struct padata_cpumask that contains the new cpumask information. 96 97 Actually submitting work to the padata instance requires the creation of a 98 padata_priv structure: ··· 50 51 This structure will almost certainly be embedded within some larger 52 structure specific to the work to be done. Most its fields are private to 53 + padata, but the structure should be zeroed at initialisation time, and the 54 parallel() and serial() functions should be provided. Those functions will 55 be called in the process of getting the work done as we will see 56 momentarily. ··· 63 The pinst and padata structures must be set up as described above; cb_cpu 64 specifies which CPU will be used for the final callback when the work is 65 done; it must be in the current instance's CPU mask. The return value from 66 + padata_do_parallel() is zero on success, indicating that the work is in 67 + progress. -EBUSY means that somebody, somewhere else is messing with the 68 + instance's CPU mask, while -EINVAL is a complaint about cb_cpu not being 69 + in that CPU mask or about an instance that is not running. 70 71 Each task submitted to padata_do_parallel() will, in turn, be passed to 72 exactly one call to the above-mentioned parallel() function, on one CPU, so
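As a minimal, hedged illustration of the 2.6.36 API documented above, the sketch below allocates an instance, starts it, and submits one unit of work. All my_* names are hypothetical, error handling is trimmed, and padata_do_serial(), padata_free() and the parallel/serial members of struct padata_priv are taken from the wider padata API and the pcrypt code in this merge rather than from the excerpt itself.

    #include <linux/padata.h>
    #include <linux/workqueue.h>
    #include <linux/string.h>

    struct my_work {
        struct padata_priv padata;  /* must be zeroed before submission */
        /* ... caller-specific data ... */
    };

    static struct workqueue_struct *my_wq;
    static struct padata_instance *my_pinst;

    static void my_parallel(struct padata_priv *padata)
    {
        /* CPU-intensive part, run on one of the parallel CPUs */
        padata_do_serial(padata);   /* hand the object back for serialization */
    }

    static void my_serial(struct padata_priv *padata)
    {
        /* runs on the callback CPU, in the original submission order */
    }

    static int my_setup(void)
    {
        my_wq = create_workqueue("my_padata");
        if (!my_wq)
            return -ENOMEM;

        /* use cpu_possible_mask for both the parallel and serial cpumasks */
        my_pinst = padata_alloc_possible(my_wq);
        if (!my_pinst) {
            destroy_workqueue(my_wq);
            return -ENOMEM;
        }

        return padata_start(my_pinst);  /* -EINVAL if no active cpu in the masks */
    }

    static int my_submit(struct my_work *w, int cb_cpu)
    {
        memset(&w->padata, 0, sizeof(w->padata));
        w->padata.parallel = my_parallel;
        w->padata.serial = my_serial;

        /* zero on success; -EBUSY/-EINVAL as described above */
        return padata_do_parallel(my_pinst, &w->padata, cb_cpu);
    }

    static void my_teardown(void)
    {
        padata_stop(my_pinst);
        padata_free(my_pinst);
        destroy_workqueue(my_wq);
    }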
+1 -1
arch/s390/crypto/Makefile
··· 5 obj-$(CONFIG_CRYPTO_SHA1_S390) += sha1_s390.o sha_common.o 6 obj-$(CONFIG_CRYPTO_SHA256_S390) += sha256_s390.o sha_common.o 7 obj-$(CONFIG_CRYPTO_SHA512_S390) += sha512_s390.o sha_common.o 8 - obj-$(CONFIG_CRYPTO_DES_S390) += des_s390.o des_check_key.o 9 obj-$(CONFIG_CRYPTO_AES_S390) += aes_s390.o 10 obj-$(CONFIG_S390_PRNG) += prng.o
··· 5 obj-$(CONFIG_CRYPTO_SHA1_S390) += sha1_s390.o sha_common.o 6 obj-$(CONFIG_CRYPTO_SHA256_S390) += sha256_s390.o sha_common.o 7 obj-$(CONFIG_CRYPTO_SHA512_S390) += sha512_s390.o sha_common.o 8 + obj-$(CONFIG_CRYPTO_DES_S390) += des_s390.o 9 obj-$(CONFIG_CRYPTO_AES_S390) += aes_s390.o 10 obj-$(CONFIG_S390_PRNG) += prng.o
+1 -1
arch/s390/crypto/crypto_des.h
··· 15 16 extern int crypto_des_check_key(const u8*, unsigned int, u32*); 17 18 - #endif //__CRYPTO_DES_H__
··· 15 16 extern int crypto_des_check_key(const u8*, unsigned int, u32*); 17 18 + #endif /*__CRYPTO_DES_H__*/
+21 -217
arch/s390/crypto/des_s390.c
··· 14 * 15 */ 16 17 - #include <crypto/algapi.h> 18 #include <linux/init.h> 19 #include <linux/module.h> 20 21 #include "crypt_s390.h" 22 - #include "crypto_des.h" 23 - 24 - #define DES_BLOCK_SIZE 8 25 - #define DES_KEY_SIZE 8 26 - 27 - #define DES3_128_KEY_SIZE (2 * DES_KEY_SIZE) 28 - #define DES3_128_BLOCK_SIZE DES_BLOCK_SIZE 29 30 #define DES3_192_KEY_SIZE (3 * DES_KEY_SIZE) 31 - #define DES3_192_BLOCK_SIZE DES_BLOCK_SIZE 32 33 struct crypt_s390_des_ctx { 34 u8 iv[DES_BLOCK_SIZE]; 35 u8 key[DES_KEY_SIZE]; 36 - }; 37 - 38 - struct crypt_s390_des3_128_ctx { 39 - u8 iv[DES_BLOCK_SIZE]; 40 - u8 key[DES3_128_KEY_SIZE]; 41 }; 42 43 struct crypt_s390_des3_192_ctx { ··· 39 { 40 struct crypt_s390_des_ctx *dctx = crypto_tfm_ctx(tfm); 41 u32 *flags = &tfm->crt_flags; 42 - int ret; 43 44 - /* test if key is valid (not a weak key) */ 45 - ret = crypto_des_check_key(key, keylen, flags); 46 - if (ret == 0) 47 - memcpy(dctx->key, key, keylen); 48 - return ret; 49 } 50 51 static void des_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) ··· 229 * complementation keys. Any weakness is obviated by the use of 230 * multiple keys. 231 * 232 - * However, if the two independent 64-bit keys are equal, 233 - * then the DES3 operation is simply the same as DES. 234 - * Implementers MUST reject keys that exhibit this property. 235 - * 236 - */ 237 - static int des3_128_setkey(struct crypto_tfm *tfm, const u8 *key, 238 - unsigned int keylen) 239 - { 240 - int i, ret; 241 - struct crypt_s390_des3_128_ctx *dctx = crypto_tfm_ctx(tfm); 242 - const u8 *temp_key = key; 243 - u32 *flags = &tfm->crt_flags; 244 - 245 - if (!(memcmp(key, &key[DES_KEY_SIZE], DES_KEY_SIZE)) && 246 - (*flags & CRYPTO_TFM_REQ_WEAK_KEY)) { 247 - *flags |= CRYPTO_TFM_RES_WEAK_KEY; 248 - return -EINVAL; 249 - } 250 - for (i = 0; i < 2; i++, temp_key += DES_KEY_SIZE) { 251 - ret = crypto_des_check_key(temp_key, DES_KEY_SIZE, flags); 252 - if (ret < 0) 253 - return ret; 254 - } 255 - memcpy(dctx->key, key, keylen); 256 - return 0; 257 - } 258 - 259 - static void des3_128_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) 260 - { 261 - struct crypt_s390_des3_128_ctx *dctx = crypto_tfm_ctx(tfm); 262 - 263 - crypt_s390_km(KM_TDEA_128_ENCRYPT, dctx->key, dst, (void*)src, 264 - DES3_128_BLOCK_SIZE); 265 - } 266 - 267 - static void des3_128_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) 268 - { 269 - struct crypt_s390_des3_128_ctx *dctx = crypto_tfm_ctx(tfm); 270 - 271 - crypt_s390_km(KM_TDEA_128_DECRYPT, dctx->key, dst, (void*)src, 272 - DES3_128_BLOCK_SIZE); 273 - } 274 - 275 - static struct crypto_alg des3_128_alg = { 276 - .cra_name = "des3_ede128", 277 - .cra_driver_name = "des3_ede128-s390", 278 - .cra_priority = CRYPT_S390_PRIORITY, 279 - .cra_flags = CRYPTO_ALG_TYPE_CIPHER, 280 - .cra_blocksize = DES3_128_BLOCK_SIZE, 281 - .cra_ctxsize = sizeof(struct crypt_s390_des3_128_ctx), 282 - .cra_module = THIS_MODULE, 283 - .cra_list = LIST_HEAD_INIT(des3_128_alg.cra_list), 284 - .cra_u = { 285 - .cipher = { 286 - .cia_min_keysize = DES3_128_KEY_SIZE, 287 - .cia_max_keysize = DES3_128_KEY_SIZE, 288 - .cia_setkey = des3_128_setkey, 289 - .cia_encrypt = des3_128_encrypt, 290 - .cia_decrypt = des3_128_decrypt, 291 - } 292 - } 293 - }; 294 - 295 - static int ecb_des3_128_encrypt(struct blkcipher_desc *desc, 296 - struct scatterlist *dst, 297 - struct scatterlist *src, unsigned int nbytes) 298 - { 299 - struct crypt_s390_des3_128_ctx *sctx = crypto_blkcipher_ctx(desc->tfm); 300 - struct blkcipher_walk walk; 301 - 302 - 
blkcipher_walk_init(&walk, dst, src, nbytes); 303 - return ecb_desall_crypt(desc, KM_TDEA_128_ENCRYPT, sctx->key, &walk); 304 - } 305 - 306 - static int ecb_des3_128_decrypt(struct blkcipher_desc *desc, 307 - struct scatterlist *dst, 308 - struct scatterlist *src, unsigned int nbytes) 309 - { 310 - struct crypt_s390_des3_128_ctx *sctx = crypto_blkcipher_ctx(desc->tfm); 311 - struct blkcipher_walk walk; 312 - 313 - blkcipher_walk_init(&walk, dst, src, nbytes); 314 - return ecb_desall_crypt(desc, KM_TDEA_128_DECRYPT, sctx->key, &walk); 315 - } 316 - 317 - static struct crypto_alg ecb_des3_128_alg = { 318 - .cra_name = "ecb(des3_ede128)", 319 - .cra_driver_name = "ecb-des3_ede128-s390", 320 - .cra_priority = CRYPT_S390_COMPOSITE_PRIORITY, 321 - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER, 322 - .cra_blocksize = DES3_128_BLOCK_SIZE, 323 - .cra_ctxsize = sizeof(struct crypt_s390_des3_128_ctx), 324 - .cra_type = &crypto_blkcipher_type, 325 - .cra_module = THIS_MODULE, 326 - .cra_list = LIST_HEAD_INIT( 327 - ecb_des3_128_alg.cra_list), 328 - .cra_u = { 329 - .blkcipher = { 330 - .min_keysize = DES3_128_KEY_SIZE, 331 - .max_keysize = DES3_128_KEY_SIZE, 332 - .setkey = des3_128_setkey, 333 - .encrypt = ecb_des3_128_encrypt, 334 - .decrypt = ecb_des3_128_decrypt, 335 - } 336 - } 337 - }; 338 - 339 - static int cbc_des3_128_encrypt(struct blkcipher_desc *desc, 340 - struct scatterlist *dst, 341 - struct scatterlist *src, unsigned int nbytes) 342 - { 343 - struct crypt_s390_des3_128_ctx *sctx = crypto_blkcipher_ctx(desc->tfm); 344 - struct blkcipher_walk walk; 345 - 346 - blkcipher_walk_init(&walk, dst, src, nbytes); 347 - return cbc_desall_crypt(desc, KMC_TDEA_128_ENCRYPT, sctx->iv, &walk); 348 - } 349 - 350 - static int cbc_des3_128_decrypt(struct blkcipher_desc *desc, 351 - struct scatterlist *dst, 352 - struct scatterlist *src, unsigned int nbytes) 353 - { 354 - struct crypt_s390_des3_128_ctx *sctx = crypto_blkcipher_ctx(desc->tfm); 355 - struct blkcipher_walk walk; 356 - 357 - blkcipher_walk_init(&walk, dst, src, nbytes); 358 - return cbc_desall_crypt(desc, KMC_TDEA_128_DECRYPT, sctx->iv, &walk); 359 - } 360 - 361 - static struct crypto_alg cbc_des3_128_alg = { 362 - .cra_name = "cbc(des3_ede128)", 363 - .cra_driver_name = "cbc-des3_ede128-s390", 364 - .cra_priority = CRYPT_S390_COMPOSITE_PRIORITY, 365 - .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER, 366 - .cra_blocksize = DES3_128_BLOCK_SIZE, 367 - .cra_ctxsize = sizeof(struct crypt_s390_des3_128_ctx), 368 - .cra_type = &crypto_blkcipher_type, 369 - .cra_module = THIS_MODULE, 370 - .cra_list = LIST_HEAD_INIT( 371 - cbc_des3_128_alg.cra_list), 372 - .cra_u = { 373 - .blkcipher = { 374 - .min_keysize = DES3_128_KEY_SIZE, 375 - .max_keysize = DES3_128_KEY_SIZE, 376 - .ivsize = DES3_128_BLOCK_SIZE, 377 - .setkey = des3_128_setkey, 378 - .encrypt = cbc_des3_128_encrypt, 379 - .decrypt = cbc_des3_128_decrypt, 380 - } 381 - } 382 - }; 383 - 384 - /* 385 - * RFC2451: 386 - * 387 - * For DES-EDE3, there is no known need to reject weak or 388 - * complementation keys. Any weakness is obviated by the use of 389 - * multiple keys. 390 - * 391 * However, if the first two or last two independent 64-bit keys are 392 * equal (k1 == k2 or k2 == k3), then the DES3 operation is simply the 393 * same as DES. 
Implementers MUST reject keys that exhibit this ··· 238 static int des3_192_setkey(struct crypto_tfm *tfm, const u8 *key, 239 unsigned int keylen) 240 { 241 - int i, ret; 242 struct crypt_s390_des3_192_ctx *dctx = crypto_tfm_ctx(tfm); 243 - const u8 *temp_key = key; 244 u32 *flags = &tfm->crt_flags; 245 246 if (!(memcmp(key, &key[DES_KEY_SIZE], DES_KEY_SIZE) && ··· 247 (*flags & CRYPTO_TFM_REQ_WEAK_KEY)) { 248 *flags |= CRYPTO_TFM_RES_WEAK_KEY; 249 return -EINVAL; 250 - } 251 - for (i = 0; i < 3; i++, temp_key += DES_KEY_SIZE) { 252 - ret = crypto_des_check_key(temp_key, DES_KEY_SIZE, flags); 253 - if (ret < 0) 254 - return ret; 255 } 256 memcpy(dctx->key, key, keylen); 257 return 0; ··· 257 struct crypt_s390_des3_192_ctx *dctx = crypto_tfm_ctx(tfm); 258 259 crypt_s390_km(KM_TDEA_192_ENCRYPT, dctx->key, dst, (void*)src, 260 - DES3_192_BLOCK_SIZE); 261 } 262 263 static void des3_192_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) ··· 265 struct crypt_s390_des3_192_ctx *dctx = crypto_tfm_ctx(tfm); 266 267 crypt_s390_km(KM_TDEA_192_DECRYPT, dctx->key, dst, (void*)src, 268 - DES3_192_BLOCK_SIZE); 269 } 270 271 static struct crypto_alg des3_192_alg = { ··· 273 .cra_driver_name = "des3_ede-s390", 274 .cra_priority = CRYPT_S390_PRIORITY, 275 .cra_flags = CRYPTO_ALG_TYPE_CIPHER, 276 - .cra_blocksize = DES3_192_BLOCK_SIZE, 277 .cra_ctxsize = sizeof(struct crypt_s390_des3_192_ctx), 278 .cra_module = THIS_MODULE, 279 .cra_list = LIST_HEAD_INIT(des3_192_alg.cra_list), ··· 315 .cra_driver_name = "ecb-des3_ede-s390", 316 .cra_priority = CRYPT_S390_COMPOSITE_PRIORITY, 317 .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER, 318 - .cra_blocksize = DES3_192_BLOCK_SIZE, 319 .cra_ctxsize = sizeof(struct crypt_s390_des3_192_ctx), 320 .cra_type = &crypto_blkcipher_type, 321 .cra_module = THIS_MODULE, ··· 359 .cra_driver_name = "cbc-des3_ede-s390", 360 .cra_priority = CRYPT_S390_COMPOSITE_PRIORITY, 361 .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER, 362 - .cra_blocksize = DES3_192_BLOCK_SIZE, 363 .cra_ctxsize = sizeof(struct crypt_s390_des3_192_ctx), 364 .cra_type = &crypto_blkcipher_type, 365 .cra_module = THIS_MODULE, ··· 369 .blkcipher = { 370 .min_keysize = DES3_192_KEY_SIZE, 371 .max_keysize = DES3_192_KEY_SIZE, 372 - .ivsize = DES3_192_BLOCK_SIZE, 373 .setkey = des3_192_setkey, 374 .encrypt = cbc_des3_192_encrypt, 375 .decrypt = cbc_des3_192_decrypt, ··· 379 380 static int des_s390_init(void) 381 { 382 - int ret = 0; 383 384 if (!crypt_s390_func_available(KM_DEA_ENCRYPT) || 385 - !crypt_s390_func_available(KM_TDEA_128_ENCRYPT) || 386 !crypt_s390_func_available(KM_TDEA_192_ENCRYPT)) 387 return -EOPNOTSUPP; 388 ··· 394 ret = crypto_register_alg(&cbc_des_alg); 395 if (ret) 396 goto cbc_des_err; 397 - 398 - ret = crypto_register_alg(&des3_128_alg); 399 - if (ret) 400 - goto des3_128_err; 401 - ret = crypto_register_alg(&ecb_des3_128_alg); 402 - if (ret) 403 - goto ecb_des3_128_err; 404 - ret = crypto_register_alg(&cbc_des3_128_alg); 405 - if (ret) 406 - goto cbc_des3_128_err; 407 - 408 ret = crypto_register_alg(&des3_192_alg); 409 if (ret) 410 goto des3_192_err; ··· 403 ret = crypto_register_alg(&cbc_des3_192_alg); 404 if (ret) 405 goto cbc_des3_192_err; 406 - 407 out: 408 return ret; 409 ··· 411 ecb_des3_192_err: 412 crypto_unregister_alg(&des3_192_alg); 413 des3_192_err: 414 - crypto_unregister_alg(&cbc_des3_128_alg); 415 - cbc_des3_128_err: 416 - crypto_unregister_alg(&ecb_des3_128_alg); 417 - ecb_des3_128_err: 418 - crypto_unregister_alg(&des3_128_alg); 419 - des3_128_err: 420 crypto_unregister_alg(&cbc_des_alg); 421 
cbc_des_err: 422 crypto_unregister_alg(&ecb_des_alg); ··· 420 goto out; 421 } 422 423 - static void __exit des_s390_fini(void) 424 { 425 crypto_unregister_alg(&cbc_des3_192_alg); 426 crypto_unregister_alg(&ecb_des3_192_alg); 427 crypto_unregister_alg(&des3_192_alg); 428 - crypto_unregister_alg(&cbc_des3_128_alg); 429 - crypto_unregister_alg(&ecb_des3_128_alg); 430 - crypto_unregister_alg(&des3_128_alg); 431 crypto_unregister_alg(&cbc_des_alg); 432 crypto_unregister_alg(&ecb_des_alg); 433 crypto_unregister_alg(&des_alg); 434 } 435 436 module_init(des_s390_init); 437 - module_exit(des_s390_fini); 438 439 MODULE_ALIAS("des"); 440 MODULE_ALIAS("des3_ede");
··· 14 * 15 */ 16 17 #include <linux/init.h> 18 #include <linux/module.h> 19 + #include <linux/crypto.h> 20 + #include <crypto/algapi.h> 21 + #include <crypto/des.h> 22 23 #include "crypt_s390.h" 24 25 #define DES3_192_KEY_SIZE (3 * DES_KEY_SIZE) 26 27 struct crypt_s390_des_ctx { 28 u8 iv[DES_BLOCK_SIZE]; 29 u8 key[DES_KEY_SIZE]; 30 }; 31 32 struct crypt_s390_des3_192_ctx { ··· 50 { 51 struct crypt_s390_des_ctx *dctx = crypto_tfm_ctx(tfm); 52 u32 *flags = &tfm->crt_flags; 53 + u32 tmp[DES_EXPKEY_WORDS]; 54 55 + /* check for weak keys */ 56 + if (!des_ekey(tmp, key) && (*flags & CRYPTO_TFM_REQ_WEAK_KEY)) { 57 + *flags |= CRYPTO_TFM_RES_WEAK_KEY; 58 + return -EINVAL; 59 + } 60 + 61 + memcpy(dctx->key, key, keylen); 62 + return 0; 63 } 64 65 static void des_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in) ··· 237 * complementation keys. Any weakness is obviated by the use of 238 * multiple keys. 239 * 240 * However, if the first two or last two independent 64-bit keys are 241 * equal (k1 == k2 or k2 == k3), then the DES3 operation is simply the 242 * same as DES. Implementers MUST reject keys that exhibit this ··· 405 static int des3_192_setkey(struct crypto_tfm *tfm, const u8 *key, 406 unsigned int keylen) 407 { 408 struct crypt_s390_des3_192_ctx *dctx = crypto_tfm_ctx(tfm); 409 u32 *flags = &tfm->crt_flags; 410 411 if (!(memcmp(key, &key[DES_KEY_SIZE], DES_KEY_SIZE) && ··· 416 (*flags & CRYPTO_TFM_REQ_WEAK_KEY)) { 417 *flags |= CRYPTO_TFM_RES_WEAK_KEY; 418 return -EINVAL; 419 } 420 memcpy(dctx->key, key, keylen); 421 return 0; ··· 431 struct crypt_s390_des3_192_ctx *dctx = crypto_tfm_ctx(tfm); 432 433 crypt_s390_km(KM_TDEA_192_ENCRYPT, dctx->key, dst, (void*)src, 434 + DES_BLOCK_SIZE); 435 } 436 437 static void des3_192_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src) ··· 439 struct crypt_s390_des3_192_ctx *dctx = crypto_tfm_ctx(tfm); 440 441 crypt_s390_km(KM_TDEA_192_DECRYPT, dctx->key, dst, (void*)src, 442 + DES_BLOCK_SIZE); 443 } 444 445 static struct crypto_alg des3_192_alg = { ··· 447 .cra_driver_name = "des3_ede-s390", 448 .cra_priority = CRYPT_S390_PRIORITY, 449 .cra_flags = CRYPTO_ALG_TYPE_CIPHER, 450 + .cra_blocksize = DES_BLOCK_SIZE, 451 .cra_ctxsize = sizeof(struct crypt_s390_des3_192_ctx), 452 .cra_module = THIS_MODULE, 453 .cra_list = LIST_HEAD_INIT(des3_192_alg.cra_list), ··· 489 .cra_driver_name = "ecb-des3_ede-s390", 490 .cra_priority = CRYPT_S390_COMPOSITE_PRIORITY, 491 .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER, 492 + .cra_blocksize = DES_BLOCK_SIZE, 493 .cra_ctxsize = sizeof(struct crypt_s390_des3_192_ctx), 494 .cra_type = &crypto_blkcipher_type, 495 .cra_module = THIS_MODULE, ··· 533 .cra_driver_name = "cbc-des3_ede-s390", 534 .cra_priority = CRYPT_S390_COMPOSITE_PRIORITY, 535 .cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER, 536 + .cra_blocksize = DES_BLOCK_SIZE, 537 .cra_ctxsize = sizeof(struct crypt_s390_des3_192_ctx), 538 .cra_type = &crypto_blkcipher_type, 539 .cra_module = THIS_MODULE, ··· 543 .blkcipher = { 544 .min_keysize = DES3_192_KEY_SIZE, 545 .max_keysize = DES3_192_KEY_SIZE, 546 + .ivsize = DES_BLOCK_SIZE, 547 .setkey = des3_192_setkey, 548 .encrypt = cbc_des3_192_encrypt, 549 .decrypt = cbc_des3_192_decrypt, ··· 553 554 static int des_s390_init(void) 555 { 556 + int ret; 557 558 if (!crypt_s390_func_available(KM_DEA_ENCRYPT) || 559 !crypt_s390_func_available(KM_TDEA_192_ENCRYPT)) 560 return -EOPNOTSUPP; 561 ··· 569 ret = crypto_register_alg(&cbc_des_alg); 570 if (ret) 571 goto cbc_des_err; 572 ret = crypto_register_alg(&des3_192_alg); 573 if (ret) 574 goto 
des3_192_err; ··· 589 ret = crypto_register_alg(&cbc_des3_192_alg); 590 if (ret) 591 goto cbc_des3_192_err; 592 out: 593 return ret; 594 ··· 598 ecb_des3_192_err: 599 crypto_unregister_alg(&des3_192_alg); 600 des3_192_err: 601 crypto_unregister_alg(&cbc_des_alg); 602 cbc_des_err: 603 crypto_unregister_alg(&ecb_des_alg); ··· 613 goto out; 614 } 615 616 + static void __exit des_s390_exit(void) 617 { 618 crypto_unregister_alg(&cbc_des3_192_alg); 619 crypto_unregister_alg(&ecb_des3_192_alg); 620 crypto_unregister_alg(&des3_192_alg); 621 crypto_unregister_alg(&cbc_des_alg); 622 crypto_unregister_alg(&ecb_des_alg); 623 crypto_unregister_alg(&des_alg); 624 } 625 626 module_init(des_s390_init); 627 + module_exit(des_s390_exit); 628 629 MODULE_ALIAS("des"); 630 MODULE_ALIAS("des3_ede");
+14 -1
crypto/Kconfig
··· 80 81 config CRYPTO_PCOMP 82 tristate 83 select CRYPTO_ALGAPI2 84 85 config CRYPTO_MANAGER ··· 99 select CRYPTO_AEAD2 100 select CRYPTO_HASH2 101 select CRYPTO_BLKCIPHER2 102 - select CRYPTO_PCOMP 103 104 config CRYPTO_GF128MUL 105 tristate "GF(2^128) multiplication functions (EXPERIMENTAL)"
··· 80 81 config CRYPTO_PCOMP 82 tristate 83 + select CRYPTO_PCOMP2 84 + select CRYPTO_ALGAPI 85 + 86 + config CRYPTO_PCOMP2 87 + tristate 88 select CRYPTO_ALGAPI2 89 90 config CRYPTO_MANAGER ··· 94 select CRYPTO_AEAD2 95 select CRYPTO_HASH2 96 select CRYPTO_BLKCIPHER2 97 + select CRYPTO_PCOMP2 98 + 99 + config CRYPTO_MANAGER_TESTS 100 + bool "Run algorithms' self-tests" 101 + default y 102 + depends on CRYPTO_MANAGER2 103 + help 104 + Run cryptomanager's tests for the new crypto algorithms being 105 + registered. 106 107 config CRYPTO_GF128MUL 108 tristate "GF(2^128) multiplication functions (EXPERIMENTAL)"
+2 -2
crypto/Makefile
··· 26 crypto_hash-objs += shash.o 27 obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o 28 29 - obj-$(CONFIG_CRYPTO_PCOMP) += pcompress.o 30 31 cryptomgr-objs := algboss.o testmgr.o 32 ··· 61 obj-$(CONFIG_CRYPTO_DES) += des_generic.o 62 obj-$(CONFIG_CRYPTO_FCRYPT) += fcrypt.o 63 obj-$(CONFIG_CRYPTO_BLOWFISH) += blowfish.o 64 - obj-$(CONFIG_CRYPTO_TWOFISH) += twofish.o 65 obj-$(CONFIG_CRYPTO_TWOFISH_COMMON) += twofish_common.o 66 obj-$(CONFIG_CRYPTO_SERPENT) += serpent.o 67 obj-$(CONFIG_CRYPTO_AES) += aes_generic.o
··· 26 crypto_hash-objs += shash.o 27 obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o 28 29 + obj-$(CONFIG_CRYPTO_PCOMP2) += pcompress.o 30 31 cryptomgr-objs := algboss.o testmgr.o 32 ··· 61 obj-$(CONFIG_CRYPTO_DES) += des_generic.o 62 obj-$(CONFIG_CRYPTO_FCRYPT) += fcrypt.o 63 obj-$(CONFIG_CRYPTO_BLOWFISH) += blowfish.o 64 + obj-$(CONFIG_CRYPTO_TWOFISH) += twofish_generic.o 65 obj-$(CONFIG_CRYPTO_TWOFISH_COMMON) += twofish_common.o 66 obj-$(CONFIG_CRYPTO_SERPENT) += serpent.o 67 obj-$(CONFIG_CRYPTO_AES) += aes_generic.o
+4
crypto/algboss.c
··· 206 return NOTIFY_OK; 207 } 208 209 static int cryptomgr_test(void *data) 210 { 211 struct crypto_test_param *param = data; ··· 267 err: 268 return NOTIFY_OK; 269 } 270 271 static int cryptomgr_notify(struct notifier_block *this, unsigned long msg, 272 void *data) ··· 275 switch (msg) { 276 case CRYPTO_MSG_ALG_REQUEST: 277 return cryptomgr_schedule_probe(data); 278 case CRYPTO_MSG_ALG_REGISTER: 279 return cryptomgr_schedule_test(data); 280 } 281 282 return NOTIFY_DONE;
··· 206 return NOTIFY_OK; 207 } 208 209 + #ifdef CONFIG_CRYPTO_MANAGER_TESTS 210 static int cryptomgr_test(void *data) 211 { 212 struct crypto_test_param *param = data; ··· 266 err: 267 return NOTIFY_OK; 268 } 269 + #endif /* CONFIG_CRYPTO_MANAGER_TESTS */ 270 271 static int cryptomgr_notify(struct notifier_block *this, unsigned long msg, 272 void *data) ··· 273 switch (msg) { 274 case CRYPTO_MSG_ALG_REQUEST: 275 return cryptomgr_schedule_probe(data); 276 + #ifdef CONFIG_CRYPTO_MANAGER_TESTS 277 case CRYPTO_MSG_ALG_REGISTER: 278 return cryptomgr_schedule_test(data); 279 + #endif 280 } 281 282 return NOTIFY_DONE;
+1 -1
crypto/authenc.c
··· 616 auth = ahash_attr_alg(tb[1], CRYPTO_ALG_TYPE_HASH, 617 CRYPTO_ALG_TYPE_AHASH_MASK); 618 if (IS_ERR(auth)) 619 - return ERR_PTR(PTR_ERR(auth)); 620 621 auth_base = &auth->base; 622
··· 616 auth = ahash_attr_alg(tb[1], CRYPTO_ALG_TYPE_HASH, 617 CRYPTO_ALG_TYPE_AHASH_MASK); 618 if (IS_ERR(auth)) 619 + return ERR_CAST(auth); 620 621 auth_base = &auth->base; 622
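The authenc change above, like the ctr and xts changes later in this merge, replaces the ERR_PTR(PTR_ERR(x)) round trip with ERR_CAST(x), which re-types an error-encoded pointer directly while preserving the encoded errno. A hedged sketch of the pattern, using a hypothetical my_alloc_instance() helper modeled on the template-allocation code in this merge:

    #include <linux/err.h>
    #include <crypto/algapi.h>

    static struct crypto_instance *my_alloc_instance(struct crypto_alg *alg)
    {
        if (IS_ERR(alg))
            /* was: return ERR_PTR(PTR_ERR(alg)); */
            return ERR_CAST(alg);   /* keep the errno, change the pointer type */

        return crypto_alloc_instance("my_tmpl", alg);
    }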
+1 -1
crypto/ctr.c
··· 185 alg = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_CIPHER, 186 CRYPTO_ALG_TYPE_MASK); 187 if (IS_ERR(alg)) 188 - return ERR_PTR(PTR_ERR(alg)); 189 190 /* Block size must be >= 4 bytes. */ 191 err = -EINVAL;
··· 185 alg = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_CIPHER, 186 CRYPTO_ALG_TYPE_MASK); 187 if (IS_ERR(alg)) 188 + return ERR_CAST(alg); 189 190 /* Block size must be >= 4 bytes. */ 191 err = -EINVAL;
+183 -58
crypto/pcrypt.c
··· 24 #include <linux/init.h> 25 #include <linux/module.h> 26 #include <linux/slab.h> 27 #include <crypto/pcrypt.h> 28 29 - static struct padata_instance *pcrypt_enc_padata; 30 - static struct padata_instance *pcrypt_dec_padata; 31 - static struct workqueue_struct *encwq; 32 - static struct workqueue_struct *decwq; 33 34 struct pcrypt_instance_ctx { 35 struct crypto_spawn spawn; ··· 70 }; 71 72 static int pcrypt_do_parallel(struct padata_priv *padata, unsigned int *cb_cpu, 73 - struct padata_instance *pinst) 74 { 75 unsigned int cpu_index, cpu, i; 76 77 cpu = *cb_cpu; 78 79 - if (cpumask_test_cpu(cpu, cpu_active_mask)) 80 goto out; 81 82 - cpu_index = cpu % cpumask_weight(cpu_active_mask); 83 84 - cpu = cpumask_first(cpu_active_mask); 85 for (i = 0; i < cpu_index; i++) 86 - cpu = cpumask_next(cpu, cpu_active_mask); 87 88 *cb_cpu = cpu; 89 90 out: 91 - return padata_do_parallel(pinst, padata, cpu); 92 } 93 94 static int pcrypt_aead_setkey(struct crypto_aead *parent, ··· 177 req->cryptlen, req->iv); 178 aead_request_set_assoc(creq, req->assoc, req->assoclen); 179 180 - err = pcrypt_do_parallel(padata, &ctx->cb_cpu, pcrypt_enc_padata); 181 - if (err) 182 - return err; 183 - else 184 - err = crypto_aead_encrypt(creq); 185 186 return err; 187 } ··· 219 req->cryptlen, req->iv); 220 aead_request_set_assoc(creq, req->assoc, req->assoclen); 221 222 - err = pcrypt_do_parallel(padata, &ctx->cb_cpu, pcrypt_dec_padata); 223 - if (err) 224 - return err; 225 - else 226 - err = crypto_aead_decrypt(creq); 227 228 return err; 229 } ··· 263 aead_givcrypt_set_assoc(creq, areq->assoc, areq->assoclen); 264 aead_givcrypt_set_giv(creq, req->giv, req->seq); 265 266 - err = pcrypt_do_parallel(padata, &ctx->cb_cpu, pcrypt_enc_padata); 267 - if (err) 268 - return err; 269 - else 270 - err = crypto_aead_givencrypt(creq); 271 272 return err; 273 } ··· 405 kfree(inst); 406 } 407 408 static struct crypto_template pcrypt_tmpl = { 409 .name = "pcrypt", 410 .alloc = pcrypt_alloc, ··· 523 524 static int __init pcrypt_init(void) 525 { 526 - encwq = create_workqueue("pencrypt"); 527 - if (!encwq) 528 goto err; 529 530 - decwq = create_workqueue("pdecrypt"); 531 - if (!decwq) 532 - goto err_destroy_encwq; 533 534 535 - pcrypt_enc_padata = padata_alloc(cpu_possible_mask, encwq); 536 - if (!pcrypt_enc_padata) 537 - goto err_destroy_decwq; 538 - 539 - pcrypt_dec_padata = padata_alloc(cpu_possible_mask, decwq); 540 - if (!pcrypt_dec_padata) 541 - goto err_free_padata; 542 - 543 - padata_start(pcrypt_enc_padata); 544 - padata_start(pcrypt_dec_padata); 545 546 return crypto_register_template(&pcrypt_tmpl); 547 548 - err_free_padata: 549 - padata_free(pcrypt_enc_padata); 550 - 551 - err_destroy_decwq: 552 - destroy_workqueue(decwq); 553 - 554 - err_destroy_encwq: 555 - destroy_workqueue(encwq); 556 - 557 err: 558 - return -ENOMEM; 559 } 560 561 static void __exit pcrypt_exit(void) 562 { 563 - padata_stop(pcrypt_enc_padata); 564 - padata_stop(pcrypt_dec_padata); 565 566 - destroy_workqueue(encwq); 567 - destroy_workqueue(decwq); 568 - 569 - padata_free(pcrypt_enc_padata); 570 - padata_free(pcrypt_dec_padata); 571 - 572 crypto_unregister_template(&pcrypt_tmpl); 573 } 574
··· 24 #include <linux/init.h> 25 #include <linux/module.h> 26 #include <linux/slab.h> 27 + #include <linux/notifier.h> 28 + #include <linux/kobject.h> 29 + #include <linux/cpu.h> 30 #include <crypto/pcrypt.h> 31 32 + struct padata_pcrypt { 33 + struct padata_instance *pinst; 34 + struct workqueue_struct *wq; 35 + 36 + /* 37 + * Cpumask for callback CPUs. It should be 38 + * equal to serial cpumask of corresponding padata instance, 39 + * so it is updated when padata notifies us about serial 40 + * cpumask change. 41 + * 42 + * cb_cpumask is protected by RCU. This fact prevents us from 43 + * using cpumask_var_t directly because the actual type of 44 + * cpumsak_var_t depends on kernel configuration(particularly on 45 + * CONFIG_CPUMASK_OFFSTACK macro). Depending on the configuration 46 + * cpumask_var_t may be either a pointer to the struct cpumask 47 + * or a variable allocated on the stack. Thus we can not safely use 48 + * cpumask_var_t with RCU operations such as rcu_assign_pointer or 49 + * rcu_dereference. So cpumask_var_t is wrapped with struct 50 + * pcrypt_cpumask which makes possible to use it with RCU. 51 + */ 52 + struct pcrypt_cpumask { 53 + cpumask_var_t mask; 54 + } *cb_cpumask; 55 + struct notifier_block nblock; 56 + }; 57 + 58 + static struct padata_pcrypt pencrypt; 59 + static struct padata_pcrypt pdecrypt; 60 + static struct kset *pcrypt_kset; 61 62 struct pcrypt_instance_ctx { 63 struct crypto_spawn spawn; ··· 42 }; 43 44 static int pcrypt_do_parallel(struct padata_priv *padata, unsigned int *cb_cpu, 45 + struct padata_pcrypt *pcrypt) 46 { 47 unsigned int cpu_index, cpu, i; 48 + struct pcrypt_cpumask *cpumask; 49 50 cpu = *cb_cpu; 51 52 + rcu_read_lock_bh(); 53 + cpumask = rcu_dereference(pcrypt->cb_cpumask); 54 + if (cpumask_test_cpu(cpu, cpumask->mask)) 55 goto out; 56 57 + if (!cpumask_weight(cpumask->mask)) 58 + goto out; 59 60 + cpu_index = cpu % cpumask_weight(cpumask->mask); 61 + 62 + cpu = cpumask_first(cpumask->mask); 63 for (i = 0; i < cpu_index; i++) 64 + cpu = cpumask_next(cpu, cpumask->mask); 65 66 *cb_cpu = cpu; 67 68 out: 69 + rcu_read_unlock_bh(); 70 + return padata_do_parallel(pcrypt->pinst, padata, cpu); 71 } 72 73 static int pcrypt_aead_setkey(struct crypto_aead *parent, ··· 142 req->cryptlen, req->iv); 143 aead_request_set_assoc(creq, req->assoc, req->assoclen); 144 145 + err = pcrypt_do_parallel(padata, &ctx->cb_cpu, &pencrypt); 146 + if (!err) 147 + return -EINPROGRESS; 148 149 return err; 150 } ··· 186 req->cryptlen, req->iv); 187 aead_request_set_assoc(creq, req->assoc, req->assoclen); 188 189 + err = pcrypt_do_parallel(padata, &ctx->cb_cpu, &pdecrypt); 190 + if (!err) 191 + return -EINPROGRESS; 192 193 return err; 194 } ··· 232 aead_givcrypt_set_assoc(creq, areq->assoc, areq->assoclen); 233 aead_givcrypt_set_giv(creq, req->giv, req->seq); 234 235 + err = pcrypt_do_parallel(padata, &ctx->cb_cpu, &pencrypt); 236 + if (!err) 237 + return -EINPROGRESS; 238 239 return err; 240 } ··· 376 kfree(inst); 377 } 378 379 + static int pcrypt_cpumask_change_notify(struct notifier_block *self, 380 + unsigned long val, void *data) 381 + { 382 + struct padata_pcrypt *pcrypt; 383 + struct pcrypt_cpumask *new_mask, *old_mask; 384 + struct padata_cpumask *cpumask = (struct padata_cpumask *)data; 385 + 386 + if (!(val & PADATA_CPU_SERIAL)) 387 + return 0; 388 + 389 + pcrypt = container_of(self, struct padata_pcrypt, nblock); 390 + new_mask = kmalloc(sizeof(*new_mask), GFP_KERNEL); 391 + if (!new_mask) 392 + return -ENOMEM; 393 + if 
(!alloc_cpumask_var(&new_mask->mask, GFP_KERNEL)) { 394 + kfree(new_mask); 395 + return -ENOMEM; 396 + } 397 + 398 + old_mask = pcrypt->cb_cpumask; 399 + 400 + cpumask_copy(new_mask->mask, cpumask->cbcpu); 401 + rcu_assign_pointer(pcrypt->cb_cpumask, new_mask); 402 + synchronize_rcu_bh(); 403 + 404 + free_cpumask_var(old_mask->mask); 405 + kfree(old_mask); 406 + return 0; 407 + } 408 + 409 + static int pcrypt_sysfs_add(struct padata_instance *pinst, const char *name) 410 + { 411 + int ret; 412 + 413 + pinst->kobj.kset = pcrypt_kset; 414 + ret = kobject_add(&pinst->kobj, NULL, name); 415 + if (!ret) 416 + kobject_uevent(&pinst->kobj, KOBJ_ADD); 417 + 418 + return ret; 419 + } 420 + 421 + static int pcrypt_init_padata(struct padata_pcrypt *pcrypt, 422 + const char *name) 423 + { 424 + int ret = -ENOMEM; 425 + struct pcrypt_cpumask *mask; 426 + 427 + get_online_cpus(); 428 + 429 + pcrypt->wq = create_workqueue(name); 430 + if (!pcrypt->wq) 431 + goto err; 432 + 433 + pcrypt->pinst = padata_alloc_possible(pcrypt->wq); 434 + if (!pcrypt->pinst) 435 + goto err_destroy_workqueue; 436 + 437 + mask = kmalloc(sizeof(*mask), GFP_KERNEL); 438 + if (!mask) 439 + goto err_free_padata; 440 + if (!alloc_cpumask_var(&mask->mask, GFP_KERNEL)) { 441 + kfree(mask); 442 + goto err_free_padata; 443 + } 444 + 445 + cpumask_and(mask->mask, cpu_possible_mask, cpu_active_mask); 446 + rcu_assign_pointer(pcrypt->cb_cpumask, mask); 447 + 448 + pcrypt->nblock.notifier_call = pcrypt_cpumask_change_notify; 449 + ret = padata_register_cpumask_notifier(pcrypt->pinst, &pcrypt->nblock); 450 + if (ret) 451 + goto err_free_cpumask; 452 + 453 + ret = pcrypt_sysfs_add(pcrypt->pinst, name); 454 + if (ret) 455 + goto err_unregister_notifier; 456 + 457 + put_online_cpus(); 458 + 459 + return ret; 460 + 461 + err_unregister_notifier: 462 + padata_unregister_cpumask_notifier(pcrypt->pinst, &pcrypt->nblock); 463 + err_free_cpumask: 464 + free_cpumask_var(mask->mask); 465 + kfree(mask); 466 + err_free_padata: 467 + padata_free(pcrypt->pinst); 468 + err_destroy_workqueue: 469 + destroy_workqueue(pcrypt->wq); 470 + err: 471 + put_online_cpus(); 472 + 473 + return ret; 474 + } 475 + 476 + static void pcrypt_fini_padata(struct padata_pcrypt *pcrypt) 477 + { 478 + kobject_put(&pcrypt->pinst->kobj); 479 + free_cpumask_var(pcrypt->cb_cpumask->mask); 480 + kfree(pcrypt->cb_cpumask); 481 + 482 + padata_stop(pcrypt->pinst); 483 + padata_unregister_cpumask_notifier(pcrypt->pinst, &pcrypt->nblock); 484 + destroy_workqueue(pcrypt->wq); 485 + padata_free(pcrypt->pinst); 486 + } 487 + 488 static struct crypto_template pcrypt_tmpl = { 489 .name = "pcrypt", 490 .alloc = pcrypt_alloc, ··· 385 386 static int __init pcrypt_init(void) 387 { 388 + int err = -ENOMEM; 389 + 390 + pcrypt_kset = kset_create_and_add("pcrypt", NULL, kernel_kobj); 391 + if (!pcrypt_kset) 392 goto err; 393 394 + err = pcrypt_init_padata(&pencrypt, "pencrypt"); 395 + if (err) 396 + goto err_unreg_kset; 397 398 + err = pcrypt_init_padata(&pdecrypt, "pdecrypt"); 399 + if (err) 400 + goto err_deinit_pencrypt; 401 402 + padata_start(pencrypt.pinst); 403 + padata_start(pdecrypt.pinst); 404 405 return crypto_register_template(&pcrypt_tmpl); 406 407 + err_deinit_pencrypt: 408 + pcrypt_fini_padata(&pencrypt); 409 + err_unreg_kset: 410 + kset_unregister(pcrypt_kset); 411 err: 412 + return err; 413 } 414 415 static void __exit pcrypt_exit(void) 416 { 417 + pcrypt_fini_padata(&pencrypt); 418 + pcrypt_fini_padata(&pdecrypt); 419 420 + kset_unregister(pcrypt_kset); 421 
crypto_unregister_template(&pcrypt_tmpl); 422 } 423
+14
crypto/testmgr.c
··· 22 #include <crypto/rng.h> 23 24 #include "internal.h" 25 #include "testmgr.h" 26 27 /* ··· 2541 non_fips_alg: 2542 return -EINVAL; 2543 } 2544 EXPORT_SYMBOL_GPL(alg_test);
··· 22 #include <crypto/rng.h> 23 24 #include "internal.h" 25 + 26 + #ifndef CONFIG_CRYPTO_MANAGER_TESTS 27 + 28 + /* a perfect nop */ 29 + int alg_test(const char *driver, const char *alg, u32 type, u32 mask) 30 + { 31 + return 0; 32 + } 33 + 34 + #else 35 + 36 #include "testmgr.h" 37 38 /* ··· 2530 non_fips_alg: 2531 return -EINVAL; 2532 } 2533 + 2534 + #endif /* CONFIG_CRYPTO_MANAGER_TESTS */ 2535 + 2536 EXPORT_SYMBOL_GPL(alg_test);
+1
crypto/twofish.c crypto/twofish_generic.c
··· 212 213 MODULE_LICENSE("GPL"); 214 MODULE_DESCRIPTION ("Twofish Cipher Algorithm");
··· 212 213 MODULE_LICENSE("GPL"); 214 MODULE_DESCRIPTION ("Twofish Cipher Algorithm"); 215 + MODULE_ALIAS("twofish");
+1 -1
crypto/xts.c
··· 224 alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER, 225 CRYPTO_ALG_TYPE_MASK); 226 if (IS_ERR(alg)) 227 - return ERR_PTR(PTR_ERR(alg)); 228 229 inst = crypto_alloc_instance("xts", alg); 230 if (IS_ERR(inst))
··· 224 alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER, 225 CRYPTO_ALG_TYPE_MASK); 226 if (IS_ERR(alg)) 227 + return ERR_CAST(alg); 228 229 inst = crypto_alloc_instance("xts", alg); 230 if (IS_ERR(inst))
+1 -1
drivers/char/hw_random/n2-drv.c
··· 387 388 static int n2rng_data_read(struct hwrng *rng, u32 *data) 389 { 390 - struct n2rng *np = (struct n2rng *) rng->priv; 391 unsigned long ra = __pa(&np->test_data); 392 int len; 393
··· 387 388 static int n2rng_data_read(struct hwrng *rng, u32 *data) 389 { 390 + struct n2rng *np = rng->priv; 391 unsigned long ra = __pa(&np->test_data); 392 int len; 393
+1 -1
drivers/char/random.c
··· 407 struct poolinfo *poolinfo; 408 __u32 *pool; 409 const char *name; 410 - int limit; 411 struct entropy_store *pull; 412 413 /* read-write data: */ 414 spinlock_t lock;
··· 407 struct poolinfo *poolinfo; 408 __u32 *pool; 409 const char *name; 410 struct entropy_store *pull; 411 + int limit; 412 413 /* read-write data: */ 414 spinlock_t lock;
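The "random: Reorder struct entropy_store" commit at the top of the list is what this hunk shows: the 4-byte limit member moves from between two 8-byte pointers to just before the 4-byte lock. A stand-alone userspace sketch (stand-in types only; treating spinlock_t as 4 bytes is an assumption that holds for common non-debug configurations) shows the padding this removes on an LP64 target:

    #include <stdio.h>

    struct before {         /* name, limit, pull, lock */
        const char *name;
        int limit;          /* 4 bytes + 4 bytes of padding before the pointer */
        void *pull;
        int lock;
    };

    struct after {          /* name, pull, limit, lock */
        const char *name;
        void *pull;
        int limit;          /* shares an 8-byte slot with lock */
        int lock;
    };

    int main(void)
    {
        printf("before: %zu bytes\n", sizeof(struct before));  /* 32 on LP64 */
        printf("after:  %zu bytes\n", sizeof(struct after));   /* 24 on LP64 */
        return 0;
    }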
+1 -1
drivers/crypto/geode-aes.c
··· 573 } 574 575 static struct pci_device_id geode_aes_tbl[] = { 576 - { PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_LX_AES, PCI_ANY_ID, PCI_ANY_ID} , 577 { 0, } 578 }; 579
··· 573 } 574 575 static struct pci_device_id geode_aes_tbl[] = { 576 + { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_LX_AES), } , 577 { 0, } 578 }; 579
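In the geode_aes hunk above, PCI_VDEVICE(vendor, dev) from <linux/pci.h> supplies PCI_VENDOR_ID_<vendor> plus the device ID and fills subvendor/subdevice with PCI_ANY_ID, leaving class and class_mask at zero, so the converted entry matches the same devices as the open-coded one. A small illustrative table (example_tbl is hypothetical):

    #include <linux/pci.h>

    static struct pci_device_id example_tbl[] = {
        /* open-coded form: vendor, device, subvendor, subdevice */
        { PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_LX_AES, PCI_ANY_ID, PCI_ANY_ID },
        /* equivalent shorthand */
        { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_LX_AES) },
        { 0, }
    };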
-4
drivers/crypto/hifn_795x.c
··· 2018 { 2019 unsigned long flags; 2020 struct crypto_async_request *async_req; 2021 - struct hifn_context *ctx; 2022 struct ablkcipher_request *req; 2023 struct hifn_dma *dma = (struct hifn_dma *)dev->desc_virt; 2024 int i; ··· 2034 2035 spin_lock_irqsave(&dev->lock, flags); 2036 while ((async_req = crypto_dequeue_request(&dev->queue))) { 2037 - ctx = crypto_tfm_ctx(async_req->tfm); 2038 req = container_of(async_req, struct ablkcipher_request, base); 2039 spin_unlock_irqrestore(&dev->lock, flags); 2040 ··· 2137 static int hifn_process_queue(struct hifn_device *dev) 2138 { 2139 struct crypto_async_request *async_req, *backlog; 2140 - struct hifn_context *ctx; 2141 struct ablkcipher_request *req; 2142 unsigned long flags; 2143 int err = 0; ··· 2153 if (backlog) 2154 backlog->complete(backlog, -EINPROGRESS); 2155 2156 - ctx = crypto_tfm_ctx(async_req->tfm); 2157 req = container_of(async_req, struct ablkcipher_request, base); 2158 2159 err = hifn_handle_req(req);
··· 2018 { 2019 unsigned long flags; 2020 struct crypto_async_request *async_req; 2021 struct ablkcipher_request *req; 2022 struct hifn_dma *dma = (struct hifn_dma *)dev->desc_virt; 2023 int i; ··· 2035 2036 spin_lock_irqsave(&dev->lock, flags); 2037 while ((async_req = crypto_dequeue_request(&dev->queue))) { 2038 req = container_of(async_req, struct ablkcipher_request, base); 2039 spin_unlock_irqrestore(&dev->lock, flags); 2040 ··· 2139 static int hifn_process_queue(struct hifn_device *dev) 2140 { 2141 struct crypto_async_request *async_req, *backlog; 2142 struct ablkcipher_request *req; 2143 unsigned long flags; 2144 int err = 0; ··· 2156 if (backlog) 2157 backlog->complete(backlog, -EINPROGRESS); 2158 2159 req = container_of(async_req, struct ablkcipher_request, base); 2160 2161 err = hifn_handle_req(req);
+5 -5
drivers/crypto/mv_cesa.c
··· 1055 cp->queue_th = kthread_run(queue_manag, cp, "mv_crypto"); 1056 if (IS_ERR(cp->queue_th)) { 1057 ret = PTR_ERR(cp->queue_th); 1058 - goto err_thread; 1059 } 1060 1061 ret = request_irq(irq, crypto_int, IRQF_DISABLED, dev_name(&pdev->dev), 1062 cp); 1063 if (ret) 1064 - goto err_unmap_sram; 1065 1066 writel(SEC_INT_ACCEL0_DONE, cpg->reg + SEC_ACCEL_INT_MASK); 1067 writel(SEC_CFG_STOP_DIG_ERR, cpg->reg + SEC_ACCEL_CFG); 1068 1069 ret = crypto_register_alg(&mv_aes_alg_ecb); 1070 if (ret) 1071 - goto err_reg; 1072 1073 ret = crypto_register_alg(&mv_aes_alg_cbc); 1074 if (ret) ··· 1091 return 0; 1092 err_unreg_ecb: 1093 crypto_unregister_alg(&mv_aes_alg_ecb); 1094 - err_thread: 1095 free_irq(irq, cp); 1096 - err_reg: 1097 kthread_stop(cp->queue_th); 1098 err_unmap_sram: 1099 iounmap(cp->sram);
··· 1055 cp->queue_th = kthread_run(queue_manag, cp, "mv_crypto"); 1056 if (IS_ERR(cp->queue_th)) { 1057 ret = PTR_ERR(cp->queue_th); 1058 + goto err_unmap_sram; 1059 } 1060 1061 ret = request_irq(irq, crypto_int, IRQF_DISABLED, dev_name(&pdev->dev), 1062 cp); 1063 if (ret) 1064 + goto err_thread; 1065 1066 writel(SEC_INT_ACCEL0_DONE, cpg->reg + SEC_ACCEL_INT_MASK); 1067 writel(SEC_CFG_STOP_DIG_ERR, cpg->reg + SEC_ACCEL_CFG); 1068 1069 ret = crypto_register_alg(&mv_aes_alg_ecb); 1070 if (ret) 1071 + goto err_irq; 1072 1073 ret = crypto_register_alg(&mv_aes_alg_cbc); 1074 if (ret) ··· 1091 return 0; 1092 err_unreg_ecb: 1093 crypto_unregister_alg(&mv_aes_alg_ecb); 1094 + err_irq: 1095 free_irq(irq, cp); 1096 + err_thread: 1097 kthread_stop(cp->queue_th); 1098 err_unmap_sram: 1099 iounmap(cp->sram);
+297 -120
drivers/crypto/n2_core.c
··· 239 } 240 #endif 241 242 - struct n2_base_ctx { 243 - struct list_head list; 244 }; 245 246 - static void n2_base_ctx_init(struct n2_base_ctx *ctx) 247 { 248 - INIT_LIST_HEAD(&ctx->list); 249 } 250 251 struct n2_hash_ctx { 252 - struct n2_base_ctx base; 253 - 254 struct crypto_ahash *fallback_tfm; 255 }; 256 257 struct n2_hash_req_ctx { ··· 296 struct sha1_state sha1; 297 struct sha256_state sha256; 298 } u; 299 - 300 - unsigned char hash_key[64]; 301 - unsigned char keyed_zero_hash[32]; 302 303 struct ahash_request fallback_req; 304 }; ··· 389 crypto_free_ahash(ctx->fallback_tfm); 390 } 391 392 static unsigned long wait_for_tail(struct spu_queue *qp) 393 { 394 unsigned long head, hv_ret; ··· 506 return hv_ret; 507 } 508 509 - static int n2_hash_async_digest(struct ahash_request *req, 510 - unsigned int auth_type, unsigned int digest_size, 511 - unsigned int result_size, void *hash_loc) 512 { 513 struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); 514 - struct n2_hash_ctx *ctx = crypto_ahash_ctx(tfm); 515 struct cwq_initial_entry *ent; 516 struct crypto_hash_walk walk; 517 struct spu_queue *qp; ··· 524 */ 525 if (unlikely(req->nbytes > (1 << 16))) { 526 struct n2_hash_req_ctx *rctx = ahash_request_ctx(req); 527 528 ahash_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm); 529 rctx->fallback_req.base.flags = ··· 535 536 return crypto_ahash_digest(&rctx->fallback_req); 537 } 538 - 539 - n2_base_ctx_init(&ctx->base); 540 541 nbytes = crypto_hash_walk_first(req, &walk); 542 ··· 550 */ 551 ent = qp->q + qp->tail; 552 553 - ent->control = control_word_base(nbytes, 0, 0, 554 auth_type, digest_size, 555 false, true, false, false, 556 OPCODE_INPLACE_BIT | 557 OPCODE_AUTH_MAC); 558 ent->src_addr = __pa(walk.data); 559 - ent->auth_key_addr = 0UL; 560 ent->auth_iv_addr = __pa(hash_loc); 561 ent->final_auth_state_addr = 0UL; 562 ent->enc_key_addr = 0UL; ··· 595 return err; 596 } 597 598 - static int n2_md5_async_digest(struct ahash_request *req) 599 { 600 struct n2_hash_req_ctx *rctx = ahash_request_ctx(req); 601 - struct md5_state *m = &rctx->u.md5; 602 603 if (unlikely(req->nbytes == 0)) { 604 - static const char md5_zero[MD5_DIGEST_SIZE] = { 605 - 0xd4, 0x1d, 0x8c, 0xd9, 0x8f, 0x00, 0xb2, 0x04, 606 - 0xe9, 0x80, 0x09, 0x98, 0xec, 0xf8, 0x42, 0x7e, 607 - }; 608 - 609 - memcpy(req->result, md5_zero, MD5_DIGEST_SIZE); 610 return 0; 611 } 612 - m->hash[0] = cpu_to_le32(0x67452301); 613 - m->hash[1] = cpu_to_le32(0xefcdab89); 614 - m->hash[2] = cpu_to_le32(0x98badcfe); 615 - m->hash[3] = cpu_to_le32(0x10325476); 616 617 - return n2_hash_async_digest(req, AUTH_TYPE_MD5, 618 - MD5_DIGEST_SIZE, MD5_DIGEST_SIZE, 619 - m->hash); 620 } 621 622 - static int n2_sha1_async_digest(struct ahash_request *req) 623 { 624 struct n2_hash_req_ctx *rctx = ahash_request_ctx(req); 625 - struct sha1_state *s = &rctx->u.sha1; 626 627 - if (unlikely(req->nbytes == 0)) { 628 - static const char sha1_zero[SHA1_DIGEST_SIZE] = { 629 - 0xda, 0x39, 0xa3, 0xee, 0x5e, 0x6b, 0x4b, 0x0d, 0x32, 630 - 0x55, 0xbf, 0xef, 0x95, 0x60, 0x18, 0x90, 0xaf, 0xd8, 631 - 0x07, 0x09 632 - }; 633 634 - memcpy(req->result, sha1_zero, SHA1_DIGEST_SIZE); 635 - return 0; 636 } 637 - s->state[0] = SHA1_H0; 638 - s->state[1] = SHA1_H1; 639 - s->state[2] = SHA1_H2; 640 - s->state[3] = SHA1_H3; 641 - s->state[4] = SHA1_H4; 642 643 - return n2_hash_async_digest(req, AUTH_TYPE_SHA1, 644 - SHA1_DIGEST_SIZE, SHA1_DIGEST_SIZE, 645 - s->state); 646 - } 647 - 648 - static int n2_sha256_async_digest(struct ahash_request *req) 649 - { 650 - struct 
n2_hash_req_ctx *rctx = ahash_request_ctx(req); 651 - struct sha256_state *s = &rctx->u.sha256; 652 - 653 - if (req->nbytes == 0) { 654 - static const char sha256_zero[SHA256_DIGEST_SIZE] = { 655 - 0xe3, 0xb0, 0xc4, 0x42, 0x98, 0xfc, 0x1c, 0x14, 0x9a, 656 - 0xfb, 0xf4, 0xc8, 0x99, 0x6f, 0xb9, 0x24, 0x27, 0xae, 657 - 0x41, 0xe4, 0x64, 0x9b, 0x93, 0x4c, 0xa4, 0x95, 0x99, 658 - 0x1b, 0x78, 0x52, 0xb8, 0x55 659 - }; 660 - 661 - memcpy(req->result, sha256_zero, SHA256_DIGEST_SIZE); 662 - return 0; 663 - } 664 - s->state[0] = SHA256_H0; 665 - s->state[1] = SHA256_H1; 666 - s->state[2] = SHA256_H2; 667 - s->state[3] = SHA256_H3; 668 - s->state[4] = SHA256_H4; 669 - s->state[5] = SHA256_H5; 670 - s->state[6] = SHA256_H6; 671 - s->state[7] = SHA256_H7; 672 - 673 - return n2_hash_async_digest(req, AUTH_TYPE_SHA256, 674 - SHA256_DIGEST_SIZE, SHA256_DIGEST_SIZE, 675 - s->state); 676 - } 677 - 678 - static int n2_sha224_async_digest(struct ahash_request *req) 679 - { 680 - struct n2_hash_req_ctx *rctx = ahash_request_ctx(req); 681 - struct sha256_state *s = &rctx->u.sha256; 682 - 683 - if (req->nbytes == 0) { 684 - static const char sha224_zero[SHA224_DIGEST_SIZE] = { 685 - 0xd1, 0x4a, 0x02, 0x8c, 0x2a, 0x3a, 0x2b, 0xc9, 0x47, 686 - 0x61, 0x02, 0xbb, 0x28, 0x82, 0x34, 0xc4, 0x15, 0xa2, 687 - 0xb0, 0x1f, 0x82, 0x8e, 0xa6, 0x2a, 0xc5, 0xb3, 0xe4, 688 - 0x2f 689 - }; 690 - 691 - memcpy(req->result, sha224_zero, SHA224_DIGEST_SIZE); 692 - return 0; 693 - } 694 - s->state[0] = SHA224_H0; 695 - s->state[1] = SHA224_H1; 696 - s->state[2] = SHA224_H2; 697 - s->state[3] = SHA224_H3; 698 - s->state[4] = SHA224_H4; 699 - s->state[5] = SHA224_H5; 700 - s->state[6] = SHA224_H6; 701 - s->state[7] = SHA224_H7; 702 - 703 - return n2_hash_async_digest(req, AUTH_TYPE_SHA256, 704 - SHA256_DIGEST_SIZE, SHA224_DIGEST_SIZE, 705 - s->state); 706 } 707 708 struct n2_cipher_context { ··· 1270 1271 struct n2_hash_tmpl { 1272 const char *name; 1273 - int (*digest)(struct ahash_request *req); 1274 u8 digest_size; 1275 u8 block_size; 1276 }; 1277 static const struct n2_hash_tmpl hash_tmpls[] = { 1278 { .name = "md5", 1279 - .digest = n2_md5_async_digest, 1280 .digest_size = MD5_DIGEST_SIZE, 1281 .block_size = MD5_HMAC_BLOCK_SIZE }, 1282 { .name = "sha1", 1283 - .digest = n2_sha1_async_digest, 1284 .digest_size = SHA1_DIGEST_SIZE, 1285 .block_size = SHA1_BLOCK_SIZE }, 1286 { .name = "sha256", 1287 - .digest = n2_sha256_async_digest, 1288 .digest_size = SHA256_DIGEST_SIZE, 1289 .block_size = SHA256_BLOCK_SIZE }, 1290 { .name = "sha224", 1291 - .digest = n2_sha224_async_digest, 1292 .digest_size = SHA224_DIGEST_SIZE, 1293 .block_size = SHA224_BLOCK_SIZE }, 1294 }; 1295 #define NUM_HASH_TMPLS ARRAY_SIZE(hash_tmpls) 1296 1297 - struct n2_ahash_alg { 1298 - struct list_head entry; 1299 - struct ahash_alg alg; 1300 - }; 1301 static LIST_HEAD(ahash_algs); 1302 1303 static int algs_registered; 1304 ··· 1363 { 1364 struct n2_cipher_alg *cipher, *cipher_tmp; 1365 struct n2_ahash_alg *alg, *alg_tmp; 1366 1367 list_for_each_entry_safe(cipher, cipher_tmp, &cipher_algs, entry) { 1368 crypto_unregister_alg(&cipher->alg); 1369 list_del(&cipher->entry); 1370 kfree(cipher); 1371 } 1372 list_for_each_entry_safe(alg, alg_tmp, &ahash_algs, entry) { 1373 crypto_unregister_ahash(&alg->alg); ··· 1414 list_add(&p->entry, &cipher_algs); 1415 err = crypto_register_alg(alg); 1416 if (err) { 1417 list_del(&p->entry); 1418 kfree(p); 1419 } 1420 return err; 1421 } ··· 1472 if (!p) 1473 return -ENOMEM; 1474 1475 ahash = &p->alg; 1476 ahash->init = 
n2_hash_async_init; 1477 ahash->update = n2_hash_async_update; 1478 ahash->final = n2_hash_async_final; 1479 ahash->finup = n2_hash_async_finup; 1480 - ahash->digest = tmpl->digest; 1481 1482 halg = &ahash->halg; 1483 halg->digestsize = tmpl->digest_size; ··· 1503 list_add(&p->entry, &ahash_algs); 1504 err = crypto_register_ahash(ahash); 1505 if (err) { 1506 list_del(&p->entry); 1507 kfree(p); 1508 } 1509 return err; 1510 } 1511
··· 239 } 240 #endif 241 242 + struct n2_ahash_alg { 243 + struct list_head entry; 244 + const char *hash_zero; 245 + const u32 *hash_init; 246 + u8 hw_op_hashsz; 247 + u8 digest_size; 248 + u8 auth_type; 249 + u8 hmac_type; 250 + struct ahash_alg alg; 251 }; 252 253 + static inline struct n2_ahash_alg *n2_ahash_alg(struct crypto_tfm *tfm) 254 { 255 + struct crypto_alg *alg = tfm->__crt_alg; 256 + struct ahash_alg *ahash_alg; 257 + 258 + ahash_alg = container_of(alg, struct ahash_alg, halg.base); 259 + 260 + return container_of(ahash_alg, struct n2_ahash_alg, alg); 261 + } 262 + 263 + struct n2_hmac_alg { 264 + const char *child_alg; 265 + struct n2_ahash_alg derived; 266 + }; 267 + 268 + static inline struct n2_hmac_alg *n2_hmac_alg(struct crypto_tfm *tfm) 269 + { 270 + struct crypto_alg *alg = tfm->__crt_alg; 271 + struct ahash_alg *ahash_alg; 272 + 273 + ahash_alg = container_of(alg, struct ahash_alg, halg.base); 274 + 275 + return container_of(ahash_alg, struct n2_hmac_alg, derived.alg); 276 } 277 278 struct n2_hash_ctx { 279 struct crypto_ahash *fallback_tfm; 280 + }; 281 + 282 + #define N2_HASH_KEY_MAX 32 /* HW limit for all HMAC requests */ 283 + 284 + struct n2_hmac_ctx { 285 + struct n2_hash_ctx base; 286 + 287 + struct crypto_shash *child_shash; 288 + 289 + int hash_key_len; 290 + unsigned char hash_key[N2_HASH_KEY_MAX]; 291 }; 292 293 struct n2_hash_req_ctx { ··· 260 struct sha1_state sha1; 261 struct sha256_state sha256; 262 } u; 263 264 struct ahash_request fallback_req; 265 }; ··· 356 crypto_free_ahash(ctx->fallback_tfm); 357 } 358 359 + static int n2_hmac_cra_init(struct crypto_tfm *tfm) 360 + { 361 + const char *fallback_driver_name = tfm->__crt_alg->cra_name; 362 + struct crypto_ahash *ahash = __crypto_ahash_cast(tfm); 363 + struct n2_hmac_ctx *ctx = crypto_ahash_ctx(ahash); 364 + struct n2_hmac_alg *n2alg = n2_hmac_alg(tfm); 365 + struct crypto_ahash *fallback_tfm; 366 + struct crypto_shash *child_shash; 367 + int err; 368 + 369 + fallback_tfm = crypto_alloc_ahash(fallback_driver_name, 0, 370 + CRYPTO_ALG_NEED_FALLBACK); 371 + if (IS_ERR(fallback_tfm)) { 372 + pr_warning("Fallback driver '%s' could not be loaded!\n", 373 + fallback_driver_name); 374 + err = PTR_ERR(fallback_tfm); 375 + goto out; 376 + } 377 + 378 + child_shash = crypto_alloc_shash(n2alg->child_alg, 0, 0); 379 + if (IS_ERR(child_shash)) { 380 + pr_warning("Child shash '%s' could not be loaded!\n", 381 + n2alg->child_alg); 382 + err = PTR_ERR(child_shash); 383 + goto out_free_fallback; 384 + } 385 + 386 + crypto_ahash_set_reqsize(ahash, (sizeof(struct n2_hash_req_ctx) + 387 + crypto_ahash_reqsize(fallback_tfm))); 388 + 389 + ctx->child_shash = child_shash; 390 + ctx->base.fallback_tfm = fallback_tfm; 391 + return 0; 392 + 393 + out_free_fallback: 394 + crypto_free_ahash(fallback_tfm); 395 + 396 + out: 397 + return err; 398 + } 399 + 400 + static void n2_hmac_cra_exit(struct crypto_tfm *tfm) 401 + { 402 + struct crypto_ahash *ahash = __crypto_ahash_cast(tfm); 403 + struct n2_hmac_ctx *ctx = crypto_ahash_ctx(ahash); 404 + 405 + crypto_free_ahash(ctx->base.fallback_tfm); 406 + crypto_free_shash(ctx->child_shash); 407 + } 408 + 409 + static int n2_hmac_async_setkey(struct crypto_ahash *tfm, const u8 *key, 410 + unsigned int keylen) 411 + { 412 + struct n2_hmac_ctx *ctx = crypto_ahash_ctx(tfm); 413 + struct crypto_shash *child_shash = ctx->child_shash; 414 + struct crypto_ahash *fallback_tfm; 415 + struct { 416 + struct shash_desc shash; 417 + char ctx[crypto_shash_descsize(child_shash)]; 418 + } desc; 419 + int 
err, bs, ds; 420 + 421 + fallback_tfm = ctx->base.fallback_tfm; 422 + err = crypto_ahash_setkey(fallback_tfm, key, keylen); 423 + if (err) 424 + return err; 425 + 426 + desc.shash.tfm = child_shash; 427 + desc.shash.flags = crypto_ahash_get_flags(tfm) & 428 + CRYPTO_TFM_REQ_MAY_SLEEP; 429 + 430 + bs = crypto_shash_blocksize(child_shash); 431 + ds = crypto_shash_digestsize(child_shash); 432 + BUG_ON(ds > N2_HASH_KEY_MAX); 433 + if (keylen > bs) { 434 + err = crypto_shash_digest(&desc.shash, key, keylen, 435 + ctx->hash_key); 436 + if (err) 437 + return err; 438 + keylen = ds; 439 + } else if (keylen <= N2_HASH_KEY_MAX) 440 + memcpy(ctx->hash_key, key, keylen); 441 + 442 + ctx->hash_key_len = keylen; 443 + 444 + return err; 445 + } 446 + 447 static unsigned long wait_for_tail(struct spu_queue *qp) 448 { 449 unsigned long head, hv_ret; ··· 385 return hv_ret; 386 } 387 388 + static int n2_do_async_digest(struct ahash_request *req, 389 + unsigned int auth_type, unsigned int digest_size, 390 + unsigned int result_size, void *hash_loc, 391 + unsigned long auth_key, unsigned int auth_key_len) 392 { 393 struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); 394 struct cwq_initial_entry *ent; 395 struct crypto_hash_walk walk; 396 struct spu_queue *qp; ··· 403 */ 404 if (unlikely(req->nbytes > (1 << 16))) { 405 struct n2_hash_req_ctx *rctx = ahash_request_ctx(req); 406 + struct n2_hash_ctx *ctx = crypto_ahash_ctx(tfm); 407 408 ahash_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm); 409 rctx->fallback_req.base.flags = ··· 413 414 return crypto_ahash_digest(&rctx->fallback_req); 415 } 416 417 nbytes = crypto_hash_walk_first(req, &walk); 418 ··· 430 */ 431 ent = qp->q + qp->tail; 432 433 + ent->control = control_word_base(nbytes, auth_key_len, 0, 434 auth_type, digest_size, 435 false, true, false, false, 436 OPCODE_INPLACE_BIT | 437 OPCODE_AUTH_MAC); 438 ent->src_addr = __pa(walk.data); 439 + ent->auth_key_addr = auth_key; 440 ent->auth_iv_addr = __pa(hash_loc); 441 ent->final_auth_state_addr = 0UL; 442 ent->enc_key_addr = 0UL; ··· 475 return err; 476 } 477 478 + static int n2_hash_async_digest(struct ahash_request *req) 479 { 480 + struct n2_ahash_alg *n2alg = n2_ahash_alg(req->base.tfm); 481 struct n2_hash_req_ctx *rctx = ahash_request_ctx(req); 482 + int ds; 483 484 + ds = n2alg->digest_size; 485 if (unlikely(req->nbytes == 0)) { 486 + memcpy(req->result, n2alg->hash_zero, ds); 487 return 0; 488 } 489 + memcpy(&rctx->u, n2alg->hash_init, n2alg->hw_op_hashsz); 490 491 + return n2_do_async_digest(req, n2alg->auth_type, 492 + n2alg->hw_op_hashsz, ds, 493 + &rctx->u, 0UL, 0); 494 } 495 496 + static int n2_hmac_async_digest(struct ahash_request *req) 497 { 498 + struct n2_hmac_alg *n2alg = n2_hmac_alg(req->base.tfm); 499 struct n2_hash_req_ctx *rctx = ahash_request_ctx(req); 500 + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); 501 + struct n2_hmac_ctx *ctx = crypto_ahash_ctx(tfm); 502 + int ds; 503 504 + ds = n2alg->derived.digest_size; 505 + if (unlikely(req->nbytes == 0) || 506 + unlikely(ctx->hash_key_len > N2_HASH_KEY_MAX)) { 507 + struct n2_hash_req_ctx *rctx = ahash_request_ctx(req); 508 + struct n2_hash_ctx *ctx = crypto_ahash_ctx(tfm); 509 510 + ahash_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm); 511 + rctx->fallback_req.base.flags = 512 + req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP; 513 + rctx->fallback_req.nbytes = req->nbytes; 514 + rctx->fallback_req.src = req->src; 515 + rctx->fallback_req.result = req->result; 516 + 517 + return crypto_ahash_digest(&rctx->fallback_req); 
518 } 519 + memcpy(&rctx->u, n2alg->derived.hash_init, 520 + n2alg->derived.hw_op_hashsz); 521 522 + return n2_do_async_digest(req, n2alg->derived.hmac_type, 523 + n2alg->derived.hw_op_hashsz, ds, 524 + &rctx->u, 525 + __pa(&ctx->hash_key), 526 + ctx->hash_key_len); 527 } 528 529 struct n2_cipher_context { ··· 1209 1210 struct n2_hash_tmpl { 1211 const char *name; 1212 + const char *hash_zero; 1213 + const u32 *hash_init; 1214 + u8 hw_op_hashsz; 1215 u8 digest_size; 1216 u8 block_size; 1217 + u8 auth_type; 1218 + u8 hmac_type; 1219 }; 1220 + 1221 + static const char md5_zero[MD5_DIGEST_SIZE] = { 1222 + 0xd4, 0x1d, 0x8c, 0xd9, 0x8f, 0x00, 0xb2, 0x04, 1223 + 0xe9, 0x80, 0x09, 0x98, 0xec, 0xf8, 0x42, 0x7e, 1224 + }; 1225 + static const u32 md5_init[MD5_HASH_WORDS] = { 1226 + cpu_to_le32(0x67452301), 1227 + cpu_to_le32(0xefcdab89), 1228 + cpu_to_le32(0x98badcfe), 1229 + cpu_to_le32(0x10325476), 1230 + }; 1231 + static const char sha1_zero[SHA1_DIGEST_SIZE] = { 1232 + 0xda, 0x39, 0xa3, 0xee, 0x5e, 0x6b, 0x4b, 0x0d, 0x32, 1233 + 0x55, 0xbf, 0xef, 0x95, 0x60, 0x18, 0x90, 0xaf, 0xd8, 1234 + 0x07, 0x09 1235 + }; 1236 + static const u32 sha1_init[SHA1_DIGEST_SIZE / 4] = { 1237 + SHA1_H0, SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4, 1238 + }; 1239 + static const char sha256_zero[SHA256_DIGEST_SIZE] = { 1240 + 0xe3, 0xb0, 0xc4, 0x42, 0x98, 0xfc, 0x1c, 0x14, 0x9a, 1241 + 0xfb, 0xf4, 0xc8, 0x99, 0x6f, 0xb9, 0x24, 0x27, 0xae, 1242 + 0x41, 0xe4, 0x64, 0x9b, 0x93, 0x4c, 0xa4, 0x95, 0x99, 1243 + 0x1b, 0x78, 0x52, 0xb8, 0x55 1244 + }; 1245 + static const u32 sha256_init[SHA256_DIGEST_SIZE / 4] = { 1246 + SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3, 1247 + SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7, 1248 + }; 1249 + static const char sha224_zero[SHA224_DIGEST_SIZE] = { 1250 + 0xd1, 0x4a, 0x02, 0x8c, 0x2a, 0x3a, 0x2b, 0xc9, 0x47, 1251 + 0x61, 0x02, 0xbb, 0x28, 0x82, 0x34, 0xc4, 0x15, 0xa2, 1252 + 0xb0, 0x1f, 0x82, 0x8e, 0xa6, 0x2a, 0xc5, 0xb3, 0xe4, 1253 + 0x2f 1254 + }; 1255 + static const u32 sha224_init[SHA256_DIGEST_SIZE / 4] = { 1256 + SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3, 1257 + SHA224_H4, SHA224_H5, SHA224_H6, SHA224_H7, 1258 + }; 1259 + 1260 static const struct n2_hash_tmpl hash_tmpls[] = { 1261 { .name = "md5", 1262 + .hash_zero = md5_zero, 1263 + .hash_init = md5_init, 1264 + .auth_type = AUTH_TYPE_MD5, 1265 + .hmac_type = AUTH_TYPE_HMAC_MD5, 1266 + .hw_op_hashsz = MD5_DIGEST_SIZE, 1267 .digest_size = MD5_DIGEST_SIZE, 1268 .block_size = MD5_HMAC_BLOCK_SIZE }, 1269 { .name = "sha1", 1270 + .hash_zero = sha1_zero, 1271 + .hash_init = sha1_init, 1272 + .auth_type = AUTH_TYPE_SHA1, 1273 + .hmac_type = AUTH_TYPE_HMAC_SHA1, 1274 + .hw_op_hashsz = SHA1_DIGEST_SIZE, 1275 .digest_size = SHA1_DIGEST_SIZE, 1276 .block_size = SHA1_BLOCK_SIZE }, 1277 { .name = "sha256", 1278 + .hash_zero = sha256_zero, 1279 + .hash_init = sha256_init, 1280 + .auth_type = AUTH_TYPE_SHA256, 1281 + .hmac_type = AUTH_TYPE_HMAC_SHA256, 1282 + .hw_op_hashsz = SHA256_DIGEST_SIZE, 1283 .digest_size = SHA256_DIGEST_SIZE, 1284 .block_size = SHA256_BLOCK_SIZE }, 1285 { .name = "sha224", 1286 + .hash_zero = sha224_zero, 1287 + .hash_init = sha224_init, 1288 + .auth_type = AUTH_TYPE_SHA256, 1289 + .hmac_type = AUTH_TYPE_RESERVED, 1290 + .hw_op_hashsz = SHA256_DIGEST_SIZE, 1291 .digest_size = SHA224_DIGEST_SIZE, 1292 .block_size = SHA224_BLOCK_SIZE }, 1293 }; 1294 #define NUM_HASH_TMPLS ARRAY_SIZE(hash_tmpls) 1295 1296 static LIST_HEAD(ahash_algs); 1297 + static LIST_HEAD(hmac_algs); 1298 1299 static int algs_registered; 1300 ··· 1245 { 1246 
struct n2_cipher_alg *cipher, *cipher_tmp; 1247 struct n2_ahash_alg *alg, *alg_tmp; 1248 + struct n2_hmac_alg *hmac, *hmac_tmp; 1249 1250 list_for_each_entry_safe(cipher, cipher_tmp, &cipher_algs, entry) { 1251 crypto_unregister_alg(&cipher->alg); 1252 list_del(&cipher->entry); 1253 kfree(cipher); 1254 + } 1255 + list_for_each_entry_safe(hmac, hmac_tmp, &hmac_algs, derived.entry) { 1256 + crypto_unregister_ahash(&hmac->derived.alg); 1257 + list_del(&hmac->derived.entry); 1258 + kfree(hmac); 1259 } 1260 list_for_each_entry_safe(alg, alg_tmp, &ahash_algs, entry) { 1261 crypto_unregister_ahash(&alg->alg); ··· 1290 list_add(&p->entry, &cipher_algs); 1291 err = crypto_register_alg(alg); 1292 if (err) { 1293 + pr_err("%s alg registration failed\n", alg->cra_name); 1294 list_del(&p->entry); 1295 kfree(p); 1296 + } else { 1297 + pr_info("%s alg registered\n", alg->cra_name); 1298 + } 1299 + return err; 1300 + } 1301 + 1302 + static int __devinit __n2_register_one_hmac(struct n2_ahash_alg *n2ahash) 1303 + { 1304 + struct n2_hmac_alg *p = kzalloc(sizeof(*p), GFP_KERNEL); 1305 + struct ahash_alg *ahash; 1306 + struct crypto_alg *base; 1307 + int err; 1308 + 1309 + if (!p) 1310 + return -ENOMEM; 1311 + 1312 + p->child_alg = n2ahash->alg.halg.base.cra_name; 1313 + memcpy(&p->derived, n2ahash, sizeof(struct n2_ahash_alg)); 1314 + INIT_LIST_HEAD(&p->derived.entry); 1315 + 1316 + ahash = &p->derived.alg; 1317 + ahash->digest = n2_hmac_async_digest; 1318 + ahash->setkey = n2_hmac_async_setkey; 1319 + 1320 + base = &ahash->halg.base; 1321 + snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME, "hmac(%s)", p->child_alg); 1322 + snprintf(base->cra_driver_name, CRYPTO_MAX_ALG_NAME, "hmac-%s-n2", p->child_alg); 1323 + 1324 + base->cra_ctxsize = sizeof(struct n2_hmac_ctx); 1325 + base->cra_init = n2_hmac_cra_init; 1326 + base->cra_exit = n2_hmac_cra_exit; 1327 + 1328 + list_add(&p->derived.entry, &hmac_algs); 1329 + err = crypto_register_ahash(ahash); 1330 + if (err) { 1331 + pr_err("%s alg registration failed\n", base->cra_name); 1332 + list_del(&p->derived.entry); 1333 + kfree(p); 1334 + } else { 1335 + pr_info("%s alg registered\n", base->cra_name); 1336 } 1337 return err; 1338 } ··· 1307 if (!p) 1308 return -ENOMEM; 1309 1310 + p->hash_zero = tmpl->hash_zero; 1311 + p->hash_init = tmpl->hash_init; 1312 + p->auth_type = tmpl->auth_type; 1313 + p->hmac_type = tmpl->hmac_type; 1314 + p->hw_op_hashsz = tmpl->hw_op_hashsz; 1315 + p->digest_size = tmpl->digest_size; 1316 + 1317 ahash = &p->alg; 1318 ahash->init = n2_hash_async_init; 1319 ahash->update = n2_hash_async_update; 1320 ahash->final = n2_hash_async_final; 1321 ahash->finup = n2_hash_async_finup; 1322 + ahash->digest = n2_hash_async_digest; 1323 1324 halg = &ahash->halg; 1325 halg->digestsize = tmpl->digest_size; ··· 1331 list_add(&p->entry, &ahash_algs); 1332 err = crypto_register_ahash(ahash); 1333 if (err) { 1334 + pr_err("%s alg registration failed\n", base->cra_name); 1335 list_del(&p->entry); 1336 kfree(p); 1337 + } else { 1338 + pr_info("%s alg registered\n", base->cra_name); 1339 } 1340 + if (!err && p->hmac_type != AUTH_TYPE_RESERVED) 1341 + err = __n2_register_one_hmac(p); 1342 return err; 1343 } 1344
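The new n2 HMAC setkey path above follows the usual HMAC key-preparation rule: a key longer than the underlying hash's block size is first digested down to digest size, while shorter keys are stored verbatim (up to N2_HASH_KEY_MAX); anything that still does not fit is left to the software fallback at digest time. A minimal user-space sketch of just that decision, with a hypothetical digest() stub standing in for crypto_shash_digest() and an assumed value for N2_HASH_KEY_MAX:

    #include <stdio.h>
    #include <string.h>

    #define N2_HASH_KEY_MAX 64          /* assumed value, for the sketch only */

    /* Hypothetical stand-in for crypto_shash_digest(). */
    static void digest(const unsigned char *in, size_t len,
                       unsigned char *out, size_t digest_size)
    {
        (void)in;
        (void)len;
        memset(out, 0xab, digest_size); /* placeholder digest */
    }

    /* Returns the number of key bytes recorded as the HMAC key length. */
    static size_t prepare_hmac_key(const unsigned char *key, size_t keylen,
                                   size_t block_size, size_t digest_size,
                                   unsigned char *hash_key)
    {
        if (keylen > block_size) {
            /* Keys longer than a block are replaced by their digest. */
            digest(key, keylen, hash_key, digest_size);
            return digest_size;
        }
        if (keylen <= N2_HASH_KEY_MAX)
            memcpy(hash_key, key, keylen);  /* short keys are used as-is */
        return keylen;  /* keys that do not fit are handled by the fallback */
    }

    int main(void)
    {
        unsigned char hash_key[N2_HASH_KEY_MAX];
        unsigned char long_key[100] = { 0 };

        printf("stored %zu key bytes\n",
               prepare_hmac_key(long_key, sizeof(long_key), 64, 20, hash_key));
        return 0;
    }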
+0 -1
drivers/crypto/omap-sham.c
··· 15 16 #define pr_fmt(fmt) "%s: " fmt, __func__ 17 18 - #include <linux/version.h> 19 #include <linux/err.h> 20 #include <linux/device.h> 21 #include <linux/module.h>
··· 15 16 #define pr_fmt(fmt) "%s: " fmt, __func__ 17 18 #include <linux/err.h> 19 #include <linux/device.h> 20 #include <linux/module.h>
+39 -36
drivers/crypto/talitos.c
··· 720 #define TALITOS_MDEU_MAX_CONTEXT_SIZE TALITOS_MDEU_CONTEXT_SIZE_SHA384_SHA512 721 722 struct talitos_ahash_req_ctx { 723 - u64 count; 724 u32 hw_context[TALITOS_MDEU_MAX_CONTEXT_SIZE / sizeof(u32)]; 725 unsigned int hw_context_size; 726 u8 buf[HASH_MAX_BLOCK_SIZE]; ··· 728 unsigned int first; 729 unsigned int last; 730 unsigned int to_hash_later; 731 struct scatterlist bufsl[2]; 732 struct scatterlist *psrc; 733 }; ··· 1613 if (!req_ctx->last && req_ctx->to_hash_later) { 1614 /* Position any partial block for next update/final/finup */ 1615 memcpy(req_ctx->buf, req_ctx->bufnext, req_ctx->to_hash_later); 1616 } 1617 common_nonsnoop_hash_unmap(dev, edesc, areq); 1618 ··· 1729 struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq); 1730 1731 /* Initialize the context */ 1732 - req_ctx->count = 0; 1733 req_ctx->first = 1; /* first indicates h/w must init its context */ 1734 req_ctx->swinit = 0; /* assume h/w init of context */ 1735 req_ctx->hw_context_size = ··· 1777 crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm)); 1778 unsigned int nbytes_to_hash; 1779 unsigned int to_hash_later; 1780 - unsigned int index; 1781 int chained; 1782 1783 - index = req_ctx->count & (blocksize - 1); 1784 - req_ctx->count += nbytes; 1785 - 1786 - if (!req_ctx->last && (index + nbytes) < blocksize) { 1787 - /* Buffer the partial block */ 1788 sg_copy_to_buffer(areq->src, 1789 sg_count(areq->src, nbytes, &chained), 1790 - req_ctx->buf + index, nbytes); 1791 return 0; 1792 } 1793 1794 - if (index) { 1795 - /* partial block from previous update; chain it in. */ 1796 - sg_init_table(req_ctx->bufsl, (nbytes) ? 2 : 1); 1797 - sg_set_buf(req_ctx->bufsl, req_ctx->buf, index); 1798 - if (nbytes) 1799 - scatterwalk_sg_chain(req_ctx->bufsl, 2, 1800 - areq->src); 1801 req_ctx->psrc = req_ctx->bufsl; 1802 - } else { 1803 req_ctx->psrc = areq->src; 1804 - } 1805 - nbytes_to_hash = index + nbytes; 1806 - if (!req_ctx->last) { 1807 - to_hash_later = (nbytes_to_hash & (blocksize - 1)); 1808 - if (to_hash_later) { 1809 - int nents; 1810 - /* Must copy to_hash_later bytes from the end 1811 - * to bufnext (a partial block) for later. 1812 - */ 1813 - nents = sg_count(areq->src, nbytes, &chained); 1814 - sg_copy_end_to_buffer(areq->src, nents, 1815 - req_ctx->bufnext, 1816 - to_hash_later, 1817 - nbytes - to_hash_later); 1818 1819 - /* Adjust count for what will be hashed now */ 1820 - nbytes_to_hash -= to_hash_later; 1821 - } 1822 - req_ctx->to_hash_later = to_hash_later; 1823 } 1824 1825 - /* allocate extended descriptor */ 1826 edesc = ahash_edesc_alloc(areq, nbytes_to_hash); 1827 if (IS_ERR(edesc)) 1828 return PTR_ERR(edesc);
··· 720 #define TALITOS_MDEU_MAX_CONTEXT_SIZE TALITOS_MDEU_CONTEXT_SIZE_SHA384_SHA512 721 722 struct talitos_ahash_req_ctx { 723 u32 hw_context[TALITOS_MDEU_MAX_CONTEXT_SIZE / sizeof(u32)]; 724 unsigned int hw_context_size; 725 u8 buf[HASH_MAX_BLOCK_SIZE]; ··· 729 unsigned int first; 730 unsigned int last; 731 unsigned int to_hash_later; 732 + u64 nbuf; 733 struct scatterlist bufsl[2]; 734 struct scatterlist *psrc; 735 }; ··· 1613 if (!req_ctx->last && req_ctx->to_hash_later) { 1614 /* Position any partial block for next update/final/finup */ 1615 memcpy(req_ctx->buf, req_ctx->bufnext, req_ctx->to_hash_later); 1616 + req_ctx->nbuf = req_ctx->to_hash_later; 1617 } 1618 common_nonsnoop_hash_unmap(dev, edesc, areq); 1619 ··· 1728 struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq); 1729 1730 /* Initialize the context */ 1731 + req_ctx->nbuf = 0; 1732 req_ctx->first = 1; /* first indicates h/w must init its context */ 1733 req_ctx->swinit = 0; /* assume h/w init of context */ 1734 req_ctx->hw_context_size = ··· 1776 crypto_tfm_alg_blocksize(crypto_ahash_tfm(tfm)); 1777 unsigned int nbytes_to_hash; 1778 unsigned int to_hash_later; 1779 + unsigned int nsg; 1780 int chained; 1781 1782 + if (!req_ctx->last && (nbytes + req_ctx->nbuf <= blocksize)) { 1783 + /* Buffer up to one whole block */ 1784 sg_copy_to_buffer(areq->src, 1785 sg_count(areq->src, nbytes, &chained), 1786 + req_ctx->buf + req_ctx->nbuf, nbytes); 1787 + req_ctx->nbuf += nbytes; 1788 return 0; 1789 } 1790 1791 + /* At least (blocksize + 1) bytes are available to hash */ 1792 + nbytes_to_hash = nbytes + req_ctx->nbuf; 1793 + to_hash_later = nbytes_to_hash & (blocksize - 1); 1794 + 1795 + if (req_ctx->last) 1796 + to_hash_later = 0; 1797 + else if (to_hash_later) 1798 + /* There is a partial block. Hash the full block(s) now */ 1799 + nbytes_to_hash -= to_hash_later; 1800 + else { 1801 + /* Keep one block buffered */ 1802 + nbytes_to_hash -= blocksize; 1803 + to_hash_later = blocksize; 1804 + } 1805 + 1806 + /* Chain in any previously buffered data */ 1807 + if (req_ctx->nbuf) { 1808 + nsg = (req_ctx->nbuf < nbytes_to_hash) ? 2 : 1; 1809 + sg_init_table(req_ctx->bufsl, nsg); 1810 + sg_set_buf(req_ctx->bufsl, req_ctx->buf, req_ctx->nbuf); 1811 + if (nsg > 1) 1812 + scatterwalk_sg_chain(req_ctx->bufsl, 2, areq->src); 1813 req_ctx->psrc = req_ctx->bufsl; 1814 + } else 1815 req_ctx->psrc = areq->src; 1816 1817 + if (to_hash_later) { 1818 + int nents = sg_count(areq->src, nbytes, &chained); 1819 + sg_copy_end_to_buffer(areq->src, nents, 1820 + req_ctx->bufnext, 1821 + to_hash_later, 1822 + nbytes - to_hash_later); 1823 } 1824 + req_ctx->to_hash_later = to_hash_later; 1825 1826 + /* Allocate extended descriptor */ 1827 edesc = ahash_edesc_alloc(areq, nbytes_to_hash); 1828 if (IS_ERR(edesc)) 1829 return PTR_ERR(edesc);
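The reworked talitos update path splits each request into a "hash now" and a "buffer for later" portion: data is buffered until more than one block has accumulated, any trailing partial block is carried over in bufnext, and when the total is exactly block-aligned one full block is deliberately kept back so a later final() still has data to hash. A small stand-alone sketch of only that arithmetic (the function and variable names are illustrative, not driver API; blocksize is assumed to be a power of two, as in the driver):

    #include <stdio.h>

    /*
     * Split (buffered + new) bytes into the amount to hash now and the
     * amount to keep buffered, mirroring the update logic above.
     */
    static void split_update(unsigned int nbuf, unsigned int nbytes,
                             unsigned int blocksize, int last,
                             unsigned int *to_hash, unsigned int *to_hash_later)
    {
        unsigned int total = nbuf + nbytes;

        if (!last && total <= blocksize) {
            /* Not enough for a full block yet: buffer everything. */
            *to_hash = 0;
            *to_hash_later = total;
            return;
        }

        *to_hash_later = total & (blocksize - 1);   /* trailing partial block */
        if (last)
            *to_hash_later = 0;                     /* final(): hash it all */
        else if (!*to_hash_later)
            *to_hash_later = blocksize;             /* keep one block for final() */

        *to_hash = total - *to_hash_later;
    }

    int main(void)
    {
        unsigned int hash, later;

        split_update(0, 200, 64, 0, &hash, &later);  /* -> hash 192, keep 8 */
        printf("hash %u, keep %u\n", hash, later);

        split_update(8, 120, 64, 0, &hash, &later);  /* 128 total -> hash 64, keep 64 */
        printf("hash %u, keep %u\n", hash, later);
        return 0;
    }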
+85 -36
include/linux/padata.h
··· 25 #include <linux/spinlock.h> 26 #include <linux/list.h> 27 #include <linux/timer.h> 28 29 /** 30 * struct padata_priv - Embedded to the users data structure. ··· 64 }; 65 66 /** 67 - * struct padata_queue - The percpu padata queues. 68 * 69 * @parallel: List to wait for parallelization. 70 * @reorder: List to wait for reordering after parallel processing. ··· 85 * @pwork: work struct for parallelization. 86 * @swork: work struct for serialization. 87 * @pd: Backpointer to the internal control structure. 88 * @num_obj: Number of objects that are processed by this cpu. 89 * @cpu_index: Index of the cpu. 90 */ 91 - struct padata_queue { 92 - struct padata_list parallel; 93 - struct padata_list reorder; 94 - struct padata_list serial; 95 - struct work_struct pwork; 96 - struct work_struct swork; 97 - struct parallel_data *pd; 98 - atomic_t num_obj; 99 - int cpu_index; 100 }; 101 102 /** ··· 114 * that depends on the cpumask in use. 115 * 116 * @pinst: padata instance. 117 - * @queue: percpu padata queues. 118 * @seq_nr: The sequence number that will be attached to the next object. 119 * @reorder_objects: Number of objects waiting in the reorder queues. 120 * @refcnt: Number of objects holding a reference on this parallel_data. 121 * @max_seq_nr: Maximal used sequence number. 122 - * @cpumask: cpumask in use. 123 * @lock: Reorder lock. 124 * @timer: Reorder timer. 125 */ 126 struct parallel_data { 127 - struct padata_instance *pinst; 128 - struct padata_queue *queue; 129 - atomic_t seq_nr; 130 - atomic_t reorder_objects; 131 - atomic_t refcnt; 132 - unsigned int max_seq_nr; 133 - cpumask_var_t cpumask; 134 - spinlock_t lock; 135 - struct timer_list timer; 136 }; 137 138 /** ··· 145 * @cpu_notifier: cpu hotplug notifier. 146 * @wq: The workqueue in use. 147 * @pd: The internal control structure. 148 - * @cpumask: User supplied cpumask. 149 * @lock: padata instance lock. 150 * @flags: padata flags. 151 */ 152 struct padata_instance { 153 - struct notifier_block cpu_notifier; 154 - struct workqueue_struct *wq; 155 - struct parallel_data *pd; 156 - cpumask_var_t cpumask; 157 - struct mutex lock; 158 - u8 flags; 159 - #define PADATA_INIT 1 160 - #define PADATA_RESET 2 161 }; 162 163 - extern struct padata_instance *padata_alloc(const struct cpumask *cpumask, 164 - struct workqueue_struct *wq); 165 extern void padata_free(struct padata_instance *pinst); 166 extern int padata_do_parallel(struct padata_instance *pinst, 167 struct padata_priv *padata, int cb_cpu); 168 extern void padata_do_serial(struct padata_priv *padata); 169 - extern int padata_set_cpumask(struct padata_instance *pinst, 170 cpumask_var_t cpumask); 171 - extern int padata_add_cpu(struct padata_instance *pinst, int cpu); 172 - extern int padata_remove_cpu(struct padata_instance *pinst, int cpu); 173 - extern void padata_start(struct padata_instance *pinst); 174 extern void padata_stop(struct padata_instance *pinst); 175 #endif
··· 25 #include <linux/spinlock.h> 26 #include <linux/list.h> 27 #include <linux/timer.h> 28 + #include <linux/notifier.h> 29 + #include <linux/kobject.h> 30 + 31 + #define PADATA_CPU_SERIAL 0x01 32 + #define PADATA_CPU_PARALLEL 0x02 33 34 /** 35 * struct padata_priv - Embedded to the users data structure. ··· 59 }; 60 61 /** 62 + * struct padata_serial_queue - The percpu padata serial queue 63 + * 64 + * @serial: List to wait for serialization after reordering. 65 + * @work: work struct for serialization. 66 + * @pd: Backpointer to the internal control structure. 67 + */ 68 + struct padata_serial_queue { 69 + struct padata_list serial; 70 + struct work_struct work; 71 + struct parallel_data *pd; 72 + }; 73 + 74 + /** 75 + * struct padata_parallel_queue - The percpu padata parallel queue 76 * 77 * @parallel: List to wait for parallelization. 78 * @reorder: List to wait for reordering after parallel processing. ··· 67 * @pwork: work struct for parallelization. 68 * @swork: work struct for serialization. 69 * @pd: Backpointer to the internal control structure. 70 + * @work: work struct for parallelization. 71 * @num_obj: Number of objects that are processed by this cpu. 72 * @cpu_index: Index of the cpu. 73 */ 74 + struct padata_parallel_queue { 75 + struct padata_list parallel; 76 + struct padata_list reorder; 77 + struct parallel_data *pd; 78 + struct work_struct work; 79 + atomic_t num_obj; 80 + int cpu_index; 81 + }; 82 + 83 + /** 84 + * struct padata_cpumask - The cpumasks for the parallel/serial workers 85 + * 86 + * @pcpu: cpumask for the parallel workers. 87 + * @cbcpu: cpumask for the serial (callback) workers. 88 + */ 89 + struct padata_cpumask { 90 + cpumask_var_t pcpu; 91 + cpumask_var_t cbcpu; 92 }; 93 94 /** ··· 86 * that depends on the cpumask in use. 87 * 88 * @pinst: padata instance. 89 + * @pqueue: percpu padata queues used for parallelization. 90 + * @squeue: percpu padata queues used for serialuzation. 91 * @seq_nr: The sequence number that will be attached to the next object. 92 * @reorder_objects: Number of objects waiting in the reorder queues. 93 * @refcnt: Number of objects holding a reference on this parallel_data. 94 * @max_seq_nr: Maximal used sequence number. 95 + * @cpumask: The cpumasks in use for parallel and serial workers. 96 * @lock: Reorder lock. 97 + * @processed: Number of already processed objects. 98 * @timer: Reorder timer. 99 */ 100 struct parallel_data { 101 + struct padata_instance *pinst; 102 + struct padata_parallel_queue *pqueue; 103 + struct padata_serial_queue *squeue; 104 + atomic_t seq_nr; 105 + atomic_t reorder_objects; 106 + atomic_t refcnt; 107 + unsigned int max_seq_nr; 108 + struct padata_cpumask cpumask; 109 + spinlock_t lock ____cacheline_aligned; 110 + unsigned int processed; 111 + struct timer_list timer; 112 }; 113 114 /** ··· 113 * @cpu_notifier: cpu hotplug notifier. 114 * @wq: The workqueue in use. 115 * @pd: The internal control structure. 116 + * @cpumask: User supplied cpumasks for parallel and serial works. 117 + * @cpumask_change_notifier: Notifiers chain for user-defined notify 118 + * callbacks that will be called when either @pcpu or @cbcpu 119 + * or both cpumasks change. 120 + * @kobj: padata instance kernel object. 121 * @lock: padata instance lock. 122 * @flags: padata flags. 
123 */ 124 struct padata_instance { 125 + struct notifier_block cpu_notifier; 126 + struct workqueue_struct *wq; 127 + struct parallel_data *pd; 128 + struct padata_cpumask cpumask; 129 + struct blocking_notifier_head cpumask_change_notifier; 130 + struct kobject kobj; 131 + struct mutex lock; 132 + u8 flags; 133 + #define PADATA_INIT 1 134 + #define PADATA_RESET 2 135 + #define PADATA_INVALID 4 136 }; 137 138 + extern struct padata_instance *padata_alloc_possible( 139 + struct workqueue_struct *wq); 140 + extern struct padata_instance *padata_alloc(struct workqueue_struct *wq, 141 + const struct cpumask *pcpumask, 142 + const struct cpumask *cbcpumask); 143 extern void padata_free(struct padata_instance *pinst); 144 extern int padata_do_parallel(struct padata_instance *pinst, 145 struct padata_priv *padata, int cb_cpu); 146 extern void padata_do_serial(struct padata_priv *padata); 147 + extern int padata_set_cpumask(struct padata_instance *pinst, int cpumask_type, 148 cpumask_var_t cpumask); 149 + extern int padata_set_cpumasks(struct padata_instance *pinst, 150 + cpumask_var_t pcpumask, 151 + cpumask_var_t cbcpumask); 152 + extern int padata_add_cpu(struct padata_instance *pinst, int cpu, int mask); 153 + extern int padata_remove_cpu(struct padata_instance *pinst, int cpu, int mask); 154 + extern int padata_start(struct padata_instance *pinst); 155 extern void padata_stop(struct padata_instance *pinst); 156 + extern int padata_register_cpumask_notifier(struct padata_instance *pinst, 157 + struct notifier_block *nblock); 158 + extern int padata_unregister_cpumask_notifier(struct padata_instance *pinst, 159 + struct notifier_block *nblock); 160 #endif
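With the split into a parallel (pcpu) and a serial callback (cbcpu) cpumask, a padata user now allocates an instance either with explicit masks via padata_alloc() or with cpu_possible_mask for both via padata_alloc_possible(), and padata_start() returns an error if the masks are invalid. A minimal kernel-style sketch of the new calling convention, using only the functions declared above; the workqueue, callbacks and error handling are illustrative only:

    #include <linux/padata.h>
    #include <linux/workqueue.h>

    struct my_job {
            struct padata_priv padata;      /* must be embedded */
            /* ... job specific data ... */
    };

    static void my_parallel(struct padata_priv *padata)
    {
            /* runs on a cpu from cpumask.pcpu, with BHs off */
            padata_do_serial(padata);       /* hand back for ordered completion */
    }

    static void my_serial(struct padata_priv *padata)
    {
            /* runs on cb_cpu, in submission order */
    }

    static struct padata_instance *pinst;

    static int my_init(struct workqueue_struct *wq)
    {
            int err;

            pinst = padata_alloc_possible(wq); /* pcpu = cbcpu = cpu_possible_mask */
            if (!pinst)
                    return -ENOMEM;

            err = padata_start(pinst);      /* fails if the cpumasks are invalid */
            if (err)
                    padata_free(pinst);
            return err;
    }

    static int my_submit(struct my_job *job, int cb_cpu)
    {
            job->padata.parallel = my_parallel;
            job->padata.serial = my_serial;

            /* cb_cpu must be in the instance's serial (cbcpu) cpumask */
            return padata_do_parallel(pinst, &job->padata, cb_cpu);
    }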
+563 -202
kernel/padata.c
··· 26 #include <linux/mutex.h> 27 #include <linux/sched.h> 28 #include <linux/slab.h> 29 #include <linux/rcupdate.h> 30 31 - #define MAX_SEQ_NR INT_MAX - NR_CPUS 32 #define MAX_OBJ_NUM 1000 33 34 static int padata_index_to_cpu(struct parallel_data *pd, int cpu_index) 35 { 36 int cpu, target_cpu; 37 38 - target_cpu = cpumask_first(pd->cpumask); 39 for (cpu = 0; cpu < cpu_index; cpu++) 40 - target_cpu = cpumask_next(target_cpu, pd->cpumask); 41 42 return target_cpu; 43 } ··· 54 * Hash the sequence numbers to the cpus by taking 55 * seq_nr mod. number of cpus in use. 56 */ 57 - cpu_index = padata->seq_nr % cpumask_weight(pd->cpumask); 58 59 return padata_index_to_cpu(pd, cpu_index); 60 } 61 62 - static void padata_parallel_worker(struct work_struct *work) 63 { 64 - struct padata_queue *queue; 65 struct parallel_data *pd; 66 struct padata_instance *pinst; 67 LIST_HEAD(local_list); 68 69 local_bh_disable(); 70 - queue = container_of(work, struct padata_queue, pwork); 71 - pd = queue->pd; 72 pinst = pd->pinst; 73 74 - spin_lock(&queue->parallel.lock); 75 - list_replace_init(&queue->parallel.list, &local_list); 76 - spin_unlock(&queue->parallel.lock); 77 78 while (!list_empty(&local_list)) { 79 struct padata_priv *padata; ··· 96 * @pinst: padata instance 97 * @padata: object to be parallelized 98 * @cb_cpu: cpu the serialization callback function will run on, 99 - * must be in the cpumask of padata. 100 * 101 * The parallelization callback function will run with BHs off. 102 * Note: Every object which is parallelized by padata_do_parallel ··· 106 struct padata_priv *padata, int cb_cpu) 107 { 108 int target_cpu, err; 109 - struct padata_queue *queue; 110 struct parallel_data *pd; 111 112 rcu_read_lock_bh(); 113 114 pd = rcu_dereference(pinst->pd); 115 116 - err = 0; 117 - if (!(pinst->flags & PADATA_INIT)) 118 goto out; 119 120 err = -EBUSY; ··· 127 if (atomic_read(&pd->refcnt) >= MAX_OBJ_NUM) 128 goto out; 129 130 - err = -EINVAL; 131 - if (!cpumask_test_cpu(cb_cpu, pd->cpumask)) 132 - goto out; 133 - 134 - err = -EINPROGRESS; 135 atomic_inc(&pd->refcnt); 136 padata->pd = pd; 137 padata->cb_cpu = cb_cpu; ··· 138 padata->seq_nr = atomic_inc_return(&pd->seq_nr); 139 140 target_cpu = padata_cpu_hash(padata); 141 - queue = per_cpu_ptr(pd->queue, target_cpu); 142 143 spin_lock(&queue->parallel.lock); 144 list_add_tail(&padata->list, &queue->parallel.list); 145 spin_unlock(&queue->parallel.lock); 146 147 - queue_work_on(target_cpu, pinst->wq, &queue->pwork); 148 149 out: 150 rcu_read_unlock_bh(); ··· 172 */ 173 static struct padata_priv *padata_get_next(struct parallel_data *pd) 174 { 175 - int cpu, num_cpus, empty, calc_seq_nr; 176 - int seq_nr, next_nr, overrun, next_overrun; 177 - struct padata_queue *queue, *next_queue; 178 struct padata_priv *padata; 179 struct padata_list *reorder; 180 181 - empty = 0; 182 - next_nr = -1; 183 - next_overrun = 0; 184 - next_queue = NULL; 185 186 - num_cpus = cpumask_weight(pd->cpumask); 187 188 - for_each_cpu(cpu, pd->cpumask) { 189 - queue = per_cpu_ptr(pd->queue, cpu); 190 - reorder = &queue->reorder; 191 - 192 - /* 193 - * Calculate the seq_nr of the object that should be 194 - * next in this reorder queue. 
195 - */ 196 - overrun = 0; 197 - calc_seq_nr = (atomic_read(&queue->num_obj) * num_cpus) 198 - + queue->cpu_index; 199 - 200 - if (unlikely(calc_seq_nr > pd->max_seq_nr)) { 201 - calc_seq_nr = calc_seq_nr - pd->max_seq_nr - 1; 202 - overrun = 1; 203 - } 204 - 205 - if (!list_empty(&reorder->list)) { 206 - padata = list_entry(reorder->list.next, 207 - struct padata_priv, list); 208 - 209 - seq_nr = padata->seq_nr; 210 - BUG_ON(calc_seq_nr != seq_nr); 211 - } else { 212 - seq_nr = calc_seq_nr; 213 - empty++; 214 - } 215 - 216 - if (next_nr < 0 || seq_nr < next_nr 217 - || (next_overrun && !overrun)) { 218 - next_nr = seq_nr; 219 - next_overrun = overrun; 220 - next_queue = queue; 221 - } 222 } 223 224 padata = NULL; 225 - 226 - if (empty == num_cpus) 227 - goto out; 228 229 reorder = &next_queue->reorder; 230 ··· 205 padata = list_entry(reorder->list.next, 206 struct padata_priv, list); 207 208 - if (unlikely(next_overrun)) { 209 - for_each_cpu(cpu, pd->cpumask) { 210 - queue = per_cpu_ptr(pd->queue, cpu); 211 - atomic_set(&queue->num_obj, 0); 212 - } 213 - } 214 215 spin_lock(&reorder->lock); 216 list_del_init(&padata->list); 217 atomic_dec(&pd->reorder_objects); 218 spin_unlock(&reorder->lock); 219 220 - atomic_inc(&next_queue->num_obj); 221 222 goto out; 223 } 224 225 - queue = per_cpu_ptr(pd->queue, smp_processor_id()); 226 if (queue->cpu_index == next_queue->cpu_index) { 227 padata = ERR_PTR(-ENODATA); 228 goto out; ··· 231 static void padata_reorder(struct parallel_data *pd) 232 { 233 struct padata_priv *padata; 234 - struct padata_queue *queue; 235 struct padata_instance *pinst = pd->pinst; 236 237 /* ··· 270 return; 271 } 272 273 - queue = per_cpu_ptr(pd->queue, padata->cb_cpu); 274 275 - spin_lock(&queue->serial.lock); 276 - list_add_tail(&padata->list, &queue->serial.list); 277 - spin_unlock(&queue->serial.lock); 278 279 - queue_work_on(padata->cb_cpu, pinst->wq, &queue->swork); 280 } 281 282 spin_unlock_bh(&pd->lock); ··· 302 padata_reorder(pd); 303 } 304 305 - static void padata_serial_worker(struct work_struct *work) 306 { 307 - struct padata_queue *queue; 308 struct parallel_data *pd; 309 LIST_HEAD(local_list); 310 311 local_bh_disable(); 312 - queue = container_of(work, struct padata_queue, swork); 313 - pd = queue->pd; 314 315 - spin_lock(&queue->serial.lock); 316 - list_replace_init(&queue->serial.list, &local_list); 317 - spin_unlock(&queue->serial.lock); 318 319 while (!list_empty(&local_list)) { 320 struct padata_priv *padata; ··· 341 void padata_do_serial(struct padata_priv *padata) 342 { 343 int cpu; 344 - struct padata_queue *queue; 345 struct parallel_data *pd; 346 347 pd = padata->pd; 348 349 cpu = get_cpu(); 350 - queue = per_cpu_ptr(pd->queue, cpu); 351 352 - spin_lock(&queue->reorder.lock); 353 atomic_inc(&pd->reorder_objects); 354 - list_add_tail(&padata->list, &queue->reorder.list); 355 - spin_unlock(&queue->reorder.lock); 356 357 put_cpu(); 358 ··· 360 } 361 EXPORT_SYMBOL(padata_do_serial); 362 363 - /* Allocate and initialize the internal cpumask dependend resources. 
*/ 364 - static struct parallel_data *padata_alloc_pd(struct padata_instance *pinst, 365 - const struct cpumask *cpumask) 366 { 367 - int cpu, cpu_index, num_cpus; 368 - struct padata_queue *queue; 369 - struct parallel_data *pd; 370 371 cpu_index = 0; 372 373 pd = kzalloc(sizeof(struct parallel_data), GFP_KERNEL); 374 if (!pd) 375 goto err; 376 377 - pd->queue = alloc_percpu(struct padata_queue); 378 - if (!pd->queue) 379 goto err_free_pd; 380 381 - if (!alloc_cpumask_var(&pd->cpumask, GFP_KERNEL)) 382 - goto err_free_queue; 383 384 - cpumask_and(pd->cpumask, cpumask, cpu_active_mask); 385 - 386 - for_each_cpu(cpu, pd->cpumask) { 387 - queue = per_cpu_ptr(pd->queue, cpu); 388 - 389 - queue->pd = pd; 390 - 391 - queue->cpu_index = cpu_index; 392 - cpu_index++; 393 - 394 - INIT_LIST_HEAD(&queue->reorder.list); 395 - INIT_LIST_HEAD(&queue->parallel.list); 396 - INIT_LIST_HEAD(&queue->serial.list); 397 - spin_lock_init(&queue->reorder.lock); 398 - spin_lock_init(&queue->parallel.lock); 399 - spin_lock_init(&queue->serial.lock); 400 - 401 - INIT_WORK(&queue->pwork, padata_parallel_worker); 402 - INIT_WORK(&queue->swork, padata_serial_worker); 403 - atomic_set(&queue->num_obj, 0); 404 - } 405 - 406 - num_cpus = cpumask_weight(pd->cpumask); 407 - pd->max_seq_nr = (MAX_SEQ_NR / num_cpus) * num_cpus - 1; 408 - 409 setup_timer(&pd->timer, padata_reorder_timer, (unsigned long)pd); 410 atomic_set(&pd->seq_nr, -1); 411 atomic_set(&pd->reorder_objects, 0); ··· 452 453 return pd; 454 455 - err_free_queue: 456 - free_percpu(pd->queue); 457 err_free_pd: 458 kfree(pd); 459 err: ··· 464 465 static void padata_free_pd(struct parallel_data *pd) 466 { 467 - free_cpumask_var(pd->cpumask); 468 - free_percpu(pd->queue); 469 kfree(pd); 470 } 471 ··· 475 static void padata_flush_queues(struct parallel_data *pd) 476 { 477 int cpu; 478 - struct padata_queue *queue; 479 480 - for_each_cpu(cpu, pd->cpumask) { 481 - queue = per_cpu_ptr(pd->queue, cpu); 482 - flush_work(&queue->pwork); 483 } 484 485 del_timer_sync(&pd->timer); ··· 488 if (atomic_read(&pd->reorder_objects)) 489 padata_reorder(pd); 490 491 - for_each_cpu(cpu, pd->cpumask) { 492 - queue = per_cpu_ptr(pd->queue, cpu); 493 - flush_work(&queue->swork); 494 } 495 496 BUG_ON(atomic_read(&pd->refcnt) != 0); 497 } 498 499 /* Replace the internal control stucture with a new one. 
*/ ··· 520 struct parallel_data *pd_new) 521 { 522 struct parallel_data *pd_old = pinst->pd; 523 524 pinst->flags |= PADATA_RESET; 525 ··· 528 529 synchronize_rcu(); 530 531 padata_flush_queues(pd_old); 532 padata_free_pd(pd_old); 533 534 pinst->flags &= ~PADATA_RESET; 535 } 536 537 /** 538 - * padata_set_cpumask - set the cpumask that padata should use 539 * 540 - * @pinst: padata instance 541 - * @cpumask: the cpumask to use 542 */ 543 - int padata_set_cpumask(struct padata_instance *pinst, 544 - cpumask_var_t cpumask) 545 { 546 - struct parallel_data *pd; 547 - int err = 0; 548 549 - mutex_lock(&pinst->lock); 550 551 - get_online_cpus(); 552 553 - pd = padata_alloc_pd(pinst, cpumask); 554 - if (!pd) { 555 - err = -ENOMEM; 556 - goto out; 557 } 558 559 - cpumask_copy(pinst->cpumask, cpumask); 560 561 padata_replace(pinst, pd); 562 563 out: 564 put_online_cpus(); 565 - 566 mutex_unlock(&pinst->lock); 567 568 return err; ··· 695 struct parallel_data *pd; 696 697 if (cpumask_test_cpu(cpu, cpu_active_mask)) { 698 - pd = padata_alloc_pd(pinst, pinst->cpumask); 699 if (!pd) 700 return -ENOMEM; 701 702 padata_replace(pinst, pd); 703 } 704 705 return 0; 706 } 707 708 - /** 709 - * padata_add_cpu - add a cpu to the padata cpumask 710 * 711 * @pinst: padata instance 712 * @cpu: cpu to add 713 */ 714 - int padata_add_cpu(struct padata_instance *pinst, int cpu) 715 { 716 int err; 717 718 mutex_lock(&pinst->lock); 719 720 get_online_cpus(); 721 - cpumask_set_cpu(cpu, pinst->cpumask); 722 err = __padata_add_cpu(pinst, cpu); 723 put_online_cpus(); 724 ··· 748 749 static int __padata_remove_cpu(struct padata_instance *pinst, int cpu) 750 { 751 - struct parallel_data *pd; 752 753 if (cpumask_test_cpu(cpu, cpu_online_mask)) { 754 - pd = padata_alloc_pd(pinst, pinst->cpumask); 755 if (!pd) 756 return -ENOMEM; 757 ··· 767 return 0; 768 } 769 770 - /** 771 - * padata_remove_cpu - remove a cpu from the padata cpumask 772 * 773 * @pinst: padata instance 774 * @cpu: cpu to remove 775 */ 776 - int padata_remove_cpu(struct padata_instance *pinst, int cpu) 777 { 778 int err; 779 780 mutex_lock(&pinst->lock); 781 782 get_online_cpus(); 783 - cpumask_clear_cpu(cpu, pinst->cpumask); 784 err = __padata_remove_cpu(pinst, cpu); 785 put_online_cpus(); 786 ··· 807 * 808 * @pinst: padata instance to start 809 */ 810 - void padata_start(struct padata_instance *pinst) 811 { 812 mutex_lock(&pinst->lock); 813 - pinst->flags |= PADATA_INIT; 814 mutex_unlock(&pinst->lock); 815 } 816 EXPORT_SYMBOL(padata_start); 817 ··· 832 void padata_stop(struct padata_instance *pinst) 833 { 834 mutex_lock(&pinst->lock); 835 - pinst->flags &= ~PADATA_INIT; 836 mutex_unlock(&pinst->lock); 837 } 838 EXPORT_SYMBOL(padata_stop); 839 840 #ifdef CONFIG_HOTPLUG_CPU 841 static int padata_cpu_callback(struct notifier_block *nfb, 842 unsigned long action, void *hcpu) 843 { ··· 858 switch (action) { 859 case CPU_ONLINE: 860 case CPU_ONLINE_FROZEN: 861 - if (!cpumask_test_cpu(cpu, pinst->cpumask)) 862 break; 863 mutex_lock(&pinst->lock); 864 err = __padata_add_cpu(pinst, cpu); ··· 869 870 case CPU_DOWN_PREPARE: 871 case CPU_DOWN_PREPARE_FROZEN: 872 - if (!cpumask_test_cpu(cpu, pinst->cpumask)) 873 break; 874 mutex_lock(&pinst->lock); 875 err = __padata_remove_cpu(pinst, cpu); ··· 880 881 case CPU_UP_CANCELED: 882 case CPU_UP_CANCELED_FROZEN: 883 - if (!cpumask_test_cpu(cpu, pinst->cpumask)) 884 break; 885 mutex_lock(&pinst->lock); 886 __padata_remove_cpu(pinst, cpu); ··· 888 889 case CPU_DOWN_FAILED: 890 case CPU_DOWN_FAILED_FROZEN: 891 - if 
(!cpumask_test_cpu(cpu, pinst->cpumask)) 892 break; 893 mutex_lock(&pinst->lock); 894 __padata_add_cpu(pinst, cpu); ··· 899 } 900 #endif 901 902 - /** 903 - * padata_alloc - allocate and initialize a padata instance 904 - * 905 - * @cpumask: cpumask that padata uses for parallelization 906 - * @wq: workqueue to use for the allocated padata instance 907 */ 908 - struct padata_instance *padata_alloc(const struct cpumask *cpumask, 909 - struct workqueue_struct *wq) 910 { 911 struct padata_instance *pinst; 912 - struct parallel_data *pd; 913 914 pinst = kzalloc(sizeof(struct padata_instance), GFP_KERNEL); 915 if (!pinst) 916 goto err; 917 918 get_online_cpus(); 919 - 920 - pd = padata_alloc_pd(pinst, cpumask); 921 - if (!pd) 922 goto err_free_inst; 923 924 - if (!alloc_cpumask_var(&pinst->cpumask, GFP_KERNEL)) 925 - goto err_free_pd; 926 927 rcu_assign_pointer(pinst->pd, pd); 928 929 pinst->wq = wq; 930 931 - cpumask_copy(pinst->cpumask, cpumask); 932 933 pinst->flags = 0; 934 ··· 1106 1107 put_online_cpus(); 1108 1109 mutex_init(&pinst->lock); 1110 1111 return pinst; 1112 1113 - err_free_pd: 1114 - padata_free_pd(pd); 1115 err_free_inst: 1116 kfree(pinst); 1117 put_online_cpus(); ··· 1130 */ 1131 void padata_free(struct padata_instance *pinst) 1132 { 1133 - padata_stop(pinst); 1134 - 1135 - synchronize_rcu(); 1136 - 1137 - #ifdef CONFIG_HOTPLUG_CPU 1138 - unregister_hotcpu_notifier(&pinst->cpu_notifier); 1139 - #endif 1140 - get_online_cpus(); 1141 - padata_flush_queues(pinst->pd); 1142 - put_online_cpus(); 1143 - 1144 - padata_free_pd(pinst->pd); 1145 - free_cpumask_var(pinst->cpumask); 1146 - kfree(pinst); 1147 } 1148 EXPORT_SYMBOL(padata_free);
··· 26 #include <linux/mutex.h> 27 #include <linux/sched.h> 28 #include <linux/slab.h> 29 + #include <linux/sysfs.h> 30 #include <linux/rcupdate.h> 31 32 + #define MAX_SEQ_NR (INT_MAX - NR_CPUS) 33 #define MAX_OBJ_NUM 1000 34 35 static int padata_index_to_cpu(struct parallel_data *pd, int cpu_index) 36 { 37 int cpu, target_cpu; 38 39 + target_cpu = cpumask_first(pd->cpumask.pcpu); 40 for (cpu = 0; cpu < cpu_index; cpu++) 41 + target_cpu = cpumask_next(target_cpu, pd->cpumask.pcpu); 42 43 return target_cpu; 44 } ··· 53 * Hash the sequence numbers to the cpus by taking 54 * seq_nr mod. number of cpus in use. 55 */ 56 + cpu_index = padata->seq_nr % cpumask_weight(pd->cpumask.pcpu); 57 58 return padata_index_to_cpu(pd, cpu_index); 59 } 60 61 + static void padata_parallel_worker(struct work_struct *parallel_work) 62 { 63 + struct padata_parallel_queue *pqueue; 64 struct parallel_data *pd; 65 struct padata_instance *pinst; 66 LIST_HEAD(local_list); 67 68 local_bh_disable(); 69 + pqueue = container_of(parallel_work, 70 + struct padata_parallel_queue, work); 71 + pd = pqueue->pd; 72 pinst = pd->pinst; 73 74 + spin_lock(&pqueue->parallel.lock); 75 + list_replace_init(&pqueue->parallel.list, &local_list); 76 + spin_unlock(&pqueue->parallel.lock); 77 78 while (!list_empty(&local_list)) { 79 struct padata_priv *padata; ··· 94 * @pinst: padata instance 95 * @padata: object to be parallelized 96 * @cb_cpu: cpu the serialization callback function will run on, 97 + * must be in the serial cpumask of padata(i.e. cpumask.cbcpu). 98 * 99 * The parallelization callback function will run with BHs off. 100 * Note: Every object which is parallelized by padata_do_parallel ··· 104 struct padata_priv *padata, int cb_cpu) 105 { 106 int target_cpu, err; 107 + struct padata_parallel_queue *queue; 108 struct parallel_data *pd; 109 110 rcu_read_lock_bh(); 111 112 pd = rcu_dereference(pinst->pd); 113 114 + err = -EINVAL; 115 + if (!(pinst->flags & PADATA_INIT) || pinst->flags & PADATA_INVALID) 116 + goto out; 117 + 118 + if (!cpumask_test_cpu(cb_cpu, pd->cpumask.cbcpu)) 119 goto out; 120 121 err = -EBUSY; ··· 122 if (atomic_read(&pd->refcnt) >= MAX_OBJ_NUM) 123 goto out; 124 125 + err = 0; 126 atomic_inc(&pd->refcnt); 127 padata->pd = pd; 128 padata->cb_cpu = cb_cpu; ··· 137 padata->seq_nr = atomic_inc_return(&pd->seq_nr); 138 139 target_cpu = padata_cpu_hash(padata); 140 + queue = per_cpu_ptr(pd->pqueue, target_cpu); 141 142 spin_lock(&queue->parallel.lock); 143 list_add_tail(&padata->list, &queue->parallel.list); 144 spin_unlock(&queue->parallel.lock); 145 146 + queue_work_on(target_cpu, pinst->wq, &queue->work); 147 148 out: 149 rcu_read_unlock_bh(); ··· 171 */ 172 static struct padata_priv *padata_get_next(struct parallel_data *pd) 173 { 174 + int cpu, num_cpus; 175 + int next_nr, next_index; 176 + struct padata_parallel_queue *queue, *next_queue; 177 struct padata_priv *padata; 178 struct padata_list *reorder; 179 180 + num_cpus = cpumask_weight(pd->cpumask.pcpu); 181 182 + /* 183 + * Calculate the percpu reorder queue and the sequence 184 + * number of the next object. 
185 + */ 186 + next_nr = pd->processed; 187 + next_index = next_nr % num_cpus; 188 + cpu = padata_index_to_cpu(pd, next_index); 189 + next_queue = per_cpu_ptr(pd->pqueue, cpu); 190 191 + if (unlikely(next_nr > pd->max_seq_nr)) { 192 + next_nr = next_nr - pd->max_seq_nr - 1; 193 + next_index = next_nr % num_cpus; 194 + cpu = padata_index_to_cpu(pd, next_index); 195 + next_queue = per_cpu_ptr(pd->pqueue, cpu); 196 + pd->processed = 0; 197 } 198 199 padata = NULL; 200 201 reorder = &next_queue->reorder; 202 ··· 231 padata = list_entry(reorder->list.next, 232 struct padata_priv, list); 233 234 + BUG_ON(next_nr != padata->seq_nr); 235 236 spin_lock(&reorder->lock); 237 list_del_init(&padata->list); 238 atomic_dec(&pd->reorder_objects); 239 spin_unlock(&reorder->lock); 240 241 + pd->processed++; 242 243 goto out; 244 } 245 246 + queue = per_cpu_ptr(pd->pqueue, smp_processor_id()); 247 if (queue->cpu_index == next_queue->cpu_index) { 248 padata = ERR_PTR(-ENODATA); 249 goto out; ··· 262 static void padata_reorder(struct parallel_data *pd) 263 { 264 struct padata_priv *padata; 265 + struct padata_serial_queue *squeue; 266 struct padata_instance *pinst = pd->pinst; 267 268 /* ··· 301 return; 302 } 303 304 + squeue = per_cpu_ptr(pd->squeue, padata->cb_cpu); 305 306 + spin_lock(&squeue->serial.lock); 307 + list_add_tail(&padata->list, &squeue->serial.list); 308 + spin_unlock(&squeue->serial.lock); 309 310 + queue_work_on(padata->cb_cpu, pinst->wq, &squeue->work); 311 } 312 313 spin_unlock_bh(&pd->lock); ··· 333 padata_reorder(pd); 334 } 335 336 + static void padata_serial_worker(struct work_struct *serial_work) 337 { 338 + struct padata_serial_queue *squeue; 339 struct parallel_data *pd; 340 LIST_HEAD(local_list); 341 342 local_bh_disable(); 343 + squeue = container_of(serial_work, struct padata_serial_queue, work); 344 + pd = squeue->pd; 345 346 + spin_lock(&squeue->serial.lock); 347 + list_replace_init(&squeue->serial.list, &local_list); 348 + spin_unlock(&squeue->serial.lock); 349 350 while (!list_empty(&local_list)) { 351 struct padata_priv *padata; ··· 372 void padata_do_serial(struct padata_priv *padata) 373 { 374 int cpu; 375 + struct padata_parallel_queue *pqueue; 376 struct parallel_data *pd; 377 378 pd = padata->pd; 379 380 cpu = get_cpu(); 381 + pqueue = per_cpu_ptr(pd->pqueue, cpu); 382 383 + spin_lock(&pqueue->reorder.lock); 384 atomic_inc(&pd->reorder_objects); 385 + list_add_tail(&padata->list, &pqueue->reorder.list); 386 + spin_unlock(&pqueue->reorder.lock); 387 388 put_cpu(); 389 ··· 391 } 392 EXPORT_SYMBOL(padata_do_serial); 393 394 + static int padata_setup_cpumasks(struct parallel_data *pd, 395 + const struct cpumask *pcpumask, 396 + const struct cpumask *cbcpumask) 397 { 398 + if (!alloc_cpumask_var(&pd->cpumask.pcpu, GFP_KERNEL)) 399 + return -ENOMEM; 400 + 401 + cpumask_and(pd->cpumask.pcpu, pcpumask, cpu_active_mask); 402 + if (!alloc_cpumask_var(&pd->cpumask.cbcpu, GFP_KERNEL)) { 403 + free_cpumask_var(pd->cpumask.cbcpu); 404 + return -ENOMEM; 405 + } 406 + 407 + cpumask_and(pd->cpumask.cbcpu, cbcpumask, cpu_active_mask); 408 + return 0; 409 + } 410 + 411 + static void __padata_list_init(struct padata_list *pd_list) 412 + { 413 + INIT_LIST_HEAD(&pd_list->list); 414 + spin_lock_init(&pd_list->lock); 415 + } 416 + 417 + /* Initialize all percpu queues used by serial workers */ 418 + static void padata_init_squeues(struct parallel_data *pd) 419 + { 420 + int cpu; 421 + struct padata_serial_queue *squeue; 422 + 423 + for_each_cpu(cpu, pd->cpumask.cbcpu) { 424 + squeue = 
per_cpu_ptr(pd->squeue, cpu); 425 + squeue->pd = pd; 426 + __padata_list_init(&squeue->serial); 427 + INIT_WORK(&squeue->work, padata_serial_worker); 428 + } 429 + } 430 + 431 + /* Initialize all percpu queues used by parallel workers */ 432 + static void padata_init_pqueues(struct parallel_data *pd) 433 + { 434 + int cpu_index, num_cpus, cpu; 435 + struct padata_parallel_queue *pqueue; 436 437 cpu_index = 0; 438 + for_each_cpu(cpu, pd->cpumask.pcpu) { 439 + pqueue = per_cpu_ptr(pd->pqueue, cpu); 440 + pqueue->pd = pd; 441 + pqueue->cpu_index = cpu_index; 442 + cpu_index++; 443 + 444 + __padata_list_init(&pqueue->reorder); 445 + __padata_list_init(&pqueue->parallel); 446 + INIT_WORK(&pqueue->work, padata_parallel_worker); 447 + atomic_set(&pqueue->num_obj, 0); 448 + } 449 + 450 + num_cpus = cpumask_weight(pd->cpumask.pcpu); 451 + pd->max_seq_nr = num_cpus ? (MAX_SEQ_NR / num_cpus) * num_cpus - 1 : 0; 452 + } 453 + 454 + /* Allocate and initialize the internal cpumask dependend resources. */ 455 + static struct parallel_data *padata_alloc_pd(struct padata_instance *pinst, 456 + const struct cpumask *pcpumask, 457 + const struct cpumask *cbcpumask) 458 + { 459 + struct parallel_data *pd; 460 461 pd = kzalloc(sizeof(struct parallel_data), GFP_KERNEL); 462 if (!pd) 463 goto err; 464 465 + pd->pqueue = alloc_percpu(struct padata_parallel_queue); 466 + if (!pd->pqueue) 467 goto err_free_pd; 468 469 + pd->squeue = alloc_percpu(struct padata_serial_queue); 470 + if (!pd->squeue) 471 + goto err_free_pqueue; 472 + if (padata_setup_cpumasks(pd, pcpumask, cbcpumask) < 0) 473 + goto err_free_squeue; 474 475 + padata_init_pqueues(pd); 476 + padata_init_squeues(pd); 477 setup_timer(&pd->timer, padata_reorder_timer, (unsigned long)pd); 478 atomic_set(&pd->seq_nr, -1); 479 atomic_set(&pd->reorder_objects, 0); ··· 446 447 return pd; 448 449 + err_free_squeue: 450 + free_percpu(pd->squeue); 451 + err_free_pqueue: 452 + free_percpu(pd->pqueue); 453 err_free_pd: 454 kfree(pd); 455 err: ··· 456 457 static void padata_free_pd(struct parallel_data *pd) 458 { 459 + free_cpumask_var(pd->cpumask.pcpu); 460 + free_cpumask_var(pd->cpumask.cbcpu); 461 + free_percpu(pd->pqueue); 462 + free_percpu(pd->squeue); 463 kfree(pd); 464 } 465 ··· 465 static void padata_flush_queues(struct parallel_data *pd) 466 { 467 int cpu; 468 + struct padata_parallel_queue *pqueue; 469 + struct padata_serial_queue *squeue; 470 471 + for_each_cpu(cpu, pd->cpumask.pcpu) { 472 + pqueue = per_cpu_ptr(pd->pqueue, cpu); 473 + flush_work(&pqueue->work); 474 } 475 476 del_timer_sync(&pd->timer); ··· 477 if (atomic_read(&pd->reorder_objects)) 478 padata_reorder(pd); 479 480 + for_each_cpu(cpu, pd->cpumask.cbcpu) { 481 + squeue = per_cpu_ptr(pd->squeue, cpu); 482 + flush_work(&squeue->work); 483 } 484 485 BUG_ON(atomic_read(&pd->refcnt) != 0); 486 + } 487 + 488 + static void __padata_start(struct padata_instance *pinst) 489 + { 490 + pinst->flags |= PADATA_INIT; 491 + } 492 + 493 + static void __padata_stop(struct padata_instance *pinst) 494 + { 495 + if (!(pinst->flags & PADATA_INIT)) 496 + return; 497 + 498 + pinst->flags &= ~PADATA_INIT; 499 + 500 + synchronize_rcu(); 501 + 502 + get_online_cpus(); 503 + padata_flush_queues(pinst->pd); 504 + put_online_cpus(); 505 } 506 507 /* Replace the internal control stucture with a new one. 
*/ ··· 490 struct parallel_data *pd_new) 491 { 492 struct parallel_data *pd_old = pinst->pd; 493 + int notification_mask = 0; 494 495 pinst->flags |= PADATA_RESET; 496 ··· 497 498 synchronize_rcu(); 499 500 + if (!cpumask_equal(pd_old->cpumask.pcpu, pd_new->cpumask.pcpu)) 501 + notification_mask |= PADATA_CPU_PARALLEL; 502 + if (!cpumask_equal(pd_old->cpumask.cbcpu, pd_new->cpumask.cbcpu)) 503 + notification_mask |= PADATA_CPU_SERIAL; 504 + 505 padata_flush_queues(pd_old); 506 padata_free_pd(pd_old); 507 + 508 + if (notification_mask) 509 + blocking_notifier_call_chain(&pinst->cpumask_change_notifier, 510 + notification_mask, 511 + &pd_new->cpumask); 512 513 pinst->flags &= ~PADATA_RESET; 514 } 515 516 /** 517 + * padata_register_cpumask_notifier - Registers a notifier that will be called 518 + * if either pcpu or cbcpu or both cpumasks change. 519 * 520 + * @pinst: A poineter to padata instance 521 + * @nblock: A pointer to notifier block. 522 */ 523 + int padata_register_cpumask_notifier(struct padata_instance *pinst, 524 + struct notifier_block *nblock) 525 { 526 + return blocking_notifier_chain_register(&pinst->cpumask_change_notifier, 527 + nblock); 528 + } 529 + EXPORT_SYMBOL(padata_register_cpumask_notifier); 530 531 + /** 532 + * padata_unregister_cpumask_notifier - Unregisters cpumask notifier 533 + * registered earlier using padata_register_cpumask_notifier 534 + * 535 + * @pinst: A pointer to data instance. 536 + * @nlock: A pointer to notifier block. 537 + */ 538 + int padata_unregister_cpumask_notifier(struct padata_instance *pinst, 539 + struct notifier_block *nblock) 540 + { 541 + return blocking_notifier_chain_unregister( 542 + &pinst->cpumask_change_notifier, 543 + nblock); 544 + } 545 + EXPORT_SYMBOL(padata_unregister_cpumask_notifier); 546 547 548 + /* If cpumask contains no active cpu, we mark the instance as invalid. */ 549 + static bool padata_validate_cpumask(struct padata_instance *pinst, 550 + const struct cpumask *cpumask) 551 + { 552 + if (!cpumask_intersects(cpumask, cpu_active_mask)) { 553 + pinst->flags |= PADATA_INVALID; 554 + return false; 555 } 556 557 + pinst->flags &= ~PADATA_INVALID; 558 + return true; 559 + } 560 + 561 + static int __padata_set_cpumasks(struct padata_instance *pinst, 562 + cpumask_var_t pcpumask, 563 + cpumask_var_t cbcpumask) 564 + { 565 + int valid; 566 + struct parallel_data *pd; 567 + 568 + valid = padata_validate_cpumask(pinst, pcpumask); 569 + if (!valid) { 570 + __padata_stop(pinst); 571 + goto out_replace; 572 + } 573 + 574 + valid = padata_validate_cpumask(pinst, cbcpumask); 575 + if (!valid) 576 + __padata_stop(pinst); 577 + 578 + out_replace: 579 + pd = padata_alloc_pd(pinst, pcpumask, cbcpumask); 580 + if (!pd) 581 + return -ENOMEM; 582 + 583 + cpumask_copy(pinst->cpumask.pcpu, pcpumask); 584 + cpumask_copy(pinst->cpumask.cbcpu, cbcpumask); 585 586 padata_replace(pinst, pd); 587 588 + if (valid) 589 + __padata_start(pinst); 590 + 591 + return 0; 592 + } 593 + 594 + /** 595 + * padata_set_cpumasks - Set both parallel and serial cpumasks. The first 596 + * one is used by parallel workers and the second one 597 + * by the wokers doing serialization. 
598 + * 599 + * @pinst: padata instance 600 + * @pcpumask: the cpumask to use for parallel workers 601 + * @cbcpumask: the cpumsak to use for serial workers 602 + */ 603 + int padata_set_cpumasks(struct padata_instance *pinst, cpumask_var_t pcpumask, 604 + cpumask_var_t cbcpumask) 605 + { 606 + int err; 607 + 608 + mutex_lock(&pinst->lock); 609 + get_online_cpus(); 610 + 611 + err = __padata_set_cpumasks(pinst, pcpumask, cbcpumask); 612 + 613 + put_online_cpus(); 614 + mutex_unlock(&pinst->lock); 615 + 616 + return err; 617 + 618 + } 619 + EXPORT_SYMBOL(padata_set_cpumasks); 620 + 621 + /** 622 + * padata_set_cpumask: Sets specified by @cpumask_type cpumask to the value 623 + * equivalent to @cpumask. 624 + * 625 + * @pinst: padata instance 626 + * @cpumask_type: PADATA_CPU_SERIAL or PADATA_CPU_PARALLEL corresponding 627 + * to parallel and serial cpumasks respectively. 628 + * @cpumask: the cpumask to use 629 + */ 630 + int padata_set_cpumask(struct padata_instance *pinst, int cpumask_type, 631 + cpumask_var_t cpumask) 632 + { 633 + struct cpumask *serial_mask, *parallel_mask; 634 + int err = -EINVAL; 635 + 636 + mutex_lock(&pinst->lock); 637 + get_online_cpus(); 638 + 639 + switch (cpumask_type) { 640 + case PADATA_CPU_PARALLEL: 641 + serial_mask = pinst->cpumask.cbcpu; 642 + parallel_mask = cpumask; 643 + break; 644 + case PADATA_CPU_SERIAL: 645 + parallel_mask = pinst->cpumask.pcpu; 646 + serial_mask = cpumask; 647 + break; 648 + default: 649 + goto out; 650 + } 651 + 652 + err = __padata_set_cpumasks(pinst, parallel_mask, serial_mask); 653 + 654 out: 655 put_online_cpus(); 656 mutex_unlock(&pinst->lock); 657 658 return err; ··· 543 struct parallel_data *pd; 544 545 if (cpumask_test_cpu(cpu, cpu_active_mask)) { 546 + pd = padata_alloc_pd(pinst, pinst->cpumask.pcpu, 547 + pinst->cpumask.cbcpu); 548 if (!pd) 549 return -ENOMEM; 550 551 padata_replace(pinst, pd); 552 + 553 + if (padata_validate_cpumask(pinst, pinst->cpumask.pcpu) && 554 + padata_validate_cpumask(pinst, pinst->cpumask.cbcpu)) 555 + __padata_start(pinst); 556 } 557 558 return 0; 559 } 560 561 + /** 562 + * padata_add_cpu - add a cpu to one or both(parallel and serial) 563 + * padata cpumasks. 564 * 565 * @pinst: padata instance 566 * @cpu: cpu to add 567 + * @mask: bitmask of flags specifying to which cpumask @cpu shuld be added. 
568 + * The @mask may be any combination of the following flags: 569 + * PADATA_CPU_SERIAL - serial cpumask 570 + * PADATA_CPU_PARALLEL - parallel cpumask 571 */ 572 + 573 + int padata_add_cpu(struct padata_instance *pinst, int cpu, int mask) 574 { 575 int err; 576 + 577 + if (!(mask & (PADATA_CPU_SERIAL | PADATA_CPU_PARALLEL))) 578 + return -EINVAL; 579 580 mutex_lock(&pinst->lock); 581 582 get_online_cpus(); 583 + if (mask & PADATA_CPU_SERIAL) 584 + cpumask_set_cpu(cpu, pinst->cpumask.cbcpu); 585 + if (mask & PADATA_CPU_PARALLEL) 586 + cpumask_set_cpu(cpu, pinst->cpumask.pcpu); 587 + 588 err = __padata_add_cpu(pinst, cpu); 589 put_online_cpus(); 590 ··· 578 579 static int __padata_remove_cpu(struct padata_instance *pinst, int cpu) 580 { 581 + struct parallel_data *pd = NULL; 582 583 if (cpumask_test_cpu(cpu, cpu_online_mask)) { 584 + 585 + if (!padata_validate_cpumask(pinst, pinst->cpumask.pcpu) || 586 + !padata_validate_cpumask(pinst, pinst->cpumask.cbcpu)) 587 + __padata_stop(pinst); 588 + 589 + pd = padata_alloc_pd(pinst, pinst->cpumask.pcpu, 590 + pinst->cpumask.cbcpu); 591 if (!pd) 592 return -ENOMEM; 593 ··· 591 return 0; 592 } 593 594 + /** 595 + * padata_remove_cpu - remove a cpu from the one or both(serial and paralell) 596 + * padata cpumasks. 597 * 598 * @pinst: padata instance 599 * @cpu: cpu to remove 600 + * @mask: bitmask specifying from which cpumask @cpu should be removed 601 + * The @mask may be any combination of the following flags: 602 + * PADATA_CPU_SERIAL - serial cpumask 603 + * PADATA_CPU_PARALLEL - parallel cpumask 604 */ 605 + int padata_remove_cpu(struct padata_instance *pinst, int cpu, int mask) 606 { 607 int err; 608 + 609 + if (!(mask & (PADATA_CPU_SERIAL | PADATA_CPU_PARALLEL))) 610 + return -EINVAL; 611 612 mutex_lock(&pinst->lock); 613 614 get_online_cpus(); 615 + if (mask & PADATA_CPU_SERIAL) 616 + cpumask_clear_cpu(cpu, pinst->cpumask.cbcpu); 617 + if (mask & PADATA_CPU_PARALLEL) 618 + cpumask_clear_cpu(cpu, pinst->cpumask.pcpu); 619 + 620 err = __padata_remove_cpu(pinst, cpu); 621 put_online_cpus(); 622 ··· 619 * 620 * @pinst: padata instance to start 621 */ 622 + int padata_start(struct padata_instance *pinst) 623 { 624 + int err = 0; 625 + 626 mutex_lock(&pinst->lock); 627 + 628 + if (pinst->flags & PADATA_INVALID) 629 + err =-EINVAL; 630 + 631 + __padata_start(pinst); 632 + 633 mutex_unlock(&pinst->lock); 634 + 635 + return err; 636 } 637 EXPORT_SYMBOL(padata_start); 638 ··· 635 void padata_stop(struct padata_instance *pinst) 636 { 637 mutex_lock(&pinst->lock); 638 + __padata_stop(pinst); 639 mutex_unlock(&pinst->lock); 640 } 641 EXPORT_SYMBOL(padata_stop); 642 643 #ifdef CONFIG_HOTPLUG_CPU 644 + 645 + static inline int pinst_has_cpu(struct padata_instance *pinst, int cpu) 646 + { 647 + return cpumask_test_cpu(cpu, pinst->cpumask.pcpu) || 648 + cpumask_test_cpu(cpu, pinst->cpumask.cbcpu); 649 + } 650 + 651 + 652 static int padata_cpu_callback(struct notifier_block *nfb, 653 unsigned long action, void *hcpu) 654 { ··· 653 switch (action) { 654 case CPU_ONLINE: 655 case CPU_ONLINE_FROZEN: 656 + if (!pinst_has_cpu(pinst, cpu)) 657 break; 658 mutex_lock(&pinst->lock); 659 err = __padata_add_cpu(pinst, cpu); ··· 664 665 case CPU_DOWN_PREPARE: 666 case CPU_DOWN_PREPARE_FROZEN: 667 + if (!pinst_has_cpu(pinst, cpu)) 668 break; 669 mutex_lock(&pinst->lock); 670 err = __padata_remove_cpu(pinst, cpu); ··· 675 676 case CPU_UP_CANCELED: 677 case CPU_UP_CANCELED_FROZEN: 678 + if (!pinst_has_cpu(pinst, cpu)) 679 break; 680 mutex_lock(&pinst->lock); 681 
__padata_remove_cpu(pinst, cpu); ··· 683 684 case CPU_DOWN_FAILED: 685 case CPU_DOWN_FAILED_FROZEN: 686 + if (!pinst_has_cpu(pinst, cpu)) 687 break; 688 mutex_lock(&pinst->lock); 689 __padata_add_cpu(pinst, cpu); ··· 694 } 695 #endif 696 697 + static void __padata_free(struct padata_instance *pinst) 698 + { 699 + #ifdef CONFIG_HOTPLUG_CPU 700 + unregister_hotcpu_notifier(&pinst->cpu_notifier); 701 + #endif 702 + 703 + padata_stop(pinst); 704 + padata_free_pd(pinst->pd); 705 + free_cpumask_var(pinst->cpumask.pcpu); 706 + free_cpumask_var(pinst->cpumask.cbcpu); 707 + kfree(pinst); 708 + } 709 + 710 + #define kobj2pinst(_kobj) \ 711 + container_of(_kobj, struct padata_instance, kobj) 712 + #define attr2pentry(_attr) \ 713 + container_of(_attr, struct padata_sysfs_entry, attr) 714 + 715 + static void padata_sysfs_release(struct kobject *kobj) 716 + { 717 + struct padata_instance *pinst = kobj2pinst(kobj); 718 + __padata_free(pinst); 719 + } 720 + 721 + struct padata_sysfs_entry { 722 + struct attribute attr; 723 + ssize_t (*show)(struct padata_instance *, struct attribute *, char *); 724 + ssize_t (*store)(struct padata_instance *, struct attribute *, 725 + const char *, size_t); 726 + }; 727 + 728 + static ssize_t show_cpumask(struct padata_instance *pinst, 729 + struct attribute *attr, char *buf) 730 + { 731 + struct cpumask *cpumask; 732 + ssize_t len; 733 + 734 + mutex_lock(&pinst->lock); 735 + if (!strcmp(attr->name, "serial_cpumask")) 736 + cpumask = pinst->cpumask.cbcpu; 737 + else 738 + cpumask = pinst->cpumask.pcpu; 739 + 740 + len = bitmap_scnprintf(buf, PAGE_SIZE, cpumask_bits(cpumask), 741 + nr_cpu_ids); 742 + if (PAGE_SIZE - len < 2) 743 + len = -EINVAL; 744 + else 745 + len += sprintf(buf + len, "\n"); 746 + 747 + mutex_unlock(&pinst->lock); 748 + return len; 749 + } 750 + 751 + static ssize_t store_cpumask(struct padata_instance *pinst, 752 + struct attribute *attr, 753 + const char *buf, size_t count) 754 + { 755 + cpumask_var_t new_cpumask; 756 + ssize_t ret; 757 + int mask_type; 758 + 759 + if (!alloc_cpumask_var(&new_cpumask, GFP_KERNEL)) 760 + return -ENOMEM; 761 + 762 + ret = bitmap_parse(buf, count, cpumask_bits(new_cpumask), 763 + nr_cpumask_bits); 764 + if (ret < 0) 765 + goto out; 766 + 767 + mask_type = !strcmp(attr->name, "serial_cpumask") ? 
768 + PADATA_CPU_SERIAL : PADATA_CPU_PARALLEL; 769 + ret = padata_set_cpumask(pinst, mask_type, new_cpumask); 770 + if (!ret) 771 + ret = count; 772 + 773 + out: 774 + free_cpumask_var(new_cpumask); 775 + return ret; 776 + } 777 + 778 + #define PADATA_ATTR_RW(_name, _show_name, _store_name) \ 779 + static struct padata_sysfs_entry _name##_attr = \ 780 + __ATTR(_name, 0644, _show_name, _store_name) 781 + #define PADATA_ATTR_RO(_name, _show_name) \ 782 + static struct padata_sysfs_entry _name##_attr = \ 783 + __ATTR(_name, 0400, _show_name, NULL) 784 + 785 + PADATA_ATTR_RW(serial_cpumask, show_cpumask, store_cpumask); 786 + PADATA_ATTR_RW(parallel_cpumask, show_cpumask, store_cpumask); 787 + 788 + /* 789 + * Padata sysfs provides the following objects: 790 + * serial_cpumask [RW] - cpumask for serial workers 791 + * parallel_cpumask [RW] - cpumask for parallel workers 792 */ 793 + static struct attribute *padata_default_attrs[] = { 794 + &serial_cpumask_attr.attr, 795 + &parallel_cpumask_attr.attr, 796 + NULL, 797 + }; 798 + 799 + static ssize_t padata_sysfs_show(struct kobject *kobj, 800 + struct attribute *attr, char *buf) 801 { 802 struct padata_instance *pinst; 803 + struct padata_sysfs_entry *pentry; 804 + ssize_t ret = -EIO; 805 + 806 + pinst = kobj2pinst(kobj); 807 + pentry = attr2pentry(attr); 808 + if (pentry->show) 809 + ret = pentry->show(pinst, attr, buf); 810 + 811 + return ret; 812 + } 813 + 814 + static ssize_t padata_sysfs_store(struct kobject *kobj, struct attribute *attr, 815 + const char *buf, size_t count) 816 + { 817 + struct padata_instance *pinst; 818 + struct padata_sysfs_entry *pentry; 819 + ssize_t ret = -EIO; 820 + 821 + pinst = kobj2pinst(kobj); 822 + pentry = attr2pentry(attr); 823 + if (pentry->show) 824 + ret = pentry->store(pinst, attr, buf, count); 825 + 826 + return ret; 827 + } 828 + 829 + static const struct sysfs_ops padata_sysfs_ops = { 830 + .show = padata_sysfs_show, 831 + .store = padata_sysfs_store, 832 + }; 833 + 834 + static struct kobj_type padata_attr_type = { 835 + .sysfs_ops = &padata_sysfs_ops, 836 + .default_attrs = padata_default_attrs, 837 + .release = padata_sysfs_release, 838 + }; 839 + 840 + /** 841 + * padata_alloc_possible - Allocate and initialize padata instance. 842 + * Use the cpu_possible_mask for serial and 843 + * parallel workers. 844 + * 845 + * @wq: workqueue to use for the allocated padata instance 846 + */ 847 + struct padata_instance *padata_alloc_possible(struct workqueue_struct *wq) 848 + { 849 + return padata_alloc(wq, cpu_possible_mask, cpu_possible_mask); 850 + } 851 + EXPORT_SYMBOL(padata_alloc_possible); 852 + 853 + /** 854 + * padata_alloc - allocate and initialize a padata instance and specify 855 + * cpumasks for serial and parallel workers. 
856 + * 857 + * @wq: workqueue to use for the allocated padata instance 858 + * @pcpumask: cpumask that will be used for padata parallelization 859 + * @cbcpumask: cpumask that will be used for padata serialization 860 + */ 861 + struct padata_instance *padata_alloc(struct workqueue_struct *wq, 862 + const struct cpumask *pcpumask, 863 + const struct cpumask *cbcpumask) 864 + { 865 + struct padata_instance *pinst; 866 + struct parallel_data *pd = NULL; 867 868 pinst = kzalloc(sizeof(struct padata_instance), GFP_KERNEL); 869 if (!pinst) 870 goto err; 871 872 get_online_cpus(); 873 + if (!alloc_cpumask_var(&pinst->cpumask.pcpu, GFP_KERNEL)) 874 goto err_free_inst; 875 + if (!alloc_cpumask_var(&pinst->cpumask.cbcpu, GFP_KERNEL)) { 876 + free_cpumask_var(pinst->cpumask.pcpu); 877 + goto err_free_inst; 878 + } 879 + if (!padata_validate_cpumask(pinst, pcpumask) || 880 + !padata_validate_cpumask(pinst, cbcpumask)) 881 + goto err_free_masks; 882 883 + pd = padata_alloc_pd(pinst, pcpumask, cbcpumask); 884 + if (!pd) 885 + goto err_free_masks; 886 887 rcu_assign_pointer(pinst->pd, pd); 888 889 pinst->wq = wq; 890 891 + cpumask_copy(pinst->cpumask.pcpu, pcpumask); 892 + cpumask_copy(pinst->cpumask.cbcpu, cbcpumask); 893 894 pinst->flags = 0; 895 ··· 735 736 put_online_cpus(); 737 738 + BLOCKING_INIT_NOTIFIER_HEAD(&pinst->cpumask_change_notifier); 739 + kobject_init(&pinst->kobj, &padata_attr_type); 740 mutex_init(&pinst->lock); 741 742 return pinst; 743 744 + err_free_masks: 745 + free_cpumask_var(pinst->cpumask.pcpu); 746 + free_cpumask_var(pinst->cpumask.cbcpu); 747 err_free_inst: 748 kfree(pinst); 749 put_online_cpus(); ··· 756 */ 757 void padata_free(struct padata_instance *pinst) 758 { 759 + kobject_put(&pinst->kobj); 760 } 761 EXPORT_SYMBOL(padata_free);
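The new cpumask handling can also be exercised at run time: padata_set_cpumask() swaps in a new parallel or serial mask, and users that cache per-cpu state can register a notifier that fires whenever either mask changes (the notifier receives the PADATA_CPU_* mask as the action and a pointer to the new struct padata_cpumask as data). A hedged sketch under those assumptions; the callback body and the choice of cpu_online_mask are illustrative only:

    #include <linux/padata.h>
    #include <linux/cpumask.h>
    #include <linux/notifier.h>
    #include <linux/kernel.h>
    #include <linux/slab.h>

    static int my_cpumask_change(struct notifier_block *self,
                                 unsigned long val, void *data)
    {
            struct padata_cpumask *new_masks = data;

            if (val & PADATA_CPU_PARALLEL)
                    pr_info("parallel cpumask changed (%*pbl)\n",
                            cpumask_pr_args(new_masks->pcpu));
            if (val & PADATA_CPU_SERIAL)
                    pr_info("serial cpumask changed (%*pbl)\n",
                            cpumask_pr_args(new_masks->cbcpu));
            return NOTIFY_OK;
    }

    static struct notifier_block my_nblock = {
            .notifier_call = my_cpumask_change,
    };

    static int my_retune(struct padata_instance *pinst)
    {
            cpumask_var_t mask;
            int err;

            err = padata_register_cpumask_notifier(pinst, &my_nblock);
            if (err)
                    return err;

            if (!alloc_cpumask_var(&mask, GFP_KERNEL))
                    return -ENOMEM;

            cpumask_copy(mask, cpu_online_mask);    /* illustrative choice */
            err = padata_set_cpumask(pinst, PADATA_CPU_PARALLEL, mask);

            free_cpumask_var(mask);
            return err;
    }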