Loading up the Forgejo repo on Tangled to test page performance.

Rewrite queue (#24505)

# ⚠️ Breaking

Many deprecated queue config options are removed (actually, they should
have been removed in 1.18/1.19).

If you see the fatal message when starting Gitea: "Please update your
app.ini to remove deprecated config options", please follow the error
messages to remove these options from your app.ini.

Example:

```
2023/05/06 19:39:22 [E] Removed queue option: `[indexer].ISSUE_INDEXER_QUEUE_TYPE`. Use new options in `[queue.issue_indexer]`
2023/05/06 19:39:22 [E] Removed queue option: `[indexer].UPDATE_BUFFER_LEN`. Use new options in `[queue.issue_indexer]`
2023/05/06 19:39:22 [F] Please update your app.ini to remove deprecated config options
```

Many options in `[queue]` are dropped, including `WRAP_IF_NECESSARY`,
`MAX_ATTEMPTS`, `TIMEOUT`, `WORKERS`, `BLOCK_TIMEOUT`, `BOOST_TIMEOUT`,
and `BOOST_WORKERS`; they can simply be removed from app.ini.
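For example, a legacy indexer-queue configuration moves into the corresponding `[queue.*]` section. The values below are illustrative, but `TYPE` and `CONN_STR` are the option names the `[queue]` sections use:

```ini
; Old (now fatal at startup):
;[indexer]
;ISSUE_INDEXER_QUEUE_TYPE = redis
;ISSUE_INDEXER_QUEUE_CONN_STR = "addrs=127.0.0.1:6379 db=0"

; New:
[queue.issue_indexer]
TYPE = redis
CONN_STR = redis://127.0.0.1:6379/0
```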

# The problem

The old queue package has some legacy problems:

* complexity: I doubt many people could tell how it works.
* maintainability: too many channels and mutex/cond are mixed together,
and too many different structs/interfaces depend on each other.
* stability: due to the complexity and maintainability problems, there
are sometimes strange bugs that are difficult to debug, and some code
has no tests (indeed, some code is difficult to test because so many
things are mixed together).
* general applicability: although it is called a "queue", its behavior
is not that of a well-known queue.
* scalability: it doesn't seem easy to make it work with a cluster
without breaking its behaviors.

It came from some very old code written to "avoid breaking" things;
however, its technical debt is too heavy now. It's a good time to
introduce a better "queue" package.

# The new queue package

It keeps the old config options and concepts as much as possible.

* It only contains two major kinds of concepts:
  * The "base queue": channel, levelqueue, redis.
    * They share the same abstraction and the same interface, and they
    are tested by the same testing code.
  * The "WorkerPoolQueue": it uses a "base queue" to provide the "worker
  pool" function and calls the "handler" to process the data in the base
  queue.
* The new code doesn't do "PushBack".
  * Consider a queue with many workers: "PushBack" can't guarantee the
  order of re-queued unhandled items, so the new code just does a
  normal push.
* The new code doesn't do "pause/resume".
  * The "pause/resume" was designed to handle handler failures, e.g. the
  document indexer (elasticsearch) being down.
  * If a queue is paused for a long time, either the producers block or
  the new items are dropped.
  * The new code drops this "pause/resume" trick: it's not a common
  queue behavior and it doesn't help much.
  * If there are unhandled items, the "push" function just blocks for a
  few seconds, then re-queues them and retries.
* The new code doesn't do "worker boosting".
  * Gitea's queue handlers are lightweight functions whose only cost is
  the goroutine, so it doesn't make sense to "boost" them.
  * The new code only uses a "max worker number" to limit the concurrent
  workers.
* The new "Push" never blocks forever.
  * Instead of creating more and more blocked goroutines, returning an
  error is friendlier to the server and to the end user.

There are more details in the code comments, e.g. the "Flush" problem,
the strange "code.index" hanging problem, and the "immediate" queue
problem.

Almost ready for review.

TODO:

* [x] add some necessary comments during review
* [x] add some more tests if necessary
* [x] update documents and config options
* [x] test max worker / active worker
* [x] re-run the CI tasks to see whether any test is flaky
* [x] improve the `handleOldLengthConfiguration` to provide more
friendly messages
* [x] fine tune default config values (eg: length?)

## Code coverage:

![image](https://user-images.githubusercontent.com/2114189/236620635-55576955-f95d-4810-b12f-879026a3afdf.png)

Authored by wxiaoguang and committed by GitHub (6f9c2785, cb700aed). Overall diff: +2480 -6842.
## Files changed

**cmd/hook.go** (+1 -2)

```diff
@@
 	"code.gitea.io/gitea/modules/private"
 	repo_module "code.gitea.io/gitea/modules/repository"
 	"code.gitea.io/gitea/modules/setting"
-	"code.gitea.io/gitea/modules/util"

 	"github.com/urfave/cli"
 )
@@
 	if d == nil {
 		return nil
 	}
-	stopped := util.StopTimer(d.timer)
+	stopped := d.timer.Stop()
 	if stopped || d.buf == nil {
 		return nil
 	}
```
**custom/conf/app.example.ini** (+2 -47)

```diff
@@
 ;; Global limit of repositories per user, applied at creation time. -1 means no limit
 ;MAX_CREATION_LIMIT = -1
 ;;
-;; Mirror sync queue length, increase if mirror syncing starts hanging (DEPRECATED: please use [queue.mirror] LENGTH instead)
-;MIRROR_QUEUE_LENGTH = 1000
-;;
-;; Patch test queue length, increase if pull request patch testing starts hanging (DEPRECATED: please use [queue.pr_patch_checker] LENGTH instead)
-;PULL_REQUEST_QUEUE_LENGTH = 1000
-;;
 ;; Preferred Licenses to place at the top of the List
 ;; The name here must match the filename in options/license or custom/options/license
 ;PREFERRED_LICENSES = Apache License 2.0,MIT License
@@
 ;; Set to -1 to disable timeout.
 ;STARTUP_TIMEOUT = 30s
 ;;
-;; Issue indexer queue, currently support: channel, levelqueue or redis, default is levelqueue (deprecated - use [queue.issue_indexer])
-;ISSUE_INDEXER_QUEUE_TYPE = levelqueue; **DEPRECATED** use settings in `[queue.issue_indexer]`.
-;;
-;; When ISSUE_INDEXER_QUEUE_TYPE is levelqueue, this will be the path where the queue will be saved.
-;; This can be overridden by `ISSUE_INDEXER_QUEUE_CONN_STR`.
-;; default is queues/common
-;ISSUE_INDEXER_QUEUE_DIR = queues/common; **DEPRECATED** use settings in `[queue.issue_indexer]`. Relative paths will be made absolute against `%(APP_DATA_PATH)s`.
-;;
-;; When `ISSUE_INDEXER_QUEUE_TYPE` is `redis`, this will store the redis connection string.
-;; When `ISSUE_INDEXER_QUEUE_TYPE` is `levelqueue`, this is a directory or additional options of
-;; the form `leveldb://path/to/db?option=value&....`, and overrides `ISSUE_INDEXER_QUEUE_DIR`.
-;ISSUE_INDEXER_QUEUE_CONN_STR = "addrs=127.0.0.1:6379 db=0"; **DEPRECATED** use settings in `[queue.issue_indexer]`.
-;;
-;; Batch queue number, default is 20
-;ISSUE_INDEXER_QUEUE_BATCH_NUMBER = 20; **DEPRECATED** use settings in `[queue.issue_indexer]`.
-
@@
 ;; A comma separated list of glob patterns to exclude from the index; ; default is empty
 ;REPO_INDEXER_EXCLUDE =
 ;;
-;;
-;UPDATE_BUFFER_LEN = 20; **DEPRECATED** use settings in `[queue.issue_indexer]`.
 ;MAX_FILE_SIZE = 1048576
@@
 ;DATADIR = queues/ ; Relative paths will be made absolute against `%(APP_DATA_PATH)s`.
 ;;
 ;; Default queue length before a channel queue will block
-;LENGTH = 20
+;LENGTH = 100
 ;;
 ;; Batch size to send for batched queues
 ;BATCH_LENGTH = 20
@@
 ;; Connection string for redis queues this will store the redis connection string.
 ;; When `TYPE` is `persistable-channel`, this provides a directory for the underlying leveldb
 ;; or additional options of the form `leveldb://path/to/db?option=value&....`, and will override `DATADIR`.
-;CONN_STR = "addrs=127.0.0.1:6379 db=0"
+;CONN_STR = "redis://127.0.0.1:6379/0"
 ;;
 ;; Provides the suffix of the default redis/disk queue name - specific queues can be overridden within in their [queue.name] sections.
 ;QUEUE_NAME = "_queue"
@@
 ;; Provides the suffix of the default redis/disk unique queue set name - specific queues can be overridden within in their [queue.name] sections.
 ;SET_NAME = "_unique"
 ;;
-;; If the queue cannot be created at startup - level queues may need a timeout at startup - wrap the queue:
-;WRAP_IF_NECESSARY = true
-;;
-;; Attempt to create the wrapped queue at max
-;MAX_ATTEMPTS = 10
-;;
-;; Timeout queue creation
-;TIMEOUT = 15m30s
-;;
-;; Create a pool with this many workers
-;WORKERS = 0
-;;
 ;; Dynamically scale the worker pool to at this many workers
 ;MAX_WORKERS = 10
-;;
-;; Add boost workers when the queue blocks for BLOCK_TIMEOUT
-;BLOCK_TIMEOUT = 1s
-;;
-;; Remove the boost workers after BOOST_TIMEOUT
-;BOOST_TIMEOUT = 5m
-;;
-;; During a boost add BOOST_WORKERS
-;BOOST_WORKERS = 1
```
**docs/content/doc/administration/config-cheat-sheet.en-us.md** (+5 -39)

```diff
@@
 - `MAX_CREATION_LIMIT`: **-1**: Global maximum creation limit of repositories per user,
 `-1` means no limit.
-- `PULL_REQUEST_QUEUE_LENGTH`: **1000**: Length of pull request patch test queue, make it. **DEPRECATED** use `LENGTH` in `[queue.pr_patch_checker]`.
-as large as possible. Use caution when editing this value.
-- `MIRROR_QUEUE_LENGTH`: **1000**: Patch test queue length, increase if pull request patch
-testing starts hanging. **DEPRECATED** use `LENGTH` in `[queue.mirror]`.
 - `PREFERRED_LICENSES`: **Apache License 2.0,MIT License**: Preferred Licenses to place at
 the top of the list. Name must match file name in options/license or custom/options/license.
@@
 - `ISSUE_INDEXER_PATH`: **indexers/issues.bleve**: Index file used for issue search; available when ISSUE_INDEXER_TYPE is bleve and elasticsearch. Relative paths will be made absolute against _`AppWorkPath`_.
-- The next 4 configuration values are deprecated and should be set in `queue.issue_indexer` however are kept for backwards compatibility:
-- `ISSUE_INDEXER_QUEUE_TYPE`: **levelqueue**: Issue indexer queue, currently supports:`channel`, `levelqueue`, `redis`. **DEPRECATED** use settings in `[queue.issue_indexer]`.
-- `ISSUE_INDEXER_QUEUE_DIR`: **queues/common**: When `ISSUE_INDEXER_QUEUE_TYPE` is `levelqueue`, this will be the path where the queue will be saved. **DEPRECATED** use settings in `[queue.issue_indexer]`. Relative paths will be made absolute against `%(APP_DATA_PATH)s`.
-- `ISSUE_INDEXER_QUEUE_CONN_STR`: **addrs=127.0.0.1:6379 db=0**: When `ISSUE_INDEXER_QUEUE_TYPE` is `redis`, this will store the redis connection string. When `ISSUE_INDEXER_QUEUE_TYPE` is `levelqueue`, this is a directory or additional options of the form `leveldb://path/to/db?option=value&....`, and overrides `ISSUE_INDEXER_QUEUE_DIR`. **DEPRECATED** use settings in `[queue.issue_indexer]`.
-- `ISSUE_INDEXER_QUEUE_BATCH_NUMBER`: **20**: Batch queue number. **DEPRECATED** use settings in `[queue.issue_indexer]`.
@@
 - `REPO_INDEXER_EXCLUDE_VENDORED`: **true**: Exclude vendored files from index.
-- `UPDATE_BUFFER_LEN`: **20**: Buffer length of index request. **DEPRECATED** use settings in `[queue.issue_indexer]`.
 - `MAX_FILE_SIZE`: **1048576**: Maximum size in bytes of files to be indexed.
@@
 Configuration at `[queue]` will set defaults for queues with overrides for individual queues at `[queue.*]`. (However see below.)

-- `TYPE`: **persistable-channel**: General queue type, currently support: `persistable-channel` (uses a LevelDB internally), `channel`, `level`, `redis`, `dummy`
-- `DATADIR`: **queues/**: Base DataDir for storing persistent and level queues. `DATADIR` for individual queues can be set in `queue.name` sections but will default to `DATADIR/`**`common`**. (Previously each queue would default to `DATADIR/`**`name`**.) Relative paths will be made absolute against `%(APP_DATA_PATH)s`.
-- `LENGTH`: **20**: Maximal queue size before channel queues block
+- `TYPE`: **level**: General queue type, currently support: `level` (uses a LevelDB internally), `channel`, `redis`, `dummy`. Invalid types are treated as `level`.
+- `DATADIR`: **queues/common**: Base DataDir for storing level queues. `DATADIR` for individual queues can be set in `queue.name` sections. Relative paths will be made absolute against `%(APP_DATA_PATH)s`.
+- `LENGTH`: **100**: Maximal queue size before channel queues block
 - `BATCH_LENGTH`: **20**: Batch data before passing to the handler
-- `CONN_STR`: **redis://127.0.0.1:6379/0**: Connection string for the redis queue type. Options can be set using query params. Similarly LevelDB options can also be set using: **leveldb://relative/path?option=value** or **leveldb:///absolute/path?option=value**, and will override `DATADIR`
+- `CONN_STR`: **redis://127.0.0.1:6379/0**: Connection string for the redis queue type. Options can be set using query params. Similarly, LevelDB options can also be set using: **leveldb://relative/path?option=value** or **leveldb:///absolute/path?option=value**, and will override `DATADIR`
 - `QUEUE_NAME`: **_queue**: The suffix for default redis and disk queue name. Individual queues will default to **`name`**`QUEUE_NAME` but can be overridden in the specific `queue.name` section.
-- `SET_NAME`: **_unique**: The suffix that will be added to the default redis and disk queue `set` name for unique queues. Individual queues will default to
-**`name`**`QUEUE_NAME`_`SET_NAME`_ but can be overridden in the specific `queue.name` section.
-- `WRAP_IF_NECESSARY`: **true**: Will wrap queues with a timeoutable queue if the selected queue is not ready to be created - (Only relevant for the level queue.)
-- `MAX_ATTEMPTS`: **10**: Maximum number of attempts to create the wrapped queue
-- `TIMEOUT`: **GRACEFUL_HAMMER_TIME + 30s**: Timeout the creation of the wrapped queue if it takes longer than this to create.
-- Queues by default come with a dynamically scaling worker pool. The following settings configure this:
-- `WORKERS`: **0**: Number of initial workers for the queue.
+- `SET_NAME`: **_unique**: The suffix that will be added to the default redis and disk queue `set` name for unique queues. Individual queues will default to **`name`**`QUEUE_NAME`_`SET_NAME`_ but can be overridden in the specific `queue.name` section.
 - `MAX_WORKERS`: **10**: Maximum number of worker go-routines for the queue.
-- `BLOCK_TIMEOUT`: **1s**: If the queue blocks for this time, boost the number of workers - the `BLOCK_TIMEOUT` will then be doubled before boosting again whilst the boost is ongoing.
-- `BOOST_TIMEOUT`: **5m**: Boost workers will timeout after this long.
-- `BOOST_WORKERS`: **1**: This many workers will be added to the worker pool if there is a boost.
@@
 - `repo-archive`
 - `mirror`
 - `pr_patch_checker`
-
-Certain queues have defaults that override the defaults set in `[queue]` (this occurs mostly to support older configuration):
-
-- `[queue.issue_indexer]`
-  - `TYPE` this will default to `[queue]` `TYPE` if it is set but if not it will appropriately convert `[indexer]` `ISSUE_INDEXER_QUEUE_TYPE` if that is set.
-  - `LENGTH` will default to `[indexer]` `UPDATE_BUFFER_LEN` if that is set.
-  - `BATCH_LENGTH` will default to `[indexer]` `ISSUE_INDEXER_QUEUE_BATCH_NUMBER` if that is set.
-  - `DATADIR` will default to `[indexer]` `ISSUE_INDEXER_QUEUE_DIR` if that is set.
-  - `CONN_STR` will default to `[indexer]` `ISSUE_INDEXER_QUEUE_CONN_STR` if that is set.
-- `[queue.mailer]`
-  - `LENGTH` will default to **100** or whatever `[mailer]` `SEND_BUFFER_LEN` is.
-- `[queue.pr_patch_checker]`
-  - `LENGTH` will default to **1000** or whatever `[repository]` `PULL_REQUEST_QUEUE_LENGTH` is.
-- `[queue.mirror]`
-  - `LENGTH` will default to **1000** or whatever `[repository]` `MIRROR_QUEUE_LENGTH` is.
```
**docs/content/doc/administration/config-cheat-sheet.zh-cn.md** (-6)

The matching deprecated entries are removed from the Chinese translation of the cheat sheet as well: `PULL_REQUEST_QUEUE_LENGTH`, `ISSUE_INDEXER_QUEUE_TYPE`, `ISSUE_INDEXER_QUEUE_DIR`, `ISSUE_INDEXER_QUEUE_CONN_STR`, `ISSUE_INDEXER_QUEUE_BATCH_NUMBER`, and `UPDATE_BUFFER_LEN`.
**docs/content/doc/administration/repo-indexer.en-us.md** (-1)

```diff
@@
 REPO_INDEXER_ENABLED = true
 REPO_INDEXER_PATH = indexers/repos.bleve
-UPDATE_BUFFER_LEN = 20
 MAX_FILE_SIZE = 1048576
 REPO_INDEXER_INCLUDE =
 REPO_INDEXER_EXCLUDE = resources/bin/**
```
**models/migrations/base/testlogger.go** (-180)

The file is deleted in its entirety: the `TestLogger`, `testLoggerWriterCloser`, `PrintCurrentTest`, `Printf`, and `NewTestLogger` helpers move to the shared `modules/testlogger` package, which `models/migrations/base/tests.go` now imports.
**models/migrations/base/tests.go** (+3 -5)

```diff
@@
 	"path"
 	"path/filepath"
 	"runtime"
-	"strings"
 	"testing"

 	"code.gitea.io/gitea/models/unittest"
@@
 	"code.gitea.io/gitea/modules/git"
 	"code.gitea.io/gitea/modules/log"
 	"code.gitea.io/gitea/modules/setting"
+	"code.gitea.io/gitea/modules/testlogger"

 	"github.com/stretchr/testify/assert"
 	"xorm.io/xorm"
@@
 	t.Helper()
 	ourSkip := 2
 	ourSkip += skip
-	deferFn := PrintCurrentTest(t, ourSkip)
+	deferFn := testlogger.PrintCurrentTest(t, ourSkip)
 	assert.NoError(t, os.RemoveAll(setting.RepoRootPath))
 	assert.NoError(t, unittest.CopyDir(path.Join(filepath.Dir(setting.AppPath), "tests/gitea-repositories-meta"), setting.RepoRootPath))
 	ownerDirs, err := os.ReadDir(setting.RepoRootPath)
@@
 func MainTest(m *testing.M) {
-	log.Register("test", NewTestLogger)
-	_, filename, _, _ := runtime.Caller(0)
-	prefix = strings.TrimSuffix(filename, "tests/testlogger.go")
+	log.Register("test", testlogger.NewTestLogger)

 	giteaRoot := base.SetupGiteaRoot()
 	if giteaRoot == "" {
```
**models/unittest/testdb.go** (+3)

```diff
@@
 func CreateTestEngine(opts FixturesOptions) error {
 	x, err := xorm.NewEngine("sqlite3", "file::memory:?cache=shared&_txlock=immediate")
 	if err != nil {
+		if strings.Contains(err.Error(), "unknown driver") {
+			return fmt.Errorf(`sqlite3 requires: import _ "github.com/mattn/go-sqlite3" or -tags sqlite,sqlite_unlock_notify%s%w`, "\n", err)
+		}
 		return err
 	}
 	x.SetMapper(names.GonicMapper{})
```
**modules/indexer/code/bleve.go** (-4)

```diff
@@
 	log.Info("PID: %d Repository Indexer closed", os.Getpid())
 }

-// SetAvailabilityChangeCallback does nothing
-func (b *BleveIndexer) SetAvailabilityChangeCallback(callback func(bool)) {
-}
-
 // Ping does nothing
 func (b *BleveIndexer) Ping() bool {
 	return true
```
**modules/indexer/code/elastic_search.go** (+5 -17)

```diff
@@
 // ElasticSearchIndexer implements Indexer interface
 type ElasticSearchIndexer struct {
-	client               *elastic.Client
-	indexerAliasName     string
-	available            bool
-	availabilityCallback func(bool)
-	stopTimer            chan struct{}
-	lock                 sync.RWMutex
+	client           *elastic.Client
+	indexerAliasName string
+	available        bool
+	stopTimer        chan struct{}
+	lock             sync.RWMutex
 }

 type elasticLogger struct {
@@
 	return exists, nil
-}
-
-// SetAvailabilityChangeCallback sets callback that will be triggered when availability changes
-func (b *ElasticSearchIndexer) SetAvailabilityChangeCallback(callback func(bool)) {
-	b.lock.Lock()
-	defer b.lock.Unlock()
-	b.availabilityCallback = callback
 }

 // Ping checks if elastic is available
@@
 	b.available = available
-	if b.availabilityCallback != nil {
-		// Call the callback from within the lock to ensure that the ordering remains correct
-		b.availabilityCallback(b.available)
-	}
 }
```
**modules/indexer/code/indexer.go** (+25 -30)

```diff
@@
 // Indexer defines an interface to index and search code contents
 type Indexer interface {
 	Ping() bool
-	SetAvailabilityChangeCallback(callback func(bool))
 	Index(ctx context.Context, repo *repo_model.Repository, sha string, changes *repoChanges) error
 	Delete(repoID int64) error
 	Search(ctx context.Context, repoIDs []int64, language, keyword string, page, pageSize int, isMatch bool) (int64, []*SearchResult, []*SearchResultLanguages, error)
@@
 	RepoID int64
 }

-var indexerQueue queue.UniqueQueue
+var indexerQueue *queue.WorkerPoolQueue[*IndexerData]

 func index(ctx context.Context, indexer Indexer, repoID int64) error {
 	repo, err := repo_model.GetRepositoryByID(ctx, repoID)
@@
 	// Create the Queue
 	switch setting.Indexer.RepoType {
 	case "bleve", "elasticsearch":
-		handler := func(data ...queue.Data) []queue.Data {
+		handler := func(items ...*IndexerData) (unhandled []*IndexerData) {
 			idx, err := indexer.get()
 			if idx == nil || err != nil {
 				log.Error("Codes indexer handler: unable to get indexer!")
-				return data
+				return items
 			}

-			unhandled := make([]queue.Data, 0, len(data))
-			for _, datum := range data {
-				indexerData, ok := datum.(*IndexerData)
-				if !ok {
-					log.Error("Unable to process provided datum: %v - not possible to cast to IndexerData", datum)
-					continue
-				}
+			for _, indexerData := range items {
 				log.Trace("IndexerData Process Repo: %d", indexerData.RepoID)

+				// FIXME: it seems there is a bug in `CatFileBatch` or `nio.Pipe`, which will cause the process to hang forever in rare cases
+				/*
+					sync.(*Cond).Wait(cond.go:70)
+					github.com/djherbis/nio/v3.(*PipeReader).Read(sync.go:106)
+					bufio.(*Reader).fill(bufio.go:106)
+					bufio.(*Reader).ReadSlice(bufio.go:372)
+					bufio.(*Reader).collectFragments(bufio.go:447)
+					bufio.(*Reader).ReadString(bufio.go:494)
+					code.gitea.io/gitea/modules/git.ReadBatchLine(batch_reader.go:149)
+					code.gitea.io/gitea/modules/indexer/code.(*BleveIndexer).addUpdate(bleve.go:214)
+					code.gitea.io/gitea/modules/indexer/code.(*BleveIndexer).Index(bleve.go:296)
+					code.gitea.io/gitea/modules/indexer/code.(*wrappedIndexer).Index(wrapped.go:74)
+					code.gitea.io/gitea/modules/indexer/code.index(indexer.go:105)
+				*/
 				if err := index(ctx, indexer, indexerData.RepoID); err != nil {
+					if !idx.Ping() {
+						log.Error("Code indexer handler: indexer is unavailable.")
+						unhandled = append(unhandled, indexerData)
+						continue
+					}
 					if !setting.IsInTesting {
-						log.Error("indexer index error for repo %v: %v", indexerData.RepoID, err)
+						log.Error("Codes indexer handler: index error for repo %v: %v", indexerData.RepoID, err)
 					}
-					if indexer.Ping() {
-						continue
-					}
-					// Add back to queue
-					unhandled = append(unhandled, datum)
 				}
 			}
 			return unhandled
 		}

-		indexerQueue = queue.CreateUniqueQueue("code_indexer", handler, &IndexerData{})
+		indexerQueue = queue.CreateUniqueQueue("code_indexer", handler)
 		if indexerQueue == nil {
 			log.Fatal("Unable to create codes indexer queue")
 		}
@@
 	indexer.set(rIndexer)
-
-	if queue, ok := indexerQueue.(queue.Pausable); ok {
-		rIndexer.SetAvailabilityChangeCallback(func(available bool) {
-			if !available {
-				log.Info("Code index queue paused")
-				queue.Pause()
-			} else {
-				log.Info("Code index queue resumed")
-				queue.Resume()
-			}
-		})
-	}

 	// Start processing the queue
 	go graceful.GetManager().RunWithShutdownFns(indexerQueue.Run)
```
-10
modules/indexer/code/wrapped.go
··· 56 56 return w.internal, nil 57 57 } 58 58 59 - // SetAvailabilityChangeCallback sets callback that will be triggered when availability changes 60 - func (w *wrappedIndexer) SetAvailabilityChangeCallback(callback func(bool)) { 61 - indexer, err := w.get() 62 - if err != nil { 63 - log.Error("Failed to get indexer: %v", err) 64 - return 65 - } 66 - indexer.SetAvailabilityChangeCallback(callback) 67 - } 68 - 69 59 // Ping checks if elastic is available 70 60 func (w *wrappedIndexer) Ping() bool { 71 61 indexer, err := w.get()
-4
modules/indexer/issues/bleve.go
··· 187 187 return false, err 188 188 } 189 189 190 - // SetAvailabilityChangeCallback does nothing 191 - func (b *BleveIndexer) SetAvailabilityChangeCallback(callback func(bool)) { 192 - } 193 - 194 190 // Ping does nothing 195 191 func (b *BleveIndexer) Ping() bool { 196 192 return true
-4
modules/indexer/issues/db.go
··· 18 18 return false, nil 19 19 } 20 20 21 - // SetAvailabilityChangeCallback dummy function 22 - func (i *DBIndexer) SetAvailabilityChangeCallback(callback func(bool)) { 23 - } 24 - 25 21 // Ping checks if database is available 26 22 func (i *DBIndexer) Ping() bool { 27 23 return db.GetEngine(db.DefaultContext).Ping() != nil
+5 -17
modules/indexer/issues/elastic_search.go
··· 22 22 23 23 // ElasticSearchIndexer implements Indexer interface 24 24 type ElasticSearchIndexer struct { 25 - client *elastic.Client 26 - indexerName string 27 - available bool 28 - availabilityCallback func(bool) 29 - stopTimer chan struct{} 30 - lock sync.RWMutex 25 + client *elastic.Client 26 + indexerName string 27 + available bool 28 + stopTimer chan struct{} 29 + lock sync.RWMutex 31 30 } 32 31 33 32 type elasticLogger struct { ··· 136 135 return false, nil 137 136 } 138 137 return true, nil 139 - } 140 - 141 - // SetAvailabilityChangeCallback sets callback that will be triggered when availability changes 142 - func (b *ElasticSearchIndexer) SetAvailabilityChangeCallback(callback func(bool)) { 143 - b.lock.Lock() 144 - defer b.lock.Unlock() 145 - b.availabilityCallback = callback 146 138 } 147 139 148 140 // Ping checks if elastic is available ··· 305 297 } 306 298 307 299 b.available = available 308 - if b.availabilityCallback != nil { 309 - // Call the callback from within the lock to ensure that the ordering remains correct 310 - b.availabilityCallback(b.available) 311 - } 312 300 }
+20 -53
modules/indexer/issues/indexer.go
```diff
···
 type Indexer interface {
 	Init() (bool, error)
 	Ping() bool
-	SetAvailabilityChangeCallback(callback func(bool))
 	Index(issue []*IndexerData) error
 	Delete(ids ...int64) error
 	Search(ctx context.Context, kw string, repoIDs []int64, limit, start int) (*SearchResult, error)
···

 var (
 	// issueIndexerQueue queue of issue ids to be updated
-	issueIndexerQueue queue.Queue
+	issueIndexerQueue *queue.WorkerPoolQueue[*IndexerData]
 	holder            = newIndexerHolder()
 )
···
 	// Create the Queue
 	switch setting.Indexer.IssueType {
 	case "bleve", "elasticsearch", "meilisearch":
-		handler := func(data ...queue.Data) []queue.Data {
+		handler := func(items ...*IndexerData) (unhandled []*IndexerData) {
 			indexer := holder.get()
 			if indexer == nil {
-				log.Error("Issue indexer handler: unable to get indexer!")
-				return data
+				log.Error("Issue indexer handler: unable to get indexer.")
+				return items
 			}
-
-			iData := make([]*IndexerData, 0, len(data))
-			unhandled := make([]queue.Data, 0, len(data))
-			for _, datum := range data {
-				indexerData, ok := datum.(*IndexerData)
-				if !ok {
-					log.Error("Unable to process provided datum: %v - not possible to cast to IndexerData", datum)
-					continue
-				}
+			toIndex := make([]*IndexerData, 0, len(items))
+			for _, indexerData := range items {
 				log.Trace("IndexerData Process: %d %v %t", indexerData.ID, indexerData.IDs, indexerData.IsDelete)
 				if indexerData.IsDelete {
 					if err := indexer.Delete(indexerData.IDs...); err != nil {
-						log.Error("Error whilst deleting from index: %v Error: %v", indexerData.IDs, err)
-						if indexer.Ping() {
-							continue
+						log.Error("Issue indexer handler: failed to delete from index: %v Error: %v", indexerData.IDs, err)
+						if !indexer.Ping() {
+							log.Error("Issue indexer handler: indexer is unavailable when deleting")
+							unhandled = append(unhandled, indexerData)
 						}
-						// Add back to queue
-						unhandled = append(unhandled, datum)
 					}
 					continue
 				}
-				iData = append(iData, indexerData)
+				toIndex = append(toIndex, indexerData)
 			}
-			if len(unhandled) > 0 {
-				for _, indexerData := range iData {
-					unhandled = append(unhandled, indexerData)
+			if err := indexer.Index(toIndex); err != nil {
+				log.Error("Error whilst indexing: %v Error: %v", toIndex, err)
+				if !indexer.Ping() {
+					log.Error("Issue indexer handler: indexer is unavailable when indexing")
+					unhandled = append(unhandled, toIndex...)
 				}
-				return unhandled
 			}
-			if err := indexer.Index(iData); err != nil {
-				log.Error("Error whilst indexing: %v Error: %v", iData, err)
-				if indexer.Ping() {
-					return nil
-				}
-				// Add back to queue
-				for _, indexerData := range iData {
-					unhandled = append(unhandled, indexerData)
-				}
-				return unhandled
-			}
-			return nil
+			return unhandled
 		}

-		issueIndexerQueue = queue.CreateQueue("issue_indexer", handler, &IndexerData{})
+		issueIndexerQueue = queue.CreateSimpleQueue("issue_indexer", handler)

 		if issueIndexerQueue == nil {
 			log.Fatal("Unable to create issue indexer queue")
 		}
 	default:
-		issueIndexerQueue = &queue.DummyQueue{}
+		issueIndexerQueue = queue.CreateSimpleQueue[*IndexerData]("issue_indexer", nil)
 	}

 	// Create the Indexer
···
 		log.Fatal("Unknown issue indexer type: %s", setting.Indexer.IssueType)
 	}

-	if queue, ok := issueIndexerQueue.(queue.Pausable); ok {
-		holder.get().SetAvailabilityChangeCallback(func(available bool) {
-			if !available {
-				log.Info("Issue index queue paused")
-				queue.Pause()
-			} else {
-				log.Info("Issue index queue resumed")
-				queue.Resume()
-			}
-		})
-	}
-
 	// Start processing the queue
 	go graceful.GetManager().RunWithShutdownFns(issueIndexerQueue.Run)
···
 	case <-graceful.GetManager().IsShutdown():
 		log.Warn("Shutdown occurred before issue index initialisation was complete")
 	case <-time.After(timeout):
-		if shutdownable, ok := issueIndexerQueue.(queue.Shutdownable); ok {
-			shutdownable.Terminate()
-		}
+		issueIndexerQueue.ShutdownWait(5 * time.Second)
 		log.Fatal("Issue Indexer Initialization timed-out after: %v", timeout)
 	}
 }()
```
+5 -17
modules/indexer/issues/meilisearch.go
··· 17 17 18 18 // MeilisearchIndexer implements Indexer interface 19 19 type MeilisearchIndexer struct { 20 - client *meilisearch.Client 21 - indexerName string 22 - available bool 23 - availabilityCallback func(bool) 24 - stopTimer chan struct{} 25 - lock sync.RWMutex 20 + client *meilisearch.Client 21 + indexerName string 22 + available bool 23 + stopTimer chan struct{} 24 + lock sync.RWMutex 26 25 } 27 26 28 27 // MeilisearchIndexer creates a new meilisearch indexer ··· 71 70 72 71 _, err = b.client.Index(b.indexerName).UpdateFilterableAttributes(&[]string{"repo_id"}) 73 72 return false, b.checkError(err) 74 - } 75 - 76 - // SetAvailabilityChangeCallback sets callback that will be triggered when availability changes 77 - func (b *MeilisearchIndexer) SetAvailabilityChangeCallback(callback func(bool)) { 78 - b.lock.Lock() 79 - defer b.lock.Unlock() 80 - b.availabilityCallback = callback 81 73 } 82 74 83 75 // Ping checks if meilisearch is available ··· 178 170 } 179 171 180 172 b.available = available 181 - if b.availabilityCallback != nil { 182 - // Call the callback from within the lock to ensure that the ordering remains correct 183 - b.availabilityCallback(b.available) 184 - } 185 173 }
+1 -1
modules/indexer/stats/indexer_test.go
··· 41 41 err = UpdateRepoIndexer(repo) 42 42 assert.NoError(t, err) 43 43 44 - queue.GetManager().FlushAll(context.Background(), 5*time.Second) 44 + assert.NoError(t, queue.GetManager().FlushAll(context.Background(), 5*time.Second)) 45 45 46 46 status, err := repo_model.GetIndexerStatus(db.DefaultContext, repo, repo_model.RepoIndexerTypeStats) 47 47 assert.NoError(t, err)
+4 -5
modules/indexer/stats/queue.go
··· 14 14 ) 15 15 16 16 // statsQueue represents a queue to handle repository stats updates 17 - var statsQueue queue.UniqueQueue 17 + var statsQueue *queue.WorkerPoolQueue[int64] 18 18 19 19 // handle passed PR IDs and test the PRs 20 - func handle(data ...queue.Data) []queue.Data { 21 - for _, datum := range data { 22 - opts := datum.(int64) 20 + func handler(items ...int64) []int64 { 21 + for _, opts := range items { 23 22 if err := indexer.Index(opts); err != nil { 24 23 if !setting.IsInTesting { 25 24 log.Error("stats queue indexer.Index(%d) failed: %v", opts, err) ··· 30 29 } 31 30 32 31 func initStatsQueue() error { 33 - statsQueue = queue.CreateUniqueQueue("repo_stats_update", handle, int64(0)) 32 + statsQueue = queue.CreateUniqueQueue("repo_stats_update", handler) 34 33 if statsQueue == nil { 35 34 return fmt.Errorf("Unable to create repo_stats_update Queue") 36 35 }
+3 -3
modules/mirror/mirror.go
··· 10 10 "code.gitea.io/gitea/modules/setting" 11 11 ) 12 12 13 - var mirrorQueue queue.UniqueQueue 13 + var mirrorQueue *queue.WorkerPoolQueue[*SyncRequest] 14 14 15 15 // SyncType type of sync request 16 16 type SyncType int ··· 29 29 } 30 30 31 31 // StartSyncMirrors starts a go routine to sync the mirrors 32 - func StartSyncMirrors(queueHandle func(data ...queue.Data) []queue.Data) { 32 + func StartSyncMirrors(queueHandle func(data ...*SyncRequest) []*SyncRequest) { 33 33 if !setting.Mirror.Enabled { 34 34 return 35 35 } 36 - mirrorQueue = queue.CreateUniqueQueue("mirror", queueHandle, new(SyncRequest)) 36 + mirrorQueue = queue.CreateUniqueQueue("mirror", queueHandle) 37 37 38 38 go graceful.GetManager().RunWithShutdownFns(mirrorQueue.Run) 39 39 }
+5 -6
modules/notification/ui/ui.go
··· 21 21 type ( 22 22 notificationService struct { 23 23 base.NullNotifier 24 - issueQueue queue.Queue 24 + issueQueue *queue.WorkerPoolQueue[issueNotificationOpts] 25 25 } 26 26 27 27 issueNotificationOpts struct { ··· 37 37 // NewNotifier create a new notificationService notifier 38 38 func NewNotifier() base.Notifier { 39 39 ns := &notificationService{} 40 - ns.issueQueue = queue.CreateQueue("notification-service", ns.handle, issueNotificationOpts{}) 40 + ns.issueQueue = queue.CreateSimpleQueue("notification-service", handler) 41 41 return ns 42 42 } 43 43 44 - func (ns *notificationService) handle(data ...queue.Data) []queue.Data { 45 - for _, datum := range data { 46 - opts := datum.(issueNotificationOpts) 44 + func handler(items ...issueNotificationOpts) []issueNotificationOpts { 45 + for _, opts := range items { 47 46 if err := activities_model.CreateOrUpdateIssueNotifications(opts.IssueID, opts.CommentID, opts.NotificationAuthorID, opts.ReceiverID); err != nil { 48 47 log.Error("Was unable to create issue notification: %v", err) 49 48 } ··· 52 51 } 53 52 54 53 func (ns *notificationService) Run() { 55 - graceful.GetManager().RunWithShutdownFns(ns.issueQueue.Run) 54 + go graceful.GetManager().RunWithShutdownFns(ns.issueQueue.Run) 56 55 } 57 56 58 57 func (ns *notificationService) NotifyCreateIssueComment(ctx context.Context, doer *user_model.User, repo *repo_model.Repository,
+63
modules/queue/backoff.go
··· 1 + // Copyright 2023 The Gitea Authors. All rights reserved. 2 + // SPDX-License-Identifier: MIT 3 + 4 + package queue 5 + 6 + import ( 7 + "context" 8 + "time" 9 + ) 10 + 11 + const ( 12 + backoffBegin = 50 * time.Millisecond 13 + backoffUpper = 2 * time.Second 14 + ) 15 + 16 + type ( 17 + backoffFuncRetErr[T any] func() (retry bool, ret T, err error) 18 + backoffFuncErr func() (retry bool, err error) 19 + ) 20 + 21 + func backoffRetErr[T any](ctx context.Context, begin, upper time.Duration, end <-chan time.Time, fn backoffFuncRetErr[T]) (ret T, err error) { 22 + d := begin 23 + for { 24 + // check whether the context has been cancelled or has reached the deadline, return early 25 + select { 26 + case <-ctx.Done(): 27 + return ret, ctx.Err() 28 + case <-end: 29 + return ret, context.DeadlineExceeded 30 + default: 31 + } 32 + 33 + // call the target function 34 + retry, ret, err := fn() 35 + if err != nil { 36 + return ret, err 37 + } 38 + if !retry { 39 + return ret, nil 40 + } 41 + 42 + // wait for a while before retrying, and also respect the context & deadline 43 + select { 44 + case <-ctx.Done(): 45 + return ret, ctx.Err() 46 + case <-time.After(d): 47 + d *= 2 48 + if d > upper { 49 + d = upper 50 + } 51 + case <-end: 52 + return ret, context.DeadlineExceeded 53 + } 54 + } 55 + } 56 + 57 + func backoffErr(ctx context.Context, begin, upper time.Duration, end <-chan time.Time, fn backoffFuncErr) error { 58 + _, err := backoffRetErr(ctx, begin, upper, end, func() (retry bool, ret any, err error) { 59 + retry, err = fn() 60 + return retry, nil, err 61 + }) 62 + return err 63 + }
+42
modules/queue/base.go
··· 1 + // Copyright 2023 The Gitea Authors. All rights reserved. 2 + // SPDX-License-Identifier: MIT 3 + 4 + package queue 5 + 6 + import ( 7 + "context" 8 + "time" 9 + ) 10 + 11 + var pushBlockTime = 5 * time.Second 12 + 13 + type baseQueue interface { 14 + PushItem(ctx context.Context, data []byte) error 15 + PopItem(ctx context.Context) ([]byte, error) 16 + HasItem(ctx context.Context, data []byte) (bool, error) 17 + Len(ctx context.Context) (int, error) 18 + Close() error 19 + RemoveAll(ctx context.Context) error 20 + } 21 + 22 + func popItemByChan(ctx context.Context, popItemFn func(ctx context.Context) ([]byte, error)) (chanItem chan []byte, chanErr chan error) { 23 + chanItem = make(chan []byte) 24 + chanErr = make(chan error) 25 + go func() { 26 + for { 27 + it, err := popItemFn(ctx) 28 + if err != nil { 29 + close(chanItem) 30 + chanErr <- err 31 + return 32 + } 33 + if it == nil { 34 + close(chanItem) 35 + close(chanErr) 36 + return 37 + } 38 + chanItem <- it 39 + } 40 + }() 41 + return chanItem, chanErr 42 + }
+123
modules/queue/base_channel.go
```go
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"
	"errors"
	"sync"
	"time"

	"code.gitea.io/gitea/modules/container"
)

var errChannelClosed = errors.New("channel is closed")

type baseChannel struct {
	c   chan []byte
	set container.Set[string]
	mu  sync.Mutex

	isUnique bool
}

var _ baseQueue = (*baseChannel)(nil)

func newBaseChannelGeneric(cfg *BaseConfig, unique bool) (baseQueue, error) {
	q := &baseChannel{c: make(chan []byte, cfg.Length), isUnique: unique}
	if unique {
		q.set = container.Set[string]{}
	}
	return q, nil
}

func newBaseChannelSimple(cfg *BaseConfig) (baseQueue, error) {
	return newBaseChannelGeneric(cfg, false)
}

func newBaseChannelUnique(cfg *BaseConfig) (baseQueue, error) {
	return newBaseChannelGeneric(cfg, true)
}

func (q *baseChannel) PushItem(ctx context.Context, data []byte) error {
	if q.c == nil {
		return errChannelClosed
	}

	if q.isUnique {
		q.mu.Lock()
		has := q.set.Contains(string(data))
		q.mu.Unlock()
		if has {
			return ErrAlreadyInQueue
		}
	}

	select {
	case q.c <- data:
		if q.isUnique {
			q.mu.Lock()
			q.set.Add(string(data))
			q.mu.Unlock()
		}
		return nil
	case <-time.After(pushBlockTime):
		return context.DeadlineExceeded
	case <-ctx.Done():
		return ctx.Err()
	}
}

func (q *baseChannel) PopItem(ctx context.Context) ([]byte, error) {
	select {
	case data, ok := <-q.c:
		if !ok {
			return nil, errChannelClosed
		}
		q.mu.Lock()
		q.set.Remove(string(data))
		q.mu.Unlock()
		return data, nil
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}

func (q *baseChannel) HasItem(ctx context.Context, data []byte) (bool, error) {
	q.mu.Lock()
	defer q.mu.Unlock()

	return q.set.Contains(string(data)), nil
}

func (q *baseChannel) Len(ctx context.Context) (int, error) {
	q.mu.Lock()
	defer q.mu.Unlock()

	if q.c == nil {
		return 0, errChannelClosed
	}

	return len(q.c), nil
}

func (q *baseChannel) Close() error {
	q.mu.Lock()
	defer q.mu.Unlock()

	close(q.c)
	q.set = container.Set[string]{}

	return nil
}

func (q *baseChannel) RemoveAll(ctx context.Context) error {
	q.mu.Lock()
	defer q.mu.Unlock()

	for q.c != nil && len(q.c) > 0 {
		<-q.c
	}
	return nil
}
```
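The channel-backed unique queue combines two structures: a buffered channel for ordering and blocking, and a set for deduplication, with one mutex keeping them consistent. A toy version of that idea, stripped of the package's config and context plumbing (names are illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errDuplicate = errors.New("already in queue")

// uniqueChan pairs a buffered channel (ordering) with a set (dedup),
// guarded by one mutex, mirroring the baseChannel unique-queue design.
type uniqueChan struct {
	c   chan string
	set map[string]struct{}
	mu  sync.Mutex
}

func newUniqueChan(n int) *uniqueChan {
	return &uniqueChan{c: make(chan string, n), set: map[string]struct{}{}}
}

func (q *uniqueChan) Push(s string) error {
	q.mu.Lock()
	defer q.mu.Unlock()
	if _, ok := q.set[s]; ok {
		return errDuplicate
	}
	q.set[s] = struct{}{}
	q.c <- s // buffered; a real queue would time out instead of blocking here
	return nil
}

func (q *uniqueChan) Pop() string {
	s := <-q.c
	q.mu.Lock()
	delete(q.set, s) // once popped, the same item may be pushed again
	q.mu.Unlock()
	return s
}

func main() {
	q := newUniqueChan(10)
	fmt.Println(q.Push("a"), q.Push("a")) // second push is rejected as a duplicate
	fmt.Println(q.Pop())                  // prints "a"
}
```

Removing the item from the set on pop is what gives unique queues their "at most one pending copy" semantics: re-pushing is allowed as soon as the previous copy has been handed to a worker.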
+11
modules/queue/base_channel_test.go
··· 1 + // Copyright 2023 The Gitea Authors. All rights reserved. 2 + // SPDX-License-Identifier: MIT 3 + 4 + package queue 5 + 6 + import "testing" 7 + 8 + func TestBaseChannel(t *testing.T) { 9 + testQueueBasic(t, newBaseChannelSimple, &BaseConfig{ManagedName: "baseChannel", Length: 10}, false) 10 + testQueueBasic(t, newBaseChannelUnique, &BaseConfig{ManagedName: "baseChannel", Length: 10}, true) 11 + }
+38
modules/queue/base_dummy.go
··· 1 + // Copyright 2023 The Gitea Authors. All rights reserved. 2 + // SPDX-License-Identifier: MIT 3 + 4 + package queue 5 + 6 + import "context" 7 + 8 + type baseDummy struct{} 9 + 10 + var _ baseQueue = (*baseDummy)(nil) 11 + 12 + func newBaseDummy(cfg *BaseConfig, unique bool) (baseQueue, error) { 13 + return &baseDummy{}, nil 14 + } 15 + 16 + func (q *baseDummy) PushItem(ctx context.Context, data []byte) error { 17 + return nil 18 + } 19 + 20 + func (q *baseDummy) PopItem(ctx context.Context) ([]byte, error) { 21 + return nil, nil 22 + } 23 + 24 + func (q *baseDummy) Len(ctx context.Context) (int, error) { 25 + return 0, nil 26 + } 27 + 28 + func (q *baseDummy) HasItem(ctx context.Context, data []byte) (bool, error) { 29 + return false, nil 30 + } 31 + 32 + func (q *baseDummy) Close() error { 33 + return nil 34 + } 35 + 36 + func (q *baseDummy) RemoveAll(ctx context.Context) error { 37 + return nil 38 + }
+72
modules/queue/base_levelqueue.go
··· 1 + // Copyright 2023 The Gitea Authors. All rights reserved. 2 + // SPDX-License-Identifier: MIT 3 + 4 + package queue 5 + 6 + import ( 7 + "context" 8 + 9 + "code.gitea.io/gitea/modules/nosql" 10 + 11 + "gitea.com/lunny/levelqueue" 12 + ) 13 + 14 + type baseLevelQueue struct { 15 + internal *levelqueue.Queue 16 + conn string 17 + cfg *BaseConfig 18 + } 19 + 20 + var _ baseQueue = (*baseLevelQueue)(nil) 21 + 22 + func newBaseLevelQueueGeneric(cfg *BaseConfig, unique bool) (baseQueue, error) { 23 + if unique { 24 + return newBaseLevelQueueUnique(cfg) 25 + } 26 + return newBaseLevelQueueSimple(cfg) 27 + } 28 + 29 + func newBaseLevelQueueSimple(cfg *BaseConfig) (baseQueue, error) { 30 + conn, db, err := prepareLevelDB(cfg) 31 + if err != nil { 32 + return nil, err 33 + } 34 + q := &baseLevelQueue{conn: conn, cfg: cfg} 35 + q.internal, err = levelqueue.NewQueue(db, []byte(cfg.QueueFullName), false) 36 + if err != nil { 37 + return nil, err 38 + } 39 + 40 + return q, nil 41 + } 42 + 43 + func (q *baseLevelQueue) PushItem(ctx context.Context, data []byte) error { 44 + return baseLevelQueueCommon(q.cfg, q.internal, nil).PushItem(ctx, data) 45 + } 46 + 47 + func (q *baseLevelQueue) PopItem(ctx context.Context) ([]byte, error) { 48 + return baseLevelQueueCommon(q.cfg, q.internal, nil).PopItem(ctx) 49 + } 50 + 51 + func (q *baseLevelQueue) HasItem(ctx context.Context, data []byte) (bool, error) { 52 + return false, nil 53 + } 54 + 55 + func (q *baseLevelQueue) Len(ctx context.Context) (int, error) { 56 + return int(q.internal.Len()), nil 57 + } 58 + 59 + func (q *baseLevelQueue) Close() error { 60 + err := q.internal.Close() 61 + _ = nosql.GetManager().CloseLevelDB(q.conn) 62 + return err 63 + } 64 + 65 + func (q *baseLevelQueue) RemoveAll(ctx context.Context) error { 66 + for q.internal.Len() > 0 { 67 + if _, err := q.internal.LPop(); err != nil { 68 + return err 69 + } 70 + } 71 + return nil 72 + }
+92
modules/queue/base_levelqueue_common.go
```go
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"
	"fmt"
	"path/filepath"
	"strings"
	"sync"
	"time"

	"code.gitea.io/gitea/modules/nosql"

	"gitea.com/lunny/levelqueue"
	"github.com/syndtr/goleveldb/leveldb"
)

type baseLevelQueuePushPoper interface {
	RPush(data []byte) error
	LPop() ([]byte, error)
	Len() int64
}

type baseLevelQueueCommonImpl struct {
	length   int
	internal baseLevelQueuePushPoper
	mu       *sync.Mutex
}

func (q *baseLevelQueueCommonImpl) PushItem(ctx context.Context, data []byte) error {
	return backoffErr(ctx, backoffBegin, backoffUpper, time.After(pushBlockTime), func() (retry bool, err error) {
		if q.mu != nil {
			q.mu.Lock()
			defer q.mu.Unlock()
		}

		cnt := int(q.internal.Len())
		if cnt >= q.length {
			return true, nil
		}
		retry, err = false, q.internal.RPush(data)
		if err == levelqueue.ErrAlreadyInQueue {
			err = ErrAlreadyInQueue
		}
		return retry, err
	})
}

func (q *baseLevelQueueCommonImpl) PopItem(ctx context.Context) ([]byte, error) {
	return backoffRetErr(ctx, backoffBegin, backoffUpper, infiniteTimerC, func() (retry bool, data []byte, err error) {
		if q.mu != nil {
			q.mu.Lock()
			defer q.mu.Unlock()
		}

		data, err = q.internal.LPop()
		if err == levelqueue.ErrNotFound {
			return true, nil, nil
		}
		if err != nil {
			return false, nil, err
		}
		return false, data, nil
	})
}

func baseLevelQueueCommon(cfg *BaseConfig, internal baseLevelQueuePushPoper, mu *sync.Mutex) *baseLevelQueueCommonImpl {
	return &baseLevelQueueCommonImpl{length: cfg.Length, internal: internal, mu: mu}
}

func prepareLevelDB(cfg *BaseConfig) (conn string, db *leveldb.DB, err error) {
	if cfg.ConnStr == "" { // use data dir as conn str
		if !filepath.IsAbs(cfg.DataFullDir) {
			return "", nil, fmt.Errorf("invalid leveldb data dir (not absolute): %q", cfg.DataFullDir)
		}
		conn = cfg.DataFullDir
	} else {
		if !strings.HasPrefix(cfg.ConnStr, "leveldb://") {
			return "", nil, fmt.Errorf("invalid leveldb connection string: %q", cfg.ConnStr)
		}
		conn = cfg.ConnStr
	}
	for i := 0; i < 10; i++ {
		if db, err = nosql.GetManager().GetLevelDB(conn); err == nil {
			break
		}
		time.Sleep(1 * time.Second)
	}
	return conn, db, err
}
```
+23
modules/queue/base_levelqueue_test.go
··· 1 + // Copyright 2023 The Gitea Authors. All rights reserved. 2 + // SPDX-License-Identifier: MIT 3 + 4 + package queue 5 + 6 + import ( 7 + "testing" 8 + 9 + "code.gitea.io/gitea/modules/setting" 10 + 11 + "github.com/stretchr/testify/assert" 12 + ) 13 + 14 + func TestBaseLevelDB(t *testing.T) { 15 + _, err := newBaseLevelQueueGeneric(&BaseConfig{ConnStr: "redis://"}, false) 16 + assert.ErrorContains(t, err, "invalid leveldb connection string") 17 + 18 + _, err = newBaseLevelQueueGeneric(&BaseConfig{DataFullDir: "relative"}, false) 19 + assert.ErrorContains(t, err, "invalid leveldb data dir") 20 + 21 + testQueueBasic(t, newBaseLevelQueueSimple, toBaseConfig("baseLevelQueue", setting.QueueSettings{Datadir: t.TempDir() + "/queue-test", Length: 10}), false) 22 + testQueueBasic(t, newBaseLevelQueueUnique, toBaseConfig("baseLevelQueueUnique", setting.QueueSettings{ConnStr: "leveldb://" + t.TempDir() + "/queue-test", Length: 10}), true) 23 + }
+93
modules/queue/base_levelqueue_unique.go
```go
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"
	"sync"
	"unsafe"

	"code.gitea.io/gitea/modules/nosql"

	"gitea.com/lunny/levelqueue"
	"github.com/syndtr/goleveldb/leveldb"
)

type baseLevelQueueUnique struct {
	internal *levelqueue.UniqueQueue
	conn     string
	cfg      *BaseConfig

	mu sync.Mutex // the levelqueue.UniqueQueue is not thread-safe, there is no mutex protecting the underlying queue&set together
}

var _ baseQueue = (*baseLevelQueueUnique)(nil)

func newBaseLevelQueueUnique(cfg *BaseConfig) (baseQueue, error) {
	conn, db, err := prepareLevelDB(cfg)
	if err != nil {
		return nil, err
	}
	q := &baseLevelQueueUnique{conn: conn, cfg: cfg}
	q.internal, err = levelqueue.NewUniqueQueue(db, []byte(cfg.QueueFullName), []byte(cfg.SetFullName), false)
	if err != nil {
		return nil, err
	}

	return q, nil
}

func (q *baseLevelQueueUnique) PushItem(ctx context.Context, data []byte) error {
	return baseLevelQueueCommon(q.cfg, q.internal, &q.mu).PushItem(ctx, data)
}

func (q *baseLevelQueueUnique) PopItem(ctx context.Context) ([]byte, error) {
	return baseLevelQueueCommon(q.cfg, q.internal, &q.mu).PopItem(ctx)
}

func (q *baseLevelQueueUnique) HasItem(ctx context.Context, data []byte) (bool, error) {
	q.mu.Lock()
	defer q.mu.Unlock()
	return q.internal.Has(data)
}

func (q *baseLevelQueueUnique) Len(ctx context.Context) (int, error) {
	q.mu.Lock()
	defer q.mu.Unlock()
	return int(q.internal.Len()), nil
}

func (q *baseLevelQueueUnique) Close() error {
	q.mu.Lock()
	defer q.mu.Unlock()
	err := q.internal.Close()
	_ = nosql.GetManager().CloseLevelDB(q.conn)
	return err
}

func (q *baseLevelQueueUnique) RemoveAll(ctx context.Context) error {
	q.mu.Lock()
	defer q.mu.Unlock()

	type levelUniqueQueue struct {
		q   *levelqueue.Queue
		set *levelqueue.Set
		db  *leveldb.DB
	}
	lq := (*levelUniqueQueue)(unsafe.Pointer(q.internal))

	members, err := lq.set.Members()
	if err != nil {
		return err // seriously corrupted
	}
	for _, v := range members {
		_, _ = lq.set.Remove(v)
	}
	for lq.q.Len() > 0 {
		if _, err = lq.q.LPop(); err != nil {
			return err
		}
	}
	return nil
}
```
+135
modules/queue/base_redis.go
```go
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"
	"sync"
	"time"

	"code.gitea.io/gitea/modules/graceful"
	"code.gitea.io/gitea/modules/log"
	"code.gitea.io/gitea/modules/nosql"

	"github.com/redis/go-redis/v9"
)

type baseRedis struct {
	client   redis.UniversalClient
	isUnique bool
	cfg      *BaseConfig

	mu sync.Mutex // the old implementation is not thread-safe, the queue operation and set operation should be protected together
}

var _ baseQueue = (*baseRedis)(nil)

func newBaseRedisGeneric(cfg *BaseConfig, unique bool) (baseQueue, error) {
	client := nosql.GetManager().GetRedisClient(cfg.ConnStr)

	var err error
	for i := 0; i < 10; i++ {
		err = client.Ping(graceful.GetManager().ShutdownContext()).Err()
		if err == nil {
			break
		}
		log.Warn("Redis is not ready, waiting for 1 second to retry: %v", err)
		time.Sleep(time.Second)
	}
	if err != nil {
		return nil, err
	}

	return &baseRedis{cfg: cfg, client: client, isUnique: unique}, nil
}

func newBaseRedisSimple(cfg *BaseConfig) (baseQueue, error) {
	return newBaseRedisGeneric(cfg, false)
}

func newBaseRedisUnique(cfg *BaseConfig) (baseQueue, error) {
	return newBaseRedisGeneric(cfg, true)
}

func (q *baseRedis) PushItem(ctx context.Context, data []byte) error {
	return backoffErr(ctx, backoffBegin, backoffUpper, time.After(pushBlockTime), func() (retry bool, err error) {
		q.mu.Lock()
		defer q.mu.Unlock()

		cnt, err := q.client.LLen(ctx, q.cfg.QueueFullName).Result()
		if err != nil {
			return false, err
		}
		if int(cnt) >= q.cfg.Length {
			return true, nil
		}

		if q.isUnique {
			added, err := q.client.SAdd(ctx, q.cfg.SetFullName, data).Result()
			if err != nil {
				return false, err
			}
			if added == 0 {
				return false, ErrAlreadyInQueue
			}
		}
		return false, q.client.RPush(ctx, q.cfg.QueueFullName, data).Err()
	})
}

func (q *baseRedis) PopItem(ctx context.Context) ([]byte, error) {
	return backoffRetErr(ctx, backoffBegin, backoffUpper, infiniteTimerC, func() (retry bool, data []byte, err error) {
		q.mu.Lock()
		defer q.mu.Unlock()

		data, err = q.client.LPop(ctx, q.cfg.QueueFullName).Bytes()
		if err == redis.Nil {
			return true, nil, nil
		}
		if err != nil {
			return true, nil, nil
		}
		if q.isUnique {
			// the data has been popped, even if there is any error we can't do anything
			_ = q.client.SRem(ctx, q.cfg.SetFullName, data).Err()
		}
		return false, data, err
	})
}

func (q *baseRedis) HasItem(ctx context.Context, data []byte) (bool, error) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if !q.isUnique {
		return false, nil
	}
	return q.client.SIsMember(ctx, q.cfg.SetFullName, data).Result()
}

func (q *baseRedis) Len(ctx context.Context) (int, error) {
	q.mu.Lock()
	defer q.mu.Unlock()
	cnt, err := q.client.LLen(ctx, q.cfg.QueueFullName).Result()
	return int(cnt), err
}

func (q *baseRedis) Close() error {
	q.mu.Lock()
	defer q.mu.Unlock()
	return q.client.Close()
}

func (q *baseRedis) RemoveAll(ctx context.Context) error {
	q.mu.Lock()
	defer q.mu.Unlock()
	c1 := q.client.Del(ctx, q.cfg.QueueFullName)
	c2 := q.client.Del(ctx, q.cfg.SetFullName)
	if c1.Err() != nil {
		return c1.Err()
	}
	if c2.Err() != nil {
		return c2.Err()
	}
	return nil // actually, checking errors doesn't make sense here because the state could be out-of-sync
}
```
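The Redis push path above encodes three outcomes: retry later when the list is at capacity, reject duplicates when `SADD` reports the member already existed, and otherwise `RPUSH`. The decision logic can be sketched without a Redis server, using an in-memory list and set (hypothetical helper for illustration only):

```go
package main

import (
	"errors"
	"fmt"
)

var (
	errFull = errors.New("queue full, retry later")
	errDup  = errors.New("already in queue")
)

// cappedUniquePush mirrors the Redis push logic: reject when the list is at
// capacity (so the caller backs off and retries), and use set membership
// (SADD's return value in Redis) to refuse duplicates before appending.
func cappedUniquePush(list *[]string, set map[string]bool, capacity int, v string) error {
	if len(*list) >= capacity {
		return errFull
	}
	if set[v] {
		return errDup
	}
	set[v] = true
	*list = append(*list, v)
	return nil
}

func main() {
	list, set := []string{}, map[string]bool{}
	fmt.Println(cappedUniquePush(&list, set, 2, "a")) // <nil>
	fmt.Println(cappedUniquePush(&list, set, 2, "a")) // already in queue
	fmt.Println(cappedUniquePush(&list, set, 2, "b")) // <nil>
	fmt.Println(cappedUniquePush(&list, set, 2, "c")) // queue full, retry later
}
```

In the real implementation the "full" case does not return an error to the caller directly: `backoffErr` converts it into a timed retry, and only after `pushBlockTime` elapses does the push fail with a deadline error.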
+71
modules/queue/base_redis_test.go
```go
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"
	"os"
	"os/exec"
	"testing"
	"time"

	"code.gitea.io/gitea/modules/nosql"
	"code.gitea.io/gitea/modules/setting"

	"github.com/stretchr/testify/assert"
)

func waitRedisReady(conn string, dur time.Duration) (ready bool) {
	ctxTimed, cancel := context.WithTimeout(context.Background(), time.Second*5)
	defer cancel()
	for t := time.Now(); ; time.Sleep(50 * time.Millisecond) {
		ret := nosql.GetManager().GetRedisClient(conn).Ping(ctxTimed)
		if ret.Err() == nil {
			return true
		}
		if time.Since(t) > dur {
			return false
		}
	}
}

func redisServerCmd(t *testing.T) *exec.Cmd {
	redisServerProg, err := exec.LookPath("redis-server")
	if err != nil {
		return nil
	}
	c := &exec.Cmd{
		Path:   redisServerProg,
		Args:   []string{redisServerProg, "--bind", "127.0.0.1", "--port", "6379"},
		Dir:    t.TempDir(),
		Stdin:  os.Stdin,
		Stdout: os.Stdout,
		Stderr: os.Stderr,
	}
	return c
}

func TestBaseRedis(t *testing.T) {
	var redisServer *exec.Cmd
	defer func() {
		if redisServer != nil {
			_ = redisServer.Process.Signal(os.Interrupt)
			_ = redisServer.Wait()
		}
	}()
	if !waitRedisReady("redis://127.0.0.1:6379/0", 0) {
		redisServer = redisServerCmd(t)
		if redisServer == nil && os.Getenv("CI") == "" {
			t.Skip("redis-server not found")
			return
		}
		assert.NoError(t, redisServer.Start())
		if !assert.True(t, waitRedisReady("redis://127.0.0.1:6379/0", 5*time.Second), "start redis-server") {
			return
		}
	}

	testQueueBasic(t, newBaseRedisSimple, toBaseConfig("baseRedis", setting.QueueSettings{Length: 10}), false)
	testQueueBasic(t, newBaseRedisUnique, toBaseConfig("baseRedisUnique", setting.QueueSettings{Length: 10}), true)
}
```
modules/queue/base_test.go (+140, new file)
```go
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"
	"fmt"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

func testQueueBasic(t *testing.T, newFn func(cfg *BaseConfig) (baseQueue, error), cfg *BaseConfig, isUnique bool) {
	t.Run(fmt.Sprintf("testQueueBasic-%s-unique:%v", cfg.ManagedName, isUnique), func(t *testing.T) {
		q, err := newFn(cfg)
		assert.NoError(t, err)

		ctx := context.Background()
		_ = q.RemoveAll(ctx)
		cnt, err := q.Len(ctx)
		assert.NoError(t, err)
		assert.EqualValues(t, 0, cnt)

		// push the first item
		err = q.PushItem(ctx, []byte("foo"))
		assert.NoError(t, err)

		cnt, err = q.Len(ctx)
		assert.NoError(t, err)
		assert.EqualValues(t, 1, cnt)

		// push a duplicate item
		err = q.PushItem(ctx, []byte("foo"))
		if !isUnique {
			assert.NoError(t, err)
		} else {
			assert.ErrorIs(t, err, ErrAlreadyInQueue)
		}

		// check the duplicate item
		cnt, err = q.Len(ctx)
		assert.NoError(t, err)
		has, err := q.HasItem(ctx, []byte("foo"))
		assert.NoError(t, err)
		if !isUnique {
			assert.EqualValues(t, 2, cnt)
			assert.EqualValues(t, false, has) // non-unique queues don't check for duplicates
		} else {
			assert.EqualValues(t, 1, cnt)
			assert.EqualValues(t, true, has)
		}

		// push another item
		err = q.PushItem(ctx, []byte("bar"))
		assert.NoError(t, err)

		// pop the first item (and the duplicate if non-unique)
		it, err := q.PopItem(ctx)
		assert.NoError(t, err)
		assert.EqualValues(t, "foo", string(it))

		if !isUnique {
			it, err = q.PopItem(ctx)
			assert.NoError(t, err)
			assert.EqualValues(t, "foo", string(it))
		}

		// pop another item
		it, err = q.PopItem(ctx)
		assert.NoError(t, err)
		assert.EqualValues(t, "bar", string(it))

		// pop an empty queue (timeout, cancel)
		ctxTimed, cancel := context.WithTimeout(ctx, 10*time.Millisecond)
		it, err = q.PopItem(ctxTimed)
		assert.ErrorIs(t, err, context.DeadlineExceeded)
		assert.Nil(t, it)
		cancel()

		ctxTimed, cancel = context.WithTimeout(ctx, 10*time.Millisecond)
		cancel()
		it, err = q.PopItem(ctxTimed)
		assert.ErrorIs(t, err, context.Canceled)
		assert.Nil(t, it)

		// test blocking push if queue is full
		for i := 0; i < cfg.Length; i++ {
			err = q.PushItem(ctx, []byte(fmt.Sprintf("item-%d", i)))
			assert.NoError(t, err)
		}
		ctxTimed, cancel = context.WithTimeout(ctx, 10*time.Millisecond)
		err = q.PushItem(ctxTimed, []byte("item-full"))
		assert.ErrorIs(t, err, context.DeadlineExceeded)
		cancel()

		// test blocking push if queue is full (with custom pushBlockTime)
		oldPushBlockTime := pushBlockTime
		timeStart := time.Now()
		pushBlockTime = 30 * time.Millisecond
		err = q.PushItem(ctx, []byte("item-full"))
		assert.ErrorIs(t, err, context.DeadlineExceeded)
		assert.True(t, time.Since(timeStart) >= pushBlockTime*2/3)
		pushBlockTime = oldPushBlockTime

		// remove all
		cnt, err = q.Len(ctx)
		assert.NoError(t, err)
		assert.EqualValues(t, cfg.Length, cnt)

		_ = q.RemoveAll(ctx)

		cnt, err = q.Len(ctx)
		assert.NoError(t, err)
		assert.EqualValues(t, 0, cnt)
	})
}

func TestBaseDummy(t *testing.T) {
	q, err := newBaseDummy(&BaseConfig{}, true)
	assert.NoError(t, err)

	ctx := context.Background()
	assert.NoError(t, q.PushItem(ctx, []byte("foo")))

	cnt, err := q.Len(ctx)
	assert.NoError(t, err)
	assert.EqualValues(t, 0, cnt)

	has, err := q.HasItem(ctx, []byte("foo"))
	assert.NoError(t, err)
	assert.False(t, has)

	it, err := q.PopItem(ctx)
	assert.NoError(t, err)
	assert.Nil(t, it)

	assert.NoError(t, q.RemoveAll(ctx))
}
```
modules/queue/bytefifo.go (-69, file removed)
```go
// Copyright 2020 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import "context"

// ByteFIFO defines a FIFO that takes a byte array
type ByteFIFO interface {
	// Len returns the length of the fifo
	Len(ctx context.Context) int64
	// PushFunc pushes data to the end of the fifo and calls the callback if it is added
	PushFunc(ctx context.Context, data []byte, fn func() error) error
	// Pop pops data from the start of the fifo
	Pop(ctx context.Context) ([]byte, error)
	// Close this fifo
	Close() error
	// PushBack pushes data back to the top of the fifo
	PushBack(ctx context.Context, data []byte) error
}

// UniqueByteFIFO defines a FIFO that Uniques its contents
type UniqueByteFIFO interface {
	ByteFIFO
	// Has returns whether the fifo contains this data
	Has(ctx context.Context, data []byte) (bool, error)
}

var _ ByteFIFO = &DummyByteFIFO{}

// DummyByteFIFO represents a dummy fifo
type DummyByteFIFO struct{}

// PushFunc returns nil
func (*DummyByteFIFO) PushFunc(ctx context.Context, data []byte, fn func() error) error {
	return nil
}

// Pop returns nil
func (*DummyByteFIFO) Pop(ctx context.Context) ([]byte, error) {
	return []byte{}, nil
}

// Close returns nil
func (*DummyByteFIFO) Close() error {
	return nil
}

// Len is always 0
func (*DummyByteFIFO) Len(ctx context.Context) int64 {
	return 0
}

// PushBack pushes data back to the top of the fifo
func (*DummyByteFIFO) PushBack(ctx context.Context, data []byte) error {
	return nil
}

var _ UniqueByteFIFO = &DummyUniqueByteFIFO{}

// DummyUniqueByteFIFO represents a dummy unique fifo
type DummyUniqueByteFIFO struct {
	DummyByteFIFO
}

// Has always returns false
func (*DummyUniqueByteFIFO) Has(ctx context.Context, data []byte) (bool, error) {
	return false, nil
}
```
modules/queue/config.go (+36, new file)
```go
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"code.gitea.io/gitea/modules/setting"
)

type BaseConfig struct {
	ManagedName string
	DataFullDir string // the caller must prepare an absolute path

	ConnStr string
	Length  int

	QueueFullName, SetFullName string
}

func toBaseConfig(managedName string, queueSetting setting.QueueSettings) *BaseConfig {
	baseConfig := &BaseConfig{
		ManagedName: managedName,
		DataFullDir: queueSetting.Datadir,

		ConnStr: queueSetting.ConnStr,
		Length:  queueSetting.Length,
	}

	// queue name and set name
	baseConfig.QueueFullName = managedName + queueSetting.QueueName
	baseConfig.SetFullName = baseConfig.QueueFullName + queueSetting.SetName
	if baseConfig.SetFullName == baseConfig.QueueFullName {
		baseConfig.SetFullName += "_unique"
	}
	return baseConfig
}
```
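`toBaseConfig` derives the queue and set names by plain concatenation, falling back to a `_unique` suffix when an empty `SET_NAME` would make the set name collide with the queue name. A standalone sketch of just that naming logic (`deriveNames` is a hypothetical helper mirroring the code above, not part of the PR):

```go
package main

import "fmt"

// deriveNames mirrors the naming logic in toBaseConfig: the configured
// queue name is appended to the managed name, and the set name (used by
// unique queues) falls back to "<queue>_unique" when SET_NAME is empty.
func deriveNames(managedName, queueName, setName string) (queueFull, setFull string) {
	queueFull = managedName + queueName
	setFull = queueFull + setName
	if setFull == queueFull {
		setFull += "_unique"
	}
	return queueFull, setFull
}

func main() {
	// default settings: QUEUE_NAME = "_queue", SET_NAME empty
	q, s := deriveNames("issue_indexer", "_queue", "")
	fmt.Println(q, s) // issue_indexer_queue issue_indexer_queue_unique

	// explicit SET_NAME, as in the [queue.sub] example in the tests below
	q, s = deriveNames("sub", "_q2", "_u2")
	fmt.Println(q, s) // sub_q2 sub_q2_u2
}
```

The `manager_test.go` diff later in this PR asserts exactly these derived names (`default_queue`/`default_queue_unique`, `sub_q2`/`sub_q2_u2`).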
modules/queue/helper.go (-91, file removed)
```go
// Copyright 2020 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"reflect"

	"code.gitea.io/gitea/modules/json"
)

// Mappable represents an interface that can MapTo another interface
type Mappable interface {
	MapTo(v interface{}) error
}

// toConfig will attempt to convert a given configuration cfg into the provided exemplar type.
//
// It will tolerate the cfg being passed as a []byte or string of a json representation of the
// exemplar or the correct type of the exemplar itself
func toConfig(exemplar, cfg interface{}) (interface{}, error) {
	// First of all check if we've got the same type as the exemplar - if so it's all fine.
	if reflect.TypeOf(cfg).AssignableTo(reflect.TypeOf(exemplar)) {
		return cfg, nil
	}

	// Now if not - does it provide a MapTo function we can try?
	if mappable, ok := cfg.(Mappable); ok {
		newVal := reflect.New(reflect.TypeOf(exemplar))
		if err := mappable.MapTo(newVal.Interface()); err == nil {
			return newVal.Elem().Interface(), nil
		}
		// MapTo has failed us ... let's try the json route ...
	}

	// OK we've been passed a byte array right?
	configBytes, ok := cfg.([]byte)
	if !ok {
		// oh ... it's a string then?
		var configStr string

		configStr, ok = cfg.(string)
		configBytes = []byte(configStr)
	}
	if !ok {
		// hmm ... can we marshal it to json?
		var err error
		configBytes, err = json.Marshal(cfg)
		ok = err == nil
	}
	if !ok {
		// no ... we've tried hard enough at this point - throw an error!
		return nil, ErrInvalidConfiguration{cfg: cfg}
	}

	// OK unmarshal the byte array into a new copy of the exemplar
	newVal := reflect.New(reflect.TypeOf(exemplar))
	if err := json.Unmarshal(configBytes, newVal.Interface()); err != nil {
		// If we can't unmarshal it then return an error!
		return nil, ErrInvalidConfiguration{cfg: cfg, err: err}
	}
	return newVal.Elem().Interface(), nil
}

// unmarshalAs will attempt to unmarshal provided bytes as the provided exemplar
func unmarshalAs(bs []byte, exemplar interface{}) (data Data, err error) {
	if exemplar != nil {
		t := reflect.TypeOf(exemplar)
		n := reflect.New(t)
		ne := n.Elem()
		err = json.Unmarshal(bs, ne.Addr().Interface())
		data = ne.Interface().(Data)
	} else {
		err = json.Unmarshal(bs, &data)
	}
	return data, err
}

// assignableTo will check if provided data is assignable to the same type as the exemplar
// if the provided exemplar is nil then it will always return true
func assignableTo(data Data, exemplar interface{}) bool {
	if exemplar == nil {
		return true
	}

	// Assert data is of same type as exemplar
	t := reflect.TypeOf(data)
	exemplarType := reflect.TypeOf(exemplar)

	return t.AssignableTo(exemplarType) && data != nil
}
```
modules/queue/manager.go (+63 -414)
```diff
 
 import (
 	"context"
-	"fmt"
-	"reflect"
-	"sort"
-	"strings"
 	"sync"
 	"time"
 
-	"code.gitea.io/gitea/modules/json"
 	"code.gitea.io/gitea/modules/log"
+	"code.gitea.io/gitea/modules/setting"
 )
 
-var manager *Manager
-
-// Manager is a queue manager
+// Manager is a manager for the queues created by "CreateXxxQueue" functions, these queues are called "managed queues".
 type Manager struct {
-	mutex sync.Mutex
-
-	counter int64
-	Queues  map[int64]*ManagedQueue
-}
-
-// ManagedQueue represents a working queue with a Pool of workers.
-//
-// Although a ManagedQueue should really represent a Queue this does not
-// necessarily have to be the case. This could be used to describe any queue.WorkerPool.
-type ManagedQueue struct {
-	mutex         sync.Mutex
-	QID           int64
-	Type          Type
-	Name          string
-	Configuration interface{}
-	ExemplarType  string
-	Managed       interface{}
-	counter       int64
-	PoolWorkers   map[int64]*PoolWorkers
-}
-
-// Flushable represents a pool or queue that is flushable
-type Flushable interface {
-	// Flush will add a flush worker to the pool - the worker should be autoregistered with the manager
-	Flush(time.Duration) error
-	// FlushWithContext is very similar to Flush
-	// NB: The worker will not be registered with the manager.
-	FlushWithContext(ctx context.Context) error
-	// IsEmpty will return if the managed pool is empty and has no work
-	IsEmpty() bool
-}
-
-// Pausable represents a pool or queue that is Pausable
-type Pausable interface {
-	// IsPaused will return if the pool or queue is paused
-	IsPaused() bool
-	// Pause will pause the pool or queue
-	Pause()
-	// Resume will resume the pool or queue
-	Resume()
-	// IsPausedIsResumed will return a bool indicating if the pool or queue is paused and a channel that will be closed when it is resumed
-	IsPausedIsResumed() (paused, resumed <-chan struct{})
-}
+	mu sync.Mutex
 
-// ManagedPool is a simple interface to get certain details from a worker pool
-type ManagedPool interface {
-	// AddWorkers adds a number of worker as group to the pool with the provided timeout. A CancelFunc is provided to cancel the group
-	AddWorkers(number int, timeout time.Duration) context.CancelFunc
-	// NumberOfWorkers returns the total number of workers in the pool
-	NumberOfWorkers() int
-	// MaxNumberOfWorkers returns the maximum number of workers the pool can dynamically grow to
-	MaxNumberOfWorkers() int
-	// SetMaxNumberOfWorkers sets the maximum number of workers the pool can dynamically grow to
-	SetMaxNumberOfWorkers(int)
-	// BoostTimeout returns the current timeout for worker groups created during a boost
-	BoostTimeout() time.Duration
-	// BlockTimeout returns the timeout the internal channel can block for before a boost would occur
-	BlockTimeout() time.Duration
-	// BoostWorkers sets the number of workers to be created during a boost
-	BoostWorkers() int
-	// SetPoolSettings sets the user updatable settings for the pool
-	SetPoolSettings(maxNumberOfWorkers, boostWorkers int, timeout time.Duration)
-	// NumberInQueue returns the total number of items in the pool
-	NumberInQueue() int64
-	// Done returns a channel that will be closed when the Pool's baseCtx is closed
-	Done() <-chan struct{}
+	qidCounter int64
+	Queues     map[int64]ManagedWorkerPoolQueue
 }
 
-// ManagedQueueList implements the sort.Interface
-type ManagedQueueList []*ManagedQueue
+type ManagedWorkerPoolQueue interface {
+	GetName() string
+	GetType() string
+	GetItemTypeName() string
+	GetWorkerNumber() int
+	GetWorkerActiveNumber() int
+	GetWorkerMaxNumber() int
+	SetWorkerMaxNumber(num int)
+	GetQueueItemNumber() int
 
-// PoolWorkers represents a group of workers working on a queue
-type PoolWorkers struct {
-	PID        int64
-	Workers    int
-	Start      time.Time
-	Timeout    time.Time
-	HasTimeout bool
-	Cancel     context.CancelFunc
-	IsFlusher  bool
+	// FlushWithContext tries to make the handler process all items in the queue synchronously.
+	// It is for testing purpose only. It's not designed to be used in a cluster.
+	FlushWithContext(ctx context.Context, timeout time.Duration) error
 }
 
-// PoolWorkersList implements the sort.Interface for PoolWorkers
-type PoolWorkersList []*PoolWorkers
+var manager *Manager
 
 func init() {
-	_ = GetManager()
+	manager = &Manager{
+		Queues: make(map[int64]ManagedWorkerPoolQueue),
+	}
 }
 
-// GetManager returns a Manager and initializes one as singleton if there's none yet
 func GetManager() *Manager {
-	if manager == nil {
-		manager = &Manager{
-			Queues: make(map[int64]*ManagedQueue),
-		}
-	}
 	return manager
 }
 
-// Add adds a queue to this manager
-func (m *Manager) Add(managed interface{},
-	t Type,
-	configuration,
-	exemplar interface{},
-) int64 {
-	cfg, _ := json.Marshal(configuration)
-	mq := &ManagedQueue{
-		Type:          t,
-		Configuration: string(cfg),
-		ExemplarType:  reflect.TypeOf(exemplar).String(),
-		PoolWorkers:   make(map[int64]*PoolWorkers),
-		Managed:       managed,
-	}
-	m.mutex.Lock()
-	m.counter++
-	mq.QID = m.counter
-	mq.Name = fmt.Sprintf("queue-%d", mq.QID)
-	if named, ok := managed.(Named); ok {
-		name := named.Name()
-		if len(name) > 0 {
-			mq.Name = name
-		}
-	}
-	m.Queues[mq.QID] = mq
-	m.mutex.Unlock()
-	log.Trace("Queue Manager registered: %s (QID: %d)", mq.Name, mq.QID)
-	return mq.QID
+func (m *Manager) AddManagedQueue(managed ManagedWorkerPoolQueue) {
+	m.mu.Lock()
+	defer m.mu.Unlock()
+	m.qidCounter++
+	m.Queues[m.qidCounter] = managed
 }
 
-// Remove a queue from the Manager
-func (m *Manager) Remove(qid int64) {
-	m.mutex.Lock()
-	delete(m.Queues, qid)
-	m.mutex.Unlock()
-	log.Trace("Queue Manager removed: QID: %d", qid)
-}
-
-// GetManagedQueue by qid
-func (m *Manager) GetManagedQueue(qid int64) *ManagedQueue {
-	m.mutex.Lock()
-	defer m.mutex.Unlock()
+func (m *Manager) GetManagedQueue(qid int64) ManagedWorkerPoolQueue {
+	m.mu.Lock()
+	defer m.mu.Unlock()
 	return m.Queues[qid]
 }
 
-// FlushAll flushes all the flushable queues attached to this manager
-func (m *Manager) FlushAll(baseCtx context.Context, timeout time.Duration) error {
-	var ctx context.Context
-	var cancel context.CancelFunc
-	start := time.Now()
-	end := start
-	hasTimeout := false
-	if timeout > 0 {
-		ctx, cancel = context.WithTimeout(baseCtx, timeout)
-		end = start.Add(timeout)
-		hasTimeout = true
-	} else {
-		ctx, cancel = context.WithCancel(baseCtx)
-	}
-	defer cancel()
+func (m *Manager) ManagedQueues() map[int64]ManagedWorkerPoolQueue {
+	m.mu.Lock()
+	defer m.mu.Unlock()
 
-	for {
-		select {
-		case <-ctx.Done():
-			mqs := m.ManagedQueues()
-			nonEmptyQueues := []string{}
-			for _, mq := range mqs {
-				if !mq.IsEmpty() {
-					nonEmptyQueues = append(nonEmptyQueues, mq.Name)
-				}
-			}
-			if len(nonEmptyQueues) > 0 {
-				return fmt.Errorf("flush timeout with non-empty queues: %s", strings.Join(nonEmptyQueues, ", "))
-			}
-			return nil
-		default:
-		}
-		mqs := m.ManagedQueues()
-		log.Debug("Found %d Managed Queues", len(mqs))
-		wg := sync.WaitGroup{}
-		wg.Add(len(mqs))
-		allEmpty := true
-		for _, mq := range mqs {
-			if mq.IsEmpty() {
-				wg.Done()
-				continue
-			}
-			if pausable, ok := mq.Managed.(Pausable); ok {
-				// no point flushing paused queues
-				if pausable.IsPaused() {
-					wg.Done()
-					continue
-				}
-			}
-			if pool, ok := mq.Managed.(ManagedPool); ok {
-				// No point into flushing pools when their base's ctx is already done.
-				select {
-				case <-pool.Done():
-					wg.Done()
-					continue
-				default:
-				}
-			}
-
-			allEmpty = false
-			if flushable, ok := mq.Managed.(Flushable); ok {
-				log.Debug("Flushing (flushable) queue: %s", mq.Name)
-				go func(q *ManagedQueue) {
-					localCtx, localCtxCancel := context.WithCancel(ctx)
-					pid := q.RegisterWorkers(1, start, hasTimeout, end, localCtxCancel, true)
-					err := flushable.FlushWithContext(localCtx)
-					if err != nil && err != ctx.Err() {
-						cancel()
-					}
-					q.CancelWorkers(pid)
-					localCtxCancel()
-					wg.Done()
-				}(mq)
-			} else {
-				log.Debug("Queue: %s is non-empty but is not flushable", mq.Name)
-				wg.Done()
-			}
-		}
-		if allEmpty {
-			log.Debug("All queues are empty")
-			break
-		}
-		// Ensure there are always at least 100ms between loops but not more if we've actually been doing some flushing
-		// but don't delay cancellation here.
-		select {
-		case <-ctx.Done():
-		case <-time.After(100 * time.Millisecond):
-		}
-		wg.Wait()
+	queues := make(map[int64]ManagedWorkerPoolQueue, len(m.Queues))
+	for k, v := range m.Queues {
+		queues[k] = v
 	}
-	return nil
-}
-
-// ManagedQueues returns the managed queues
-func (m *Manager) ManagedQueues() []*ManagedQueue {
-	m.mutex.Lock()
-	mqs := make([]*ManagedQueue, 0, len(m.Queues))
-	for _, mq := range m.Queues {
-		mqs = append(mqs, mq)
-	}
-	m.mutex.Unlock()
-	sort.Sort(ManagedQueueList(mqs))
-	return mqs
-}
-
-// Workers returns the poolworkers
-func (q *ManagedQueue) Workers() []*PoolWorkers {
-	q.mutex.Lock()
-	workers := make([]*PoolWorkers, 0, len(q.PoolWorkers))
-	for _, worker := range q.PoolWorkers {
-		workers = append(workers, worker)
-	}
-	q.mutex.Unlock()
-	sort.Sort(PoolWorkersList(workers))
-	return workers
-}
-
-// RegisterWorkers registers workers to this queue
-func (q *ManagedQueue) RegisterWorkers(number int, start time.Time, hasTimeout bool, timeout time.Time, cancel context.CancelFunc, isFlusher bool) int64 {
-	q.mutex.Lock()
-	defer q.mutex.Unlock()
-	q.counter++
-	q.PoolWorkers[q.counter] = &PoolWorkers{
-		PID:        q.counter,
-		Workers:    number,
-		Start:      start,
-		Timeout:    timeout,
-		HasTimeout: hasTimeout,
-		Cancel:     cancel,
-		IsFlusher:  isFlusher,
-	}
-	return q.counter
-}
-
-// CancelWorkers cancels pooled workers with pid
-func (q *ManagedQueue) CancelWorkers(pid int64) {
-	q.mutex.Lock()
-	pw, ok := q.PoolWorkers[pid]
-	q.mutex.Unlock()
-	if !ok {
-		return
-	}
-	pw.Cancel()
-}
-
-// RemoveWorkers deletes pooled workers with pid
-func (q *ManagedQueue) RemoveWorkers(pid int64) {
-	q.mutex.Lock()
-	pw, ok := q.PoolWorkers[pid]
-	delete(q.PoolWorkers, pid)
-	q.mutex.Unlock()
-	if ok && pw.Cancel != nil {
-		pw.Cancel()
-	}
-}
-
-// AddWorkers adds workers to the queue if it has registered an add worker function
-func (q *ManagedQueue) AddWorkers(number int, timeout time.Duration) context.CancelFunc {
-	if pool, ok := q.Managed.(ManagedPool); ok {
-		// the cancel will be added to the pool workers description above
-		return pool.AddWorkers(number, timeout)
-	}
-	return nil
-}
-
-// Flushable returns true if the queue is flushable
-func (q *ManagedQueue) Flushable() bool {
-	_, ok := q.Managed.(Flushable)
-	return ok
-}
-
-// Flush flushes the queue with a timeout
-func (q *ManagedQueue) Flush(timeout time.Duration) error {
-	if flushable, ok := q.Managed.(Flushable); ok {
-		// the cancel will be added to the pool workers description above
-		return flushable.Flush(timeout)
-	}
-	return nil
-}
-
-// IsEmpty returns if the queue is empty
-func (q *ManagedQueue) IsEmpty() bool {
-	if flushable, ok := q.Managed.(Flushable); ok {
-		return flushable.IsEmpty()
-	}
-	return true
-}
-
-// Pausable returns whether the queue is Pausable
-func (q *ManagedQueue) Pausable() bool {
-	_, ok := q.Managed.(Pausable)
-	return ok
+	return queues
 }
 
-// Pause pauses the queue
-func (q *ManagedQueue) Pause() {
-	if pausable, ok := q.Managed.(Pausable); ok {
-		pausable.Pause()
+// FlushAll tries to make all managed queues process all items synchronously, until timeout or the queue is empty.
+// It is for testing purpose only. It's not designed to be used in a cluster.
+func (m *Manager) FlushAll(ctx context.Context, timeout time.Duration) error {
+	var finalErr error
+	qs := m.ManagedQueues()
+	for _, q := range qs {
+		if err := q.FlushWithContext(ctx, timeout); err != nil {
+			finalErr = err // TODO: in Go 1.20: errors.Join
+		}
 	}
+	return finalErr
 }
 
-// IsPaused reveals if the queue is paused
-func (q *ManagedQueue) IsPaused() bool {
-	if pausable, ok := q.Managed.(Pausable); ok {
-		return pausable.IsPaused()
-	}
-	return false
+// CreateSimpleQueue creates a simple queue from global setting config provider by name
+func CreateSimpleQueue[T any](name string, handler HandlerFuncT[T]) *WorkerPoolQueue[T] {
+	return createWorkerPoolQueue(name, setting.CfgProvider, handler, false)
 }
 
-// Resume resumes the queue
-func (q *ManagedQueue) Resume() {
-	if pausable, ok := q.Managed.(Pausable); ok {
-		pausable.Resume()
-	}
+// CreateUniqueQueue creates a unique queue from global setting config provider by name
+func CreateUniqueQueue[T any](name string, handler HandlerFuncT[T]) *WorkerPoolQueue[T] {
+	return createWorkerPoolQueue(name, setting.CfgProvider, handler, true)
 }
 
-// NumberOfWorkers returns the number of workers in the queue
-func (q *ManagedQueue) NumberOfWorkers() int {
-	if pool, ok := q.Managed.(ManagedPool); ok {
-		return pool.NumberOfWorkers()
+func createWorkerPoolQueue[T any](name string, cfgProvider setting.ConfigProvider, handler HandlerFuncT[T], unique bool) *WorkerPoolQueue[T] {
+	queueSetting, err := setting.GetQueueSettings(cfgProvider, name)
+	if err != nil {
+		log.Error("Failed to get queue settings for %q: %v", name, err)
+		return nil
 	}
-	return -1
-}
-
-// MaxNumberOfWorkers returns the maximum number of workers for the pool
-func (q *ManagedQueue) MaxNumberOfWorkers() int {
-	if pool, ok := q.Managed.(ManagedPool); ok {
-		return pool.MaxNumberOfWorkers()
+	w, err := NewWorkerPoolQueueBySetting(name, queueSetting, handler, unique)
+	if err != nil {
+		log.Error("Failed to create queue %q: %v", name, err)
+		return nil
 	}
-	return 0
-}
-
-// BoostWorkers returns the number of workers for a boost
-func (q *ManagedQueue) BoostWorkers() int {
-	if pool, ok := q.Managed.(ManagedPool); ok {
-		return pool.BoostWorkers()
-	}
-	return -1
-}
-
-// BoostTimeout returns the timeout of the next boost
-func (q *ManagedQueue) BoostTimeout() time.Duration {
-	if pool, ok := q.Managed.(ManagedPool); ok {
-		return pool.BoostTimeout()
-	}
-	return 0
-}
-
-// BlockTimeout returns the timeout til the next boost
-func (q *ManagedQueue) BlockTimeout() time.Duration {
-	if pool, ok := q.Managed.(ManagedPool); ok {
-		return pool.BlockTimeout()
-	}
-	return 0
-}
-
-// SetPoolSettings sets the setable boost values
-func (q *ManagedQueue) SetPoolSettings(maxNumberOfWorkers, boostWorkers int, timeout time.Duration) {
-	if pool, ok := q.Managed.(ManagedPool); ok {
-		pool.SetPoolSettings(maxNumberOfWorkers, boostWorkers, timeout)
-	}
-}
-
-// NumberInQueue returns the number of items in the queue
-func (q *ManagedQueue) NumberInQueue() int64 {
-	if pool, ok := q.Managed.(ManagedPool); ok {
-		return pool.NumberInQueue()
-	}
-	return -1
-}
-
-func (l ManagedQueueList) Len() int {
-	return len(l)
-}
-
-func (l ManagedQueueList) Less(i, j int) bool {
-	return l[i].Name < l[j].Name
-}
-
-func (l ManagedQueueList) Swap(i, j int) {
-	l[i], l[j] = l[j], l[i]
-}
-
-func (l PoolWorkersList) Len() int {
-	return len(l)
-}
-
-func (l PoolWorkersList) Less(i, j int) bool {
-	return l[i].Start.Before(l[j].Start)
-}
-
-func (l PoolWorkersList) Swap(i, j int) {
-	l[i], l[j] = l[j], l[i]
+	GetManager().AddManagedQueue(w)
+	return w
}
```
modules/queue/manager_test.go (+124, new file)
```go
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"
	"path/filepath"
	"testing"

	"code.gitea.io/gitea/modules/setting"

	"github.com/stretchr/testify/assert"
)

func TestManager(t *testing.T) {
	oldAppDataPath := setting.AppDataPath
	setting.AppDataPath = t.TempDir()
	defer func() {
		setting.AppDataPath = oldAppDataPath
	}()

	newQueueFromConfig := func(name, cfg string) (*WorkerPoolQueue[int], error) {
		cfgProvider, err := setting.NewConfigProviderFromData(cfg)
		if err != nil {
			return nil, err
		}
		qs, err := setting.GetQueueSettings(cfgProvider, name)
		if err != nil {
			return nil, err
		}
		return NewWorkerPoolQueueBySetting(name, qs, func(s ...int) (unhandled []int) { return nil }, false)
	}

	// test invalid CONN_STR
	_, err := newQueueFromConfig("default", `
[queue]
DATADIR = temp-dir
CONN_STR = redis://
`)
	assert.ErrorContains(t, err, "invalid leveldb connection string")

	// test default config
	q, err := newQueueFromConfig("default", "")
	assert.NoError(t, err)
	assert.Equal(t, "default", q.GetName())
	assert.Equal(t, "level", q.GetType())
	assert.Equal(t, filepath.Join(setting.AppDataPath, "queues/common"), q.baseConfig.DataFullDir)
	assert.Equal(t, 100, q.baseConfig.Length)
	assert.Equal(t, 20, q.batchLength)
	assert.Equal(t, "", q.baseConfig.ConnStr)
	assert.Equal(t, "default_queue", q.baseConfig.QueueFullName)
	assert.Equal(t, "default_queue_unique", q.baseConfig.SetFullName)
	assert.Equal(t, 10, q.GetWorkerMaxNumber())
	assert.Equal(t, 0, q.GetWorkerNumber())
	assert.Equal(t, 0, q.GetWorkerActiveNumber())
	assert.Equal(t, 0, q.GetQueueItemNumber())
	assert.Equal(t, "int", q.GetItemTypeName())

	// test inherited config
	cfgProvider, err := setting.NewConfigProviderFromData(`
[queue]
TYPE = channel
DATADIR = queues/dir1
LENGTH = 100
BATCH_LENGTH = 20
CONN_STR = "addrs=127.0.0.1:6379 db=0"
QUEUE_NAME = _queue1

[queue.sub]
TYPE = level
DATADIR = queues/dir2
LENGTH = 102
BATCH_LENGTH = 22
CONN_STR =
QUEUE_NAME = _q2
SET_NAME = _u2
MAX_WORKERS = 2
`)

	assert.NoError(t, err)

	q1 := createWorkerPoolQueue[string]("no-such", cfgProvider, nil, false)
	assert.Equal(t, "no-such", q1.GetName())
	assert.Equal(t, "dummy", q1.GetType()) // no handler, so it becomes dummy
	assert.Equal(t, filepath.Join(setting.AppDataPath, "queues/dir1"), q1.baseConfig.DataFullDir)
	assert.Equal(t, 100, q1.baseConfig.Length)
	assert.Equal(t, 20, q1.batchLength)
	assert.Equal(t, "addrs=127.0.0.1:6379 db=0", q1.baseConfig.ConnStr)
	assert.Equal(t, "no-such_queue1", q1.baseConfig.QueueFullName)
	assert.Equal(t, "no-such_queue1_unique", q1.baseConfig.SetFullName)
	assert.Equal(t, 10, q1.GetWorkerMaxNumber())
	assert.Equal(t, 0, q1.GetWorkerNumber())
	assert.Equal(t, 0, q1.GetWorkerActiveNumber())
	assert.Equal(t, 0, q1.GetQueueItemNumber())
	assert.Equal(t, "string", q1.GetItemTypeName())
	qid1 := GetManager().qidCounter

	q2 := createWorkerPoolQueue("sub", cfgProvider, func(s ...int) (unhandled []int) { return nil }, false)
	assert.Equal(t, "sub", q2.GetName())
	assert.Equal(t, "level", q2.GetType())
	assert.Equal(t, filepath.Join(setting.AppDataPath, "queues/dir2"), q2.baseConfig.DataFullDir)
	assert.Equal(t, 102, q2.baseConfig.Length)
	assert.Equal(t, 22, q2.batchLength)
	assert.Equal(t, "", q2.baseConfig.ConnStr)
	assert.Equal(t, "sub_q2", q2.baseConfig.QueueFullName)
	assert.Equal(t, "sub_q2_u2", q2.baseConfig.SetFullName)
	assert.Equal(t, 2, q2.GetWorkerMaxNumber())
	assert.Equal(t, 0, q2.GetWorkerNumber())
	assert.Equal(t, 0, q2.GetWorkerActiveNumber())
	assert.Equal(t, 0, q2.GetQueueItemNumber())
	assert.Equal(t, "int", q2.GetItemTypeName())
	qid2 := GetManager().qidCounter

	assert.Equal(t, q1, GetManager().ManagedQueues()[qid1])

	GetManager().GetManagedQueue(qid1).SetWorkerMaxNumber(120)
	assert.Equal(t, 120, q1.workerMaxNum)

	stop := runWorkerPoolQueue(q2)
	assert.NoError(t, GetManager().GetManagedQueue(qid2).FlushWithContext(context.Background(), 0))
	assert.NoError(t, GetManager().FlushAll(context.Background(), 0))
	stop()
}
```
+25 -195
modules/queue/queue.go
```diff
-// Copyright 2019 The Gitea Authors. All rights reserved.
+// Copyright 2023 The Gitea Authors. All rights reserved.
 // SPDX-License-Identifier: MIT

-package queue
-
-import (
-	"context"
-	"fmt"
-	"time"
-)
-
-// ErrInvalidConfiguration is called when there is invalid configuration for a queue
-type ErrInvalidConfiguration struct {
-	cfg interface{}
-	err error
-}
-
-func (err ErrInvalidConfiguration) Error() string {
-	if err.err != nil {
-		return fmt.Sprintf("Invalid Configuration Argument: %v: Error: %v", err.cfg, err.err)
-	}
-	return fmt.Sprintf("Invalid Configuration Argument: %v", err.cfg)
-}
-
-// IsErrInvalidConfiguration checks if an error is an ErrInvalidConfiguration
-func IsErrInvalidConfiguration(err error) bool {
-	_, ok := err.(ErrInvalidConfiguration)
-	return ok
-}
-
-// Type is a type of Queue
-type Type string
-
-// Data defines an type of queuable data
-type Data interface{}
-
-// HandlerFunc is a function that takes a variable amount of data and processes it
-type HandlerFunc func(...Data) (unhandled []Data)
-
-// NewQueueFunc is a function that creates a queue
-type NewQueueFunc func(handler HandlerFunc, config, exemplar interface{}) (Queue, error)
-
-// Shutdownable represents a queue that can be shutdown
-type Shutdownable interface {
-	Shutdown()
-	Terminate()
-}
-
-// Named represents a queue with a name
-type Named interface {
-	Name() string
-}
-
-// Queue defines an interface of a queue-like item
+// Package queue implements a specialized queue system for Gitea.
 //
-// Queues will handle their own contents in the Run method
-type Queue interface {
-	Flushable
-	Run(atShutdown, atTerminate func(func()))
-	Push(Data) error
-}
-
-// PushBackable queues can be pushed back to
-type PushBackable interface {
-	// PushBack pushes data back to the top of the fifo
-	PushBack(Data) error
-}
-
-// DummyQueueType is the type for the dummy queue
-const DummyQueueType Type = "dummy"
-
-// NewDummyQueue creates a new DummyQueue
-func NewDummyQueue(handler HandlerFunc, opts, exemplar interface{}) (Queue, error) {
-	return &DummyQueue{}, nil
-}
-
-// DummyQueue represents an empty queue
-type DummyQueue struct{}
-
-// Run does nothing
-func (*DummyQueue) Run(_, _ func(func())) {}
-
-// Push fakes a push of data to the queue
-func (*DummyQueue) Push(Data) error {
-	return nil
-}
-
-// PushFunc fakes a push of data to the queue with a function. The function is never run.
-func (*DummyQueue) PushFunc(Data, func() error) error {
-	return nil
-}
-
-// Has always returns false as this queue never does anything
-func (*DummyQueue) Has(Data) (bool, error) {
-	return false, nil
-}
-
-// Flush always returns nil
-func (*DummyQueue) Flush(time.Duration) error {
-	return nil
-}
-
-// FlushWithContext always returns nil
-func (*DummyQueue) FlushWithContext(context.Context) error {
-	return nil
-}
-
-// IsEmpty asserts that the queue is empty
-func (*DummyQueue) IsEmpty() bool {
-	return true
-}
-
-// ImmediateType is the type to execute the function when push
-const ImmediateType Type = "immediate"
-
-// NewImmediate creates a new false queue to execute the function when push
-func NewImmediate(handler HandlerFunc, opts, exemplar interface{}) (Queue, error) {
-	return &Immediate{
-		handler: handler,
-	}, nil
-}
-
-// Immediate represents an direct execution queue
-type Immediate struct {
-	handler HandlerFunc
-}
-
-// Run does nothing
-func (*Immediate) Run(_, _ func(func())) {}
-
-// Push fakes a push of data to the queue
-func (q *Immediate) Push(data Data) error {
-	return q.PushFunc(data, nil)
-}
-
-// PushFunc fakes a push of data to the queue with a function. The function is never run.
-func (q *Immediate) PushFunc(data Data, f func() error) error {
-	if f != nil {
-		if err := f(); err != nil {
-			return err
-		}
-	}
-	q.handler(data)
-	return nil
-}
-
-// Has always returns false as this queue never does anything
-func (*Immediate) Has(Data) (bool, error) {
-	return false, nil
-}
-
-// Flush always returns nil
-func (*Immediate) Flush(time.Duration) error {
-	return nil
-}
-
-// FlushWithContext always returns nil
-func (*Immediate) FlushWithContext(context.Context) error {
-	return nil
-}
-
-// IsEmpty asserts that the queue is empty
-func (*Immediate) IsEmpty() bool {
-	return true
-}
-
-var queuesMap = map[Type]NewQueueFunc{
-	DummyQueueType: NewDummyQueue,
-	ImmediateType:  NewImmediate,
-}
+// There are two major kinds of concepts:
+//
+//  * The "base queue": channel, level, redis:
+//    - They have the same abstraction, the same interface, and they are tested by the same testing code.
+//    - The dummy(immediate) queue is special, it's not a real queue, it's only used as a no-op queue or a testing queue.
+//
+//  * The WorkerPoolQueue: it uses the "base queue" to provide "worker pool" function.
+//    - It calls the "handler" to process the data in the base queue.
+//    - Its "Push" function doesn't block forever,
+//      it will return an error if the queue is full after the timeout.
+//
+// A queue can be "simple" or "unique". A unique queue will try to avoid duplicate items.
+// Unique queue's "Has" function can be used to check whether an item is already in the queue,
+// although it's not 100% reliable since there is no proper transaction support.
+// Simple queue's "Has" function always returns "has=false".
+//
+// The HandlerFuncT function is called by the WorkerPoolQueue to process the data in the base queue.
+// If the handler returns "unhandled" items, they will be re-queued to the base queue after a slight delay,
+// in case the item processor (eg: document indexer) is not available.
+package queue

-// RegisteredTypes provides the list of requested types of queues
-func RegisteredTypes() []Type {
-	types := make([]Type, len(queuesMap))
-	i := 0
-	for key := range queuesMap {
-		types[i] = key
-		i++
-	}
-	return types
-}
+import "code.gitea.io/gitea/modules/util"

-// RegisteredTypesAsString provides the list of requested types of queues
-func RegisteredTypesAsString() []string {
-	types := make([]string, len(queuesMap))
-	i := 0
-	for key := range queuesMap {
-		types[i] = string(key)
-		i++
-	}
-	return types
-}
+type HandlerFuncT[T any] func(...T) (unhandled []T)

-// NewQueue takes a queue Type, HandlerFunc, some options and possibly an exemplar and returns a Queue or an error
-func NewQueue(queueType Type, handlerFunc HandlerFunc, opts, exemplar interface{}) (Queue, error) {
-	newFn, ok := queuesMap[queueType]
-	if !ok {
-		return nil, fmt.Errorf("unsupported queue type: %v", queueType)
-	}
-	return newFn(handlerFunc, opts, exemplar)
-}
+var ErrAlreadyInQueue = util.NewAlreadyExistErrorf("already in queue")
```
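The "unhandled items are re-queued" contract described in the new package doc can be sketched with a toy drain loop. `drain` here is our own stand-in for the WorkerPoolQueue's dispatch, not real Gitea code (the real queue re-queues after a slight delay and through the base queue):

```go
package main

import "fmt"

// HandlerFuncT mirrors the generic handler shape in the new package:
// it receives a batch and returns the items it could not process.
type HandlerFuncT[T any] func(...T) (unhandled []T)

// drain keeps feeding batches to the handler, appending whatever comes
// back as unhandled to the tail of the queue (a simplified stand-in
// for the WorkerPoolQueue's re-queue behaviour).
func drain[T any](queue []T, batch int, handle HandlerFuncT[T]) {
	for len(queue) > 0 {
		n := batch
		if n > len(queue) {
			n = len(queue)
		}
		items := queue[:n]
		queue = queue[n:]
		queue = append(queue, handle(items...)...)
	}
}

func main() {
	failOnce := map[int]bool{3: true} // item 3 fails on its first attempt
	var done []int
	drain([]int{1, 2, 3, 4}, 2, func(items ...int) (unhandled []int) {
		for _, it := range items {
			if failOnce[it] {
				failOnce[it] = false
				unhandled = append(unhandled, it) // will be re-queued
				continue
			}
			done = append(done, it)
		}
		return unhandled
	})
	fmt.Println(done) // [1 2 4 3]
}
```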
-419
modules/queue/queue_bytefifo.go
(entire file removed)

```go
// Copyright 2020 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"
	"fmt"
	"runtime/pprof"
	"sync"
	"sync/atomic"
	"time"

	"code.gitea.io/gitea/modules/json"
	"code.gitea.io/gitea/modules/log"
	"code.gitea.io/gitea/modules/util"
)

// ByteFIFOQueueConfiguration is the configuration for a ByteFIFOQueue
type ByteFIFOQueueConfiguration struct {
	WorkerPoolConfiguration
	Workers     int
	WaitOnEmpty bool
}

var _ Queue = &ByteFIFOQueue{}

// ByteFIFOQueue is a Queue formed from a ByteFIFO and WorkerPool
type ByteFIFOQueue struct {
	*WorkerPool
	byteFIFO           ByteFIFO
	typ                Type
	shutdownCtx        context.Context
	shutdownCtxCancel  context.CancelFunc
	terminateCtx       context.Context
	terminateCtxCancel context.CancelFunc
	exemplar           interface{}
	workers            int
	name               string
	lock               sync.Mutex
	waitOnEmpty        bool
	pushed             chan struct{}
}

// NewByteFIFOQueue creates a new ByteFIFOQueue
func NewByteFIFOQueue(typ Type, byteFIFO ByteFIFO, handle HandlerFunc, cfg, exemplar interface{}) (*ByteFIFOQueue, error) {
	configInterface, err := toConfig(ByteFIFOQueueConfiguration{}, cfg)
	if err != nil {
		return nil, err
	}
	config := configInterface.(ByteFIFOQueueConfiguration)

	terminateCtx, terminateCtxCancel := context.WithCancel(context.Background())
	shutdownCtx, shutdownCtxCancel := context.WithCancel(terminateCtx)

	q := &ByteFIFOQueue{
		byteFIFO:           byteFIFO,
		typ:                typ,
		shutdownCtx:        shutdownCtx,
		shutdownCtxCancel:  shutdownCtxCancel,
		terminateCtx:       terminateCtx,
		terminateCtxCancel: terminateCtxCancel,
		exemplar:           exemplar,
		workers:            config.Workers,
		name:               config.Name,
		waitOnEmpty:        config.WaitOnEmpty,
		pushed:             make(chan struct{}, 1),
	}
	q.WorkerPool = NewWorkerPool(func(data ...Data) (failed []Data) {
		for _, unhandled := range handle(data...) {
			if fail := q.PushBack(unhandled); fail != nil {
				failed = append(failed, fail)
			}
		}
		return failed
	}, config.WorkerPoolConfiguration)

	return q, nil
}

// Name returns the name of this queue
func (q *ByteFIFOQueue) Name() string {
	return q.name
}

// Push pushes data to the fifo
func (q *ByteFIFOQueue) Push(data Data) error {
	return q.PushFunc(data, nil)
}

// PushBack pushes data to the fifo
func (q *ByteFIFOQueue) PushBack(data Data) error {
	if !assignableTo(data, q.exemplar) {
		return fmt.Errorf("unable to assign data: %v to same type as exemplar: %v in %s", data, q.exemplar, q.name)
	}
	bs, err := json.Marshal(data)
	if err != nil {
		return err
	}
	defer func() {
		select {
		case q.pushed <- struct{}{}:
		default:
		}
	}()
	return q.byteFIFO.PushBack(q.terminateCtx, bs)
}

// PushFunc pushes data to the fifo
func (q *ByteFIFOQueue) PushFunc(data Data, fn func() error) error {
	if !assignableTo(data, q.exemplar) {
		return fmt.Errorf("unable to assign data: %v to same type as exemplar: %v in %s", data, q.exemplar, q.name)
	}
	bs, err := json.Marshal(data)
	if err != nil {
		return err
	}
	defer func() {
		select {
		case q.pushed <- struct{}{}:
		default:
		}
	}()
	return q.byteFIFO.PushFunc(q.terminateCtx, bs, fn)
}

// IsEmpty checks if the queue is empty
func (q *ByteFIFOQueue) IsEmpty() bool {
	q.lock.Lock()
	defer q.lock.Unlock()
	if !q.WorkerPool.IsEmpty() {
		return false
	}
	return q.byteFIFO.Len(q.terminateCtx) == 0
}

// NumberInQueue returns the number in the queue
func (q *ByteFIFOQueue) NumberInQueue() int64 {
	q.lock.Lock()
	defer q.lock.Unlock()
	return q.byteFIFO.Len(q.terminateCtx) + q.WorkerPool.NumberInQueue()
}

// Flush flushes the ByteFIFOQueue
func (q *ByteFIFOQueue) Flush(timeout time.Duration) error {
	select {
	case q.pushed <- struct{}{}:
	default:
	}
	return q.WorkerPool.Flush(timeout)
}

// Run runs the bytefifo queue
func (q *ByteFIFOQueue) Run(atShutdown, atTerminate func(func())) {
	pprof.SetGoroutineLabels(q.baseCtx)
	atShutdown(q.Shutdown)
	atTerminate(q.Terminate)
	log.Debug("%s: %s Starting", q.typ, q.name)

	_ = q.AddWorkers(q.workers, 0)

	log.Trace("%s: %s Now running", q.typ, q.name)
	q.readToChan()

	<-q.shutdownCtx.Done()
	log.Trace("%s: %s Waiting til done", q.typ, q.name)
	q.Wait()

	log.Trace("%s: %s Waiting til cleaned", q.typ, q.name)
	q.CleanUp(q.terminateCtx)
	q.terminateCtxCancel()
}

const maxBackOffTime = time.Second * 3

func (q *ByteFIFOQueue) readToChan() {
	// handle quick cancels
	select {
	case <-q.shutdownCtx.Done():
		// tell the pool to shutdown.
		q.baseCtxCancel()
		return
	default:
	}

	// Default backoff values
	backOffTime := time.Millisecond * 100
	backOffTimer := time.NewTimer(0)
	util.StopTimer(backOffTimer)

	paused, _ := q.IsPausedIsResumed()

loop:
	for {
		select {
		case <-paused:
			log.Trace("Queue %s pausing", q.name)
			_, resumed := q.IsPausedIsResumed()

			select {
			case <-resumed:
				paused, _ = q.IsPausedIsResumed()
				log.Trace("Queue %s resuming", q.name)
				if q.HasNoWorkerScaling() {
					log.Warn(
						"Queue: %s is configured to be non-scaling and has no workers - this configuration is likely incorrect.\n"+
							"The queue will be paused to prevent data-loss with the assumption that you will add workers and unpause as required.", q.name)
					q.Pause()
					continue loop
				}
			case <-q.shutdownCtx.Done():
				// tell the pool to shutdown.
				q.baseCtxCancel()
				return
			case data, ok := <-q.dataChan:
				if !ok {
					return
				}
				if err := q.PushBack(data); err != nil {
					log.Error("Unable to push back data into queue %s", q.name)
				}
				atomic.AddInt64(&q.numInQueue, -1)
			}
		default:
		}

		// empty the pushed channel
		select {
		case <-q.pushed:
		default:
		}

		err := q.doPop()

		util.StopTimer(backOffTimer)

		if err != nil {
			if err == errQueueEmpty && q.waitOnEmpty {
				log.Trace("%s: %s Waiting on Empty", q.typ, q.name)

				// reset the backoff time but don't set the timer
				backOffTime = 100 * time.Millisecond
			} else if err == errUnmarshal {
				// reset the timer and backoff
				backOffTime = 100 * time.Millisecond
				backOffTimer.Reset(backOffTime)
			} else {
				// backoff
				backOffTimer.Reset(backOffTime)
			}

			// Need to Backoff
			select {
			case <-q.shutdownCtx.Done():
				// Oops we've been shutdown whilst backing off
				// Make sure the worker pool is shutdown too
				q.baseCtxCancel()
				return
			case <-q.pushed:
				// Data has been pushed to the fifo (or flush has been called)
				// reset the backoff time
				backOffTime = 100 * time.Millisecond
				continue loop
			case <-backOffTimer.C:
				// Calculate the next backoff time
				backOffTime += backOffTime / 2
				if backOffTime > maxBackOffTime {
					backOffTime = maxBackOffTime
				}
				continue loop
			}
		}

		// Reset the backoff time
		backOffTime = 100 * time.Millisecond

		select {
		case <-q.shutdownCtx.Done():
			// Oops we've been shutdown
			// Make sure the worker pool is shutdown too
			q.baseCtxCancel()
			return
		default:
			continue loop
		}
	}
}

var (
	errQueueEmpty = fmt.Errorf("empty queue")
	errEmptyBytes = fmt.Errorf("empty bytes")
	errUnmarshal  = fmt.Errorf("failed to unmarshal")
)

func (q *ByteFIFOQueue) doPop() error {
	q.lock.Lock()
	defer q.lock.Unlock()
	bs, err := q.byteFIFO.Pop(q.shutdownCtx)
	if err != nil {
		if err == context.Canceled {
			q.baseCtxCancel()
			return err
		}
		log.Error("%s: %s Error on Pop: %v", q.typ, q.name, err)
		return err
	}
	if len(bs) == 0 {
		if q.waitOnEmpty && q.byteFIFO.Len(q.shutdownCtx) == 0 {
			return errQueueEmpty
		}
		return errEmptyBytes
	}

	data, err := unmarshalAs(bs, q.exemplar)
	if err != nil {
		log.Error("%s: %s Failed to unmarshal with error: %v", q.typ, q.name, err)
		return errUnmarshal
	}

	log.Trace("%s %s: Task found: %#v", q.typ, q.name, data)
	q.WorkerPool.Push(data)
	return nil
}

// Shutdown processing from this queue
func (q *ByteFIFOQueue) Shutdown() {
	log.Trace("%s: %s Shutting down", q.typ, q.name)
	select {
	case <-q.shutdownCtx.Done():
		return
	default:
	}
	q.shutdownCtxCancel()
	log.Debug("%s: %s Shutdown", q.typ, q.name)
}

// IsShutdown returns a channel which is closed when this Queue is shutdown
func (q *ByteFIFOQueue) IsShutdown() <-chan struct{} {
	return q.shutdownCtx.Done()
}

// Terminate this queue and close the queue
func (q *ByteFIFOQueue) Terminate() {
	log.Trace("%s: %s Terminating", q.typ, q.name)
	q.Shutdown()
	select {
	case <-q.terminateCtx.Done():
		return
	default:
	}
	if log.IsDebug() {
		log.Debug("%s: %s Closing with %d tasks left in queue", q.typ, q.name, q.byteFIFO.Len(q.terminateCtx))
	}
	q.terminateCtxCancel()
	if err := q.byteFIFO.Close(); err != nil {
		log.Error("Error whilst closing internal byte fifo in %s: %s: %v", q.typ, q.name, err)
	}
	q.baseCtxFinished()
	log.Debug("%s: %s Terminated", q.typ, q.name)
}

// IsTerminated returns a channel which is closed when this Queue is terminated
func (q *ByteFIFOQueue) IsTerminated() <-chan struct{} {
	return q.terminateCtx.Done()
}

var _ UniqueQueue = &ByteFIFOUniqueQueue{}

// ByteFIFOUniqueQueue represents a UniqueQueue formed from a UniqueByteFifo
type ByteFIFOUniqueQueue struct {
	ByteFIFOQueue
}

// NewByteFIFOUniqueQueue creates a new ByteFIFOUniqueQueue
func NewByteFIFOUniqueQueue(typ Type, byteFIFO UniqueByteFIFO, handle HandlerFunc, cfg, exemplar interface{}) (*ByteFIFOUniqueQueue, error) {
	configInterface, err := toConfig(ByteFIFOQueueConfiguration{}, cfg)
	if err != nil {
		return nil, err
	}
	config := configInterface.(ByteFIFOQueueConfiguration)
	terminateCtx, terminateCtxCancel := context.WithCancel(context.Background())
	shutdownCtx, shutdownCtxCancel := context.WithCancel(terminateCtx)

	q := &ByteFIFOUniqueQueue{
		ByteFIFOQueue: ByteFIFOQueue{
			byteFIFO:           byteFIFO,
			typ:                typ,
			shutdownCtx:        shutdownCtx,
			shutdownCtxCancel:  shutdownCtxCancel,
			terminateCtx:       terminateCtx,
			terminateCtxCancel: terminateCtxCancel,
			exemplar:           exemplar,
			workers:            config.Workers,
			name:               config.Name,
		},
	}
	q.WorkerPool = NewWorkerPool(func(data ...Data) (failed []Data) {
		for _, unhandled := range handle(data...) {
			if fail := q.PushBack(unhandled); fail != nil {
				failed = append(failed, fail)
			}
		}
		return failed
	}, config.WorkerPoolConfiguration)

	return q, nil
}

// Has checks if the provided data is in the queue
func (q *ByteFIFOUniqueQueue) Has(data Data) (bool, error) {
	if !assignableTo(data, q.exemplar) {
		return false, fmt.Errorf("unable to assign data: %v to same type as exemplar: %v in %s", data, q.exemplar, q.name)
	}
	bs, err := json.Marshal(data)
	if err != nil {
		return false, err
	}
	return q.byteFIFO.(UniqueByteFIFO).Has(q.terminateCtx, bs)
}
```
-160
modules/queue/queue_channel.go
(entire file removed)

```go
// Copyright 2019 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"
	"fmt"
	"runtime/pprof"
	"sync/atomic"
	"time"

	"code.gitea.io/gitea/modules/log"
)

// ChannelQueueType is the type for channel queue
const ChannelQueueType Type = "channel"

// ChannelQueueConfiguration is the configuration for a ChannelQueue
type ChannelQueueConfiguration struct {
	WorkerPoolConfiguration
	Workers int
}

// ChannelQueue implements Queue
//
// A channel queue is not persistable and does not shutdown or terminate cleanly
// It is basically a very thin wrapper around a WorkerPool
type ChannelQueue struct {
	*WorkerPool
	shutdownCtx        context.Context
	shutdownCtxCancel  context.CancelFunc
	terminateCtx       context.Context
	terminateCtxCancel context.CancelFunc
	exemplar           interface{}
	workers            int
	name               string
}

// NewChannelQueue creates a memory channel queue
func NewChannelQueue(handle HandlerFunc, cfg, exemplar interface{}) (Queue, error) {
	configInterface, err := toConfig(ChannelQueueConfiguration{}, cfg)
	if err != nil {
		return nil, err
	}
	config := configInterface.(ChannelQueueConfiguration)
	if config.BatchLength == 0 {
		config.BatchLength = 1
	}

	terminateCtx, terminateCtxCancel := context.WithCancel(context.Background())
	shutdownCtx, shutdownCtxCancel := context.WithCancel(terminateCtx)

	queue := &ChannelQueue{
		shutdownCtx:        shutdownCtx,
		shutdownCtxCancel:  shutdownCtxCancel,
		terminateCtx:       terminateCtx,
		terminateCtxCancel: terminateCtxCancel,
		exemplar:           exemplar,
		workers:            config.Workers,
		name:               config.Name,
	}
	queue.WorkerPool = NewWorkerPool(func(data ...Data) []Data {
		unhandled := handle(data...)
		if len(unhandled) > 0 {
			// We can only pushback to the channel if we're paused.
			if queue.IsPaused() {
				atomic.AddInt64(&queue.numInQueue, int64(len(unhandled)))
				go func() {
					for _, datum := range data {
						queue.dataChan <- datum
					}
				}()
				return nil
			}
		}
		return unhandled
	}, config.WorkerPoolConfiguration)

	queue.qid = GetManager().Add(queue, ChannelQueueType, config, exemplar)
	return queue, nil
}

// Run starts to run the queue
func (q *ChannelQueue) Run(atShutdown, atTerminate func(func())) {
	pprof.SetGoroutineLabels(q.baseCtx)
	atShutdown(q.Shutdown)
	atTerminate(q.Terminate)
	log.Debug("ChannelQueue: %s Starting", q.name)
	_ = q.AddWorkers(q.workers, 0)
}

// Push will push data into the queue
func (q *ChannelQueue) Push(data Data) error {
	if !assignableTo(data, q.exemplar) {
		return fmt.Errorf("unable to assign data: %v to same type as exemplar: %v in queue: %s", data, q.exemplar, q.name)
	}
	q.WorkerPool.Push(data)
	return nil
}

// Flush flushes the channel with a timeout - the Flush worker will be registered as a flush worker with the manager
func (q *ChannelQueue) Flush(timeout time.Duration) error {
	if q.IsPaused() {
		return nil
	}
	ctx, cancel := q.commonRegisterWorkers(1, timeout, true)
	defer cancel()
	return q.FlushWithContext(ctx)
}

// Shutdown processing from this queue
func (q *ChannelQueue) Shutdown() {
	q.lock.Lock()
	defer q.lock.Unlock()
	select {
	case <-q.shutdownCtx.Done():
		log.Trace("ChannelQueue: %s Already Shutting down", q.name)
		return
	default:
	}
	log.Trace("ChannelQueue: %s Shutting down", q.name)
	go func() {
		log.Trace("ChannelQueue: %s Flushing", q.name)
		// We can't use Cleanup here because that will close the channel
		if err := q.FlushWithContext(q.terminateCtx); err != nil {
			count := atomic.LoadInt64(&q.numInQueue)
			if count > 0 {
				log.Warn("ChannelQueue: %s Terminated before completed flushing", q.name)
			}
			return
		}
		log.Debug("ChannelQueue: %s Flushed", q.name)
	}()
	q.shutdownCtxCancel()
	log.Debug("ChannelQueue: %s Shutdown", q.name)
}

// Terminate this queue and close the queue
func (q *ChannelQueue) Terminate() {
	log.Trace("ChannelQueue: %s Terminating", q.name)
	q.Shutdown()
	select {
	case <-q.terminateCtx.Done():
		return
	default:
	}
	q.terminateCtxCancel()
	q.baseCtxFinished()
	log.Debug("ChannelQueue: %s Terminated", q.name)
}

// Name returns the name of this queue
func (q *ChannelQueue) Name() string {
	return q.name
}

func init() {
	queuesMap[ChannelQueueType] = NewChannelQueue
}
```
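Both the removed `ByteFIFOQueue` and `ChannelQueue` relied on the worker pool's `IsPausedIsResumed` channels: whichever state the pool is in has its channel closed, so callers can `select` on either. A minimal, self-contained sketch of that signalling pattern (simplified from the old `WorkerPool`, not the exact code):

```go
package main

import (
	"fmt"
	"sync"
)

// pauser is a stripped-down version of the old WorkerPool's
// pause/resume signalling. The channel for the current state is
// closed; the channel for the other state is open.
type pauser struct {
	mu      sync.Mutex
	paused  chan struct{}
	resumed chan struct{}
}

func newPauser() *pauser {
	p := &pauser{paused: make(chan struct{}), resumed: make(chan struct{})}
	close(p.resumed) // start in the resumed state
	return p
}

func (p *pauser) IsPausedIsResumed() (paused, resumed <-chan struct{}) {
	p.mu.Lock()
	defer p.mu.Unlock()
	return p.paused, p.resumed
}

func (p *pauser) Pause() {
	p.mu.Lock()
	defer p.mu.Unlock()
	select {
	case <-p.paused: // already paused
	default:
		close(p.paused)
		p.resumed = make(chan struct{})
	}
}

func (p *pauser) Resume() {
	p.mu.Lock()
	defer p.mu.Unlock()
	select {
	case <-p.resumed: // already resumed
	default:
		close(p.resumed)
		p.paused = make(chan struct{})
	}
}

func main() {
	p := newPauser()
	p.Pause()
	paused, _ := p.IsPausedIsResumed()
	select {
	case <-paused:
		fmt.Println("paused")
	default:
		fmt.Println("running")
	}
}
```

This is the same shape the removed tests exercise: after `Pause()`, a receive from the `paused` channel succeeds immediately, and after `Resume()` the `resumed` channel does.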
-315
modules/queue/queue_channel_test.go
(entire file removed)

```go
// Copyright 2019 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"os"
	"sync"
	"testing"
	"time"

	"code.gitea.io/gitea/modules/log"

	"github.com/stretchr/testify/assert"
)

func TestChannelQueue(t *testing.T) {
	handleChan := make(chan *testData)
	handle := func(data ...Data) []Data {
		for _, datum := range data {
			testDatum := datum.(*testData)
			handleChan <- testDatum
		}
		return nil
	}

	nilFn := func(_ func()) {}

	queue, err := NewChannelQueue(handle,
		ChannelQueueConfiguration{
			WorkerPoolConfiguration: WorkerPoolConfiguration{
				QueueLength:  0,
				MaxWorkers:   10,
				BlockTimeout: 1 * time.Second,
				BoostTimeout: 5 * time.Minute,
				BoostWorkers: 5,
				Name:         "TestChannelQueue",
			},
			Workers: 0,
		}, &testData{})
	assert.NoError(t, err)

	assert.Equal(t, 5, queue.(*ChannelQueue).WorkerPool.boostWorkers)

	go queue.Run(nilFn, nilFn)

	test1 := testData{"A", 1}
	go queue.Push(&test1)
	result1 := <-handleChan
	assert.Equal(t, test1.TestString, result1.TestString)
	assert.Equal(t, test1.TestInt, result1.TestInt)

	err = queue.Push(test1)
	assert.Error(t, err)
}

func TestChannelQueue_Batch(t *testing.T) {
	handleChan := make(chan *testData)
	handle := func(data ...Data) []Data {
		assert.True(t, len(data) == 2)
		for _, datum := range data {
			testDatum := datum.(*testData)
			handleChan <- testDatum
		}
		return nil
	}

	nilFn := func(_ func()) {}

	queue, err := NewChannelQueue(handle,
		ChannelQueueConfiguration{
			WorkerPoolConfiguration: WorkerPoolConfiguration{
				QueueLength:  20,
				BatchLength:  2,
				BlockTimeout: 0,
				BoostTimeout: 0,
				BoostWorkers: 0,
				MaxWorkers:   10,
			},
			Workers: 1,
		}, &testData{})
	assert.NoError(t, err)

	go queue.Run(nilFn, nilFn)

	test1 := testData{"A", 1}
	test2 := testData{"B", 2}

	queue.Push(&test1)
	go queue.Push(&test2)

	result1 := <-handleChan
	assert.Equal(t, test1.TestString, result1.TestString)
	assert.Equal(t, test1.TestInt, result1.TestInt)

	result2 := <-handleChan
	assert.Equal(t, test2.TestString, result2.TestString)
	assert.Equal(t, test2.TestInt, result2.TestInt)

	err = queue.Push(test1)
	assert.Error(t, err)
}

func TestChannelQueue_Pause(t *testing.T) {
	if os.Getenv("CI") != "" {
		t.Skip("Skipping because test is flaky on CI")
	}
	lock := sync.Mutex{}
	var queue Queue
	var err error
	pushBack := false
	handleChan := make(chan *testData)
	handle := func(data ...Data) []Data {
		lock.Lock()
		if pushBack {
			if pausable, ok := queue.(Pausable); ok {
				pausable.Pause()
			}
			lock.Unlock()
			return data
		}
		lock.Unlock()

		for _, datum := range data {
			testDatum := datum.(*testData)
			handleChan <- testDatum
		}
		return nil
	}

	queueShutdown := []func(){}
	queueTerminate := []func(){}

	terminated := make(chan struct{})

	queue, err = NewChannelQueue(handle,
		ChannelQueueConfiguration{
			WorkerPoolConfiguration: WorkerPoolConfiguration{
				QueueLength:  20,
				BatchLength:  1,
				BlockTimeout: 0,
				BoostTimeout: 0,
				BoostWorkers: 0,
				MaxWorkers:   10,
			},
			Workers: 1,
		}, &testData{})
	assert.NoError(t, err)

	go func() {
		queue.Run(func(shutdown func()) {
			lock.Lock()
			defer lock.Unlock()
			queueShutdown = append(queueShutdown, shutdown)
		}, func(terminate func()) {
			lock.Lock()
			defer lock.Unlock()
			queueTerminate = append(queueTerminate, terminate)
		})
		close(terminated)
	}()

	// Shutdown and Terminate in defer
	defer func() {
		lock.Lock()
		callbacks := make([]func(), len(queueShutdown))
		copy(callbacks, queueShutdown)
		lock.Unlock()
		for _, callback := range callbacks {
			callback()
		}
		lock.Lock()
		log.Info("Finally terminating")
		callbacks = make([]func(), len(queueTerminate))
		copy(callbacks, queueTerminate)
		lock.Unlock()
		for _, callback := range callbacks {
			callback()
		}
	}()

	test1 := testData{"A", 1}
	test2 := testData{"B", 2}
	queue.Push(&test1)

	pausable, ok := queue.(Pausable)
	if !assert.True(t, ok) {
		return
	}
	result1 := <-handleChan
	assert.Equal(t, test1.TestString, result1.TestString)
	assert.Equal(t, test1.TestInt, result1.TestInt)

	pausable.Pause()

	paused, _ := pausable.IsPausedIsResumed()

	select {
	case <-paused:
	case <-time.After(100 * time.Millisecond):
		assert.Fail(t, "Queue is not paused")
		return
	}

	queue.Push(&test2)

	var result2 *testData
	select {
	case result2 = <-handleChan:
		assert.Fail(t, "handler chan should be empty")
	case <-time.After(100 * time.Millisecond):
	}

	assert.Nil(t, result2)

	pausable.Resume()
	_, resumed := pausable.IsPausedIsResumed()

	select {
	case <-resumed:
	case <-time.After(100 * time.Millisecond):
		assert.Fail(t, "Queue should be resumed")
	}

	select {
	case result2 = <-handleChan:
	case <-time.After(500 * time.Millisecond):
		assert.Fail(t, "handler chan should contain test2")
	}

	assert.Equal(t, test2.TestString, result2.TestString)
	assert.Equal(t, test2.TestInt, result2.TestInt)

	lock.Lock()
	pushBack = true
	lock.Unlock()

	_, resumed = pausable.IsPausedIsResumed()

	select {
	case <-resumed:
	case <-time.After(100 * time.Millisecond):
		assert.Fail(t, "Queue is not resumed")
		return
	}

	queue.Push(&test1)
	paused, _ = pausable.IsPausedIsResumed()

	select {
	case <-paused:
	case <-handleChan:
		assert.Fail(t, "handler chan should not contain test1")
		return
	case <-time.After(100 * time.Millisecond):
		assert.Fail(t, "queue should be paused")
		return
	}

	lock.Lock()
	pushBack = false
	lock.Unlock()

	paused, _ = pausable.IsPausedIsResumed()

	select {
	case <-paused:
	case <-time.After(100 * time.Millisecond):
		assert.Fail(t, "Queue is not paused")
		return
	}

	pausable.Resume()
	_, resumed = pausable.IsPausedIsResumed()

	select {
	case <-resumed:
	case <-time.After(100 * time.Millisecond):
		assert.Fail(t, "Queue should be resumed")
	}

	select {
	case result1 = <-handleChan:
	case <-time.After(500 * time.Millisecond):
		assert.Fail(t, "handler chan should contain test1")
	}
	assert.Equal(t, test1.TestString, result1.TestString)
	assert.Equal(t, test1.TestInt, result1.TestInt)

	lock.Lock()
	callbacks := make([]func(), len(queueShutdown))
	copy(callbacks, queueShutdown)
	queueShutdown = queueShutdown[:0]
	lock.Unlock()
	// Now shutdown the queue
	for _, callback := range callbacks {
		callback()
	}

	// terminate the queue
	lock.Lock()
	callbacks = make([]func(), len(queueTerminate))
	copy(callbacks, queueTerminate)
	queueShutdown = queueTerminate[:0]
	lock.Unlock()
	for _, callback := range callbacks {
		callback()
	}
	select {
	case <-terminated:
	case <-time.After(10 * time.Second):
		assert.Fail(t, "Queue should have terminated")
		return
	}
}
```
### modules/queue/queue_disk.go (-124 lines)

```go
// Copyright 2019 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"

	"code.gitea.io/gitea/modules/nosql"

	"gitea.com/lunny/levelqueue"
)

// LevelQueueType is the type for level queue
const LevelQueueType Type = "level"

// LevelQueueConfiguration is the configuration for a LevelQueue
type LevelQueueConfiguration struct {
	ByteFIFOQueueConfiguration
	DataDir          string
	ConnectionString string
	QueueName        string
}

// LevelQueue implements a disk library queue
type LevelQueue struct {
	*ByteFIFOQueue
}

// NewLevelQueue creates a ledis local queue
func NewLevelQueue(handle HandlerFunc, cfg, exemplar interface{}) (Queue, error) {
	configInterface, err := toConfig(LevelQueueConfiguration{}, cfg)
	if err != nil {
		return nil, err
	}
	config := configInterface.(LevelQueueConfiguration)

	if len(config.ConnectionString) == 0 {
		config.ConnectionString = config.DataDir
	}
	config.WaitOnEmpty = true

	byteFIFO, err := NewLevelQueueByteFIFO(config.ConnectionString, config.QueueName)
	if err != nil {
		return nil, err
	}

	byteFIFOQueue, err := NewByteFIFOQueue(LevelQueueType, byteFIFO, handle, config.ByteFIFOQueueConfiguration, exemplar)
	if err != nil {
		return nil, err
	}

	queue := &LevelQueue{
		ByteFIFOQueue: byteFIFOQueue,
	}
	queue.qid = GetManager().Add(queue, LevelQueueType, config, exemplar)
	return queue, nil
}

var _ ByteFIFO = &LevelQueueByteFIFO{}

// LevelQueueByteFIFO represents a ByteFIFO formed from a LevelQueue
type LevelQueueByteFIFO struct {
	internal   *levelqueue.Queue
	connection string
}

// NewLevelQueueByteFIFO creates a ByteFIFO formed from a LevelQueue
func NewLevelQueueByteFIFO(connection, prefix string) (*LevelQueueByteFIFO, error) {
	db, err := nosql.GetManager().GetLevelDB(connection)
	if err != nil {
		return nil, err
	}

	internal, err := levelqueue.NewQueue(db, []byte(prefix), false)
	if err != nil {
		return nil, err
	}

	return &LevelQueueByteFIFO{
		connection: connection,
		internal:   internal,
	}, nil
}

// PushFunc will push data into the fifo
func (fifo *LevelQueueByteFIFO) PushFunc(ctx context.Context, data []byte, fn func() error) error {
	if fn != nil {
		if err := fn(); err != nil {
			return err
		}
	}
	return fifo.internal.LPush(data)
}

// PushBack pushes data to the top of the fifo
func (fifo *LevelQueueByteFIFO) PushBack(ctx context.Context, data []byte) error {
	return fifo.internal.RPush(data)
}

// Pop pops data from the start of the fifo
func (fifo *LevelQueueByteFIFO) Pop(ctx context.Context) ([]byte, error) {
	data, err := fifo.internal.RPop()
	if err != nil && err != levelqueue.ErrNotFound {
		return nil, err
	}
	return data, nil
}

// Close this fifo
func (fifo *LevelQueueByteFIFO) Close() error {
	err := fifo.internal.Close()
	_ = nosql.GetManager().CloseLevelDB(fifo.connection)
	return err
}

// Len returns the length of the fifo
func (fifo *LevelQueueByteFIFO) Len(ctx context.Context) int64 {
	return fifo.internal.Len()
}

func init() {
	queuesMap[LevelQueueType] = NewLevelQueue
}
```
### modules/queue/queue_disk_channel.go (-358 lines)

```go
// Copyright 2019 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"
	"fmt"
	"runtime/pprof"
	"sync"
	"sync/atomic"
	"time"

	"code.gitea.io/gitea/modules/log"
)

// PersistableChannelQueueType is the type for persistable queue
const PersistableChannelQueueType Type = "persistable-channel"

// PersistableChannelQueueConfiguration is the configuration for a PersistableChannelQueue
type PersistableChannelQueueConfiguration struct {
	Name         string
	DataDir      string
	BatchLength  int
	QueueLength  int
	Timeout      time.Duration
	MaxAttempts  int
	Workers      int
	MaxWorkers   int
	BlockTimeout time.Duration
	BoostTimeout time.Duration
	BoostWorkers int
}

// PersistableChannelQueue wraps a channel queue and level queue together
// The disk level queue will be used to store data at shutdown and terminate - and will be restored
// on start up.
type PersistableChannelQueue struct {
	channelQueue *ChannelQueue
	delayedStarter
	lock   sync.Mutex
	closed chan struct{}
}

// NewPersistableChannelQueue creates a wrapped batched channel queue with persistable level queue backend when shutting down
// This differs from a wrapped queue in that the persistent queue is only used to persist at shutdown/terminate
func NewPersistableChannelQueue(handle HandlerFunc, cfg, exemplar interface{}) (Queue, error) {
	configInterface, err := toConfig(PersistableChannelQueueConfiguration{}, cfg)
	if err != nil {
		return nil, err
	}
	config := configInterface.(PersistableChannelQueueConfiguration)

	queue := &PersistableChannelQueue{
		closed: make(chan struct{}),
	}

	wrappedHandle := func(data ...Data) (failed []Data) {
		for _, unhandled := range handle(data...) {
			if fail := queue.PushBack(unhandled); fail != nil {
				failed = append(failed, fail)
			}
		}
		return failed
	}

	channelQueue, err := NewChannelQueue(wrappedHandle, ChannelQueueConfiguration{
		WorkerPoolConfiguration: WorkerPoolConfiguration{
			QueueLength:  config.QueueLength,
			BatchLength:  config.BatchLength,
			BlockTimeout: config.BlockTimeout,
			BoostTimeout: config.BoostTimeout,
			BoostWorkers: config.BoostWorkers,
			MaxWorkers:   config.MaxWorkers,
			Name:         config.Name + "-channel",
		},
		Workers: config.Workers,
	}, exemplar)
	if err != nil {
		return nil, err
	}

	// the level backend only needs temporary workers to catch up with the previously dropped work
	levelCfg := LevelQueueConfiguration{
		ByteFIFOQueueConfiguration: ByteFIFOQueueConfiguration{
			WorkerPoolConfiguration: WorkerPoolConfiguration{
				QueueLength:  config.QueueLength,
				BatchLength:  config.BatchLength,
				BlockTimeout: 1 * time.Second,
				BoostTimeout: 5 * time.Minute,
				BoostWorkers: 1,
				MaxWorkers:   5,
				Name:         config.Name + "-level",
			},
			Workers: 0,
		},
		DataDir:   config.DataDir,
		QueueName: config.Name + "-level",
	}

	levelQueue, err := NewLevelQueue(wrappedHandle, levelCfg, exemplar)
	if err == nil {
		queue.channelQueue = channelQueue.(*ChannelQueue)
		queue.delayedStarter = delayedStarter{
			internal: levelQueue.(*LevelQueue),
			name:     config.Name,
		}
		_ = GetManager().Add(queue, PersistableChannelQueueType, config, exemplar)
		return queue, nil
	}
	if IsErrInvalidConfiguration(err) {
		// Retrying ain't gonna make this any better...
		return nil, ErrInvalidConfiguration{cfg: cfg}
	}

	queue.channelQueue = channelQueue.(*ChannelQueue)
	queue.delayedStarter = delayedStarter{
		cfg:         levelCfg,
		underlying:  LevelQueueType,
		timeout:     config.Timeout,
		maxAttempts: config.MaxAttempts,
		name:        config.Name,
	}
	_ = GetManager().Add(queue, PersistableChannelQueueType, config, exemplar)
	return queue, nil
}

// Name returns the name of this queue
func (q *PersistableChannelQueue) Name() string {
	return q.delayedStarter.name
}

// Push will push the indexer data to queue
func (q *PersistableChannelQueue) Push(data Data) error {
	select {
	case <-q.closed:
		return q.internal.Push(data)
	default:
		return q.channelQueue.Push(data)
	}
}

// PushBack will push the indexer data to queue
func (q *PersistableChannelQueue) PushBack(data Data) error {
	select {
	case <-q.closed:
		if pbr, ok := q.internal.(PushBackable); ok {
			return pbr.PushBack(data)
		}
		return q.internal.Push(data)
	default:
		return q.channelQueue.Push(data)
	}
}

// Run starts to run the queue
func (q *PersistableChannelQueue) Run(atShutdown, atTerminate func(func())) {
	pprof.SetGoroutineLabels(q.channelQueue.baseCtx)
	log.Debug("PersistableChannelQueue: %s Starting", q.delayedStarter.name)
	_ = q.channelQueue.AddWorkers(q.channelQueue.workers, 0)

	q.lock.Lock()
	if q.internal == nil {
		err := q.setInternal(atShutdown, q.channelQueue.handle, q.channelQueue.exemplar)
		q.lock.Unlock()
		if err != nil {
			log.Fatal("Unable to create internal queue for %s Error: %v", q.Name(), err)
			return
		}
	} else {
		q.lock.Unlock()
	}
	atShutdown(q.Shutdown)
	atTerminate(q.Terminate)

	if lq, ok := q.internal.(*LevelQueue); ok && lq.byteFIFO.Len(lq.terminateCtx) != 0 {
		// Just run the level queue - we shut it down once it's flushed
		go q.internal.Run(func(_ func()) {}, func(_ func()) {})
		go func() {
			for !lq.IsEmpty() {
				_ = lq.Flush(0)
				select {
				case <-time.After(100 * time.Millisecond):
				case <-lq.shutdownCtx.Done():
					if lq.byteFIFO.Len(lq.terminateCtx) > 0 {
						log.Warn("LevelQueue: %s shut down before completely flushed", q.internal.(*LevelQueue).Name())
					}
					return
				}
			}
			log.Debug("LevelQueue: %s flushed so shutting down", q.internal.(*LevelQueue).Name())
			q.internal.(*LevelQueue).Shutdown()
			GetManager().Remove(q.internal.(*LevelQueue).qid)
		}()
	} else {
		log.Debug("PersistableChannelQueue: %s Skipping running the empty level queue", q.delayedStarter.name)
		q.internal.(*LevelQueue).Shutdown()
		GetManager().Remove(q.internal.(*LevelQueue).qid)
	}
}

// Flush flushes the queue and blocks till the queue is empty
func (q *PersistableChannelQueue) Flush(timeout time.Duration) error {
	var ctx context.Context
	var cancel context.CancelFunc
	if timeout > 0 {
		ctx, cancel = context.WithTimeout(context.Background(), timeout)
	} else {
		ctx, cancel = context.WithCancel(context.Background())
	}
	defer cancel()
	return q.FlushWithContext(ctx)
}

// FlushWithContext flushes the queue and blocks till the queue is empty
func (q *PersistableChannelQueue) FlushWithContext(ctx context.Context) error {
	errChan := make(chan error, 1)
	go func() {
		errChan <- q.channelQueue.FlushWithContext(ctx)
	}()
	go func() {
		q.lock.Lock()
		if q.internal == nil {
			q.lock.Unlock()
			errChan <- fmt.Errorf("not ready to flush internal queue %s yet", q.Name())
			return
		}
		q.lock.Unlock()
		errChan <- q.internal.FlushWithContext(ctx)
	}()
	err1 := <-errChan
	err2 := <-errChan

	if err1 != nil {
		return err1
	}
	return err2
}

// IsEmpty checks if a queue is empty
func (q *PersistableChannelQueue) IsEmpty() bool {
	if !q.channelQueue.IsEmpty() {
		return false
	}
	q.lock.Lock()
	defer q.lock.Unlock()
	if q.internal == nil {
		return false
	}
	return q.internal.IsEmpty()
}

// IsPaused returns if the pool is paused
func (q *PersistableChannelQueue) IsPaused() bool {
	return q.channelQueue.IsPaused()
}

// IsPausedIsResumed returns if the pool is paused and a channel that is closed when it is resumed
func (q *PersistableChannelQueue) IsPausedIsResumed() (<-chan struct{}, <-chan struct{}) {
	return q.channelQueue.IsPausedIsResumed()
}

// Pause pauses the WorkerPool
func (q *PersistableChannelQueue) Pause() {
	q.channelQueue.Pause()
	q.lock.Lock()
	defer q.lock.Unlock()
	if q.internal == nil {
		return
	}

	pausable, ok := q.internal.(Pausable)
	if !ok {
		return
	}
	pausable.Pause()
}

// Resume resumes the WorkerPool
func (q *PersistableChannelQueue) Resume() {
	q.channelQueue.Resume()
	q.lock.Lock()
	defer q.lock.Unlock()
	if q.internal == nil {
		return
	}

	pausable, ok := q.internal.(Pausable)
	if !ok {
		return
	}
	pausable.Resume()
}

// Shutdown processing this queue
func (q *PersistableChannelQueue) Shutdown() {
	log.Trace("PersistableChannelQueue: %s Shutting down", q.delayedStarter.name)
	q.lock.Lock()

	select {
	case <-q.closed:
		q.lock.Unlock()
		return
	default:
	}
	q.channelQueue.Shutdown()
	if q.internal != nil {
		q.internal.(*LevelQueue).Shutdown()
	}
	close(q.closed)
	q.lock.Unlock()

	log.Trace("PersistableChannelQueue: %s Cancelling pools", q.delayedStarter.name)
	q.channelQueue.baseCtxCancel()
	q.internal.(*LevelQueue).baseCtxCancel()
	log.Trace("PersistableChannelQueue: %s Waiting til done", q.delayedStarter.name)
	q.channelQueue.Wait()
	q.internal.(*LevelQueue).Wait()
	// Redirect all remaining data in the chan to the internal channel
	log.Trace("PersistableChannelQueue: %s Redirecting remaining data", q.delayedStarter.name)
	close(q.channelQueue.dataChan)
	countOK, countLost := 0, 0
	for data := range q.channelQueue.dataChan {
		err := q.internal.Push(data)
		if err != nil {
			log.Error("PersistableChannelQueue: %s Unable redirect %v due to: %v", q.delayedStarter.name, data, err)
			countLost++
		} else {
			countOK++
		}
		atomic.AddInt64(&q.channelQueue.numInQueue, -1)
	}
	if countLost > 0 {
		log.Warn("PersistableChannelQueue: %s %d will be restored on restart, %d lost", q.delayedStarter.name, countOK, countLost)
	} else if countOK > 0 {
		log.Warn("PersistableChannelQueue: %s %d will be restored on restart", q.delayedStarter.name, countOK)
	}
	log.Trace("PersistableChannelQueue: %s Done Redirecting remaining data", q.delayedStarter.name)

	log.Debug("PersistableChannelQueue: %s Shutdown", q.delayedStarter.name)
}

// Terminate this queue and close the queue
func (q *PersistableChannelQueue) Terminate() {
	log.Trace("PersistableChannelQueue: %s Terminating", q.delayedStarter.name)
	q.Shutdown()
	q.lock.Lock()
	defer q.lock.Unlock()
	q.channelQueue.Terminate()
	if q.internal != nil {
		q.internal.(*LevelQueue).Terminate()
	}
	log.Debug("PersistableChannelQueue: %s Terminated", q.delayedStarter.name)
}

func init() {
	queuesMap[PersistableChannelQueueType] = NewPersistableChannelQueue
}
```
### modules/queue/queue_disk_channel_test.go (-544 lines)

```go
// Copyright 2019 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"sync"
	"testing"
	"time"

	"code.gitea.io/gitea/modules/log"

	"github.com/stretchr/testify/assert"
)

func TestPersistableChannelQueue(t *testing.T) {
	handleChan := make(chan *testData)
	handle := func(data ...Data) []Data {
		for _, datum := range data {
			if datum == nil {
				continue
			}
			testDatum := datum.(*testData)
			handleChan <- testDatum
		}
		return nil
	}

	lock := sync.Mutex{}
	queueShutdown := []func(){}
	queueTerminate := []func(){}

	tmpDir := t.TempDir()

	queue, err := NewPersistableChannelQueue(handle, PersistableChannelQueueConfiguration{
		DataDir:      tmpDir,
		BatchLength:  2,
		QueueLength:  20,
		Workers:      1,
		BoostWorkers: 0,
		MaxWorkers:   10,
		Name:         "test-queue",
	}, &testData{})
	assert.NoError(t, err)

	readyForShutdown := make(chan struct{})
	readyForTerminate := make(chan struct{})

	go queue.Run(func(shutdown func()) {
		lock.Lock()
		defer lock.Unlock()
		select {
		case <-readyForShutdown:
		default:
			close(readyForShutdown)
		}
		queueShutdown = append(queueShutdown, shutdown)
	}, func(terminate func()) {
		lock.Lock()
		defer lock.Unlock()
		select {
		case <-readyForTerminate:
		default:
			close(readyForTerminate)
		}
		queueTerminate = append(queueTerminate, terminate)
	})

	test1 := testData{"A", 1}
	test2 := testData{"B", 2}

	err = queue.Push(&test1)
	assert.NoError(t, err)
	go func() {
		err := queue.Push(&test2)
		assert.NoError(t, err)
	}()

	result1 := <-handleChan
	assert.Equal(t, test1.TestString, result1.TestString)
	assert.Equal(t, test1.TestInt, result1.TestInt)

	result2 := <-handleChan
	assert.Equal(t, test2.TestString, result2.TestString)
	assert.Equal(t, test2.TestInt, result2.TestInt)

	// test1 is a testData not a *testData so will be rejected
	err = queue.Push(test1)
	assert.Error(t, err)

	<-readyForShutdown
	// Now shutdown the queue
	lock.Lock()
	callbacks := make([]func(), len(queueShutdown))
	copy(callbacks, queueShutdown)
	lock.Unlock()
	for _, callback := range callbacks {
		callback()
	}

	// Wait til it is closed
	<-queue.(*PersistableChannelQueue).closed

	err = queue.Push(&test1)
	assert.NoError(t, err)
	err = queue.Push(&test2)
	assert.NoError(t, err)
	select {
	case <-handleChan:
		assert.Fail(t, "Handler processing should have stopped")
	default:
	}

	// terminate the queue
	<-readyForTerminate
	lock.Lock()
	callbacks = make([]func(), len(queueTerminate))
	copy(callbacks, queueTerminate)
	lock.Unlock()
	for _, callback := range callbacks {
		callback()
	}

	select {
	case <-handleChan:
		assert.Fail(t, "Handler processing should have stopped")
	default:
	}

	// Reopen queue
	queue, err = NewPersistableChannelQueue(handle, PersistableChannelQueueConfiguration{
		DataDir:      tmpDir,
		BatchLength:  2,
		QueueLength:  20,
		Workers:      1,
		BoostWorkers: 0,
		MaxWorkers:   10,
		Name:         "test-queue",
	}, &testData{})
	assert.NoError(t, err)

	readyForShutdown = make(chan struct{})
	readyForTerminate = make(chan struct{})

	go queue.Run(func(shutdown func()) {
		lock.Lock()
		defer lock.Unlock()
		select {
		case <-readyForShutdown:
		default:
			close(readyForShutdown)
		}
		queueShutdown = append(queueShutdown, shutdown)
	}, func(terminate func()) {
		lock.Lock()
		defer lock.Unlock()
		select {
		case <-readyForTerminate:
		default:
			close(readyForTerminate)
		}
		queueTerminate = append(queueTerminate, terminate)
	})

	result3 := <-handleChan
	assert.Equal(t, test1.TestString, result3.TestString)
	assert.Equal(t, test1.TestInt, result3.TestInt)

	result4 := <-handleChan
	assert.Equal(t, test2.TestString, result4.TestString)
	assert.Equal(t, test2.TestInt, result4.TestInt)

	<-readyForShutdown
	lock.Lock()
	callbacks = make([]func(), len(queueShutdown))
	copy(callbacks, queueShutdown)
	lock.Unlock()
	for _, callback := range callbacks {
		callback()
	}
	<-readyForTerminate
	lock.Lock()
	callbacks = make([]func(), len(queueTerminate))
	copy(callbacks, queueTerminate)
	lock.Unlock()
	for _, callback := range callbacks {
		callback()
	}
}

func TestPersistableChannelQueue_Pause(t *testing.T) {
	lock := sync.Mutex{}
	var queue Queue
	var err error
	pushBack := false

	handleChan := make(chan *testData)
	handle := func(data ...Data) []Data {
		lock.Lock()
		if pushBack {
			if pausable, ok := queue.(Pausable); ok {
				log.Info("pausing")
				pausable.Pause()
			}
			lock.Unlock()
			return data
		}
		lock.Unlock()

		for _, datum := range data {
			testDatum := datum.(*testData)
			handleChan <- testDatum
		}
		return nil
	}

	queueShutdown := []func(){}
	queueTerminate := []func(){}
	terminated := make(chan struct{})

	tmpDir := t.TempDir()

	queue, err = NewPersistableChannelQueue(handle, PersistableChannelQueueConfiguration{
		DataDir:      tmpDir,
		BatchLength:  2,
		QueueLength:  20,
		Workers:      1,
		BoostWorkers: 0,
		MaxWorkers:   10,
		Name:         "test-queue",
	}, &testData{})
	assert.NoError(t, err)

	go func() {
		queue.Run(func(shutdown func()) {
			lock.Lock()
			defer lock.Unlock()
			queueShutdown = append(queueShutdown, shutdown)
		}, func(terminate func()) {
			lock.Lock()
			defer lock.Unlock()
			queueTerminate = append(queueTerminate, terminate)
		})
		close(terminated)
	}()

	// Shutdown and Terminate in defer
	defer func() {
		lock.Lock()
		callbacks := make([]func(), len(queueShutdown))
		copy(callbacks, queueShutdown)
		lock.Unlock()
		for _, callback := range callbacks {
			callback()
		}
		lock.Lock()
		log.Info("Finally terminating")
		callbacks = make([]func(), len(queueTerminate))
		copy(callbacks, queueTerminate)
		lock.Unlock()
		for _, callback := range callbacks {
			callback()
		}
	}()

	test1 := testData{"A", 1}
	test2 := testData{"B", 2}

	err = queue.Push(&test1)
	assert.NoError(t, err)

	pausable, ok := queue.(Pausable)
	if !assert.True(t, ok) {
		return
	}
	result1 := <-handleChan
	assert.Equal(t, test1.TestString, result1.TestString)
	assert.Equal(t, test1.TestInt, result1.TestInt)

	pausable.Pause()
	paused, _ := pausable.IsPausedIsResumed()

	select {
	case <-paused:
	case <-time.After(100 * time.Millisecond):
		assert.Fail(t, "Queue is not paused")
		return
	}

	queue.Push(&test2)

	var result2 *testData
	select {
	case result2 = <-handleChan:
		assert.Fail(t, "handler chan should be empty")
	case <-time.After(100 * time.Millisecond):
	}

	assert.Nil(t, result2)

	pausable.Resume()
	_, resumed := pausable.IsPausedIsResumed()

	select {
	case <-resumed:
	case <-time.After(100 * time.Millisecond):
		assert.Fail(t, "Queue should be resumed")
		return
	}

	select {
	case result2 = <-handleChan:
	case <-time.After(500 * time.Millisecond):
		assert.Fail(t, "handler chan should contain test2")
	}

	assert.Equal(t, test2.TestString, result2.TestString)
	assert.Equal(t, test2.TestInt, result2.TestInt)

	// Set pushBack to so that the next handle will result in a Pause
	lock.Lock()
	pushBack = true
	lock.Unlock()

	// Ensure that we're still resumed
	_, resumed = pausable.IsPausedIsResumed()

	select {
	case <-resumed:
	case <-time.After(100 * time.Millisecond):
		assert.Fail(t, "Queue is not resumed")
		return
	}

	// push test1
	queue.Push(&test1)

	// Now as this is handled it should pause
	paused, _ = pausable.IsPausedIsResumed()

	select {
	case <-paused:
	case <-handleChan:
		assert.Fail(t, "handler chan should not contain test1")
		return
	case <-time.After(500 * time.Millisecond):
		assert.Fail(t, "queue should be paused")
		return
	}

	lock.Lock()
	pushBack = false
	lock.Unlock()

	pausable.Resume()

	_, resumed = pausable.IsPausedIsResumed()
	select {
	case <-resumed:
	case <-time.After(500 * time.Millisecond):
		assert.Fail(t, "Queue should be resumed")
		return
	}

	select {
	case result1 = <-handleChan:
	case <-time.After(500 * time.Millisecond):
		assert.Fail(t, "handler chan should contain test1")
		return
	}
	assert.Equal(t, test1.TestString, result1.TestString)
	assert.Equal(t, test1.TestInt, result1.TestInt)

	lock.Lock()
	callbacks := make([]func(), len(queueShutdown))
	copy(callbacks, queueShutdown)
	queueShutdown = queueShutdown[:0]
	lock.Unlock()
	// Now shutdown the queue
	for _, callback := range callbacks {
		callback()
	}

	// Wait til it is closed
	select {
	case <-queue.(*PersistableChannelQueue).closed:
	case <-time.After(5 * time.Second):
		assert.Fail(t, "queue should close")
		return
	}

	err = queue.Push(&test1)
	assert.NoError(t, err)
	err = queue.Push(&test2)
	assert.NoError(t, err)
	select {
	case <-handleChan:
		assert.Fail(t, "Handler processing should have stopped")
		return
	default:
	}

	// terminate the queue
	lock.Lock()
	callbacks = make([]func(), len(queueTerminate))
	copy(callbacks, queueTerminate)
	queueShutdown = queueTerminate[:0]
	lock.Unlock()
	for _, callback := range callbacks {
		callback()
	}

	select {
	case <-handleChan:
		assert.Fail(t, "Handler processing should have stopped")
		return
	case <-terminated:
	case <-time.After(10 * time.Second):
		assert.Fail(t, "Queue should have terminated")
		return
	}

	lock.Lock()
	pushBack = true
	lock.Unlock()

	// Reopen queue
	terminated = make(chan struct{})
	queue, err = NewPersistableChannelQueue(handle, PersistableChannelQueueConfiguration{
		DataDir:      tmpDir,
		BatchLength:  1,
		QueueLength:  20,
		Workers:      1,
		BoostWorkers: 0,
		MaxWorkers:   10,
		Name:         "test-queue",
	}, &testData{})
	assert.NoError(t, err)
	pausable, ok = queue.(Pausable)
	if !assert.True(t, ok) {
		return
	}

	paused, _ = pausable.IsPausedIsResumed()

	go func() {
		queue.Run(func(shutdown func()) {
			lock.Lock()
			defer lock.Unlock()
			queueShutdown = append(queueShutdown, shutdown)
		}, func(terminate func()) {
			lock.Lock()
			defer lock.Unlock()
			queueTerminate = append(queueTerminate, terminate)
		})
		close(terminated)
	}()

	select {
	case <-handleChan:
		assert.Fail(t, "Handler processing should have stopped")
		return
	case <-paused:
	}

	paused, _ = pausable.IsPausedIsResumed()

	select {
	case <-paused:
	case <-time.After(500 * time.Millisecond):
		assert.Fail(t, "Queue is not paused")
		return
	}

	select {
	case <-handleChan:
		assert.Fail(t, "Handler processing should have stopped")
		return
	default:
	}

	lock.Lock()
	pushBack = false
	lock.Unlock()

	pausable.Resume()
	_, resumed = pausable.IsPausedIsResumed()
	select {
	case <-resumed:
	case <-time.After(500 * time.Millisecond):
		assert.Fail(t, "Queue should be resumed")
		return
	}

	var result3, result4 *testData

	select {
	case result3 = <-handleChan:
	case <-time.After(1 * time.Second):
		assert.Fail(t, "Handler processing should have resumed")
		return
	}
	select {
	case result4 = <-handleChan:
	case <-time.After(1 * time.Second):
		assert.Fail(t, "Handler processing should have resumed")
		return
	}
	if result4.TestString == test1.TestString {
		result3, result4 = result4, result3
	}
	assert.Equal(t, test1.TestString, result3.TestString)
	assert.Equal(t, test1.TestInt, result3.TestInt)

	assert.Equal(t, test2.TestString, result4.TestString)
	assert.Equal(t, test2.TestInt, result4.TestInt)

	lock.Lock()
	callbacks = make([]func(), len(queueShutdown))
	copy(callbacks, queueShutdown)
	queueShutdown = queueShutdown[:0]
	lock.Unlock()
	// Now shutdown the queue
	for _, callback := range callbacks {
		callback()
	}

	// terminate the queue
	lock.Lock()
	callbacks = make([]func(), len(queueTerminate))
	copy(callbacks, queueTerminate)
	queueShutdown = queueTerminate[:0]
	lock.Unlock()
	for _, callback := range callbacks {
		callback()
	}

	select {
	case <-time.After(10 * time.Second):
		assert.Fail(t, "Queue should have terminated")
		return
	case <-terminated:
	}
}
```
### modules/queue/queue_disk_test.go (-147 lines)

```go
// Copyright 2019 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"sync"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

func TestLevelQueue(t *testing.T) {
	handleChan := make(chan *testData)
	handle := func(data ...Data) []Data {
		assert.True(t, len(data) == 2)
		for _, datum := range data {
			testDatum := datum.(*testData)
			handleChan <- testDatum
		}
		return nil
	}

	var lock sync.Mutex
	queueShutdown := []func(){}
	queueTerminate := []func(){}

	tmpDir := t.TempDir()

	queue, err := NewLevelQueue(handle, LevelQueueConfiguration{
		ByteFIFOQueueConfiguration: ByteFIFOQueueConfiguration{
			WorkerPoolConfiguration: WorkerPoolConfiguration{
				QueueLength:  20,
				BatchLength:  2,
				BlockTimeout: 1 * time.Second,
				BoostTimeout: 5 * time.Minute,
				BoostWorkers: 5,
				MaxWorkers:   10,
			},
			Workers: 1,
		},
		DataDir: tmpDir,
	}, &testData{})
	assert.NoError(t, err)

	go queue.Run(func(shutdown func()) {
		lock.Lock()
		queueShutdown = append(queueShutdown, shutdown)
		lock.Unlock()
	}, func(terminate func()) {
		lock.Lock()
		queueTerminate = append(queueTerminate, terminate)
		lock.Unlock()
	})

	test1 := testData{"A", 1}
	test2 := testData{"B", 2}

	err = queue.Push(&test1)
	assert.NoError(t, err)
	go func() {
		err := queue.Push(&test2)
		assert.NoError(t, err)
	}()

	result1 := <-handleChan
	assert.Equal(t, test1.TestString, result1.TestString)
	assert.Equal(t, test1.TestInt, result1.TestInt)

	result2 := <-handleChan
	assert.Equal(t, test2.TestString, result2.TestString)
	assert.Equal(t, test2.TestInt, result2.TestInt)

	err = queue.Push(test1)
	assert.Error(t, err)

	lock.Lock()
	for _, callback := range queueShutdown {
		callback()
	}
	lock.Unlock()

	time.Sleep(200 * time.Millisecond)
	err = queue.Push(&test1)
	assert.NoError(t, err)
	err = queue.Push(&test2)
	assert.NoError(t, err)
	select {
	case <-handleChan:
		assert.Fail(t, "Handler processing should have stopped")
	default:
	}
	lock.Lock()
	for _, callback := range queueTerminate {
		callback()
	}
	lock.Unlock()

	// Reopen queue
	queue, err = NewWrappedQueue(handle,
		WrappedQueueConfiguration{
			Underlying: LevelQueueType,
			Config: LevelQueueConfiguration{
				ByteFIFOQueueConfiguration: ByteFIFOQueueConfiguration{
					WorkerPoolConfiguration: WorkerPoolConfiguration{
						QueueLength:  20,
						BatchLength:  2,
						BlockTimeout: 1 * time.Second,
						BoostTimeout: 5 * time.Minute,
						BoostWorkers: 5,
						MaxWorkers:   10,
					},
					Workers: 1,
				},
				DataDir: tmpDir,
			},
		}, &testData{})
	assert.NoError(t, err)

	go queue.Run(func(shutdown func()) {
		lock.Lock()
		queueShutdown = append(queueShutdown, shutdown)
		lock.Unlock()
	}, func(terminate func()) {
		lock.Lock()
		queueTerminate = append(queueTerminate, terminate)
		lock.Unlock()
	})

	result3 := <-handleChan
	assert.Equal(t, test1.TestString, result3.TestString)
	assert.Equal(t, test1.TestInt, result3.TestInt)

	result4 := <-handleChan
	assert.Equal(t, test2.TestString, result4.TestString)
	assert.Equal(t, test2.TestInt, result4.TestInt)

	lock.Lock()
	for _, callback := range queueShutdown {
		callback()
	}
	for _, callback := range queueTerminate {
		callback()
	}
	lock.Unlock()
}
```
-137
modules/queue/queue_redis.go
```go
// Copyright 2019 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"

	"code.gitea.io/gitea/modules/graceful"
	"code.gitea.io/gitea/modules/log"
	"code.gitea.io/gitea/modules/nosql"

	"github.com/redis/go-redis/v9"
)

// RedisQueueType is the type for redis queue
const RedisQueueType Type = "redis"

// RedisQueueConfiguration is the configuration for the redis queue
type RedisQueueConfiguration struct {
	ByteFIFOQueueConfiguration
	RedisByteFIFOConfiguration
}

// RedisQueue redis queue
type RedisQueue struct {
	*ByteFIFOQueue
}

// NewRedisQueue creates single redis or cluster redis queue
func NewRedisQueue(handle HandlerFunc, cfg, exemplar interface{}) (Queue, error) {
	configInterface, err := toConfig(RedisQueueConfiguration{}, cfg)
	if err != nil {
		return nil, err
	}
	config := configInterface.(RedisQueueConfiguration)

	byteFIFO, err := NewRedisByteFIFO(config.RedisByteFIFOConfiguration)
	if err != nil {
		return nil, err
	}

	byteFIFOQueue, err := NewByteFIFOQueue(RedisQueueType, byteFIFO, handle, config.ByteFIFOQueueConfiguration, exemplar)
	if err != nil {
		return nil, err
	}

	queue := &RedisQueue{
		ByteFIFOQueue: byteFIFOQueue,
	}

	queue.qid = GetManager().Add(queue, RedisQueueType, config, exemplar)

	return queue, nil
}

type redisClient interface {
	RPush(ctx context.Context, key string, args ...interface{}) *redis.IntCmd
	LPush(ctx context.Context, key string, args ...interface{}) *redis.IntCmd
	LPop(ctx context.Context, key string) *redis.StringCmd
	LLen(ctx context.Context, key string) *redis.IntCmd
	SAdd(ctx context.Context, key string, members ...interface{}) *redis.IntCmd
	SRem(ctx context.Context, key string, members ...interface{}) *redis.IntCmd
	SIsMember(ctx context.Context, key string, member interface{}) *redis.BoolCmd
	Ping(ctx context.Context) *redis.StatusCmd
	Close() error
}

var _ ByteFIFO = &RedisByteFIFO{}

// RedisByteFIFO represents a ByteFIFO formed from a redisClient
type RedisByteFIFO struct {
	client redisClient

	queueName string
}

// RedisByteFIFOConfiguration is the configuration for the RedisByteFIFO
type RedisByteFIFOConfiguration struct {
	ConnectionString string
	QueueName        string
}

// NewRedisByteFIFO creates a ByteFIFO formed from a redisClient
func NewRedisByteFIFO(config RedisByteFIFOConfiguration) (*RedisByteFIFO, error) {
	fifo := &RedisByteFIFO{
		queueName: config.QueueName,
	}
	fifo.client = nosql.GetManager().GetRedisClient(config.ConnectionString)
	if err := fifo.client.Ping(graceful.GetManager().ShutdownContext()).Err(); err != nil {
		return nil, err
	}
	return fifo, nil
}

// PushFunc pushes data to the end of the fifo and calls the callback if it is added
func (fifo *RedisByteFIFO) PushFunc(ctx context.Context, data []byte, fn func() error) error {
	if fn != nil {
		if err := fn(); err != nil {
			return err
		}
	}
	return fifo.client.RPush(ctx, fifo.queueName, data).Err()
}

// PushBack pushes data to the top of the fifo
func (fifo *RedisByteFIFO) PushBack(ctx context.Context, data []byte) error {
	return fifo.client.LPush(ctx, fifo.queueName, data).Err()
}

// Pop pops data from the start of the fifo
func (fifo *RedisByteFIFO) Pop(ctx context.Context) ([]byte, error) {
	data, err := fifo.client.LPop(ctx, fifo.queueName).Bytes()
	if err == nil || err == redis.Nil {
		return data, nil
	}
	return data, err
}

// Close this fifo
func (fifo *RedisByteFIFO) Close() error {
	return fifo.client.Close()
}

// Len returns the length of the fifo
func (fifo *RedisByteFIFO) Len(ctx context.Context) int64 {
	val, err := fifo.client.LLen(ctx, fifo.queueName).Result()
	if err != nil {
		log.Error("Error whilst getting length of redis queue %s: Error: %v", fifo.queueName, err)
		return -1
	}
	return val
}

func init() {
	queuesMap[RedisQueueType] = NewRedisQueue
}
```
-42
modules/queue/queue_test.go
```go
// Copyright 2019 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"testing"

	"code.gitea.io/gitea/modules/json"

	"github.com/stretchr/testify/assert"
)

type testData struct {
	TestString string
	TestInt    int
}

func TestToConfig(t *testing.T) {
	cfg := testData{
		TestString: "Config",
		TestInt:    10,
	}
	exemplar := testData{}

	cfg2I, err := toConfig(exemplar, cfg)
	assert.NoError(t, err)
	cfg2, ok := (cfg2I).(testData)
	assert.True(t, ok)
	assert.NotEqual(t, cfg2, exemplar)
	assert.Equal(t, &cfg, &cfg2)
	cfgString, err := json.Marshal(cfg)
	assert.NoError(t, err)

	cfg3I, err := toConfig(exemplar, cfgString)
	assert.NoError(t, err)
	cfg3, ok := (cfg3I).(testData)
	assert.True(t, ok)
	assert.Equal(t, cfg.TestString, cfg3.TestString)
	assert.Equal(t, cfg.TestInt, cfg3.TestInt)
	assert.NotEqual(t, cfg3, exemplar)
}
```
-315
modules/queue/queue_wrapped.go
```go
// Copyright 2019 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"
	"fmt"
	"sync"
	"sync/atomic"
	"time"

	"code.gitea.io/gitea/modules/log"
	"code.gitea.io/gitea/modules/util"
)

// WrappedQueueType is the type for a wrapped delayed starting queue
const WrappedQueueType Type = "wrapped"

// WrappedQueueConfiguration is the configuration for a WrappedQueue
type WrappedQueueConfiguration struct {
	Underlying  Type
	Timeout     time.Duration
	MaxAttempts int
	Config      interface{}
	QueueLength int
	Name        string
}

type delayedStarter struct {
	internal    Queue
	underlying  Type
	cfg         interface{}
	timeout     time.Duration
	maxAttempts int
	name        string
}

// setInternal must be called with the lock locked.
func (q *delayedStarter) setInternal(atShutdown func(func()), handle HandlerFunc, exemplar interface{}) error {
	var ctx context.Context
	var cancel context.CancelFunc
	if q.timeout > 0 {
		ctx, cancel = context.WithTimeout(context.Background(), q.timeout)
	} else {
		ctx, cancel = context.WithCancel(context.Background())
	}

	defer cancel()
	// Ensure we also stop at shutdown
	atShutdown(cancel)

	i := 1
	for q.internal == nil {
		select {
		case <-ctx.Done():
			cfg := q.cfg
			if s, ok := cfg.([]byte); ok {
				cfg = string(s)
			}
			return fmt.Errorf("timedout creating queue %v with cfg %#v in %s", q.underlying, cfg, q.name)
		default:
			queue, err := NewQueue(q.underlying, handle, q.cfg, exemplar)
			if err == nil {
				q.internal = queue
				break
			}
			if err.Error() != "resource temporarily unavailable" {
				if bs, ok := q.cfg.([]byte); ok {
					log.Warn("[Attempt: %d] Failed to create queue: %v for %s cfg: %s error: %v", i, q.underlying, q.name, string(bs), err)
				} else {
					log.Warn("[Attempt: %d] Failed to create queue: %v for %s cfg: %#v error: %v", i, q.underlying, q.name, q.cfg, err)
				}
			}
			i++
			if q.maxAttempts > 0 && i > q.maxAttempts {
				if bs, ok := q.cfg.([]byte); ok {
					return fmt.Errorf("unable to create queue %v for %s with cfg %s by max attempts: error: %w", q.underlying, q.name, string(bs), err)
				}
				return fmt.Errorf("unable to create queue %v for %s with cfg %#v by max attempts: error: %w", q.underlying, q.name, q.cfg, err)
			}
			sleepTime := 100 * time.Millisecond
			if q.timeout > 0 && q.maxAttempts > 0 {
				sleepTime = (q.timeout - 200*time.Millisecond) / time.Duration(q.maxAttempts)
			}
			t := time.NewTimer(sleepTime)
			select {
			case <-ctx.Done():
				util.StopTimer(t)
			case <-t.C:
			}
		}
	}
	return nil
}

// WrappedQueue wraps a delayed starting queue
type WrappedQueue struct {
	delayedStarter
	lock       sync.Mutex
	handle     HandlerFunc
	exemplar   interface{}
	channel    chan Data
	numInQueue int64
}

// NewWrappedQueue will attempt to create a queue of the provided type,
// but if there is a problem creating this queue it will instead create
// a WrappedQueue with delayed startup of the queue instead and a
// channel which will be redirected to the queue
func NewWrappedQueue(handle HandlerFunc, cfg, exemplar interface{}) (Queue, error) {
	configInterface, err := toConfig(WrappedQueueConfiguration{}, cfg)
	if err != nil {
		return nil, err
	}
	config := configInterface.(WrappedQueueConfiguration)

	queue, err := NewQueue(config.Underlying, handle, config.Config, exemplar)
	if err == nil {
		// Just return the queue there is no need to wrap
		return queue, nil
	}
	if IsErrInvalidConfiguration(err) {
		// Retrying ain't gonna make this any better...
		return nil, ErrInvalidConfiguration{cfg: cfg}
	}

	queue = &WrappedQueue{
		handle:   handle,
		channel:  make(chan Data, config.QueueLength),
		exemplar: exemplar,
		delayedStarter: delayedStarter{
			cfg:         config.Config,
			underlying:  config.Underlying,
			timeout:     config.Timeout,
			maxAttempts: config.MaxAttempts,
			name:        config.Name,
		},
	}
	_ = GetManager().Add(queue, WrappedQueueType, config, exemplar)
	return queue, nil
}

// Name returns the name of the queue
func (q *WrappedQueue) Name() string {
	return q.name + "-wrapper"
}

// Push will push the data to the internal channel checking it against the exemplar
func (q *WrappedQueue) Push(data Data) error {
	if !assignableTo(data, q.exemplar) {
		return fmt.Errorf("unable to assign data: %v to same type as exemplar: %v in %s", data, q.exemplar, q.name)
	}
	atomic.AddInt64(&q.numInQueue, 1)
	q.channel <- data
	return nil
}

func (q *WrappedQueue) flushInternalWithContext(ctx context.Context) error {
	q.lock.Lock()
	if q.internal == nil {
		q.lock.Unlock()
		return fmt.Errorf("not ready to flush wrapped queue %s yet", q.Name())
	}
	q.lock.Unlock()
	select {
	case <-ctx.Done():
		return ctx.Err()
	default:
	}
	return q.internal.FlushWithContext(ctx)
}

// Flush flushes the queue and blocks till the queue is empty
func (q *WrappedQueue) Flush(timeout time.Duration) error {
	var ctx context.Context
	var cancel context.CancelFunc
	if timeout > 0 {
		ctx, cancel = context.WithTimeout(context.Background(), timeout)
	} else {
		ctx, cancel = context.WithCancel(context.Background())
	}
	defer cancel()
	return q.FlushWithContext(ctx)
}

// FlushWithContext implements the final part of Flushable
func (q *WrappedQueue) FlushWithContext(ctx context.Context) error {
	log.Trace("WrappedQueue: %s FlushWithContext", q.Name())
	errChan := make(chan error, 1)
	go func() {
		errChan <- q.flushInternalWithContext(ctx)
		close(errChan)
	}()

	select {
	case err := <-errChan:
		return err
	case <-ctx.Done():
		go func() {
			<-errChan
		}()
		return ctx.Err()
	}
}

// IsEmpty checks whether the queue is empty
func (q *WrappedQueue) IsEmpty() bool {
	if atomic.LoadInt64(&q.numInQueue) != 0 {
		return false
	}
	q.lock.Lock()
	defer q.lock.Unlock()
	if q.internal == nil {
		return false
	}
	return q.internal.IsEmpty()
}

// Run starts to run the queue and attempts to create the internal queue
func (q *WrappedQueue) Run(atShutdown, atTerminate func(func())) {
	log.Debug("WrappedQueue: %s Starting", q.name)
	q.lock.Lock()
	if q.internal == nil {
		err := q.setInternal(atShutdown, q.handle, q.exemplar)
		q.lock.Unlock()
		if err != nil {
			log.Fatal("Unable to set the internal queue for %s Error: %v", q.Name(), err)
			return
		}
		go func() {
			for data := range q.channel {
				_ = q.internal.Push(data)
				atomic.AddInt64(&q.numInQueue, -1)
			}
		}()
	} else {
		q.lock.Unlock()
	}

	q.internal.Run(atShutdown, atTerminate)
	log.Trace("WrappedQueue: %s Done", q.name)
}

// Shutdown this queue and stop processing
func (q *WrappedQueue) Shutdown() {
	log.Trace("WrappedQueue: %s Shutting down", q.name)
	q.lock.Lock()
	defer q.lock.Unlock()
	if q.internal == nil {
		return
	}
	if shutdownable, ok := q.internal.(Shutdownable); ok {
		shutdownable.Shutdown()
	}
	log.Debug("WrappedQueue: %s Shutdown", q.name)
}

// Terminate this queue and close the queue
func (q *WrappedQueue) Terminate() {
	log.Trace("WrappedQueue: %s Terminating", q.name)
	q.lock.Lock()
	defer q.lock.Unlock()
	if q.internal == nil {
		return
	}
	if shutdownable, ok := q.internal.(Shutdownable); ok {
		shutdownable.Terminate()
	}
	log.Debug("WrappedQueue: %s Terminated", q.name)
}

// IsPaused will return if the pool or queue is paused
func (q *WrappedQueue) IsPaused() bool {
	q.lock.Lock()
	defer q.lock.Unlock()
	pausable, ok := q.internal.(Pausable)
	return ok && pausable.IsPaused()
}

// Pause will pause the pool or queue
func (q *WrappedQueue) Pause() {
	q.lock.Lock()
	defer q.lock.Unlock()
	if pausable, ok := q.internal.(Pausable); ok {
		pausable.Pause()
	}
}

// Resume will resume the pool or queue
func (q *WrappedQueue) Resume() {
	q.lock.Lock()
	defer q.lock.Unlock()
	if pausable, ok := q.internal.(Pausable); ok {
		pausable.Resume()
	}
}

// IsPausedIsResumed will return a bool indicating if the pool or queue is paused and a channel that will be closed when it is resumed
func (q *WrappedQueue) IsPausedIsResumed() (paused, resumed <-chan struct{}) {
	q.lock.Lock()
	defer q.lock.Unlock()
	if pausable, ok := q.internal.(Pausable); ok {
		return pausable.IsPausedIsResumed()
	}
	return context.Background().Done(), closedChan
}

var closedChan chan struct{}

func init() {
	queuesMap[WrappedQueueType] = NewWrappedQueue
	closedChan = make(chan struct{})
	close(closedChan)
}
```
-126
modules/queue/setting.go
```go
// Copyright 2019 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"fmt"
	"strings"

	"code.gitea.io/gitea/modules/json"
	"code.gitea.io/gitea/modules/log"
	"code.gitea.io/gitea/modules/setting"
)

func validType(t string) (Type, error) {
	if len(t) == 0 {
		return PersistableChannelQueueType, nil
	}
	for _, typ := range RegisteredTypes() {
		if t == string(typ) {
			return typ, nil
		}
	}
	return PersistableChannelQueueType, fmt.Errorf("unknown queue type: %s defaulting to %s", t, string(PersistableChannelQueueType))
}

func getQueueSettings(name string) (setting.QueueSettings, []byte) {
	q := setting.GetQueueSettings(name)
	cfg, err := json.Marshal(q)
	if err != nil {
		log.Error("Unable to marshall generic options: %v Error: %v", q, err)
		log.Error("Unable to create queue for %s", name, err)
		return q, []byte{}
	}
	return q, cfg
}

// CreateQueue for name with provided handler and exemplar
func CreateQueue(name string, handle HandlerFunc, exemplar interface{}) Queue {
	q, cfg := getQueueSettings(name)
	if len(cfg) == 0 {
		return nil
	}

	typ, err := validType(q.Type)
	if err != nil {
		log.Error("Invalid type %s provided for queue named %s defaulting to %s", q.Type, name, string(typ))
	}

	returnable, err := NewQueue(typ, handle, cfg, exemplar)
	if q.WrapIfNecessary && err != nil {
		log.Warn("Unable to create queue for %s: %v", name, err)
		log.Warn("Attempting to create wrapped queue")
		returnable, err = NewQueue(WrappedQueueType, handle, WrappedQueueConfiguration{
			Underlying:  typ,
			Timeout:     q.Timeout,
			MaxAttempts: q.MaxAttempts,
			Config:      cfg,
			QueueLength: q.QueueLength,
			Name:        name,
		}, exemplar)
	}
	if err != nil {
		log.Error("Unable to create queue for %s: %v", name, err)
		return nil
	}

	// Sanity check configuration
	if q.Workers == 0 && (q.BoostTimeout == 0 || q.BoostWorkers == 0 || q.MaxWorkers == 0) {
		log.Warn("Queue: %s is configured to be non-scaling and have no workers\n - this configuration is likely incorrect and could cause Gitea to block", q.Name)
		if pausable, ok := returnable.(Pausable); ok {
			log.Warn("Queue: %s is being paused to prevent data-loss, add workers manually and unpause.", q.Name)
			pausable.Pause()
		}
	}

	return returnable
}

// CreateUniqueQueue for name with provided handler and exemplar
func CreateUniqueQueue(name string, handle HandlerFunc, exemplar interface{}) UniqueQueue {
	q, cfg := getQueueSettings(name)
	if len(cfg) == 0 {
		return nil
	}

	if len(q.Type) > 0 && q.Type != "dummy" && q.Type != "immediate" && !strings.HasPrefix(q.Type, "unique-") {
		q.Type = "unique-" + q.Type
	}

	typ, err := validType(q.Type)
	if err != nil || typ == PersistableChannelQueueType {
		typ = PersistableChannelUniqueQueueType
		if err != nil {
			log.Error("Invalid type %s provided for queue named %s defaulting to %s", q.Type, name, string(typ))
		}
	}

	returnable, err := NewQueue(typ, handle, cfg, exemplar)
	if q.WrapIfNecessary && err != nil {
		log.Warn("Unable to create unique queue for %s: %v", name, err)
		log.Warn("Attempting to create wrapped queue")
		returnable, err = NewQueue(WrappedUniqueQueueType, handle, WrappedUniqueQueueConfiguration{
			Underlying:  typ,
			Timeout:     q.Timeout,
			MaxAttempts: q.MaxAttempts,
			Config:      cfg,
			QueueLength: q.QueueLength,
		}, exemplar)
	}
	if err != nil {
		log.Error("Unable to create unique queue for %s: %v", name, err)
		return nil
	}

	// Sanity check configuration
	if q.Workers == 0 && (q.BoostTimeout == 0 || q.BoostWorkers == 0 || q.MaxWorkers == 0) {
		log.Warn("Queue: %s is configured to be non-scaling and have no workers\n - this configuration is likely incorrect and could cause Gitea to block", q.Name)
		if pausable, ok := returnable.(Pausable); ok {
			log.Warn("Queue: %s is being paused to prevent data-loss, add workers manually and unpause.", q.Name)
			pausable.Pause()
		}
	}

	return returnable.(UniqueQueue)
}
```
+40
modules/queue/testhelper.go
```go
// Copyright 2019 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"fmt"
	"sync"
)

// testStateRecorder is used to record state changes for testing, to help debug async behaviors
type testStateRecorder struct {
	records []string
	mu      sync.Mutex
}

var testRecorder = &testStateRecorder{}

func (t *testStateRecorder) Record(format string, args ...any) {
	t.mu.Lock()
	t.records = append(t.records, fmt.Sprintf(format, args...))
	if len(t.records) > 1000 {
		t.records = t.records[len(t.records)-1000:]
	}
	t.mu.Unlock()
}

func (t *testStateRecorder) Records() []string {
	t.mu.Lock()
	r := make([]string, len(t.records))
	copy(r, t.records)
	t.mu.Unlock()
	return r
}

func (t *testStateRecorder) Reset() {
	t.mu.Lock()
	t.records = nil
	t.mu.Unlock()
}
```
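The `testhelper.go` file added above gives tests a mutex-guarded recorder, so goroutines can log state transitions without data races and tests can assert on the resulting trace. A minimal standalone sketch of the same pattern (the `stateRecorder` name and the worker count here are illustrative, not part of the PR):

```go
package main

import (
	"fmt"
	"sync"
)

// stateRecorder collects formatted event strings under a mutex so that
// concurrent goroutines can record state changes without a data race.
type stateRecorder struct {
	mu      sync.Mutex
	records []string
}

func (r *stateRecorder) Record(format string, args ...any) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.records = append(r.records, fmt.Sprintf(format, args...))
}

// Records returns a copy, so callers cannot observe later mutation.
func (r *stateRecorder) Records() []string {
	r.mu.Lock()
	defer r.mu.Unlock()
	out := make([]string, len(r.records))
	copy(out, r.records)
	return out
}

func main() {
	r := &stateRecorder{}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			r.Record("worker %d: done", i)
		}(i)
	}
	wg.Wait()
	// Three records arrive in nondeterministic order, but the count is fixed.
	fmt.Println(len(r.Records()))
}
```

Returning a copy from `Records()` matters: a test that ranges over the slice while workers are still appending would otherwise race on the backing array.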
-28
modules/queue/unique_queue.go
```go
// Copyright 2020 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"fmt"
)

// UniqueQueue defines a queue which guarantees only one instance of same
// data is in the queue. Instances with same identity will be
// discarded if there is already one in the line.
//
// This queue is particularly useful for preventing duplicated task
// of same purpose - please note that this does not guarantee that a particular
// task cannot be processed twice or more at the same time. Uniqueness is
// only guaranteed whilst the task is waiting in the queue.
//
// Users of this queue should be careful to push only the identifier of the
// data
type UniqueQueue interface {
	Queue
	PushFunc(Data, func() error) error
	Has(Data) (bool, error)
}

// ErrAlreadyInQueue is returned when trying to push data to the queue that is already in the queue
var ErrAlreadyInQueue = fmt.Errorf("already in queue")
```
-212
modules/queue/unique_queue_channel.go
```go
// Copyright 2020 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"
	"fmt"
	"runtime/pprof"
	"sync"
	"time"

	"code.gitea.io/gitea/modules/container"
	"code.gitea.io/gitea/modules/json"
	"code.gitea.io/gitea/modules/log"
)

// ChannelUniqueQueueType is the type for channel queue
const ChannelUniqueQueueType Type = "unique-channel"

// ChannelUniqueQueueConfiguration is the configuration for a ChannelUniqueQueue
type ChannelUniqueQueueConfiguration ChannelQueueConfiguration

// ChannelUniqueQueue implements UniqueQueue
//
// It is basically a thin wrapper around a WorkerPool but keeps a store of
// what has been pushed within a table.
//
// Please note that this Queue does not guarantee that a particular
// task cannot be processed twice or more at the same time. Uniqueness is
// only guaranteed whilst the task is waiting in the queue.
type ChannelUniqueQueue struct {
	*WorkerPool
	lock               sync.Mutex
	table              container.Set[string]
	shutdownCtx        context.Context
	shutdownCtxCancel  context.CancelFunc
	terminateCtx       context.Context
	terminateCtxCancel context.CancelFunc
	exemplar           interface{}
	workers            int
	name               string
}

// NewChannelUniqueQueue create a memory channel queue
func NewChannelUniqueQueue(handle HandlerFunc, cfg, exemplar interface{}) (Queue, error) {
	configInterface, err := toConfig(ChannelUniqueQueueConfiguration{}, cfg)
	if err != nil {
		return nil, err
	}
	config := configInterface.(ChannelUniqueQueueConfiguration)
	if config.BatchLength == 0 {
		config.BatchLength = 1
	}

	terminateCtx, terminateCtxCancel := context.WithCancel(context.Background())
	shutdownCtx, shutdownCtxCancel := context.WithCancel(terminateCtx)

	queue := &ChannelUniqueQueue{
		table:              make(container.Set[string]),
		shutdownCtx:        shutdownCtx,
		shutdownCtxCancel:  shutdownCtxCancel,
		terminateCtx:       terminateCtx,
		terminateCtxCancel: terminateCtxCancel,
		exemplar:           exemplar,
		workers:            config.Workers,
		name:               config.Name,
	}
	queue.WorkerPool = NewWorkerPool(func(data ...Data) (unhandled []Data) {
		for _, datum := range data {
			// No error is possible here because PushFunc ensures that this can be marshalled
			bs, _ := json.Marshal(datum)

			queue.lock.Lock()
			queue.table.Remove(string(bs))
			queue.lock.Unlock()

			if u := handle(datum); u != nil {
				if queue.IsPaused() {
					// We can only pushback to the channel if we're paused.
					go func() {
						if err := queue.Push(u[0]); err != nil {
							log.Error("Unable to push back to queue %d. Error: %v", queue.qid, err)
						}
					}()
				} else {
					unhandled = append(unhandled, u...)
				}
			}
		}
		return unhandled
	}, config.WorkerPoolConfiguration)

	queue.qid = GetManager().Add(queue, ChannelUniqueQueueType, config, exemplar)
	return queue, nil
}

// Run starts to run the queue
func (q *ChannelUniqueQueue) Run(atShutdown, atTerminate func(func())) {
	pprof.SetGoroutineLabels(q.baseCtx)
	atShutdown(q.Shutdown)
	atTerminate(q.Terminate)
	log.Debug("ChannelUniqueQueue: %s Starting", q.name)
	_ = q.AddWorkers(q.workers, 0)
}

// Push will push data into the queue if the data is not already in the queue
func (q *ChannelUniqueQueue) Push(data Data) error {
	return q.PushFunc(data, nil)
}

// PushFunc will push data into the queue
func (q *ChannelUniqueQueue) PushFunc(data Data, fn func() error) error {
	if !assignableTo(data, q.exemplar) {
		return fmt.Errorf("unable to assign data: %v to same type as exemplar: %v in queue: %s", data, q.exemplar, q.name)
	}

	bs, err := json.Marshal(data)
	if err != nil {
		return err
	}
	q.lock.Lock()
	locked := true
	defer func() {
		if locked {
			q.lock.Unlock()
		}
	}()
	if !q.table.Add(string(bs)) {
		return ErrAlreadyInQueue
	}
	// FIXME: We probably need to implement some sort of limit here
	// If the downstream queue blocks this table will grow without limit
	if fn != nil {
		err := fn()
		if err != nil {
			q.table.Remove(string(bs))
			return err
		}
	}
	locked = false
	q.lock.Unlock()
	q.WorkerPool.Push(data)
	return nil
}

// Has checks if the data is in the queue
func (q *ChannelUniqueQueue) Has(data Data) (bool, error) {
	bs, err := json.Marshal(data)
	if err != nil {
		return false, err
	}

	q.lock.Lock()
	defer q.lock.Unlock()
	return q.table.Contains(string(bs)), nil
}

// Flush flushes the channel with a timeout - the Flush worker will be registered as a flush worker with the manager
func (q *ChannelUniqueQueue) Flush(timeout time.Duration) error {
	if q.IsPaused() {
		return nil
	}
	ctx, cancel := q.commonRegisterWorkers(1, timeout, true)
	defer cancel()
	return q.FlushWithContext(ctx)
}

// Shutdown processing from this queue
func (q *ChannelUniqueQueue) Shutdown() {
	log.Trace("ChannelUniqueQueue: %s Shutting down", q.name)
	select {
	case <-q.shutdownCtx.Done():
		return
	default:
	}
	go func() {
		log.Trace("ChannelUniqueQueue: %s Flushing", q.name)
		if err := q.FlushWithContext(q.terminateCtx); err != nil {
			if !q.IsEmpty() {
				log.Warn("ChannelUniqueQueue: %s Terminated before completed flushing", q.name)
			}
			return
		}
		log.Debug("ChannelUniqueQueue: %s Flushed", q.name)
	}()
	q.shutdownCtxCancel()
	log.Debug("ChannelUniqueQueue: %s Shutdown", q.name)
}

// Terminate this queue and close the queue
func (q *ChannelUniqueQueue) Terminate() {
	log.Trace("ChannelUniqueQueue: %s Terminating", q.name)
	q.Shutdown()
	select {
	case <-q.terminateCtx.Done():
		return
	default:
	}
	q.terminateCtxCancel()
	q.baseCtxFinished()
	log.Debug("ChannelUniqueQueue: %s Terminated", q.name)
}

// Name returns the name of this queue
func (q *ChannelUniqueQueue) Name() string {
	return q.name
}

func init() {
	queuesMap[ChannelUniqueQueueType] = NewChannelUniqueQueue
}
```
-258
modules/queue/unique_queue_channel_test.go
```go
// Copyright 2019 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"sync"
	"testing"
	"time"

	"code.gitea.io/gitea/modules/log"

	"github.com/stretchr/testify/assert"
)

func TestChannelUniqueQueue(t *testing.T) {
	_ = log.NewLogger(1000, "console", "console", `{"level":"warn","stacktracelevel":"NONE","stderr":true}`)
	handleChan := make(chan *testData)
	handle := func(data ...Data) []Data {
		for _, datum := range data {
			testDatum := datum.(*testData)
			handleChan <- testDatum
		}
		return nil
	}

	nilFn := func(_ func()) {}

	queue, err := NewChannelUniqueQueue(handle,
		ChannelQueueConfiguration{
			WorkerPoolConfiguration: WorkerPoolConfiguration{
				QueueLength:  0,
				MaxWorkers:   10,
				BlockTimeout: 1 * time.Second,
				BoostTimeout: 5 * time.Minute,
				BoostWorkers: 5,
				Name:         "TestChannelQueue",
			},
			Workers: 0,
		}, &testData{})
	assert.NoError(t, err)

	assert.Equal(t, queue.(*ChannelUniqueQueue).WorkerPool.boostWorkers, 5)

	go queue.Run(nilFn, nilFn)

	test1 := testData{"A", 1}
	go queue.Push(&test1)
	result1 := <-handleChan
	assert.Equal(t, test1.TestString, result1.TestString)
	assert.Equal(t, test1.TestInt, result1.TestInt)

	err = queue.Push(test1)
	assert.Error(t, err)
}

func TestChannelUniqueQueue_Batch(t *testing.T) {
	_ = log.NewLogger(1000, "console", "console", `{"level":"warn","stacktracelevel":"NONE","stderr":true}`)

	handleChan := make(chan *testData)
	handle := func(data ...Data) []Data {
		for _, datum := range data {
			testDatum := datum.(*testData)
			handleChan <- testDatum
		}
		return nil
	}

	nilFn := func(_ func()) {}

	queue, err := NewChannelUniqueQueue(handle,
		ChannelQueueConfiguration{
			WorkerPoolConfiguration: WorkerPoolConfiguration{
				QueueLength:  20,
				BatchLength:  2,
				BlockTimeout: 0,
				BoostTimeout: 0,
				BoostWorkers: 0,
				MaxWorkers:   10,
			},
			Workers: 1,
		}, &testData{})
	assert.NoError(t, err)

	go queue.Run(nilFn, nilFn)

	test1 := testData{"A", 1}
	test2 := testData{"B", 2}

	queue.Push(&test1)
	go queue.Push(&test2)

	result1 := <-handleChan
	assert.Equal(t, test1.TestString, result1.TestString)
	assert.Equal(t, test1.TestInt, result1.TestInt)

	result2 := <-handleChan
	assert.Equal(t, test2.TestString, result2.TestString)
	assert.Equal(t, test2.TestInt, result2.TestInt)

	err = queue.Push(test1)
	assert.Error(t, err)
}

func TestChannelUniqueQueue_Pause(t *testing.T) {
	_ = log.NewLogger(1000, "console", "console", `{"level":"warn","stacktracelevel":"NONE","stderr":true}`)

	lock := sync.Mutex{}
	var queue Queue
	var err error
	pushBack := false
	handleChan := make(chan *testData)
	handle := func(data ...Data) []Data {
		lock.Lock()
		if pushBack {
			if pausable, ok := queue.(Pausable); ok {
				pausable.Pause()
			}
			pushBack = false
			lock.Unlock()
			return data
		}
		lock.Unlock()

		for _, datum := range data {
			testDatum := datum.(*testData)
			handleChan <- testDatum
		}
		return nil
	}
	nilFn := func(_ func()) {}

	queue, err = NewChannelUniqueQueue(handle,
		ChannelQueueConfiguration{
			WorkerPoolConfiguration: WorkerPoolConfiguration{
				QueueLength:  20,
				BatchLength:  1,
				BlockTimeout: 0,
				BoostTimeout: 0,
				BoostWorkers: 0,
				MaxWorkers:   10,
			},
			Workers: 1,
		}, &testData{})
	assert.NoError(t, err)

	go queue.Run(nilFn, nilFn)

	test1 := testData{"A", 1}
	test2 := testData{"B", 2}
	queue.Push(&test1)

	pausable, ok := queue.(Pausable)
	if !assert.True(t, ok) {
		return
	}
	result1 := <-handleChan
	assert.Equal(t, test1.TestString, result1.TestString)
	assert.Equal(t, test1.TestInt, result1.TestInt)

	pausable.Pause()

	paused, resumed := pausable.IsPausedIsResumed()

	select {
	case <-paused:
	case <-resumed:
		assert.Fail(t, "Queue should not be resumed")
		return
	default:
		assert.Fail(t, "Queue is not paused")
		return
	}

	queue.Push(&test2)

	var result2 *testData
	select {
	case result2 = <-handleChan:
		assert.Fail(t, "handler chan should be empty")
	case <-time.After(100 * time.Millisecond):
	}

	assert.Nil(t, result2)

	pausable.Resume()

	select {
	case <-resumed:
	default:
		assert.Fail(t, "Queue should be resumed")
	}

	select {
	case result2 = <-handleChan:
	case <-time.After(500 * time.Millisecond):
		assert.Fail(t, "handler chan should contain test2")
	}

	assert.Equal(t, test2.TestString, result2.TestString)
	assert.Equal(t, test2.TestInt, result2.TestInt)

	lock.Lock()
	pushBack = true
	lock.Unlock()

	paused, resumed = pausable.IsPausedIsResumed()

	select {
	case <-paused:
		assert.Fail(t, "Queue should not be paused")
		return
	case <-resumed:
	default:
		assert.Fail(t, "Queue is not resumed")
		return
	}

	queue.Push(&test1)

	select {
	case <-paused:
	case <-handleChan:
		assert.Fail(t, "handler chan should not contain test1")
		return
	case <-time.After(500 * time.Millisecond):
		assert.Fail(t, "queue should be paused")
		return
	}

	paused, resumed = pausable.IsPausedIsResumed()

	select {
	case <-paused:
	case <-resumed:
		assert.Fail(t, "Queue should not be resumed")
		return
	default:
		assert.Fail(t, "Queue is not paused")
		return
	}

	pausable.Resume()

	select {
	case <-resumed:
	default:
		assert.Fail(t, "Queue should be resumed")
	}

	select {
	case result1 = <-handleChan:
	case <-time.After(500 * time.Millisecond):
		assert.Fail(t, "handler chan should contain test1")
	}
	assert.Equal(t, test1.TestString, result1.TestString)
	assert.Equal(t, test1.TestInt, result1.TestInt)
}
```
modules/queue/unique_queue_disk.go (128 lines removed)

```go
// Copyright 2019 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"

	"code.gitea.io/gitea/modules/nosql"

	"gitea.com/lunny/levelqueue"
)

// LevelUniqueQueueType is the type for level queue
const LevelUniqueQueueType Type = "unique-level"

// LevelUniqueQueueConfiguration is the configuration for a LevelUniqueQueue
type LevelUniqueQueueConfiguration struct {
	ByteFIFOQueueConfiguration
	DataDir          string
	ConnectionString string
	QueueName        string
}

// LevelUniqueQueue implements a disk library queue
type LevelUniqueQueue struct {
	*ByteFIFOUniqueQueue
}

// NewLevelUniqueQueue creates a ledis local queue
//
// Please note that this Queue does not guarantee that a particular
// task cannot be processed twice or more at the same time. Uniqueness is
// only guaranteed whilst the task is waiting in the queue.
func NewLevelUniqueQueue(handle HandlerFunc, cfg, exemplar interface{}) (Queue, error) {
	configInterface, err := toConfig(LevelUniqueQueueConfiguration{}, cfg)
	if err != nil {
		return nil, err
	}
	config := configInterface.(LevelUniqueQueueConfiguration)

	if len(config.ConnectionString) == 0 {
		config.ConnectionString = config.DataDir
	}
	config.WaitOnEmpty = true

	byteFIFO, err := NewLevelUniqueQueueByteFIFO(config.ConnectionString, config.QueueName)
	if err != nil {
		return nil, err
	}

	byteFIFOQueue, err := NewByteFIFOUniqueQueue(LevelUniqueQueueType, byteFIFO, handle, config.ByteFIFOQueueConfiguration, exemplar)
	if err != nil {
		return nil, err
	}

	queue := &LevelUniqueQueue{
		ByteFIFOUniqueQueue: byteFIFOQueue,
	}
	queue.qid = GetManager().Add(queue, LevelUniqueQueueType, config, exemplar)
	return queue, nil
}

var _ UniqueByteFIFO = &LevelUniqueQueueByteFIFO{}

// LevelUniqueQueueByteFIFO represents a ByteFIFO formed from a LevelUniqueQueue
type LevelUniqueQueueByteFIFO struct {
	internal   *levelqueue.UniqueQueue
	connection string
}

// NewLevelUniqueQueueByteFIFO creates a new ByteFIFO formed from a LevelUniqueQueue
func NewLevelUniqueQueueByteFIFO(connection, prefix string) (*LevelUniqueQueueByteFIFO, error) {
	db, err := nosql.GetManager().GetLevelDB(connection)
	if err != nil {
		return nil, err
	}

	internal, err := levelqueue.NewUniqueQueue(db, []byte(prefix), []byte(prefix+"-unique"), false)
	if err != nil {
		return nil, err
	}

	return &LevelUniqueQueueByteFIFO{
		connection: connection,
		internal:   internal,
	}, nil
}

// PushFunc pushes data to the end of the fifo and calls the callback if it is added
func (fifo *LevelUniqueQueueByteFIFO) PushFunc(ctx context.Context, data []byte, fn func() error) error {
	return fifo.internal.LPushFunc(data, fn)
}

// PushBack pushes data to the top of the fifo
func (fifo *LevelUniqueQueueByteFIFO) PushBack(ctx context.Context, data []byte) error {
	return fifo.internal.RPush(data)
}

// Pop pops data from the start of the fifo
func (fifo *LevelUniqueQueueByteFIFO) Pop(ctx context.Context) ([]byte, error) {
	data, err := fifo.internal.RPop()
	if err != nil && err != levelqueue.ErrNotFound {
		return nil, err
	}
	return data, nil
}

// Len returns the length of the fifo
func (fifo *LevelUniqueQueueByteFIFO) Len(ctx context.Context) int64 {
	return fifo.internal.Len()
}

// Has returns whether the fifo contains this data
func (fifo *LevelUniqueQueueByteFIFO) Has(ctx context.Context, data []byte) (bool, error) {
	return fifo.internal.Has(data)
}

// Close this fifo
func (fifo *LevelUniqueQueueByteFIFO) Close() error {
	err := fifo.internal.Close()
	_ = nosql.GetManager().CloseLevelDB(fifo.connection)
	return err
}

func init() {
	queuesMap[LevelUniqueQueueType] = NewLevelUniqueQueue
}
```
modules/queue/unique_queue_disk_channel.go (336 lines removed)

```go
// Copyright 2020 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"
	"runtime/pprof"
	"sync"
	"time"

	"code.gitea.io/gitea/modules/log"
)

// PersistableChannelUniqueQueueType is the type for persistable queue
const PersistableChannelUniqueQueueType Type = "unique-persistable-channel"

// PersistableChannelUniqueQueueConfiguration is the configuration for a PersistableChannelUniqueQueue
type PersistableChannelUniqueQueueConfiguration struct {
	Name         string
	DataDir      string
	BatchLength  int
	QueueLength  int
	Timeout      time.Duration
	MaxAttempts  int
	Workers      int
	MaxWorkers   int
	BlockTimeout time.Duration
	BoostTimeout time.Duration
	BoostWorkers int
}

// PersistableChannelUniqueQueue wraps a channel queue and level queue together
//
// Please note that this Queue does not guarantee that a particular
// task cannot be processed twice or more at the same time. Uniqueness is
// only guaranteed whilst the task is waiting in the queue.
type PersistableChannelUniqueQueue struct {
	channelQueue *ChannelUniqueQueue
	delayedStarter
	lock   sync.Mutex
	closed chan struct{}
}

// NewPersistableChannelUniqueQueue creates a wrapped batched channel queue with persistable level queue backend when shutting down
// This differs from a wrapped queue in that the persistent queue is only used to persist at shutdown/terminate
func NewPersistableChannelUniqueQueue(handle HandlerFunc, cfg, exemplar interface{}) (Queue, error) {
	configInterface, err := toConfig(PersistableChannelUniqueQueueConfiguration{}, cfg)
	if err != nil {
		return nil, err
	}
	config := configInterface.(PersistableChannelUniqueQueueConfiguration)

	queue := &PersistableChannelUniqueQueue{
		closed: make(chan struct{}),
	}

	wrappedHandle := func(data ...Data) (failed []Data) {
		for _, unhandled := range handle(data...) {
			if fail := queue.PushBack(unhandled); fail != nil {
				failed = append(failed, fail)
			}
		}
		return failed
	}

	channelUniqueQueue, err := NewChannelUniqueQueue(wrappedHandle, ChannelUniqueQueueConfiguration{
		WorkerPoolConfiguration: WorkerPoolConfiguration{
			QueueLength:  config.QueueLength,
			BatchLength:  config.BatchLength,
			BlockTimeout: config.BlockTimeout,
			BoostTimeout: config.BoostTimeout,
			BoostWorkers: config.BoostWorkers,
			MaxWorkers:   config.MaxWorkers,
			Name:         config.Name + "-channel",
		},
		Workers: config.Workers,
	}, exemplar)
	if err != nil {
		return nil, err
	}

	// the level backend only needs temporary workers to catch up with the previously dropped work
	levelCfg := LevelUniqueQueueConfiguration{
		ByteFIFOQueueConfiguration: ByteFIFOQueueConfiguration{
			WorkerPoolConfiguration: WorkerPoolConfiguration{
				QueueLength:  config.QueueLength,
				BatchLength:  config.BatchLength,
				BlockTimeout: 1 * time.Second,
				BoostTimeout: 5 * time.Minute,
				BoostWorkers: 1,
				MaxWorkers:   5,
				Name:         config.Name + "-level",
			},
			Workers: 0,
		},
		DataDir:   config.DataDir,
		QueueName: config.Name + "-level",
	}

	queue.channelQueue = channelUniqueQueue.(*ChannelUniqueQueue)

	levelQueue, err := NewLevelUniqueQueue(func(data ...Data) []Data {
		for _, datum := range data {
			err := queue.Push(datum)
			if err != nil && err != ErrAlreadyInQueue {
				log.Error("Unable push to channelled queue: %v", err)
			}
		}
		return nil
	}, levelCfg, exemplar)
	if err == nil {
		queue.delayedStarter = delayedStarter{
			internal: levelQueue.(*LevelUniqueQueue),
			name:     config.Name,
		}

		_ = GetManager().Add(queue, PersistableChannelUniqueQueueType, config, exemplar)
		return queue, nil
	}
	if IsErrInvalidConfiguration(err) {
		// Retrying ain't gonna make this any better...
		return nil, ErrInvalidConfiguration{cfg: cfg}
	}

	queue.delayedStarter = delayedStarter{
		cfg:         levelCfg,
		underlying:  LevelUniqueQueueType,
		timeout:     config.Timeout,
		maxAttempts: config.MaxAttempts,
		name:        config.Name,
	}
	_ = GetManager().Add(queue, PersistableChannelUniqueQueueType, config, exemplar)
	return queue, nil
}

// Name returns the name of this queue
func (q *PersistableChannelUniqueQueue) Name() string {
	return q.delayedStarter.name
}

// Push will push the indexer data to queue
func (q *PersistableChannelUniqueQueue) Push(data Data) error {
	return q.PushFunc(data, nil)
}

// PushFunc will push the indexer data to queue
func (q *PersistableChannelUniqueQueue) PushFunc(data Data, fn func() error) error {
	select {
	case <-q.closed:
		return q.internal.(UniqueQueue).PushFunc(data, fn)
	default:
		return q.channelQueue.PushFunc(data, fn)
	}
}

// PushBack will push the indexer data to queue
func (q *PersistableChannelUniqueQueue) PushBack(data Data) error {
	select {
	case <-q.closed:
		if pbr, ok := q.internal.(PushBackable); ok {
			return pbr.PushBack(data)
		}
		return q.internal.Push(data)
	default:
		return q.channelQueue.Push(data)
	}
}

// Has will test if the queue has the data
func (q *PersistableChannelUniqueQueue) Has(data Data) (bool, error) {
	// This is more difficult...
	has, err := q.channelQueue.Has(data)
	if err != nil || has {
		return has, err
	}
	q.lock.Lock()
	defer q.lock.Unlock()
	if q.internal == nil {
		return false, nil
	}
	return q.internal.(UniqueQueue).Has(data)
}

// Run starts to run the queue
func (q *PersistableChannelUniqueQueue) Run(atShutdown, atTerminate func(func())) {
	pprof.SetGoroutineLabels(q.channelQueue.baseCtx)
	log.Debug("PersistableChannelUniqueQueue: %s Starting", q.delayedStarter.name)

	q.lock.Lock()
	if q.internal == nil {
		err := q.setInternal(atShutdown, func(data ...Data) []Data {
			for _, datum := range data {
				err := q.Push(datum)
				if err != nil && err != ErrAlreadyInQueue {
					log.Error("Unable push to channelled queue: %v", err)
				}
			}
			return nil
		}, q.channelQueue.exemplar)
		q.lock.Unlock()
		if err != nil {
			log.Fatal("Unable to create internal queue for %s Error: %v", q.Name(), err)
			return
		}
	} else {
		q.lock.Unlock()
	}
	atShutdown(q.Shutdown)
	atTerminate(q.Terminate)
	_ = q.channelQueue.AddWorkers(q.channelQueue.workers, 0)

	if luq, ok := q.internal.(*LevelUniqueQueue); ok && !luq.IsEmpty() {
		// Just run the level queue - we shut it down once it's flushed
		go luq.Run(func(_ func()) {}, func(_ func()) {})
		go func() {
			_ = luq.Flush(0)
			for !luq.IsEmpty() {
				_ = luq.Flush(0)
				select {
				case <-time.After(100 * time.Millisecond):
				case <-luq.shutdownCtx.Done():
					if luq.byteFIFO.Len(luq.terminateCtx) > 0 {
						log.Warn("LevelUniqueQueue: %s shut down before completely flushed", luq.Name())
					}
					return
				}
			}
			log.Debug("LevelUniqueQueue: %s flushed so shutting down", luq.Name())
			luq.Shutdown()
			GetManager().Remove(luq.qid)
		}()
	} else {
		log.Debug("PersistableChannelUniqueQueue: %s Skipping running the empty level queue", q.delayedStarter.name)
		_ = q.internal.Flush(0)
		q.internal.(*LevelUniqueQueue).Shutdown()
		GetManager().Remove(q.internal.(*LevelUniqueQueue).qid)
	}
}

// Flush flushes the queue
func (q *PersistableChannelUniqueQueue) Flush(timeout time.Duration) error {
	return q.channelQueue.Flush(timeout)
}

// FlushWithContext flushes the queue
func (q *PersistableChannelUniqueQueue) FlushWithContext(ctx context.Context) error {
	return q.channelQueue.FlushWithContext(ctx)
}

// IsEmpty checks if a queue is empty
func (q *PersistableChannelUniqueQueue) IsEmpty() bool {
	return q.channelQueue.IsEmpty()
}

// IsPaused will return if the pool or queue is paused
func (q *PersistableChannelUniqueQueue) IsPaused() bool {
	return q.channelQueue.IsPaused()
}

// Pause will pause the pool or queue
func (q *PersistableChannelUniqueQueue) Pause() {
	q.channelQueue.Pause()
}

// Resume will resume the pool or queue
func (q *PersistableChannelUniqueQueue) Resume() {
	q.channelQueue.Resume()
}

// IsPausedIsResumed will return a bool indicating if the pool or queue is paused and a channel that will be closed when it is resumed
func (q *PersistableChannelUniqueQueue) IsPausedIsResumed() (paused, resumed <-chan struct{}) {
	return q.channelQueue.IsPausedIsResumed()
}

// Shutdown processing this queue
func (q *PersistableChannelUniqueQueue) Shutdown() {
	log.Trace("PersistableChannelUniqueQueue: %s Shutting down", q.delayedStarter.name)
	q.lock.Lock()
	select {
	case <-q.closed:
		q.lock.Unlock()
		return
	default:
		if q.internal != nil {
			q.internal.(*LevelUniqueQueue).Shutdown()
		}
		close(q.closed)
		q.lock.Unlock()
	}

	log.Trace("PersistableChannelUniqueQueue: %s Cancelling pools", q.delayedStarter.name)
	q.internal.(*LevelUniqueQueue).baseCtxCancel()
	q.channelQueue.baseCtxCancel()
	log.Trace("PersistableChannelUniqueQueue: %s Waiting til done", q.delayedStarter.name)
	q.channelQueue.Wait()
	q.internal.(*LevelUniqueQueue).Wait()
	// Redirect all remaining data in the chan to the internal channel
	close(q.channelQueue.dataChan)
	log.Trace("PersistableChannelUniqueQueue: %s Redirecting remaining data", q.delayedStarter.name)
	countOK, countLost := 0, 0
	for data := range q.channelQueue.dataChan {
		err := q.internal.(*LevelUniqueQueue).Push(data)
		if err != nil {
			log.Error("PersistableChannelUniqueQueue: %s Unable redirect %v due to: %v", q.delayedStarter.name, data, err)
			countLost++
		} else {
			countOK++
		}
	}
	if countLost > 0 {
		log.Warn("PersistableChannelUniqueQueue: %s %d will be restored on restart, %d lost", q.delayedStarter.name, countOK, countLost)
	} else if countOK > 0 {
		log.Warn("PersistableChannelUniqueQueue: %s %d will be restored on restart", q.delayedStarter.name, countOK)
	}
	log.Trace("PersistableChannelUniqueQueue: %s Done Redirecting remaining data", q.delayedStarter.name)

	log.Debug("PersistableChannelUniqueQueue: %s Shutdown", q.delayedStarter.name)
}

// Terminate this queue and close the queue
func (q *PersistableChannelUniqueQueue) Terminate() {
	log.Trace("PersistableChannelUniqueQueue: %s Terminating", q.delayedStarter.name)
	q.Shutdown()
	q.lock.Lock()
	defer q.lock.Unlock()
	if q.internal != nil {
		q.internal.(*LevelUniqueQueue).Terminate()
	}
	q.channelQueue.baseCtxFinished()
	log.Debug("PersistableChannelUniqueQueue: %s Terminated", q.delayedStarter.name)
}

func init() {
	queuesMap[PersistableChannelUniqueQueueType] = NewPersistableChannelUniqueQueue
}
```
modules/queue/unique_queue_disk_channel_test.go (265 lines removed)

```go
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"strconv"
	"sync"
	"sync/atomic"
	"testing"
	"time"

	"code.gitea.io/gitea/modules/log"

	"github.com/stretchr/testify/assert"
)

func TestPersistableChannelUniqueQueue(t *testing.T) {
	// Create a temporary directory for the queue
	tmpDir := t.TempDir()
	_ = log.NewLogger(1000, "console", "console", `{"level":"warn","stacktracelevel":"NONE","stderr":true}`)

	// Common function to create the Queue
	newQueue := func(name string, handle func(data ...Data) []Data) Queue {
		q, err := NewPersistableChannelUniqueQueue(handle,
			PersistableChannelUniqueQueueConfiguration{
				Name:         name,
				DataDir:      tmpDir,
				QueueLength:  200,
				MaxWorkers:   1,
				BlockTimeout: 1 * time.Second,
				BoostTimeout: 5 * time.Minute,
				BoostWorkers: 1,
				Workers:      0,
			}, "task-0")
		assert.NoError(t, err)
		return q
	}

	// runs the provided queue and provides some timer function
	type channels struct {
		readyForShutdown  chan struct{} // closed when shutdown functions have been assigned
		readyForTerminate chan struct{} // closed when terminate functions have been assigned
		signalShutdown    chan struct{} // Should close to signal shutdown
		doneShutdown      chan struct{} // closed when shutdown function is done
		queueTerminate    []func()      // list of atTerminate functions to call atTerminate - need to be accessed with lock
	}
	runQueue := func(q Queue, lock *sync.Mutex) *channels {
		chans := &channels{
			readyForShutdown:  make(chan struct{}),
			readyForTerminate: make(chan struct{}),
			signalShutdown:    make(chan struct{}),
			doneShutdown:      make(chan struct{}),
		}
		go q.Run(func(atShutdown func()) {
			go func() {
				lock.Lock()
				select {
				case <-chans.readyForShutdown:
				default:
					close(chans.readyForShutdown)
				}
				lock.Unlock()
				<-chans.signalShutdown
				atShutdown()
				close(chans.doneShutdown)
			}()
		}, func(atTerminate func()) {
			lock.Lock()
			defer lock.Unlock()
			select {
			case <-chans.readyForTerminate:
			default:
				close(chans.readyForTerminate)
			}
			chans.queueTerminate = append(chans.queueTerminate, atTerminate)
		})

		return chans
	}

	// call to shutdown and terminate the queue associated with the channels
	doTerminate := func(chans *channels, lock *sync.Mutex) {
		<-chans.readyForTerminate

		lock.Lock()
		callbacks := []func(){}
		callbacks = append(callbacks, chans.queueTerminate...)
		lock.Unlock()

		for _, callback := range callbacks {
			callback()
		}
	}

	mapLock := sync.Mutex{}
	executedInitial := map[string][]string{}
	hasInitial := map[string][]string{}

	fillQueue := func(name string, done chan int64) {
		t.Run("Initial Filling: "+name, func(t *testing.T) {
			lock := sync.Mutex{}

			startAt100Queued := make(chan struct{})
			stopAt20Shutdown := make(chan struct{}) // stop and shutdown at the 20th item

			handle := func(data ...Data) []Data {
				<-startAt100Queued
				for _, datum := range data {
					s := datum.(string)
					mapLock.Lock()
					executedInitial[name] = append(executedInitial[name], s)
					mapLock.Unlock()
					if s == "task-20" {
						close(stopAt20Shutdown)
					}
				}
				return nil
			}

			q := newQueue(name, handle)

			// add 100 tasks to the queue
			for i := 0; i < 100; i++ {
				_ = q.Push("task-" + strconv.Itoa(i))
			}
			close(startAt100Queued)

			chans := runQueue(q, &lock)

			<-chans.readyForShutdown
			<-stopAt20Shutdown
			close(chans.signalShutdown)
			<-chans.doneShutdown
			_ = q.Push("final")

			// check which tasks are still in the queue
			for i := 0; i < 100; i++ {
				if has, _ := q.(UniqueQueue).Has("task-" + strconv.Itoa(i)); has {
					mapLock.Lock()
					hasInitial[name] = append(hasInitial[name], "task-"+strconv.Itoa(i))
					mapLock.Unlock()
				}
			}
			if has, _ := q.(UniqueQueue).Has("final"); has {
				mapLock.Lock()
				hasInitial[name] = append(hasInitial[name], "final")
				mapLock.Unlock()
			} else {
				assert.Fail(t, "UniqueQueue %s should have \"final\"", name)
			}
			doTerminate(chans, &lock)
			mapLock.Lock()
			assert.Equal(t, 101, len(executedInitial[name])+len(hasInitial[name]))
			mapLock.Unlock()
		})
		mapLock.Lock()
		count := int64(len(hasInitial[name]))
		mapLock.Unlock()
		done <- count
		close(done)
	}

	hasQueueAChan := make(chan int64)
	hasQueueBChan := make(chan int64)

	go fillQueue("QueueA", hasQueueAChan)
	go fillQueue("QueueB", hasQueueBChan)

	hasA := <-hasQueueAChan
	hasB := <-hasQueueBChan

	executedEmpty := map[string][]string{}
	hasEmpty := map[string][]string{}
	emptyQueue := func(name string, numInQueue int64, done chan struct{}) {
		t.Run("Empty Queue: "+name, func(t *testing.T) {
			lock := sync.Mutex{}
			stop := make(chan struct{})

			// collect the tasks that have been executed
			atomicCount := int64(0)
			handle := func(data ...Data) []Data {
				lock.Lock()
				for _, datum := range data {
					mapLock.Lock()
					executedEmpty[name] = append(executedEmpty[name], datum.(string))
					mapLock.Unlock()
					count := atomic.AddInt64(&atomicCount, 1)
					if count >= numInQueue {
						close(stop)
					}
				}
				lock.Unlock()
				return nil
			}

			q := newQueue(name, handle)
			chans := runQueue(q, &lock)

			<-chans.readyForShutdown
			<-stop
			close(chans.signalShutdown)
			<-chans.doneShutdown

			// check which tasks are still in the queue
			for i := 0; i < 100; i++ {
				if has, _ := q.(UniqueQueue).Has("task-" + strconv.Itoa(i)); has {
					mapLock.Lock()
					hasEmpty[name] = append(hasEmpty[name], "task-"+strconv.Itoa(i))
					mapLock.Unlock()
				}
			}
			doTerminate(chans, &lock)

			mapLock.Lock()
			assert.Equal(t, 101, len(executedInitial[name])+len(executedEmpty[name]))
			assert.Empty(t, hasEmpty[name])
			mapLock.Unlock()
		})
		close(done)
	}

	doneA := make(chan struct{})
	doneB := make(chan struct{})

	go emptyQueue("QueueA", hasA, doneA)
	go emptyQueue("QueueB", hasB, doneB)

	<-doneA
	<-doneB

	mapLock.Lock()
	t.Logf("TestPersistableChannelUniqueQueue executedInitiallyA=%v, executedInitiallyB=%v, executedToEmptyA=%v, executedToEmptyB=%v",
		len(executedInitial["QueueA"]), len(executedInitial["QueueB"]), len(executedEmpty["QueueA"]), len(executedEmpty["QueueB"]))

	// reset and rerun
	executedInitial = map[string][]string{}
	hasInitial = map[string][]string{}
	executedEmpty = map[string][]string{}
	hasEmpty = map[string][]string{}
	mapLock.Unlock()

	hasQueueAChan = make(chan int64)
	hasQueueBChan = make(chan int64)

	go fillQueue("QueueA", hasQueueAChan)
	go fillQueue("QueueB", hasQueueBChan)

	hasA = <-hasQueueAChan
	hasB = <-hasQueueBChan

	doneA = make(chan struct{})
	doneB = make(chan struct{})

	go emptyQueue("QueueA", hasA, doneA)
	go emptyQueue("QueueB", hasB, doneB)

	<-doneA
	<-doneB

	mapLock.Lock()
	t.Logf("TestPersistableChannelUniqueQueue executedInitiallyA=%v, executedInitiallyB=%v, executedToEmptyA=%v, executedToEmptyB=%v",
		len(executedInitial["QueueA"]), len(executedInitial["QueueB"]), len(executedEmpty["QueueA"]), len(executedEmpty["QueueB"]))
	mapLock.Unlock()
}
```
modules/queue/unique_queue_redis.go (141 lines removed)

```go
// Copyright 2019 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"

	"github.com/redis/go-redis/v9"
)

// RedisUniqueQueueType is the type for redis queue
const RedisUniqueQueueType Type = "unique-redis"

// RedisUniqueQueue redis queue
type RedisUniqueQueue struct {
	*ByteFIFOUniqueQueue
}

// RedisUniqueQueueConfiguration is the configuration for the redis queue
type RedisUniqueQueueConfiguration struct {
	ByteFIFOQueueConfiguration
	RedisUniqueByteFIFOConfiguration
}

// NewRedisUniqueQueue creates single redis or cluster redis queue.
//
// Please note that this Queue does not guarantee that a particular
// task cannot be processed twice or more at the same time. Uniqueness is
// only guaranteed whilst the task is waiting in the queue.
func NewRedisUniqueQueue(handle HandlerFunc, cfg, exemplar interface{}) (Queue, error) {
	configInterface, err := toConfig(RedisUniqueQueueConfiguration{}, cfg)
	if err != nil {
		return nil, err
	}
	config := configInterface.(RedisUniqueQueueConfiguration)

	byteFIFO, err := NewRedisUniqueByteFIFO(config.RedisUniqueByteFIFOConfiguration)
	if err != nil {
		return nil, err
	}

	if len(byteFIFO.setName) == 0 {
		byteFIFO.setName = byteFIFO.queueName + "_unique"
	}

	byteFIFOQueue, err := NewByteFIFOUniqueQueue(RedisUniqueQueueType, byteFIFO, handle, config.ByteFIFOQueueConfiguration, exemplar)
	if err != nil {
		return nil, err
	}

	queue := &RedisUniqueQueue{
		ByteFIFOUniqueQueue: byteFIFOQueue,
	}

	queue.qid = GetManager().Add(queue, RedisUniqueQueueType, config, exemplar)

	return queue, nil
}

var _ UniqueByteFIFO = &RedisUniqueByteFIFO{}

// RedisUniqueByteFIFO represents a UniqueByteFIFO formed from a redisClient
type RedisUniqueByteFIFO struct {
	RedisByteFIFO
	setName string
}

// RedisUniqueByteFIFOConfiguration is the configuration for the RedisUniqueByteFIFO
type RedisUniqueByteFIFOConfiguration struct {
	RedisByteFIFOConfiguration
	SetName string
}

// NewRedisUniqueByteFIFO creates a UniqueByteFIFO formed from a redisClient
func NewRedisUniqueByteFIFO(config RedisUniqueByteFIFOConfiguration) (*RedisUniqueByteFIFO, error) {
	internal, err := NewRedisByteFIFO(config.RedisByteFIFOConfiguration)
	if err != nil {
		return nil, err
	}

	fifo := &RedisUniqueByteFIFO{
		RedisByteFIFO: *internal,
		setName:       config.SetName,
	}

	return fifo, nil
}

// PushFunc pushes data to the end of the fifo and calls the callback if it is added
func (fifo *RedisUniqueByteFIFO) PushFunc(ctx context.Context, data []byte, fn func() error) error {
	added, err := fifo.client.SAdd(ctx, fifo.setName, data).Result()
	if err != nil {
		return err
	}
	if added == 0 {
		return ErrAlreadyInQueue
	}
	if fn != nil {
		if err := fn(); err != nil {
			return err
		}
	}
	return fifo.client.RPush(ctx, fifo.queueName, data).Err()
}

// PushBack pushes data to the top of the fifo
func (fifo *RedisUniqueByteFIFO) PushBack(ctx context.Context, data []byte) error {
	added, err := fifo.client.SAdd(ctx, fifo.setName, data).Result()
	if err != nil {
		return err
	}
	if added == 0 {
		return ErrAlreadyInQueue
	}
	return fifo.client.LPush(ctx, fifo.queueName, data).Err()
}

// Pop pops data from the start of the fifo
func (fifo *RedisUniqueByteFIFO) Pop(ctx context.Context) ([]byte, error) {
	data, err := fifo.client.LPop(ctx, fifo.queueName).Bytes()
	if err != nil && err != redis.Nil {
		return data, err
	}

	if len(data) == 0 {
		return data, nil
	}

	err = fifo.client.SRem(ctx, fifo.setName, data).Err()
	return data, err
}

// Has returns whether the fifo contains this data
func (fifo *RedisUniqueByteFIFO) Has(ctx context.Context, data []byte) (bool, error) {
	return fifo.client.SIsMember(ctx, fifo.setName, data).Result()
}

func init() {
	queuesMap[RedisUniqueQueueType] = NewRedisUniqueQueue
}
```
modules/queue/unique_queue_wrapped.go (174 lines removed; listing truncated)

```go
// Copyright 2020 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"fmt"
	"sync"
	"time"
)

// WrappedUniqueQueueType is the type for a wrapped delayed starting queue
const WrappedUniqueQueueType Type = "unique-wrapped"

// WrappedUniqueQueueConfiguration is the configuration for a WrappedUniqueQueue
type WrappedUniqueQueueConfiguration struct {
	Underlying  Type
	Timeout     time.Duration
	MaxAttempts int
	Config      interface{}
	QueueLength int
	Name        string
}

// WrappedUniqueQueue wraps a delayed starting unique queue
type WrappedUniqueQueue struct {
	*WrappedQueue
	table map[Data]bool
	tlock sync.Mutex
	ready bool
}

// NewWrappedUniqueQueue will attempt to create a unique queue of the provided type,
// but if there is a problem creating this queue it will instead create
// a WrappedUniqueQueue with delayed startup of the queue instead and a
// channel which will be redirected to the queue
//
// Please note that this Queue does not guarantee that a particular
// task cannot be processed twice or more at the same time. Uniqueness is
// only guaranteed whilst the task is waiting in the queue.
func NewWrappedUniqueQueue(handle HandlerFunc, cfg, exemplar interface{}) (Queue, error) {
	configInterface, err := toConfig(WrappedUniqueQueueConfiguration{}, cfg)
	if err != nil {
		return nil, err
	}
	config := configInterface.(WrappedUniqueQueueConfiguration)

	queue, err := NewQueue(config.Underlying, handle, config.Config, exemplar)
	if err == nil {
		// Just return the queue there is no need to wrap
		return queue, nil
	}
	if IsErrInvalidConfiguration(err) {
		// Retrying ain't gonna make this any better...
		return nil, ErrInvalidConfiguration{cfg: cfg}
	}

	wrapped := &WrappedUniqueQueue{
		WrappedQueue: &WrappedQueue{
			channel:  make(chan Data, config.QueueLength),
			exemplar: exemplar,
			delayedStarter: delayedStarter{
				cfg:         config.Config,
				underlying:  config.Underlying,
				timeout:     config.Timeout,
				maxAttempts: config.MaxAttempts,
				name:        config.Name,
			},
		},
		table: map[Data]bool{},
	}

	// wrapped.handle is passed to the delayedStarting internal queue and is run to handle
	// data passed to it
	wrapped.handle = func(data ...Data) (unhandled []Data) {
		for _, datum := range data {
			wrapped.tlock.Lock()
			if !wrapped.ready {
				// delete the handled datum (not the whole batch slice) from the table
				delete(wrapped.table, datum)
				// If our table is empty all of the requests we have buffered between the
				// wrapper queue starting and the internal queue starting have been handled.
				// We can stop buffering requests in our local table and just pass Push
				// direct to the internal queue
				if len(wrapped.table) == 0 {
					wrapped.ready = true
				}
			}
			wrapped.tlock.Unlock()
			if u := handle(datum); u != nil {
				unhandled = append(unhandled, u...)
			}
		}
		return unhandled
	}
	_ = GetManager().Add(queue, WrappedUniqueQueueType, config, exemplar)
	return wrapped, nil
}

// Push will push the data to the internal channel checking it against the exemplar
func (q *WrappedUniqueQueue) Push(data Data) error {
	return q.PushFunc(data, nil)
}

// PushFunc will push the data to the internal channel checking it against the exemplar
func (q *WrappedUniqueQueue) PushFunc(data Data, fn func() error) error {
	if !assignableTo(data, q.exemplar) {
		return fmt.Errorf("unable to assign data: %v to same type as exemplar: %v in %s", data, q.exemplar, q.name)
	}

	q.tlock.Lock()
	if q.ready {
		// ready means our table is empty and all of the requests we have buffered between the
		// wrapper queue starting and the internal queue starting have been handled.
		// We can stop buffering requests in our local table and just pass Push
		// direct to the internal queue
		q.tlock.Unlock()
		return q.internal.(UniqueQueue).PushFunc(data, fn)
	}

	locked := true
	defer func() {
		if locked {
			q.tlock.Unlock()
		}
	}()
	if _, ok := q.table[data]; ok {
		return ErrAlreadyInQueue
	}
	// FIXME: We probably need to implement some sort of limit here
	// If the downstream queue blocks this table will grow without limit
	q.table[data] = true
	if fn != nil {
		err := fn()
		if err != nil {
			delete(q.table, data)
			return err
		}
	}
	locked = false
	q.tlock.Unlock()

	q.channel <- data
	return nil
}

// Has checks if the data is in the queue
func (q *WrappedUniqueQueue) Has(data Data) (bool, error) {
	q.tlock.Lock()
	defer q.tlock.Unlock()
	if q.ready {
		return q.internal.(UniqueQueue).Has(data)
	}
	_, has := q.table[data]
	return has, nil
}
```
156 - 157 - // IsEmpty checks whether the queue is empty 158 - func (q *WrappedUniqueQueue) IsEmpty() bool { 159 - q.tlock.Lock() 160 - if len(q.table) > 0 { 161 - q.tlock.Unlock() 162 - return false 163 - } 164 - if q.ready { 165 - q.tlock.Unlock() 166 - return q.internal.IsEmpty() 167 - } 168 - q.tlock.Unlock() 169 - return false 170 - } 171 - 172 - func init() { 173 - queuesMap[WrappedUniqueQueueType] = NewWrappedUniqueQueue 174 - }
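The removed WrappedUniqueQueue only guaranteed uniqueness while an item was waiting in the queue, using a mutex-guarded map as a dedup table that is cleared as items are popped. A minimal standalone sketch of that table-plus-channel pattern (the names here are illustrative, not the actual Gitea API):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errAlreadyQueued = errors.New("already in queue")

// uniqueBuffer deduplicates pending items: an item can be queued again
// only after it has been popped, mirroring the old table-based approach.
type uniqueBuffer struct {
	mu    sync.Mutex
	table map[string]bool
	ch    chan string
}

func newUniqueBuffer(size int) *uniqueBuffer {
	return &uniqueBuffer{table: map[string]bool{}, ch: make(chan string, size)}
}

func (b *uniqueBuffer) Push(item string) error {
	b.mu.Lock()
	if b.table[item] {
		b.mu.Unlock()
		return errAlreadyQueued
	}
	b.table[item] = true
	b.mu.Unlock()
	b.ch <- item
	return nil
}

func (b *uniqueBuffer) Pop() string {
	item := <-b.ch
	b.mu.Lock()
	delete(b.table, item) // uniqueness only holds while the item waits in the queue
	b.mu.Unlock()
	return item
}

func main() {
	b := newUniqueBuffer(4)
	fmt.Println(b.Push("a")) // <nil>
	fmt.Println(b.Push("a")) // already in queue
	fmt.Println(b.Pop())     // a
	fmt.Println(b.Push("a")) // <nil>
}
```

As the old code warned, this only prevents duplicates among *queued* items; an item being handled by a worker can be queued again concurrently.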
+331
modules/queue/workergroup.go
··· 1 + // Copyright 2023 The Gitea Authors. All rights reserved. 2 + // SPDX-License-Identifier: MIT 3 + 4 + package queue 5 + 6 + import ( 7 + "context" 8 + "sync" 9 + "sync/atomic" 10 + "time" 11 + 12 + "code.gitea.io/gitea/modules/log" 13 + ) 14 + 15 + var ( 16 + infiniteTimerC = make(chan time.Time) 17 + batchDebounceDuration = 100 * time.Millisecond 18 + workerIdleDuration = 1 * time.Second 19 + 20 + unhandledItemRequeueDuration atomic.Int64 // to avoid data race during test 21 + ) 22 + 23 + func init() { 24 + unhandledItemRequeueDuration.Store(int64(5 * time.Second)) 25 + } 26 + 27 + // workerGroup is a group of workers to work with a WorkerPoolQueue 28 + type workerGroup[T any] struct { 29 + q *WorkerPoolQueue[T] 30 + wg sync.WaitGroup 31 + 32 + ctxWorker context.Context 33 + ctxWorkerCancel context.CancelFunc 34 + 35 + batchBuffer []T 36 + popItemChan chan []byte 37 + popItemErr chan error 38 + } 39 + 40 + func (wg *workerGroup[T]) doPrepareWorkerContext() { 41 + wg.ctxWorker, wg.ctxWorkerCancel = context.WithCancel(wg.q.ctxRun) 42 + } 43 + 44 + // doDispatchBatchToWorker dispatches a batch of items to worker's channel. 45 + // If the channel is full, it tries to start a new worker if possible. 
46 + func (q *WorkerPoolQueue[T]) doDispatchBatchToWorker(wg *workerGroup[T], flushChan chan flushType) { 47 + batch := wg.batchBuffer 48 + wg.batchBuffer = nil 49 + 50 + if len(batch) == 0 { 51 + return 52 + } 53 + 54 + full := false 55 + select { 56 + case q.batchChan <- batch: 57 + default: 58 + full = true 59 + } 60 + 61 + q.workerNumMu.Lock() 62 + noWorker := q.workerNum == 0 63 + if full || noWorker { 64 + if q.workerNum < q.workerMaxNum || noWorker && q.workerMaxNum <= 0 { 65 + q.workerNum++ 66 + q.doStartNewWorker(wg) 67 + } 68 + } 69 + q.workerNumMu.Unlock() 70 + 71 + if full { 72 + select { 73 + case q.batchChan <- batch: 74 + case flush := <-flushChan: 75 + q.doWorkerHandle(batch) 76 + q.doFlush(wg, flush) 77 + case <-q.ctxRun.Done(): 78 + wg.batchBuffer = batch // return the batch to the buffer; the "doRun" function will handle it 79 + } 80 + } 81 + } 82 + 83 + // doWorkerHandle calls the safeHandler to handle a batch of items, and it increases/decreases the active worker number. 84 + // If the context has been canceled, it should not be called, because "Push" still needs the context; in that case, call q.safeHandler directly 85 + func (q *WorkerPoolQueue[T]) doWorkerHandle(batch []T) { 86 + q.workerNumMu.Lock() 87 + q.workerActiveNum++ 88 + q.workerNumMu.Unlock() 89 + 90 + defer func() { 91 + q.workerNumMu.Lock() 92 + q.workerActiveNum-- 93 + q.workerNumMu.Unlock() 94 + }() 95 + 96 + unhandled := q.safeHandler(batch...)
97 + // if none of the items were handled, it should back off for a few seconds 98 + // in this case the handler (e.g. the document indexer) may have encountered some errors/failures 99 + if len(unhandled) == len(batch) && unhandledItemRequeueDuration.Load() != 0 { 100 + log.Error("Queue %q failed to handle batch of %d items, backoff for a few seconds", q.GetName(), len(batch)) 101 + select { 102 + case <-q.ctxRun.Done(): 103 + case <-time.After(time.Duration(unhandledItemRequeueDuration.Load())): 104 + } 105 + } 106 + for _, item := range unhandled { 107 + if err := q.Push(item); err != nil { 108 + if !q.basePushForShutdown(item) { 109 + log.Error("Failed to requeue item for queue %q when calling handler: %v", q.GetName(), err) 110 + } 111 + } 112 + } 113 + } 114 + 115 + // basePushForShutdown tries to requeue items into the base queue when the WorkerPoolQueue is shutting down. 116 + // If the queue is shutting down, it returns true and tries to push the items 117 + // Otherwise it does nothing and returns false 118 + func (q *WorkerPoolQueue[T]) basePushForShutdown(items ...T) bool { 119 + ctxShutdown := q.ctxShutdown.Load() 120 + if ctxShutdown == nil { 121 + return false 122 + } 123 + for _, item := range items { 124 + // if an error still occurs, the queue can do nothing except lose the items 125 + if err := q.baseQueue.PushItem(*ctxShutdown, q.marshal(item)); err != nil { 126 + log.Error("Failed to requeue item for queue %q when shutting down: %v", q.GetName(), err) 127 + } 128 + } 129 + return true 130 + } 131 + 132 + // doStartNewWorker starts a new worker for the queue; the worker reads from the worker channel and handles the items.
133 + func (q *WorkerPoolQueue[T]) doStartNewWorker(wp *workerGroup[T]) { 134 + wp.wg.Add(1) 135 + 136 + go func() { 137 + defer wp.wg.Done() 138 + 139 + log.Debug("Queue %q starts new worker", q.GetName()) 140 + defer log.Debug("Queue %q stops idle worker", q.GetName()) 141 + 142 + t := time.NewTicker(workerIdleDuration) 143 + keepWorking := true 144 + stopWorking := func() { 145 + q.workerNumMu.Lock() 146 + keepWorking = false 147 + q.workerNum-- 148 + q.workerNumMu.Unlock() 149 + } 150 + for keepWorking { 151 + select { 152 + case <-wp.ctxWorker.Done(): 153 + stopWorking() 154 + case batch, ok := <-q.batchChan: 155 + if !ok { 156 + stopWorking() 157 + } else { 158 + q.doWorkerHandle(batch) 159 + t.Reset(workerIdleDuration) 160 + } 161 + case <-t.C: 162 + q.workerNumMu.Lock() 163 + keepWorking = q.workerNum <= 1 164 + if !keepWorking { 165 + q.workerNum-- 166 + } 167 + q.workerNumMu.Unlock() 168 + } 169 + } 170 + }() 171 + } 172 + 173 + // doFlush flushes the queue: it tries to read all items from the queue and handles them. 174 + // It is for testing purposes only and is not designed to work in a cluster.
175 + func (q *WorkerPoolQueue[T]) doFlush(wg *workerGroup[T], flush flushType) { 176 + log.Debug("Queue %q starts flushing", q.GetName()) 177 + defer log.Debug("Queue %q finishes flushing", q.GetName()) 178 + 179 + // stop all workers, and prepare a new worker context to start new workers 180 + 181 + wg.ctxWorkerCancel() 182 + wg.wg.Wait() 183 + 184 + defer func() { 185 + close(flush) 186 + wg.doPrepareWorkerContext() 187 + }() 188 + 189 + // drain the batch channel first 190 + loop: 191 + for { 192 + select { 193 + case batch := <-q.batchChan: 194 + q.doWorkerHandle(batch) 195 + default: 196 + break loop 197 + } 198 + } 199 + 200 + // drain the popItem channel 201 + emptyCounter := 0 202 + for { 203 + select { 204 + case data, dataOk := <-wg.popItemChan: 205 + if !dataOk { 206 + return 207 + } 208 + emptyCounter = 0 209 + if v, jsonOk := q.unmarshal(data); !jsonOk { 210 + continue 211 + } else { 212 + q.doWorkerHandle([]T{v}) 213 + } 214 + case err := <-wg.popItemErr: 215 + if !q.isCtxRunCanceled() { 216 + log.Error("Failed to pop item from queue %q (doFlush): %v", q.GetName(), err) 217 + } 218 + return 219 + case <-q.ctxRun.Done(): 220 + log.Debug("Queue %q is shutting down", q.GetName()) 221 + return 222 + case <-time.After(20 * time.Millisecond): 223 + // There is no reliable way to make sure all queue items are consumed by the Flush; there might always be some items held in buffers or temporary variables. 224 + // If we run Gitea in a cluster, we cannot even guarantee that all items are consumed by a deterministic instance. 225 + // Luckily, the "Flush" trick is only used in tests, so far so good.
226 + if cnt, _ := q.baseQueue.Len(q.ctxRun); cnt == 0 && len(wg.popItemChan) == 0 { 227 + emptyCounter++ 228 + } 229 + if emptyCounter >= 2 { 230 + return 231 + } 232 + } 233 + } 234 + } 235 + 236 + func (q *WorkerPoolQueue[T]) isCtxRunCanceled() bool { 237 + select { 238 + case <-q.ctxRun.Done(): 239 + return true 240 + default: 241 + return false 242 + } 243 + } 244 + 245 + var skipFlushChan = make(chan flushType) // an empty flush chan, used to skip reading other flush requests 246 + 247 + // doRun is the main loop of the queue. All related "doXxx" functions are executed in its context. 248 + func (q *WorkerPoolQueue[T]) doRun() { 249 + log.Debug("Queue %q starts running", q.GetName()) 250 + defer log.Debug("Queue %q stops running", q.GetName()) 251 + 252 + wg := &workerGroup[T]{q: q} 253 + wg.doPrepareWorkerContext() 254 + wg.popItemChan, wg.popItemErr = popItemByChan(q.ctxRun, q.baseQueue.PopItem) 255 + 256 + defer func() { 257 + q.ctxRunCancel() 258 + 259 + // drain all data on the fly 260 + // since the queue is shutting down, the items can't be dispatched to workers because the context is canceled 261 + // it can't call doWorkerHandle either, because there is no chance to push unhandled items back to the queue 262 + var unhandled []T 263 + close(q.batchChan) 264 + for batch := range q.batchChan { 265 + unhandled = append(unhandled, batch...) 266 + } 267 + unhandled = append(unhandled, wg.batchBuffer...) 268 + for data := range wg.popItemChan { 269 + if v, ok := q.unmarshal(data); ok { 270 + unhandled = append(unhandled, v) 271 + } 272 + } 273 + 274 + ctxShutdownPtr := q.ctxShutdown.Load() 275 + if ctxShutdownPtr != nil { 276 + // if there is a shutdown context, try to push the items back to the base queue 277 + q.basePushForShutdown(unhandled...) 
278 + workerDone := make(chan struct{}) 279 + // this is the only way to wait for the workers, because the handlers do not have a context to wait for 280 + go func() { wg.wg.Wait(); close(workerDone) }() 281 + select { 282 + case <-workerDone: 283 + case <-(*ctxShutdownPtr).Done(): 284 + log.Error("Queue %q is shutting down, but workers are still running after timeout", q.GetName()) 285 + } 286 + } else { 287 + // if there is no shutdown context, just call the handler to try to handle the items. If the handler fails again, the items are lost 288 + q.safeHandler(unhandled...) 289 + } 290 + 291 + close(q.shutdownDone) 292 + }() 293 + 294 + var batchDispatchC <-chan time.Time = infiniteTimerC 295 + for { 296 + select { 297 + case data, dataOk := <-wg.popItemChan: 298 + if !dataOk { 299 + return 300 + } 301 + if v, jsonOk := q.unmarshal(data); !jsonOk { 302 + testRecorder.Record("pop:corrupted:%s", data) // in rare cases the levelqueue(leveldb) might be corrupted 303 + continue 304 + } else { 305 + wg.batchBuffer = append(wg.batchBuffer, v) 306 + } 307 + if len(wg.batchBuffer) >= q.batchLength { 308 + q.doDispatchBatchToWorker(wg, q.flushChan) 309 + } else if batchDispatchC == infiniteTimerC { 310 + batchDispatchC = time.After(batchDebounceDuration) 311 + } // else: batchDispatchC is already a debounce timer, it will be triggered soon 312 + case <-batchDispatchC: 313 + batchDispatchC = infiniteTimerC 314 + q.doDispatchBatchToWorker(wg, q.flushChan) 315 + case flush := <-q.flushChan: 316 + // before flushing, it needs to try to dispatch the batch to a worker first, in case there is no worker running 317 + // after the flushing, there is at least one worker running, so "doFlush" could wait for workers to finish 318 + // since we are already in a "flush" operation, the dispatching function shouldn't read the flush chan.
319 + q.doDispatchBatchToWorker(wg, skipFlushChan) 320 + q.doFlush(wg, flush) 321 + case err := <-wg.popItemErr: 322 + if !q.isCtxRunCanceled() { 323 + log.Error("Failed to pop item from queue %q (doRun): %v", q.GetName(), err) 324 + } 325 + return 326 + case <-q.ctxRun.Done(): 327 + log.Debug("Queue %q is shutting down", q.GetName()) 328 + return 329 + } 330 + } 331 + }
-613
modules/queue/workerpool.go
··· 1 - // Copyright 2019 The Gitea Authors. All rights reserved. 2 - // SPDX-License-Identifier: MIT 3 - 4 - package queue 5 - 6 - import ( 7 - "context" 8 - "fmt" 9 - "runtime/pprof" 10 - "sync" 11 - "sync/atomic" 12 - "time" 13 - 14 - "code.gitea.io/gitea/modules/log" 15 - "code.gitea.io/gitea/modules/process" 16 - "code.gitea.io/gitea/modules/util" 17 - ) 18 - 19 - // WorkerPool represent a dynamically growable worker pool for a 20 - // provided handler function. They have an internal channel which 21 - // they use to detect if there is a block and will grow and shrink in 22 - // response to demand as per configuration. 23 - type WorkerPool struct { 24 - // This field requires to be the first one in the struct. 25 - // This is to allow 64 bit atomic operations on 32-bit machines. 26 - // See: https://pkg.go.dev/sync/atomic#pkg-note-BUG & Gitea issue 19518 27 - numInQueue int64 28 - lock sync.Mutex 29 - baseCtx context.Context 30 - baseCtxCancel context.CancelFunc 31 - baseCtxFinished process.FinishedFunc 32 - paused chan struct{} 33 - resumed chan struct{} 34 - cond *sync.Cond 35 - qid int64 36 - maxNumberOfWorkers int 37 - numberOfWorkers int 38 - batchLength int 39 - handle HandlerFunc 40 - dataChan chan Data 41 - blockTimeout time.Duration 42 - boostTimeout time.Duration 43 - boostWorkers int 44 - } 45 - 46 - var ( 47 - _ Flushable = &WorkerPool{} 48 - _ ManagedPool = &WorkerPool{} 49 - ) 50 - 51 - // WorkerPoolConfiguration is the basic configuration for a WorkerPool 52 - type WorkerPoolConfiguration struct { 53 - Name string 54 - QueueLength int 55 - BatchLength int 56 - BlockTimeout time.Duration 57 - BoostTimeout time.Duration 58 - BoostWorkers int 59 - MaxWorkers int 60 - } 61 - 62 - // NewWorkerPool creates a new worker pool 63 - func NewWorkerPool(handle HandlerFunc, config WorkerPoolConfiguration) *WorkerPool { 64 - ctx, cancel, finished := process.GetManager().AddTypedContext(context.Background(), fmt.Sprintf("Queue: %s", config.Name), 
process.SystemProcessType, false) 65 - 66 - dataChan := make(chan Data, config.QueueLength) 67 - pool := &WorkerPool{ 68 - baseCtx: ctx, 69 - baseCtxCancel: cancel, 70 - baseCtxFinished: finished, 71 - batchLength: config.BatchLength, 72 - dataChan: dataChan, 73 - resumed: closedChan, 74 - paused: make(chan struct{}), 75 - handle: handle, 76 - blockTimeout: config.BlockTimeout, 77 - boostTimeout: config.BoostTimeout, 78 - boostWorkers: config.BoostWorkers, 79 - maxNumberOfWorkers: config.MaxWorkers, 80 - } 81 - 82 - return pool 83 - } 84 - 85 - // Done returns when this worker pool's base context has been cancelled 86 - func (p *WorkerPool) Done() <-chan struct{} { 87 - return p.baseCtx.Done() 88 - } 89 - 90 - // Push pushes the data to the internal channel 91 - func (p *WorkerPool) Push(data Data) { 92 - atomic.AddInt64(&p.numInQueue, 1) 93 - p.lock.Lock() 94 - select { 95 - case <-p.paused: 96 - p.lock.Unlock() 97 - p.dataChan <- data 98 - return 99 - default: 100 - } 101 - 102 - if p.blockTimeout > 0 && p.boostTimeout > 0 && (p.numberOfWorkers <= p.maxNumberOfWorkers || p.maxNumberOfWorkers < 0) { 103 - if p.numberOfWorkers == 0 { 104 - p.zeroBoost() 105 - } else { 106 - p.lock.Unlock() 107 - } 108 - p.pushBoost(data) 109 - } else { 110 - p.lock.Unlock() 111 - p.dataChan <- data 112 - } 113 - } 114 - 115 - // HasNoWorkerScaling will return true if the queue has no workers, and has no worker boosting 116 - func (p *WorkerPool) HasNoWorkerScaling() bool { 117 - p.lock.Lock() 118 - defer p.lock.Unlock() 119 - return p.hasNoWorkerScaling() 120 - } 121 - 122 - func (p *WorkerPool) hasNoWorkerScaling() bool { 123 - return p.numberOfWorkers == 0 && (p.boostTimeout == 0 || p.boostWorkers == 0 || p.maxNumberOfWorkers == 0) 124 - } 125 - 126 - // zeroBoost will add a temporary boost worker for a no worker queue 127 - // p.lock must be locked at the start of this function BUT it will be unlocked by the end of this function 128 - // (This is because addWorkers has to be 
called whilst unlocked) 129 - func (p *WorkerPool) zeroBoost() { 130 - ctx, cancel := context.WithTimeout(p.baseCtx, p.boostTimeout) 131 - mq := GetManager().GetManagedQueue(p.qid) 132 - boost := p.boostWorkers 133 - if (boost+p.numberOfWorkers) > p.maxNumberOfWorkers && p.maxNumberOfWorkers >= 0 { 134 - boost = p.maxNumberOfWorkers - p.numberOfWorkers 135 - } 136 - if mq != nil { 137 - log.Debug("WorkerPool: %d (for %s) has zero workers - adding %d temporary workers for %s", p.qid, mq.Name, boost, p.boostTimeout) 138 - 139 - start := time.Now() 140 - pid := mq.RegisterWorkers(boost, start, true, start.Add(p.boostTimeout), cancel, false) 141 - cancel = func() { 142 - mq.RemoveWorkers(pid) 143 - } 144 - } else { 145 - log.Debug("WorkerPool: %d has zero workers - adding %d temporary workers for %s", p.qid, p.boostWorkers, p.boostTimeout) 146 - } 147 - p.lock.Unlock() 148 - p.addWorkers(ctx, cancel, boost) 149 - } 150 - 151 - func (p *WorkerPool) pushBoost(data Data) { 152 - select { 153 - case p.dataChan <- data: 154 - default: 155 - p.lock.Lock() 156 - if p.blockTimeout <= 0 { 157 - p.lock.Unlock() 158 - p.dataChan <- data 159 - return 160 - } 161 - ourTimeout := p.blockTimeout 162 - timer := time.NewTimer(p.blockTimeout) 163 - p.lock.Unlock() 164 - select { 165 - case p.dataChan <- data: 166 - util.StopTimer(timer) 167 - case <-timer.C: 168 - p.lock.Lock() 169 - if p.blockTimeout > ourTimeout || (p.numberOfWorkers > p.maxNumberOfWorkers && p.maxNumberOfWorkers >= 0) { 170 - p.lock.Unlock() 171 - p.dataChan <- data 172 - return 173 - } 174 - p.blockTimeout *= 2 175 - boostCtx, boostCtxCancel := context.WithCancel(p.baseCtx) 176 - mq := GetManager().GetManagedQueue(p.qid) 177 - boost := p.boostWorkers 178 - if (boost+p.numberOfWorkers) > p.maxNumberOfWorkers && p.maxNumberOfWorkers >= 0 { 179 - boost = p.maxNumberOfWorkers - p.numberOfWorkers 180 - } 181 - if mq != nil { 182 - log.Debug("WorkerPool: %d (for %s) Channel blocked for %v - adding %d temporary workers for 
%s, block timeout now %v", p.qid, mq.Name, ourTimeout, boost, p.boostTimeout, p.blockTimeout) 183 - 184 - start := time.Now() 185 - pid := mq.RegisterWorkers(boost, start, true, start.Add(p.boostTimeout), boostCtxCancel, false) 186 - go func() { 187 - <-boostCtx.Done() 188 - mq.RemoveWorkers(pid) 189 - boostCtxCancel() 190 - }() 191 - } else { 192 - log.Debug("WorkerPool: %d Channel blocked for %v - adding %d temporary workers for %s, block timeout now %v", p.qid, ourTimeout, p.boostWorkers, p.boostTimeout, p.blockTimeout) 193 - } 194 - go func() { 195 - <-time.After(p.boostTimeout) 196 - boostCtxCancel() 197 - p.lock.Lock() 198 - p.blockTimeout /= 2 199 - p.lock.Unlock() 200 - }() 201 - p.lock.Unlock() 202 - p.addWorkers(boostCtx, boostCtxCancel, boost) 203 - p.dataChan <- data 204 - } 205 - } 206 - } 207 - 208 - // NumberOfWorkers returns the number of current workers in the pool 209 - func (p *WorkerPool) NumberOfWorkers() int { 210 - p.lock.Lock() 211 - defer p.lock.Unlock() 212 - return p.numberOfWorkers 213 - } 214 - 215 - // NumberInQueue returns the number of items in the queue 216 - func (p *WorkerPool) NumberInQueue() int64 { 217 - return atomic.LoadInt64(&p.numInQueue) 218 - } 219 - 220 - // MaxNumberOfWorkers returns the maximum number of workers automatically added to the pool 221 - func (p *WorkerPool) MaxNumberOfWorkers() int { 222 - p.lock.Lock() 223 - defer p.lock.Unlock() 224 - return p.maxNumberOfWorkers 225 - } 226 - 227 - // BoostWorkers returns the number of workers for a boost 228 - func (p *WorkerPool) BoostWorkers() int { 229 - p.lock.Lock() 230 - defer p.lock.Unlock() 231 - return p.boostWorkers 232 - } 233 - 234 - // BoostTimeout returns the timeout of the next boost 235 - func (p *WorkerPool) BoostTimeout() time.Duration { 236 - p.lock.Lock() 237 - defer p.lock.Unlock() 238 - return p.boostTimeout 239 - } 240 - 241 - // BlockTimeout returns the timeout til the next boost 242 - func (p *WorkerPool) BlockTimeout() time.Duration { 243 - 
p.lock.Lock() 244 - defer p.lock.Unlock() 245 - return p.blockTimeout 246 - } 247 - 248 - // SetPoolSettings sets the setable boost values 249 - func (p *WorkerPool) SetPoolSettings(maxNumberOfWorkers, boostWorkers int, timeout time.Duration) { 250 - p.lock.Lock() 251 - defer p.lock.Unlock() 252 - p.maxNumberOfWorkers = maxNumberOfWorkers 253 - p.boostWorkers = boostWorkers 254 - p.boostTimeout = timeout 255 - } 256 - 257 - // SetMaxNumberOfWorkers sets the maximum number of workers automatically added to the pool 258 - // Changing this number will not change the number of current workers but will change the limit 259 - // for future additions 260 - func (p *WorkerPool) SetMaxNumberOfWorkers(newMax int) { 261 - p.lock.Lock() 262 - defer p.lock.Unlock() 263 - p.maxNumberOfWorkers = newMax 264 - } 265 - 266 - func (p *WorkerPool) commonRegisterWorkers(number int, timeout time.Duration, isFlusher bool) (context.Context, context.CancelFunc) { 267 - var ctx context.Context 268 - var cancel context.CancelFunc 269 - start := time.Now() 270 - end := start 271 - hasTimeout := false 272 - if timeout > 0 { 273 - ctx, cancel = context.WithTimeout(p.baseCtx, timeout) 274 - end = start.Add(timeout) 275 - hasTimeout = true 276 - } else { 277 - ctx, cancel = context.WithCancel(p.baseCtx) 278 - } 279 - 280 - mq := GetManager().GetManagedQueue(p.qid) 281 - if mq != nil { 282 - pid := mq.RegisterWorkers(number, start, hasTimeout, end, cancel, isFlusher) 283 - log.Trace("WorkerPool: %d (for %s) adding %d workers with group id: %d", p.qid, mq.Name, number, pid) 284 - return ctx, func() { 285 - mq.RemoveWorkers(pid) 286 - } 287 - } 288 - log.Trace("WorkerPool: %d adding %d workers (no group id)", p.qid, number) 289 - 290 - return ctx, cancel 291 - } 292 - 293 - // AddWorkers adds workers to the pool - this allows the number of workers to go above the limit 294 - func (p *WorkerPool) AddWorkers(number int, timeout time.Duration) context.CancelFunc { 295 - ctx, cancel := 
p.commonRegisterWorkers(number, timeout, false) 296 - p.addWorkers(ctx, cancel, number) 297 - return cancel 298 - } 299 - 300 - // addWorkers adds workers to the pool 301 - func (p *WorkerPool) addWorkers(ctx context.Context, cancel context.CancelFunc, number int) { 302 - for i := 0; i < number; i++ { 303 - p.lock.Lock() 304 - if p.cond == nil { 305 - p.cond = sync.NewCond(&p.lock) 306 - } 307 - p.numberOfWorkers++ 308 - p.lock.Unlock() 309 - go func() { 310 - pprof.SetGoroutineLabels(ctx) 311 - p.doWork(ctx) 312 - 313 - p.lock.Lock() 314 - p.numberOfWorkers-- 315 - if p.numberOfWorkers == 0 { 316 - p.cond.Broadcast() 317 - cancel() 318 - } else if p.numberOfWorkers < 0 { 319 - // numberOfWorkers can't go negative but... 320 - log.Warn("Number of Workers < 0 for QID %d - this shouldn't happen", p.qid) 321 - p.numberOfWorkers = 0 322 - p.cond.Broadcast() 323 - cancel() 324 - } 325 - select { 326 - case <-p.baseCtx.Done(): 327 - // Don't warn or check for ongoing work if the baseCtx is shutdown 328 - case <-p.paused: 329 - // Don't warn or check for ongoing work if the pool is paused 330 - default: 331 - if p.hasNoWorkerScaling() { 332 - log.Warn( 333 - "Queue: %d is configured to be non-scaling and has no workers - this configuration is likely incorrect.\n"+ 334 - "The queue will be paused to prevent data-loss with the assumption that you will add workers and unpause as required.", p.qid) 335 - p.pause() 336 - } else if p.numberOfWorkers == 0 && atomic.LoadInt64(&p.numInQueue) > 0 { 337 - // OK there are no workers but... 
there's still work to be done -> Reboost 338 - p.zeroBoost() 339 - // p.lock will be unlocked by zeroBoost 340 - return 341 - } 342 - } 343 - p.lock.Unlock() 344 - }() 345 - } 346 - } 347 - 348 - // Wait for WorkerPool to finish 349 - func (p *WorkerPool) Wait() { 350 - p.lock.Lock() 351 - defer p.lock.Unlock() 352 - if p.cond == nil { 353 - p.cond = sync.NewCond(&p.lock) 354 - } 355 - if p.numberOfWorkers <= 0 { 356 - return 357 - } 358 - p.cond.Wait() 359 - } 360 - 361 - // IsPaused returns if the pool is paused 362 - func (p *WorkerPool) IsPaused() bool { 363 - p.lock.Lock() 364 - defer p.lock.Unlock() 365 - select { 366 - case <-p.paused: 367 - return true 368 - default: 369 - return false 370 - } 371 - } 372 - 373 - // IsPausedIsResumed returns if the pool is paused and a channel that is closed when it is resumed 374 - func (p *WorkerPool) IsPausedIsResumed() (<-chan struct{}, <-chan struct{}) { 375 - p.lock.Lock() 376 - defer p.lock.Unlock() 377 - return p.paused, p.resumed 378 - } 379 - 380 - // Pause pauses the WorkerPool 381 - func (p *WorkerPool) Pause() { 382 - p.lock.Lock() 383 - defer p.lock.Unlock() 384 - p.pause() 385 - } 386 - 387 - func (p *WorkerPool) pause() { 388 - select { 389 - case <-p.paused: 390 - default: 391 - p.resumed = make(chan struct{}) 392 - close(p.paused) 393 - } 394 - } 395 - 396 - // Resume resumes the WorkerPool 397 - func (p *WorkerPool) Resume() { 398 - p.lock.Lock() // can't defer unlock because of the zeroBoost at the end 399 - select { 400 - case <-p.resumed: 401 - // already resumed - there's nothing to do 402 - p.lock.Unlock() 403 - return 404 - default: 405 - } 406 - 407 - p.paused = make(chan struct{}) 408 - close(p.resumed) 409 - 410 - // OK now we need to check if we need to add some workers... 
411 - if p.numberOfWorkers > 0 || p.hasNoWorkerScaling() || atomic.LoadInt64(&p.numInQueue) == 0 { 412 - // We either have workers, can't scale or there's no work to be done -> so just resume 413 - p.lock.Unlock() 414 - return 415 - } 416 - 417 - // OK we got some work but no workers we need to think about boosting 418 - select { 419 - case <-p.baseCtx.Done(): 420 - // don't bother boosting if the baseCtx is done 421 - p.lock.Unlock() 422 - return 423 - default: 424 - } 425 - 426 - // OK we'd better add some boost workers! 427 - p.zeroBoost() 428 - // p.zeroBoost will unlock the lock 429 - } 430 - 431 - // CleanUp will drain the remaining contents of the channel 432 - // This should be called after AddWorkers context is closed 433 - func (p *WorkerPool) CleanUp(ctx context.Context) { 434 - log.Trace("WorkerPool: %d CleanUp", p.qid) 435 - close(p.dataChan) 436 - for data := range p.dataChan { 437 - if unhandled := p.handle(data); unhandled != nil { 438 - if unhandled != nil { 439 - log.Error("Unhandled Data in clean-up of queue %d", p.qid) 440 - } 441 - } 442 - 443 - atomic.AddInt64(&p.numInQueue, -1) 444 - select { 445 - case <-ctx.Done(): 446 - log.Warn("WorkerPool: %d Cleanup context closed before finishing clean-up", p.qid) 447 - return 448 - default: 449 - } 450 - } 451 - log.Trace("WorkerPool: %d CleanUp Done", p.qid) 452 - } 453 - 454 - // Flush flushes the channel with a timeout - the Flush worker will be registered as a flush worker with the manager 455 - func (p *WorkerPool) Flush(timeout time.Duration) error { 456 - ctx, cancel := p.commonRegisterWorkers(1, timeout, true) 457 - defer cancel() 458 - return p.FlushWithContext(ctx) 459 - } 460 - 461 - // IsEmpty returns if true if the worker queue is empty 462 - func (p *WorkerPool) IsEmpty() bool { 463 - return atomic.LoadInt64(&p.numInQueue) == 0 464 - } 465 - 466 - // contextError returns either ctx.Done(), the base context's error or nil 467 - func (p *WorkerPool) contextError(ctx context.Context) error 
The tail of the removed legacy worker pool implementation:

```go
// … (truncated: the enclosing function's signature is above this excerpt)
	select {
	case <-p.baseCtx.Done():
		return p.baseCtx.Err()
	case <-ctx.Done():
		return ctx.Err()
	default:
		return nil
	}
}

// FlushWithContext is very similar to CleanUp but it will return as soon as the dataChan is empty
// NB: The worker will not be registered with the manager.
func (p *WorkerPool) FlushWithContext(ctx context.Context) error {
	log.Trace("WorkerPool: %d Flush", p.qid)
	paused, _ := p.IsPausedIsResumed()
	for {
		// Because select will return any case that is satisified at random we precheck here before looking at dataChan.
		select {
		case <-paused:
			// Ensure that even if paused that the cancelled error is still sent
			return p.contextError(ctx)
		case <-p.baseCtx.Done():
			return p.baseCtx.Err()
		case <-ctx.Done():
			return ctx.Err()
		default:
		}

		select {
		case <-paused:
			return p.contextError(ctx)
		case data, ok := <-p.dataChan:
			if !ok {
				return nil
			}
			if unhandled := p.handle(data); unhandled != nil {
				log.Error("Unhandled Data whilst flushing queue %d", p.qid)
			}
			atomic.AddInt64(&p.numInQueue, -1)
		case <-p.baseCtx.Done():
			return p.baseCtx.Err()
		case <-ctx.Done():
			return ctx.Err()
		default:
			return nil
		}
	}
}

func (p *WorkerPool) doWork(ctx context.Context) {
	pprof.SetGoroutineLabels(ctx)
	delay := time.Millisecond * 300

	// Create a common timer - we will use this elsewhere
	timer := time.NewTimer(0)
	util.StopTimer(timer)

	paused, _ := p.IsPausedIsResumed()
	data := make([]Data, 0, p.batchLength)
	for {
		// Because select will return any case that is satisified at random we precheck here before looking at dataChan.
		select {
		case <-paused:
			log.Trace("Worker for Queue %d Pausing", p.qid)
			if len(data) > 0 {
				log.Trace("Handling: %d data, %v", len(data), data)
				if unhandled := p.handle(data...); unhandled != nil {
					log.Error("Unhandled Data in queue %d", p.qid)
				}
				atomic.AddInt64(&p.numInQueue, -1*int64(len(data)))
			}
			_, resumed := p.IsPausedIsResumed()
			select {
			case <-resumed:
				paused, _ = p.IsPausedIsResumed()
				log.Trace("Worker for Queue %d Resuming", p.qid)
				util.StopTimer(timer)
			case <-ctx.Done():
				log.Trace("Worker shutting down")
				return
			}
		case <-ctx.Done():
			if len(data) > 0 {
				log.Trace("Handling: %d data, %v", len(data), data)
				if unhandled := p.handle(data...); unhandled != nil {
					log.Error("Unhandled Data in queue %d", p.qid)
				}
				atomic.AddInt64(&p.numInQueue, -1*int64(len(data)))
			}
			log.Trace("Worker shutting down")
			return
		default:
		}

		select {
		case <-paused:
			// go back around
		case <-ctx.Done():
			if len(data) > 0 {
				log.Trace("Handling: %d data, %v", len(data), data)
				if unhandled := p.handle(data...); unhandled != nil {
					log.Error("Unhandled Data in queue %d", p.qid)
				}
				atomic.AddInt64(&p.numInQueue, -1*int64(len(data)))
			}
			log.Trace("Worker shutting down")
			return
		case datum, ok := <-p.dataChan:
			if !ok {
				// the dataChan has been closed - we should finish up:
				if len(data) > 0 {
					log.Trace("Handling: %d data, %v", len(data), data)
					if unhandled := p.handle(data...); unhandled != nil {
						log.Error("Unhandled Data in queue %d", p.qid)
					}
					atomic.AddInt64(&p.numInQueue, -1*int64(len(data)))
				}
				log.Trace("Worker shutting down")
				return
			}
			data = append(data, datum)
			util.StopTimer(timer)

			if len(data) >= p.batchLength {
				log.Trace("Handling: %d data, %v", len(data), data)
				if unhandled := p.handle(data...); unhandled != nil {
					log.Error("Unhandled Data in queue %d", p.qid)
				}
				atomic.AddInt64(&p.numInQueue, -1*int64(len(data)))
				data = make([]Data, 0, p.batchLength)
			} else {
				timer.Reset(delay)
			}
		case <-timer.C:
			delay = time.Millisecond * 100
			if len(data) > 0 {
				log.Trace("Handling: %d data, %v", len(data), data)
				if unhandled := p.handle(data...); unhandled != nil {
					log.Error("Unhandled Data in queue %d", p.qid)
				}
				atomic.AddInt64(&p.numInQueue, -1*int64(len(data)))
				data = make([]Data, 0, p.batchLength)
			}
		}
	}
}
```
+241 modules/queue/workerqueue.go (new file)

```go
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"
	"fmt"
	"sync"
	"sync/atomic"
	"time"

	"code.gitea.io/gitea/modules/graceful"
	"code.gitea.io/gitea/modules/json"
	"code.gitea.io/gitea/modules/log"
	"code.gitea.io/gitea/modules/setting"
)

// WorkerPoolQueue is a queue that uses a pool of workers to process items
// It can use different underlying (base) queue types
type WorkerPoolQueue[T any] struct {
	ctxRun       context.Context
	ctxRunCancel context.CancelFunc
	ctxShutdown  atomic.Pointer[context.Context]
	shutdownDone chan struct{}

	origHandler HandlerFuncT[T]
	safeHandler HandlerFuncT[T]

	baseQueueType string
	baseConfig    *BaseConfig
	baseQueue     baseQueue

	batchChan chan []T
	flushChan chan flushType

	batchLength     int
	workerNum       int
	workerMaxNum    int
	workerActiveNum int
	workerNumMu     sync.Mutex
}

type flushType chan struct{}

var _ ManagedWorkerPoolQueue = (*WorkerPoolQueue[any])(nil)

func (q *WorkerPoolQueue[T]) GetName() string {
	return q.baseConfig.ManagedName
}

func (q *WorkerPoolQueue[T]) GetType() string {
	return q.baseQueueType
}

func (q *WorkerPoolQueue[T]) GetItemTypeName() string {
	var t T
	return fmt.Sprintf("%T", t)
}

func (q *WorkerPoolQueue[T]) GetWorkerNumber() int {
	q.workerNumMu.Lock()
	defer q.workerNumMu.Unlock()
	return q.workerNum
}

func (q *WorkerPoolQueue[T]) GetWorkerActiveNumber() int {
	q.workerNumMu.Lock()
	defer q.workerNumMu.Unlock()
	return q.workerActiveNum
}

func (q *WorkerPoolQueue[T]) GetWorkerMaxNumber() int {
	q.workerNumMu.Lock()
	defer q.workerNumMu.Unlock()
	return q.workerMaxNum
}

func (q *WorkerPoolQueue[T]) SetWorkerMaxNumber(num int) {
	q.workerNumMu.Lock()
	defer q.workerNumMu.Unlock()
	q.workerMaxNum = num
}

func (q *WorkerPoolQueue[T]) GetQueueItemNumber() int {
	cnt, err := q.baseQueue.Len(q.ctxRun)
	if err != nil {
		log.Error("Failed to get number of items in queue %q: %v", q.GetName(), err)
	}
	return cnt
}

func (q *WorkerPoolQueue[T]) FlushWithContext(ctx context.Context, timeout time.Duration) (err error) {
	if q.isBaseQueueDummy() {
		return
	}

	log.Debug("Try to flush queue %q with timeout %v", q.GetName(), timeout)
	defer log.Debug("Finish flushing queue %q, err: %v", q.GetName(), err)

	var after <-chan time.Time
	after = infiniteTimerC
	if timeout > 0 {
		after = time.After(timeout)
	}
	c := make(flushType)

	// send flush request
	// if it blocks, it means that there is a flush in progress or the queue hasn't been started yet
	select {
	case q.flushChan <- c:
	case <-ctx.Done():
		return ctx.Err()
	case <-q.ctxRun.Done():
		return q.ctxRun.Err()
	case <-after:
		return context.DeadlineExceeded
	}

	// wait for flush to finish
	select {
	case <-c:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	case <-q.ctxRun.Done():
		return q.ctxRun.Err()
	case <-after:
		return context.DeadlineExceeded
	}
}

func (q *WorkerPoolQueue[T]) marshal(data T) []byte {
	bs, err := json.Marshal(data)
	if err != nil {
		log.Error("Failed to marshal item for queue %q: %v", q.GetName(), err)
		return nil
	}
	return bs
}

func (q *WorkerPoolQueue[T]) unmarshal(data []byte) (t T, ok bool) {
	if err := json.Unmarshal(data, &t); err != nil {
		log.Error("Failed to unmarshal item from queue %q: %v", q.GetName(), err)
		return t, false
	}
	return t, true
}

func (q *WorkerPoolQueue[T]) isBaseQueueDummy() bool {
	_, isDummy := q.baseQueue.(*baseDummy)
	return isDummy
}

// Push adds an item to the queue, it may block for a while and then returns an error if the queue is full
func (q *WorkerPoolQueue[T]) Push(data T) error {
	if q.isBaseQueueDummy() && q.safeHandler != nil {
		// FIXME: the "immediate" queue is only for testing, but it really causes problems because its behavior is different from a real queue.
		// Even if tests pass, it doesn't mean that there is no bug in code.
		if data, ok := q.unmarshal(q.marshal(data)); ok {
			q.safeHandler(data)
		}
	}
	return q.baseQueue.PushItem(q.ctxRun, q.marshal(data))
}

// Has only works for unique queues. Keep in mind that this check may not be reliable (due to lacking of proper transaction support)
// There could be a small chance that duplicate items appear in the queue
func (q *WorkerPoolQueue[T]) Has(data T) (bool, error) {
	return q.baseQueue.HasItem(q.ctxRun, q.marshal(data))
}

func (q *WorkerPoolQueue[T]) Run(atShutdown, atTerminate func(func())) {
	atShutdown(func() {
		// in case some queue handlers are slow or have hanging bugs, at most wait for a short time
		q.ShutdownWait(1 * time.Second)
	})
	q.doRun()
}

// ShutdownWait shuts down the queue, waits for all workers to finish their jobs, and pushes the unhandled items back to the base queue
// It waits for all workers (handlers) to finish their jobs, in case some buggy handlers would hang forever, a reasonable timeout is needed
func (q *WorkerPoolQueue[T]) ShutdownWait(timeout time.Duration) {
	shutdownCtx, shutdownCtxCancel := context.WithTimeout(context.Background(), timeout)
	defer shutdownCtxCancel()
	if q.ctxShutdown.CompareAndSwap(nil, &shutdownCtx) {
		q.ctxRunCancel()
	}
	<-q.shutdownDone
}

func getNewQueueFn(t string) (string, func(cfg *BaseConfig, unique bool) (baseQueue, error)) {
	switch t {
	case "dummy", "immediate":
		return t, newBaseDummy
	case "channel":
		return t, newBaseChannelGeneric
	case "redis":
		return t, newBaseRedisGeneric
	default: // level(leveldb,levelqueue,persistable-channel)
		return "level", newBaseLevelQueueGeneric
	}
}

func NewWorkerPoolQueueBySetting[T any](name string, queueSetting setting.QueueSettings, handler HandlerFuncT[T], unique bool) (*WorkerPoolQueue[T], error) {
	if handler == nil {
		log.Debug("Use dummy queue for %q because handler is nil and caller doesn't want to process the queue items", name)
		queueSetting.Type = "dummy"
	}

	var w WorkerPoolQueue[T]
	var err error
	queueType, newQueueFn := getNewQueueFn(queueSetting.Type)
	w.baseQueueType = queueType
	w.baseConfig = toBaseConfig(name, queueSetting)
	w.baseQueue, err = newQueueFn(w.baseConfig, unique)
	if err != nil {
		return nil, err
	}
	log.Trace("Created queue %q of type %q", name, queueType)

	w.ctxRun, w.ctxRunCancel = context.WithCancel(graceful.GetManager().ShutdownContext())
	w.batchChan = make(chan []T)
	w.flushChan = make(chan flushType)
	w.shutdownDone = make(chan struct{})
	w.workerMaxNum = queueSetting.MaxWorkers
	w.batchLength = queueSetting.BatchLength

	w.origHandler = handler
	w.safeHandler = func(t ...T) (unhandled []T) {
		defer func() {
			err := recover()
			if err != nil {
				log.Error("Recovered from panic in queue %q handler: %v\n%s", name, err, log.Stack(2))
			}
		}()
		return w.origHandler(t...)
	}

	return &w, nil
}
```
+260 modules/queue/workerqueue_test.go (new file)

```go
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package queue

import (
	"context"
	"strconv"
	"sync"
	"testing"
	"time"

	"code.gitea.io/gitea/modules/setting"

	"github.com/stretchr/testify/assert"
)

func runWorkerPoolQueue[T any](q *WorkerPoolQueue[T]) func() {
	var stop func()
	started := make(chan struct{})
	stopped := make(chan struct{})
	go func() {
		q.Run(func(f func()) { stop = f; close(started) }, nil)
		close(stopped)
	}()
	<-started
	return func() {
		stop()
		<-stopped
	}
}

func TestWorkerPoolQueueUnhandled(t *testing.T) {
	oldUnhandledItemRequeueDuration := unhandledItemRequeueDuration.Load()
	unhandledItemRequeueDuration.Store(0)
	defer unhandledItemRequeueDuration.Store(oldUnhandledItemRequeueDuration)

	mu := sync.Mutex{}

	test := func(t *testing.T, queueSetting setting.QueueSettings) {
		queueSetting.Length = 100
		queueSetting.Type = "channel"
		queueSetting.Datadir = t.TempDir() + "/test-queue"
		m := map[int]int{}

		// odds are handled once, evens are handled twice
		handler := func(items ...int) (unhandled []int) {
			testRecorder.Record("handle:%v", items)
			for _, item := range items {
				mu.Lock()
				if item%2 == 0 && m[item] == 0 {
					unhandled = append(unhandled, item)
				}
				m[item]++
				mu.Unlock()
			}
			return unhandled
		}

		q, _ := NewWorkerPoolQueueBySetting("test-workpoolqueue", queueSetting, handler, false)
		stop := runWorkerPoolQueue(q)
		for i := 0; i < queueSetting.Length; i++ {
			testRecorder.Record("push:%v", i)
			assert.NoError(t, q.Push(i))
		}
		assert.NoError(t, q.FlushWithContext(context.Background(), 0))
		stop()

		ok := true
		for i := 0; i < queueSetting.Length; i++ {
			if i%2 == 0 {
				ok = ok && assert.EqualValues(t, 2, m[i], "test %s: item %d", t.Name(), i)
			} else {
				ok = ok && assert.EqualValues(t, 1, m[i], "test %s: item %d", t.Name(), i)
			}
		}
		if !ok {
			t.Logf("m: %v", m)
			t.Logf("records: %v", testRecorder.Records())
		}
		testRecorder.Reset()
	}

	runCount := 2 // we can run these tests even hundreds times to see its stability
	t.Run("1/1", func(t *testing.T) {
		for i := 0; i < runCount; i++ {
			test(t, setting.QueueSettings{BatchLength: 1, MaxWorkers: 1})
		}
	})
	t.Run("3/1", func(t *testing.T) {
		for i := 0; i < runCount; i++ {
			test(t, setting.QueueSettings{BatchLength: 3, MaxWorkers: 1})
		}
	})
	t.Run("4/5", func(t *testing.T) {
		for i := 0; i < runCount; i++ {
			test(t, setting.QueueSettings{BatchLength: 4, MaxWorkers: 5})
		}
	})
}

func TestWorkerPoolQueuePersistence(t *testing.T) {
	runCount := 2 // we can run these tests even hundreds times to see its stability
	t.Run("1/1", func(t *testing.T) {
		for i := 0; i < runCount; i++ {
			testWorkerPoolQueuePersistence(t, setting.QueueSettings{BatchLength: 1, MaxWorkers: 1, Length: 100})
		}
	})
	t.Run("3/1", func(t *testing.T) {
		for i := 0; i < runCount; i++ {
			testWorkerPoolQueuePersistence(t, setting.QueueSettings{BatchLength: 3, MaxWorkers: 1, Length: 100})
		}
	})
	t.Run("4/5", func(t *testing.T) {
		for i := 0; i < runCount; i++ {
			testWorkerPoolQueuePersistence(t, setting.QueueSettings{BatchLength: 4, MaxWorkers: 5, Length: 100})
		}
	})
}

func testWorkerPoolQueuePersistence(t *testing.T, queueSetting setting.QueueSettings) {
	testCount := queueSetting.Length
	queueSetting.Type = "level"
	queueSetting.Datadir = t.TempDir() + "/test-queue"

	mu := sync.Mutex{}

	var tasksQ1, tasksQ2 []string
	q1 := func() {
		startWhenAllReady := make(chan struct{}) // only start data consuming when the "testCount" tasks are all pushed into queue
		stopAt20Shutdown := make(chan struct{})  // stop and shutdown at the 20th item

		testHandler := func(data ...string) []string {
			<-startWhenAllReady
			time.Sleep(10 * time.Millisecond)
			for _, s := range data {
				mu.Lock()
				tasksQ1 = append(tasksQ1, s)
				mu.Unlock()

				if s == "task-20" {
					close(stopAt20Shutdown)
				}
			}
			return nil
		}

		q, _ := NewWorkerPoolQueueBySetting("pr_patch_checker_test", queueSetting, testHandler, true)
		stop := runWorkerPoolQueue(q)
		for i := 0; i < testCount; i++ {
			_ = q.Push("task-" + strconv.Itoa(i))
		}
		close(startWhenAllReady)
		<-stopAt20Shutdown // it's possible to have more than 20 tasks executed
		stop()
	}

	q1() // run some tasks and shutdown at an intermediate point

	time.Sleep(100 * time.Millisecond) // because the handler in q1 has a slight delay, we need to wait for it to finish

	q2 := func() {
		testHandler := func(data ...string) []string {
			for _, s := range data {
				mu.Lock()
				tasksQ2 = append(tasksQ2, s)
				mu.Unlock()
			}
			return nil
		}

		q, _ := NewWorkerPoolQueueBySetting("pr_patch_checker_test", queueSetting, testHandler, true)
		stop := runWorkerPoolQueue(q)
		assert.NoError(t, q.FlushWithContext(context.Background(), 0))
		stop()
	}

	q2() // restart the queue to continue to execute the tasks in it

	assert.NotZero(t, len(tasksQ1))
	assert.NotZero(t, len(tasksQ2))
	assert.EqualValues(t, testCount, len(tasksQ1)+len(tasksQ2))
}

func TestWorkerPoolQueueActiveWorkers(t *testing.T) {
	oldWorkerIdleDuration := workerIdleDuration
	workerIdleDuration = 300 * time.Millisecond
	defer func() {
		workerIdleDuration = oldWorkerIdleDuration
	}()

	handler := func(items ...int) (unhandled []int) {
		time.Sleep(100 * time.Millisecond)
		return nil
	}

	q, _ := NewWorkerPoolQueueBySetting("test-workpoolqueue", setting.QueueSettings{Type: "channel", BatchLength: 1, MaxWorkers: 1, Length: 100}, handler, false)
	stop := runWorkerPoolQueue(q)
	for i := 0; i < 5; i++ {
		assert.NoError(t, q.Push(i))
	}

	time.Sleep(50 * time.Millisecond)
	assert.EqualValues(t, 1, q.GetWorkerNumber())
	assert.EqualValues(t, 1, q.GetWorkerActiveNumber())
	time.Sleep(500 * time.Millisecond)
	assert.EqualValues(t, 1, q.GetWorkerNumber())
	assert.EqualValues(t, 0, q.GetWorkerActiveNumber())
	time.Sleep(workerIdleDuration)
	assert.EqualValues(t, 1, q.GetWorkerNumber()) // there is at least one worker after the queue begins working
	stop()

	q, _ = NewWorkerPoolQueueBySetting("test-workpoolqueue", setting.QueueSettings{Type: "channel", BatchLength: 1, MaxWorkers: 3, Length: 100}, handler, false)
	stop = runWorkerPoolQueue(q)
	for i := 0; i < 15; i++ {
		assert.NoError(t, q.Push(i))
	}

	time.Sleep(50 * time.Millisecond)
	assert.EqualValues(t, 3, q.GetWorkerNumber())
	assert.EqualValues(t, 3, q.GetWorkerActiveNumber())
	time.Sleep(500 * time.Millisecond)
	assert.EqualValues(t, 3, q.GetWorkerNumber())
	assert.EqualValues(t, 0, q.GetWorkerActiveNumber())
	time.Sleep(workerIdleDuration)
	assert.EqualValues(t, 1, q.GetWorkerNumber()) // there is at least one worker after the queue begins working
	stop()
}

func TestWorkerPoolQueueShutdown(t *testing.T) {
	oldUnhandledItemRequeueDuration := unhandledItemRequeueDuration.Load()
	unhandledItemRequeueDuration.Store(int64(100 * time.Millisecond))
	defer unhandledItemRequeueDuration.Store(oldUnhandledItemRequeueDuration)

	// simulate a slow handler, it doesn't handle any item (all items will be pushed back to the queue)
	handlerCalled := make(chan struct{})
	handler := func(items ...int) (unhandled []int) {
		if items[0] == 0 {
			close(handlerCalled)
		}
		time.Sleep(100 * time.Millisecond)
		return items
	}

	qs := setting.QueueSettings{Type: "level", Datadir: t.TempDir() + "/queue", BatchLength: 3, MaxWorkers: 4, Length: 20}
	q, _ := NewWorkerPoolQueueBySetting("test-workpoolqueue", qs, handler, false)
	stop := runWorkerPoolQueue(q)
	for i := 0; i < qs.Length; i++ {
		assert.NoError(t, q.Push(i))
	}
	<-handlerCalled
	time.Sleep(50 * time.Millisecond) // wait for a while to make sure all workers are active
	assert.EqualValues(t, 4, q.GetWorkerActiveNumber())
	stop() // stop triggers shutdown
	assert.EqualValues(t, 0, q.GetWorkerActiveNumber())

	// no item was ever handled, so we still get all of them again
	q, _ = NewWorkerPoolQueueBySetting("test-workpoolqueue", qs, handler, false)
	assert.EqualValues(t, 20, q.GetQueueItemNumber())
}
```
+3 -3 modules/setting/config_provider.go

```diff
 
 // NewEmptyConfigProvider create a new empty config provider
 func NewEmptyConfigProvider() ConfigProvider {
-	cp, _ := newConfigProviderFromData("")
+	cp, _ := NewConfigProviderFromData("")
 	return cp
 }
 
-// newConfigProviderFromData this function is only for testing
-func newConfigProviderFromData(configContent string) (ConfigProvider, error) {
+// NewConfigProviderFromData this function is only for testing
+func NewConfigProviderFromData(configContent string) (ConfigProvider, error) {
 	var cfg *ini.File
 	var err error
 	if configContent == "" {
```
+1 -1 modules/setting/cron_test.go

```diff
 SECOND = white rabbit
 EXTEND = true
 `
-	cfg, err := newConfigProviderFromData(iniStr)
+	cfg, err := NewConfigProviderFromData(iniStr)
 	assert.NoError(t, err)
 
 	extended := &Extended{
```
-9 modules/setting/indexer.go

```diff
 
 	Indexer.IssueIndexerName = sec.Key("ISSUE_INDEXER_NAME").MustString(Indexer.IssueIndexerName)
 
-	// The following settings are deprecated and can be overridden by settings in [queue] or [queue.issue_indexer]
-	// DEPRECATED should not be removed because users maybe upgrade from lower version to the latest version
-	// if these are removed, the warning will not be shown
-	deprecatedSetting(rootCfg, "indexer", "ISSUE_INDEXER_QUEUE_TYPE", "queue.issue_indexer", "TYPE", "v1.19.0")
-	deprecatedSetting(rootCfg, "indexer", "ISSUE_INDEXER_QUEUE_DIR", "queue.issue_indexer", "DATADIR", "v1.19.0")
-	deprecatedSetting(rootCfg, "indexer", "ISSUE_INDEXER_QUEUE_CONN_STR", "queue.issue_indexer", "CONN_STR", "v1.19.0")
-	deprecatedSetting(rootCfg, "indexer", "ISSUE_INDEXER_QUEUE_BATCH_NUMBER", "queue.issue_indexer", "BATCH_LENGTH", "v1.19.0")
-	deprecatedSetting(rootCfg, "indexer", "UPDATE_BUFFER_LEN", "queue.issue_indexer", "LENGTH", "v1.19.0")
-
 	Indexer.RepoIndexerEnabled = sec.Key("REPO_INDEXER_ENABLED").MustBool(false)
 	Indexer.RepoType = sec.Key("REPO_INDEXER_TYPE").MustString("bleve")
 	Indexer.RepoPath = filepath.ToSlash(sec.Key("REPO_INDEXER_PATH").MustString(filepath.ToSlash(filepath.Join(AppDataPath, "indexers/repos.bleve"))))
```
+75 -164 modules/setting/queue.go

```diff
 
 import (
 	"path/filepath"
-	"strconv"
-	"time"
 
-	"code.gitea.io/gitea/modules/container"
+	"code.gitea.io/gitea/modules/json"
 	"code.gitea.io/gitea/modules/log"
 )
 
 // QueueSettings represent the settings for a queue from the ini
 type QueueSettings struct {
-	Name             string
-	DataDir          string
-	QueueLength      int `ini:"LENGTH"`
-	BatchLength      int
-	ConnectionString string
-	Type             string
-	QueueName        string
-	SetName          string
-	WrapIfNecessary  bool
-	MaxAttempts      int
-	Timeout          time.Duration
-	Workers          int
-	MaxWorkers       int
-	BlockTimeout     time.Duration
-	BoostTimeout     time.Duration
-	BoostWorkers     int
-}
-
-// Queue settings
-var Queue = QueueSettings{}
-
-// GetQueueSettings returns the queue settings for the appropriately named queue
-func GetQueueSettings(name string) QueueSettings {
-	return getQueueSettings(CfgProvider, name)
-}
-
-func getQueueSettings(rootCfg ConfigProvider, name string) QueueSettings {
-	q := QueueSettings{}
-	sec := rootCfg.Section("queue." + name)
-	q.Name = name
+	Name string // not an INI option, it is the name for [queue.the-name] section
 
-	// DataDir is not directly inheritable
-	q.DataDir = filepath.ToSlash(filepath.Join(Queue.DataDir, "common"))
-	// QueueName is not directly inheritable either
-	q.QueueName = name + Queue.QueueName
-	for _, key := range sec.Keys() {
-		switch key.Name() {
-		case "DATADIR":
-			q.DataDir = key.MustString(q.DataDir)
-		case "QUEUE_NAME":
-			q.QueueName = key.MustString(q.QueueName)
-		case "SET_NAME":
-			q.SetName = key.MustString(q.SetName)
-		}
-	}
-	if len(q.SetName) == 0 && len(Queue.SetName) > 0 {
-		q.SetName = q.QueueName + Queue.SetName
-	}
-	if !filepath.IsAbs(q.DataDir) {
-		q.DataDir = filepath.ToSlash(filepath.Join(AppDataPath, q.DataDir))
-	}
-	_, _ = sec.NewKey("DATADIR", q.DataDir)
+	Type    string
+	Datadir string
+	ConnStr string // for leveldb or redis
+	Length  int    // max queue length before blocking
 
-	// The rest are...
-	q.QueueLength = sec.Key("LENGTH").MustInt(Queue.QueueLength)
-	q.BatchLength = sec.Key("BATCH_LENGTH").MustInt(Queue.BatchLength)
-	q.ConnectionString = sec.Key("CONN_STR").MustString(Queue.ConnectionString)
-	q.Type = sec.Key("TYPE").MustString(Queue.Type)
-	q.WrapIfNecessary = sec.Key("WRAP_IF_NECESSARY").MustBool(Queue.WrapIfNecessary)
-	q.MaxAttempts = sec.Key("MAX_ATTEMPTS").MustInt(Queue.MaxAttempts)
-	q.Timeout = sec.Key("TIMEOUT").MustDuration(Queue.Timeout)
-	q.Workers = sec.Key("WORKERS").MustInt(Queue.Workers)
-	q.MaxWorkers = sec.Key("MAX_WORKERS").MustInt(Queue.MaxWorkers)
-	q.BlockTimeout = sec.Key("BLOCK_TIMEOUT").MustDuration(Queue.BlockTimeout)
-	q.BoostTimeout = sec.Key("BOOST_TIMEOUT").MustDuration(Queue.BoostTimeout)
-	q.BoostWorkers = sec.Key("BOOST_WORKERS").MustInt(Queue.BoostWorkers)
+	QueueName, SetName string // the name suffix for storage (db key, redis key), "set" is for unique queue
 
-	return q
+	BatchLength int
+	MaxWorkers  int
 }
 
-// LoadQueueSettings sets up the default settings for Queues
-// This is exported for tests to be able to use the queue
-func LoadQueueSettings() {
-	loadQueueFrom(CfgProvider)
+var queueSettingsDefault = QueueSettings{
+	Type:    "level",         // dummy, channel, level, redis
+	Datadir: "queues/common", // relative to AppDataPath
+	Length:  100,             // queue length before a channel queue will block
+
+	QueueName:   "_queue",
+	SetName:     "_unique",
+	BatchLength: 20,
+	MaxWorkers:  10,
 }
 
-func loadQueueFrom(rootCfg ConfigProvider) {
-	sec := rootCfg.Section("queue")
-	Queue.DataDir = filepath.ToSlash(sec.Key("DATADIR").MustString("queues/"))
-	if !filepath.IsAbs(Queue.DataDir) {
-		Queue.DataDir = filepath.ToSlash(filepath.Join(AppDataPath, Queue.DataDir))
+func GetQueueSettings(rootCfg ConfigProvider, name string) (QueueSettings, error) {
+	// deep copy default settings
+	cfg := QueueSettings{}
+	if cfgBs, err := json.Marshal(queueSettingsDefault); err != nil {
+		return cfg, err
+	} else if err = json.Unmarshal(cfgBs, &cfg); err != nil {
+		return cfg, err
 	}
-	Queue.QueueLength = sec.Key("LENGTH").MustInt(20)
-	Queue.BatchLength = sec.Key("BATCH_LENGTH").MustInt(20)
-	Queue.ConnectionString = sec.Key("CONN_STR").MustString("")
-	defaultType := sec.Key("TYPE").String()
-	Queue.Type = sec.Key("TYPE").MustString("persistable-channel")
-	Queue.WrapIfNecessary = sec.Key("WRAP_IF_NECESSARY").MustBool(true)
-	Queue.MaxAttempts = sec.Key("MAX_ATTEMPTS").MustInt(10)
-	Queue.Timeout = sec.Key("TIMEOUT").MustDuration(GracefulHammerTime + 30*time.Second)
-	Queue.Workers = sec.Key("WORKERS").MustInt(0)
-	Queue.MaxWorkers = sec.Key("MAX_WORKERS").MustInt(10)
-	Queue.BlockTimeout = sec.Key("BLOCK_TIMEOUT").MustDuration(1 * time.Second)
-	Queue.BoostTimeout = sec.Key("BOOST_TIMEOUT").MustDuration(5 * time.Minute)
-	Queue.BoostWorkers = sec.Key("BOOST_WORKERS").MustInt(1)
-	Queue.QueueName = sec.Key("QUEUE_NAME").MustString("_queue")
-	Queue.SetName = sec.Key("SET_NAME").MustString("")
 
-	// Now handle the old issue_indexer configuration
-	// FIXME: DEPRECATED to be removed in v1.18.0
-	section := rootCfg.Section("queue.issue_indexer")
-	directlySet := toDirectlySetKeysSet(section)
-	if !directlySet.Contains("TYPE") && defaultType == "" {
-		switch typ := rootCfg.Section("indexer").Key("ISSUE_INDEXER_QUEUE_TYPE").MustString(""); typ {
-		case "levelqueue":
-			_, _ = section.NewKey("TYPE", "level")
-		case "channel":
-			_, _ = section.NewKey("TYPE", "persistable-channel")
-		case "redis":
-			_, _ = section.NewKey("TYPE", "redis")
-		case "":
-			_, _ = section.NewKey("TYPE", "level")
-		default:
-			log.Fatal("Unsupported indexer queue type: %v", typ)
+	cfg.Name = name
+	if sec, err := rootCfg.GetSection("queue"); err == nil {
+		if err = sec.MapTo(&cfg); err != nil {
+			log.Error("Failed to map queue common config for %q: %v", name, err)
+			return cfg, nil
 		}
 	}
-	if !directlySet.Contains("LENGTH") {
-		length := rootCfg.Section("indexer").Key("UPDATE_BUFFER_LEN").MustInt(0)
-		if length != 0 {
-			_, _ = section.NewKey("LENGTH", strconv.Itoa(length))
+	if sec, err := rootCfg.GetSection("queue." + name); err == nil {
+		if err = sec.MapTo(&cfg); err != nil {
+			log.Error("Failed to map queue spec config for %q: %v", name, err)
+			return cfg, nil
 		}
-	}
-	if !directlySet.Contains("BATCH_LENGTH") {
-		fallback := rootCfg.Section("indexer").Key("ISSUE_INDEXER_QUEUE_BATCH_NUMBER").MustInt(0)
-		if fallback != 0 {
-			_, _ = section.NewKey("BATCH_LENGTH", strconv.Itoa(fallback))
+		if sec.HasKey("CONN_STR") {
+			cfg.ConnStr = sec.Key("CONN_STR").String()
 		}
 	}
-	if !directlySet.Contains("DATADIR") {
-		queueDir := filepath.ToSlash(rootCfg.Section("indexer").Key("ISSUE_INDEXER_QUEUE_DIR").MustString(""))
-		if queueDir != "" {
-			_, _ = section.NewKey("DATADIR", queueDir)
-		}
+
+	if cfg.Datadir == "" {
+		cfg.Datadir = queueSettingsDefault.Datadir
 	}
-	if !directlySet.Contains("CONN_STR") {
-		connStr := rootCfg.Section("indexer").Key("ISSUE_INDEXER_QUEUE_CONN_STR").MustString("")
-		if connStr != "" {
-			_, _ = section.NewKey("CONN_STR", connStr)
-		}
+	if !filepath.IsAbs(cfg.Datadir) {
+		cfg.Datadir = filepath.Join(AppDataPath, cfg.Datadir)
 	}
+	cfg.Datadir = filepath.ToSlash(cfg.Datadir)
 
-	// FIXME: DEPRECATED to be removed in v1.18.0
-	// - will need to set default for [queue.*)] LENGTH appropriately though though
+	if cfg.Type == "redis" && cfg.ConnStr == "" {
+		cfg.ConnStr = "redis://127.0.0.1:6379/0"
+	}
 
-	// Handle the old mailer configuration
-	handleOldLengthConfiguration(rootCfg, "mailer", "mailer", "SEND_BUFFER_LEN", 100)
+	if cfg.Length <= 0 {
+		cfg.Length = queueSettingsDefault.Length
+	}
+	if cfg.MaxWorkers <= 0 {
+		cfg.MaxWorkers = queueSettingsDefault.MaxWorkers
+	}
+	if cfg.BatchLength <= 0 {
+		cfg.BatchLength = queueSettingsDefault.BatchLength
+	}
 
-	// Handle the old test pull requests configuration
-	// Please note this will be a unique queue
-	handleOldLengthConfiguration(rootCfg, "pr_patch_checker", "repository", "PULL_REQUEST_QUEUE_LENGTH", 1000)
+	return cfg, nil
+}
 
-	// Handle the old mirror queue configuration
-	// Please note this will be a unique queue
-	handleOldLengthConfiguration(rootCfg, "mirror", "repository", "MIRROR_QUEUE_LENGTH", 1000)
+func LoadQueueSettings() {
+	loadQueueFrom(CfgProvider)
 }
 
-// handleOldLengthConfiguration allows fallback to older configuration. `[queue.name]` `LENGTH` will override this configuration, but
-// if that is left unset then we should fallback to the older configuration. (Except where the new length woul be <=0)
-func handleOldLengthConfiguration(rootCfg ConfigProvider, queueName, oldSection, oldKey string, defaultValue int) {
-	if rootCfg.Section(oldSection).HasKey(oldKey) {
-		log.Error("Deprecated fallback for %s queue length `[%s]` `%s` present. Use `[queue.%s]` `LENGTH`. This will be removed in v1.18.0", queueName, queueName, oldSection, oldKey)
+func loadQueueFrom(rootCfg ConfigProvider) {
+	hasOld := false
+	handleOldLengthConfiguration := func(rootCfg ConfigProvider, newQueueName, oldSection, oldKey string) {
+		if rootCfg.Section(oldSection).HasKey(oldKey) {
+			hasOld = true
+			log.Error("Removed queue option: `[%s].%s`. Use new options in `[queue.%s]`", oldSection, oldKey, newQueueName)
+		}
 	}
-	value := rootCfg.Section(oldSection).Key(oldKey).MustInt(defaultValue)
-
-	// Don't override with 0
-	if value <= 0 {
-		return
+	handleOldLengthConfiguration(rootCfg, "issue_indexer", "indexer", "ISSUE_INDEXER_QUEUE_TYPE")
+	handleOldLengthConfiguration(rootCfg, "issue_indexer", "indexer", "ISSUE_INDEXER_QUEUE_BATCH_NUMBER")
+	handleOldLengthConfiguration(rootCfg, "issue_indexer", "indexer", "ISSUE_INDEXER_QUEUE_DIR")
+	handleOldLengthConfiguration(rootCfg, "issue_indexer", "indexer", "ISSUE_INDEXER_QUEUE_CONN_STR")
+	handleOldLengthConfiguration(rootCfg, "issue_indexer", "indexer", "UPDATE_BUFFER_LEN")
+	handleOldLengthConfiguration(rootCfg, "mailer", "mailer", "SEND_BUFFER_LEN")
+	handleOldLengthConfiguration(rootCfg, "pr_patch_checker", "repository", "PULL_REQUEST_QUEUE_LENGTH")
+	handleOldLengthConfiguration(rootCfg, "mirror", "repository", "MIRROR_QUEUE_LENGTH")
+	if hasOld {
+		log.Fatal("Please update your app.ini to remove deprecated config options")
 	}
-
-	section := rootCfg.Section("queue." + queueName)
-	directlySet := toDirectlySetKeysSet(section)
-	if !directlySet.Contains("LENGTH") {
-		_, _ = section.NewKey("LENGTH", strconv.Itoa(value))
-	}
-}
-
-// toDirectlySetKeysSet returns a set of keys directly set by this section
-// Note: we cannot use section.HasKey(...) as that will immediately set the Key if a parent section has the Key
-// but this section does not.
-func toDirectlySetKeysSet(section ConfigSection) container.Set[string] {
-	sections := make(container.Set[string])
-	for _, key := range section.Keys() {
-		sections.Add(key.Name())
-	}
-	return sections
 }
```
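With the new `GetQueueSettings` above, settings in `[queue]` apply to every queue and a `[queue.the-name]` section overrides them via `MapTo`. A sketch of what an app.ini might look like after migration — the key names here are inferred from the struct fields in the diff (`TYPE`, `DATADIR`, `CONN_STR`, `LENGTH`, `BATCH_LENGTH`, `MAX_WORKERS`), so check the Gitea config cheat sheet for your version before relying on them:

```ini
; defaults for all queues
[queue]
TYPE = level            ; dummy | channel | level | redis
DATADIR = queues/common ; relative paths resolve under the app data path
LENGTH = 100            ; max queue length before pushes block
BATCH_LENGTH = 20
MAX_WORKERS = 10

; per-queue override, e.g. for the issue indexer queue
[queue.issue_indexer]
TYPE = redis
CONN_STR = redis://127.0.0.1:6379/0
```

Unset per-queue keys fall back first to `[queue]`, then to the hard-coded `queueSettingsDefault` values shown in the diff.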
+9 -9 modules/setting/storage_test.go

```diff
 STORAGE_TYPE = minio
 MINIO_ENDPOINT = my_minio:9000
 `
-	cfg, err := newConfigProviderFromData(iniStr)
+	cfg, err := NewConfigProviderFromData(iniStr)
 	assert.NoError(t, err)
 
 	sec := cfg.Section("attachment")
···
 [storage.minio]
 MINIO_BUCKET = gitea
 `
-	cfg, err := newConfigProviderFromData(iniStr)
+	cfg, err := NewConfigProviderFromData(iniStr)
 	assert.NoError(t, err)
 
 	sec := cfg.Section("attachment")
···
 [storage]
 MINIO_BUCKET = gitea
 `
-	cfg, err := newConfigProviderFromData(iniStr)
+	cfg, err := NewConfigProviderFromData(iniStr)
 	assert.NoError(t, err)
 
 	sec := cfg.Section("attachment")
···
 [storage]
 STORAGE_TYPE = local
 `
-	cfg, err := newConfigProviderFromData(iniStr)
+	cfg, err := NewConfigProviderFromData(iniStr)
 	assert.NoError(t, err)
 
 	sec := cfg.Section("attachment")
···
 }
 
 func Test_getStorageGetDefaults(t *testing.T) {
-	cfg, err := newConfigProviderFromData("")
+	cfg, err := NewConfigProviderFromData("")
 	assert.NoError(t, err)
 
 	sec := cfg.Section("attachment")
···
 [storage]
 MINIO_BUCKET = gitea-storage
 `
-	cfg, err := newConfigProviderFromData(iniStr)
+	cfg, err := NewConfigProviderFromData(iniStr)
 	assert.NoError(t, err)
 
 	{
···
 [storage.lfs]
 MINIO_BUCKET = gitea-storage
 `
-	cfg, err := newConfigProviderFromData(iniStr)
+	cfg, err := NewConfigProviderFromData(iniStr)
 	assert.NoError(t, err)
 
 	{
···
 [storage]
 STORAGE_TYPE = minio
 `
-	cfg, err := newConfigProviderFromData(iniStr)
+	cfg, err := NewConfigProviderFromData(iniStr)
 	assert.NoError(t, err)
 
 	sec := cfg.Section("attachment")
···
 [storage.attachments]
 STORAGE_TYPE = minio
 `
-	cfg, err := newConfigProviderFromData(iniStr)
+	cfg, err := NewConfigProviderFromData(iniStr)
 	assert.NoError(t, err)
 
 	sec := cfg.Section("attachment")
```
+1
modules/test/context_tests.go
··· 26 26 ) 27 27 28 28 // MockContext mock context for unit tests 29 + // TODO: move this function to other packages, because it depends on "models" package 29 30 func MockContext(t *testing.T, path string) *context.Context { 30 31 resp := &mockResponseWriter{} 31 32 ctx := context.Context{
-12
modules/util/timer.go
··· 8 8 "time" 9 9 ) 10 10 11 - // StopTimer is a utility function to safely stop a time.Timer and clean its channel 12 - func StopTimer(t *time.Timer) bool { 13 - stopped := t.Stop() 14 - if !stopped { 15 - select { 16 - case <-t.C: 17 - default: 18 - } 19 - } 20 - return stopped 21 - } 22 - 23 11 func Debounce(d time.Duration) func(f func()) { 24 12 type debouncer struct { 25 13 mu sync.Mutex
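The timer.go hunk above deletes `util.StopTimer`, whose only job was to drain `t.C` after a failed `Stop`; its last caller (services/migrations/github.go, further down) now calls `timer.Stop()` directly. Draining is only needed when a later blocking receive on `t.C` could pick up a stale fire; when the channel is read exclusively inside a `select` alongside another case, a plain `Stop` suffices. A self-contained sketch of that point (not Gitea code):

```go
package main

import (
	"fmt"
	"time"
)

// stopEarly stops a timer before it can fire and reports whether Stop
// succeeded and whether any stale value is buffered on the channel.
func stopEarly() (stopped, fired bool) {
	t := time.NewTimer(time.Hour) // far in the future, cannot fire here
	stopped = t.Stop()
	select {
	case <-t.C:
		fired = true
	default: // channel is empty: nothing left behind to drain
	}
	return stopped, fired
}

func main() {
	stopped, fired := stopEarly()
	fmt.Println(stopped, fired) // true false
}
```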
+4 -174
routers/web/admin/admin.go
··· 8 8 "fmt" 9 9 "net/http" 10 10 "runtime" 11 - "strconv" 12 11 "time" 13 12 14 13 activities_model "code.gitea.io/gitea/models/activities" 15 14 "code.gitea.io/gitea/modules/base" 16 15 "code.gitea.io/gitea/modules/context" 17 - "code.gitea.io/gitea/modules/log" 18 16 "code.gitea.io/gitea/modules/process" 19 17 "code.gitea.io/gitea/modules/queue" 20 18 "code.gitea.io/gitea/modules/setting" ··· 25 23 ) 26 24 27 25 const ( 28 - tplDashboard base.TplName = "admin/dashboard" 29 - tplMonitor base.TplName = "admin/monitor" 30 - tplStacktrace base.TplName = "admin/stacktrace" 31 - tplQueue base.TplName = "admin/queue" 26 + tplDashboard base.TplName = "admin/dashboard" 27 + tplMonitor base.TplName = "admin/monitor" 28 + tplStacktrace base.TplName = "admin/stacktrace" 29 + tplQueueManage base.TplName = "admin/queue_manage" 32 30 ) 33 31 34 32 var sysStatus struct { ··· 188 186 "redirect": setting.AppSubURL + "/admin/monitor", 189 187 }) 190 188 } 191 - 192 - // Queue shows details for a specific queue 193 - func Queue(ctx *context.Context) { 194 - qid := ctx.ParamsInt64("qid") 195 - mq := queue.GetManager().GetManagedQueue(qid) 196 - if mq == nil { 197 - ctx.Status(http.StatusNotFound) 198 - return 199 - } 200 - ctx.Data["Title"] = ctx.Tr("admin.monitor.queue", mq.Name) 201 - ctx.Data["PageIsAdminMonitor"] = true 202 - ctx.Data["Queue"] = mq 203 - ctx.HTML(http.StatusOK, tplQueue) 204 - } 205 - 206 - // WorkerCancel cancels a worker group 207 - func WorkerCancel(ctx *context.Context) { 208 - qid := ctx.ParamsInt64("qid") 209 - mq := queue.GetManager().GetManagedQueue(qid) 210 - if mq == nil { 211 - ctx.Status(http.StatusNotFound) 212 - return 213 - } 214 - pid := ctx.ParamsInt64("pid") 215 - mq.CancelWorkers(pid) 216 - ctx.Flash.Info(ctx.Tr("admin.monitor.queue.pool.cancelling")) 217 - ctx.JSON(http.StatusOK, map[string]interface{}{ 218 - "redirect": setting.AppSubURL + "/admin/monitor/queue/" + strconv.FormatInt(qid, 10), 219 - }) 220 - } 221 - 222 - // Flush flushes a queue 223 - func Flush(ctx *context.Context) { 224 - qid := ctx.ParamsInt64("qid") 225 - mq := queue.GetManager().GetManagedQueue(qid) 226 - if mq == nil { 227 - ctx.Status(http.StatusNotFound) 228 - return 229 - } 230 - timeout, err := time.ParseDuration(ctx.FormString("timeout")) 231 - if err != nil { 232 - timeout = -1 233 - } 234 - ctx.Flash.Info(ctx.Tr("admin.monitor.queue.pool.flush.added", mq.Name)) 235 - go func() { 236 - err := mq.Flush(timeout) 237 - if err != nil { 238 - log.Error("Flushing failure for %s: Error %v", mq.Name, err) 239 - } 240 - }() 241 - ctx.Redirect(setting.AppSubURL + "/admin/monitor/queue/" + strconv.FormatInt(qid, 10)) 242 - } 243 - 244 - // Pause pauses a queue 245 - func Pause(ctx *context.Context) { 246 - qid := ctx.ParamsInt64("qid") 247 - mq := queue.GetManager().GetManagedQueue(qid) 248 - if mq == nil { 249 - ctx.Status(404) 250 - return 251 - } 252 - mq.Pause() 253 - ctx.Redirect(setting.AppSubURL + "/admin/monitor/queue/" + strconv.FormatInt(qid, 10)) 254 - } 255 - 256 - // Resume resumes a queue 257 - func Resume(ctx *context.Context) { 258 - qid := ctx.ParamsInt64("qid") 259 - mq := queue.GetManager().GetManagedQueue(qid) 260 - if mq == nil { 261 - ctx.Status(404) 262 - return 263 - } 264 - mq.Resume() 265 - ctx.Redirect(setting.AppSubURL + "/admin/monitor/queue/" + strconv.FormatInt(qid, 10)) 266 - } 267 - 268 - // AddWorkers adds workers to a worker group 269 - func AddWorkers(ctx *context.Context) { 270 - qid := ctx.ParamsInt64("qid") 271 - mq := queue.GetManager().GetManagedQueue(qid) 272 - if mq == nil { 273 - ctx.Status(http.StatusNotFound) 274 - return 275 - } 276 - number := ctx.FormInt("number") 277 - if number < 1 { 278 - ctx.Flash.Error(ctx.Tr("admin.monitor.queue.pool.addworkers.mustnumbergreaterzero")) 279 - ctx.Redirect(setting.AppSubURL + "/admin/monitor/queue/" + strconv.FormatInt(qid, 10)) 280 - return 281 - } 282 - timeout, err := time.ParseDuration(ctx.FormString("timeout")) 283 - if err != nil { 284 - ctx.Flash.Error(ctx.Tr("admin.monitor.queue.pool.addworkers.musttimeoutduration")) 285 - ctx.Redirect(setting.AppSubURL + "/admin/monitor/queue/" + strconv.FormatInt(qid, 10)) 286 - return 287 - } 288 - if _, ok := mq.Managed.(queue.ManagedPool); !ok { 289 - ctx.Flash.Error(ctx.Tr("admin.monitor.queue.pool.none")) 290 - ctx.Redirect(setting.AppSubURL + "/admin/monitor/queue/" + strconv.FormatInt(qid, 10)) 291 - return 292 - } 293 - mq.AddWorkers(number, timeout) 294 - ctx.Flash.Success(ctx.Tr("admin.monitor.queue.pool.added")) 295 - ctx.Redirect(setting.AppSubURL + "/admin/monitor/queue/" + strconv.FormatInt(qid, 10)) 296 - } 297 - 298 - // SetQueueSettings sets the maximum number of workers and other settings for this queue 299 - func SetQueueSettings(ctx *context.Context) { 300 - qid := ctx.ParamsInt64("qid") 301 - mq := queue.GetManager().GetManagedQueue(qid) 302 - if mq == nil { 303 - ctx.Status(http.StatusNotFound) 304 - return 305 - } 306 - if _, ok := mq.Managed.(queue.ManagedPool); !ok { 307 - ctx.Flash.Error(ctx.Tr("admin.monitor.queue.pool.none")) 308 - ctx.Redirect(setting.AppSubURL + "/admin/monitor/queue/" + strconv.FormatInt(qid, 10)) 309 - return 310 - } 311 - 312 - maxNumberStr := ctx.FormString("max-number") 313 - numberStr := ctx.FormString("number") 314 - timeoutStr := ctx.FormString("timeout") 315 - 316 - var err error 317 - var maxNumber, number int 318 - var timeout time.Duration 319 - if len(maxNumberStr) > 0 { 320 - maxNumber, err = strconv.Atoi(maxNumberStr) 321 - if err != nil { 322 - ctx.Flash.Error(ctx.Tr("admin.monitor.queue.settings.maxnumberworkers.error")) 323 - ctx.Redirect(setting.AppSubURL + "/admin/monitor/queue/" + strconv.FormatInt(qid, 10)) 324 - return 325 - } 326 - if maxNumber < -1 { 327 - maxNumber = -1 328 - } 329 - } else { 330 - maxNumber = mq.MaxNumberOfWorkers() 331 - } 332 - 333 - if len(numberStr) > 0 { 334 - number, err = strconv.Atoi(numberStr) 335 - if err != nil || number < 0 { 336 - ctx.Flash.Error(ctx.Tr("admin.monitor.queue.settings.numberworkers.error")) 337 - ctx.Redirect(setting.AppSubURL + "/admin/monitor/queue/" + strconv.FormatInt(qid, 10)) 338 - return 339 - } 340 - } else { 341 - number = mq.BoostWorkers() 342 - } 343 - 344 - if len(timeoutStr) > 0 { 345 - timeout, err = time.ParseDuration(timeoutStr) 346 - if err != nil { 347 - ctx.Flash.Error(ctx.Tr("admin.monitor.queue.settings.timeout.error")) 348 - ctx.Redirect(setting.AppSubURL + "/admin/monitor/queue/" + strconv.FormatInt(qid, 10)) 349 - return 350 - } 351 - } else { 352 - timeout = mq.BoostTimeout() 353 - } 354 - 355 - mq.SetPoolSettings(maxNumber, number, timeout) 356 - ctx.Flash.Success(ctx.Tr("admin.monitor.queue.settings.changed")) 357 - ctx.Redirect(setting.AppSubURL + "/admin/monitor/queue/" + strconv.FormatInt(qid, 10)) 358 - }
+59
routers/web/admin/queue.go
··· 1 + // Copyright 2023 The Gitea Authors. All rights reserved. 2 + // SPDX-License-Identifier: MIT 3 + 4 + package admin 5 + 6 + import ( 7 + "net/http" 8 + "strconv" 9 + 10 + "code.gitea.io/gitea/modules/context" 11 + "code.gitea.io/gitea/modules/queue" 12 + "code.gitea.io/gitea/modules/setting" 13 + ) 14 + 15 + // Queue shows details for a specific queue 16 + func Queue(ctx *context.Context) { 17 + qid := ctx.ParamsInt64("qid") 18 + mq := queue.GetManager().GetManagedQueue(qid) 19 + if mq == nil { 20 + ctx.Status(http.StatusNotFound) 21 + return 22 + } 23 + ctx.Data["Title"] = ctx.Tr("admin.monitor.queue", mq.GetName()) 24 + ctx.Data["PageIsAdminMonitor"] = true 25 + ctx.Data["Queue"] = mq 26 + ctx.HTML(http.StatusOK, tplQueueManage) 27 + } 28 + 29 + // QueueSet sets the maximum number of workers and other settings for this queue 30 + func QueueSet(ctx *context.Context) { 31 + qid := ctx.ParamsInt64("qid") 32 + mq := queue.GetManager().GetManagedQueue(qid) 33 + if mq == nil { 34 + ctx.Status(http.StatusNotFound) 35 + return 36 + } 37 + 38 + maxNumberStr := ctx.FormString("max-number") 39 + 40 + var err error 41 + var maxNumber int 42 + if len(maxNumberStr) > 0 { 43 + maxNumber, err = strconv.Atoi(maxNumberStr) 44 + if err != nil { 45 + ctx.Flash.Error(ctx.Tr("admin.monitor.queue.settings.maxnumberworkers.error")) 46 + ctx.Redirect(setting.AppSubURL + "/admin/monitor/queue/" + strconv.FormatInt(qid, 10)) 47 + return 48 + } 49 + if maxNumber < -1 { 50 + maxNumber = -1 51 + } 52 + } else { 53 + maxNumber = mq.GetWorkerMaxNumber() 54 + } 55 + 56 + mq.SetWorkerMaxNumber(maxNumber) 57 + ctx.Flash.Success(ctx.Tr("admin.monitor.queue.settings.changed")) 58 + ctx.Redirect(setting.AppSubURL + "/admin/monitor/queue/" + strconv.FormatInt(qid, 10)) 59 + }
+1 -6
routers/web/web.go
··· 551 551 m.Post("/cancel/{pid}", admin.MonitorCancel) 552 552 m.Group("/queue/{qid}", func() { 553 553 m.Get("", admin.Queue) 554 - m.Post("/set", admin.SetQueueSettings) 555 - m.Post("/add", admin.AddWorkers) 556 - m.Post("/cancel/{pid}", admin.WorkerCancel) 557 - m.Post("/flush", admin.Flush) 558 - m.Post("/pause", admin.Pause) 559 - m.Post("/resume", admin.Resume) 554 + m.Post("/set", admin.QueueSet) 560 555 }) 561 556 }) 562 557
+1 -1
services/actions/init.go
··· 15 15 return 16 16 } 17 17 18 - jobEmitterQueue = queue.CreateUniqueQueue("actions_ready_job", jobEmitterQueueHandle, new(jobUpdate)) 18 + jobEmitterQueue = queue.CreateUniqueQueue("actions_ready_job", jobEmitterQueueHandler) 19 19 go graceful.GetManager().RunWithShutdownFns(jobEmitterQueue.Run) 20 20 21 21 notification.RegisterNotifier(NewNotifier())
+5 -6
services/actions/job_emitter.go
··· 16 16 "xorm.io/builder" 17 17 ) 18 18 19 - var jobEmitterQueue queue.UniqueQueue 19 + var jobEmitterQueue *queue.WorkerPoolQueue[*jobUpdate] 20 20 21 21 type jobUpdate struct { 22 22 RunID int64 ··· 32 32 return err 33 33 } 34 34 35 - func jobEmitterQueueHandle(data ...queue.Data) []queue.Data { 35 + func jobEmitterQueueHandler(items ...*jobUpdate) []*jobUpdate { 36 36 ctx := graceful.GetManager().ShutdownContext() 37 - var ret []queue.Data 38 - for _, d := range data { 39 - update := d.(*jobUpdate) 37 + var ret []*jobUpdate 38 + for _, update := range items { 40 39 if err := checkJobsOfRun(ctx, update.RunID); err != nil { 41 - ret = append(ret, d) 40 + ret = append(ret, update) 42 41 } 43 42 } 44 43 return ret
+8 -10
services/automerge/automerge.go
··· 25 25 ) 26 26 27 27 // prAutoMergeQueue represents a queue to handle update pull request tests 28 - var prAutoMergeQueue queue.UniqueQueue 28 + var prAutoMergeQueue *queue.WorkerPoolQueue[string] 29 29 30 30 // Init runs the task queue to that handles auto merges 31 31 func Init() error { 32 - prAutoMergeQueue = queue.CreateUniqueQueue("pr_auto_merge", handle, "") 32 + prAutoMergeQueue = queue.CreateUniqueQueue("pr_auto_merge", handler) 33 33 if prAutoMergeQueue == nil { 34 34 return fmt.Errorf("Unable to create pr_auto_merge Queue") 35 35 } ··· 38 38 } 39 39 40 40 // handle passed PR IDs and test the PRs 41 - func handle(data ...queue.Data) []queue.Data { 42 - for _, d := range data { 41 + func handler(items ...string) []string { 42 + for _, s := range items { 43 43 var id int64 44 44 var sha string 45 - if _, err := fmt.Sscanf(d.(string), "%d_%s", &id, &sha); err != nil { 46 - log.Error("could not parse data from pr_auto_merge queue (%v): %v", d, err) 45 + if _, err := fmt.Sscanf(s, "%d_%s", &id, &sha); err != nil { 46 + log.Error("could not parse data from pr_auto_merge queue (%v): %v", s, err) 47 47 continue 48 48 } 49 49 handlePull(id, sha) ··· 52 52 } 53 53 54 54 func addToQueue(pr *issues_model.PullRequest, sha string) { 55 - if err := prAutoMergeQueue.PushFunc(fmt.Sprintf("%d_%s", pr.ID, sha), func() error { 56 - log.Trace("Adding pullID: %d to the pull requests patch checking queue with sha %s", pr.ID, sha) 57 - return nil 58 - }); err != nil { 55 + log.Trace("Adding pullID: %d to the pull requests patch checking queue with sha %s", pr.ID, sha) 56 + if err := prAutoMergeQueue.Push(fmt.Sprintf("%d_%s", pr.ID, sha)); err != nil { 59 57 log.Error("Error adding pullID: %d to the pull requests patch checking queue %v", pr.ID, err) 60 58 } 61 59 }
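The automerge queue above packs the PR ID and head SHA into one string item with `"%d_%s"` and recovers both in the handler with `fmt.Sscanf`. The round trip, extracted into a standalone sketch (function names here are mine, not Gitea's):

```go
package main

import "fmt"

// packItem and parseItem mirror the "%d_%s" encoding the pr_auto_merge
// queue uses for its string items.
func packItem(id int64, sha string) string {
	return fmt.Sprintf("%d_%s", id, sha)
}

func parseItem(s string) (id int64, sha string, err error) {
	// %d consumes the ID, the literal "_" must match, %s takes the rest.
	_, err = fmt.Sscanf(s, "%d_%s", &id, &sha)
	return id, sha, err
}

func main() {
	s := packItem(42, "deadbeef")
	id, sha, err := parseItem(s)
	fmt.Println(s, id, sha, err) // 42_deadbeef 42 deadbeef <nil>
}
```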
-2
services/convert/utils_test.go
··· 7 7 "testing" 8 8 9 9 "github.com/stretchr/testify/assert" 10 - 11 - _ "github.com/mattn/go-sqlite3" 12 10 ) 13 11 14 12 func TestToCorrectPageSize(t *testing.T) {
+4 -5
services/mailer/mailer.go
··· 378 378 return nil 379 379 } 380 380 381 - var mailQueue queue.Queue 381 + var mailQueue *queue.WorkerPoolQueue[*Message] 382 382 383 383 // Sender sender for sending mail synchronously 384 384 var Sender gomail.Sender ··· 401 401 Sender = &smtpSender{} 402 402 } 403 403 404 - mailQueue = queue.CreateQueue("mail", func(data ...queue.Data) []queue.Data { 405 - for _, datum := range data { 406 - msg := datum.(*Message) 404 + mailQueue = queue.CreateSimpleQueue("mail", func(items ...*Message) []*Message { 405 + for _, msg := range items { 407 406 gomailMsg := msg.ToMessage() 408 407 log.Trace("New e-mail sending request %s: %s", gomailMsg.GetHeader("To"), msg.Info) 409 408 if err := gomail.Send(Sender, gomailMsg); err != nil { ··· 413 412 } 414 413 } 415 414 return nil 416 - }, &Message{}) 415 + }) 417 416 418 417 go graceful.GetManager().RunWithShutdownFns(mailQueue.Run) 419 418
+1 -2
services/migrations/github.go
··· 19 19 base "code.gitea.io/gitea/modules/migration" 20 20 "code.gitea.io/gitea/modules/proxy" 21 21 "code.gitea.io/gitea/modules/structs" 22 - "code.gitea.io/gitea/modules/util" 23 22 24 23 "github.com/google/go-github/v51/github" 25 24 "golang.org/x/oauth2" ··· 164 163 timer := time.NewTimer(time.Until(g.rates[g.curClientIdx].Reset.Time)) 165 164 select { 166 165 case <-g.ctx.Done(): 167 - util.StopTimer(timer) 166 + timer.Stop() 168 167 return 169 168 case <-timer.C: 170 169 }
+3 -4
services/mirror/mirror.go
··· 120 120 return nil 121 121 } 122 122 123 - func queueHandle(data ...queue.Data) []queue.Data { 124 - for _, datum := range data { 125 - req := datum.(*mirror_module.SyncRequest) 123 + func queueHandler(items ...*mirror_module.SyncRequest) []*mirror_module.SyncRequest { 124 + for _, req := range items { 126 125 doMirrorSync(graceful.GetManager().ShutdownContext(), req) 127 126 } 128 127 return nil ··· 130 129 131 130 // InitSyncMirrors initializes a go routine to sync the mirrors 132 131 func InitSyncMirrors() { 133 - mirror_module.StartSyncMirrors(queueHandle) 132 + mirror_module.StartSyncMirrors(queueHandler) 134 133 }
+15 -20
services/pull/check.go
··· 30 30 ) 31 31 32 32 // prPatchCheckerQueue represents a queue to handle update pull request tests 33 - var prPatchCheckerQueue queue.UniqueQueue 33 + var prPatchCheckerQueue *queue.WorkerPoolQueue[string] 34 34 35 35 var ( 36 36 ErrIsClosed = errors.New("pull is closed") ··· 44 44 45 45 // AddToTaskQueue adds itself to pull request test task queue. 46 46 func AddToTaskQueue(pr *issues_model.PullRequest) { 47 - err := prPatchCheckerQueue.PushFunc(strconv.FormatInt(pr.ID, 10), func() error { 48 - pr.Status = issues_model.PullRequestStatusChecking 49 - err := pr.UpdateColsIfNotMerged(db.DefaultContext, "status") 50 - if err != nil { 51 - log.Error("AddToTaskQueue(%-v).UpdateCols.(add to queue): %v", pr, err) 52 - } else { 53 - log.Trace("Adding %-v to the test pull requests queue", pr) 54 - } 55 - return err 56 - }) 47 + pr.Status = issues_model.PullRequestStatusChecking 48 + err := pr.UpdateColsIfNotMerged(db.DefaultContext, "status") 49 + if err != nil { 50 + log.Error("AddToTaskQueue(%-v).UpdateCols.(add to queue): %v", pr, err) 51 + return 52 + } 53 + log.Trace("Adding %-v to the test pull requests queue", pr) 54 + err = prPatchCheckerQueue.Push(strconv.FormatInt(pr.ID, 10)) 57 55 if err != nil && err != queue.ErrAlreadyInQueue { 58 56 log.Error("Error adding %-v to the test pull requests queue: %v", pr, err) 59 57 } ··· 315 313 case <-ctx.Done(): 316 314 return 317 315 default: 318 - if err := prPatchCheckerQueue.PushFunc(strconv.FormatInt(prID, 10), func() error { 319 - log.Trace("Adding PR[%d] to the pull requests patch checking queue", prID) 320 - return nil 321 - }); err != nil { 316 + log.Trace("Adding PR[%d] to the pull requests patch checking queue", prID) 317 + if err := prPatchCheckerQueue.Push(strconv.FormatInt(prID, 10)); err != nil { 322 318 log.Error("Error adding PR[%d] to the pull requests patch checking queue %v", prID, err) 323 319 } 324 320 } ··· 326 322 } 327 323 328 324 // handle passed PR IDs and test the PRs 329 - func handle(data ...queue.Data) []queue.Data { 330 - for _, datum := range data { 331 - id, _ := strconv.ParseInt(datum.(string), 10, 64) 332 - 325 + func handler(items ...string) []string { 326 + for _, s := range items { 327 + id, _ := strconv.ParseInt(s, 10, 64) 333 328 testPR(id) 334 329 } 335 330 return nil ··· 389 384 390 385 // Init runs the task queue to test all the checking status pull requests 391 386 func Init() error { 392 - prPatchCheckerQueue = queue.CreateUniqueQueue("pr_patch_checker", handle, "") 387 - prPatchCheckerQueue = queue.CreateUniqueQueue("pr_patch_checker", handler) 393 388 394 389 if prPatchCheckerQueue == nil { 395 390 return fmt.Errorf("Unable to create pr_patch_checker Queue")
+11 -18
services/pull/check_test.go
··· 12 12 issues_model "code.gitea.io/gitea/models/issues" 13 13 "code.gitea.io/gitea/models/unittest" 14 14 "code.gitea.io/gitea/modules/queue" 15 + "code.gitea.io/gitea/modules/setting" 15 16 16 17 "github.com/stretchr/testify/assert" 17 18 ) ··· 20 21 assert.NoError(t, unittest.PrepareTestDatabase()) 21 22 22 23 idChan := make(chan int64, 10) 23 - 24 - q, err := queue.NewChannelUniqueQueue(func(data ...queue.Data) []queue.Data { 25 - for _, datum := range data { 26 - id, _ := strconv.ParseInt(datum.(string), 10, 64) 24 + testHandler := func(items ...string) []string { 25 + for _, s := range items { 26 + id, _ := strconv.ParseInt(s, 10, 64) 27 27 idChan <- id 28 28 } 29 29 return nil 30 - }, queue.ChannelUniqueQueueConfiguration{ 31 - WorkerPoolConfiguration: queue.WorkerPoolConfiguration{ 32 - QueueLength: 10, 33 - BatchLength: 1, 34 - Name: "temporary-queue", 35 - }, 36 - Workers: 1, 37 - }, "") 30 + } 31 + 32 + cfg, err := setting.GetQueueSettings(setting.CfgProvider, "pr_patch_checker") 33 + assert.NoError(t, err) 34 + prPatchCheckerQueue, err = queue.NewWorkerPoolQueueBySetting("pr_patch_checker", cfg, testHandler, true) 38 35 assert.NoError(t, err) 39 - 40 - queueShutdown := []func(){} 41 - queueTerminate := []func(){} 42 - 43 - prPatchCheckerQueue = q.(queue.UniqueQueue) 44 36 45 37 pr := unittest.AssertExistsAndLoadBean(t, &issues_model.PullRequest{ID: 2}) 46 38 AddToTaskQueue(pr) ··· 54 46 assert.True(t, has) 55 47 assert.NoError(t, err) 56 48 57 - prPatchCheckerQueue.Run(func(shutdown func()) { 49 + var queueShutdown, queueTerminate []func() 50 + go prPatchCheckerQueue.Run(func(shutdown func()) { 58 51 queueShutdown = append(queueShutdown, shutdown) 59 52 }, func(terminate func()) { 60 53 queueTerminate = append(queueTerminate, terminate)
+5 -10
services/repository/archiver/archiver.go
··· 295 295 return doArchive(request) 296 296 } 297 297 298 - var archiverQueue queue.UniqueQueue 298 + var archiverQueue *queue.WorkerPoolQueue[*ArchiveRequest] 299 299 300 300 // Init initlize archive 301 301 func Init() error { 302 - handler := func(data ...queue.Data) []queue.Data { 303 - for _, datum := range data { 304 - archiveReq, ok := datum.(*ArchiveRequest) 305 - if !ok { 306 - log.Error("Unable to process provided datum: %v - not possible to cast to IndexerData", datum) 307 - continue 308 - } 302 + handler := func(items ...*ArchiveRequest) []*ArchiveRequest { 303 + for _, archiveReq := range items { 309 304 log.Trace("ArchiverData Process: %#v", archiveReq) 310 305 if _, err := doArchive(archiveReq); err != nil { 311 - log.Error("Archive %v failed: %v", datum, err) 306 + log.Error("Archive %v failed: %v", archiveReq, err) 312 307 } 313 308 } 314 309 return nil 315 310 } 316 311 317 - archiverQueue = queue.CreateUniqueQueue("repo-archive", handler, new(ArchiveRequest)) 312 + archiverQueue = queue.CreateUniqueQueue("repo-archive", handler) 318 313 if archiverQueue == nil { 319 314 return errors.New("unable to create codes indexer queue") 320 315 }
+4 -5
services/repository/push.go
··· 29 29 ) 30 30 31 31 // pushQueue represents a queue to handle update pull request tests 32 - var pushQueue queue.Queue 32 + var pushQueue *queue.WorkerPoolQueue[[]*repo_module.PushUpdateOptions] 33 33 34 34 // handle passed PR IDs and test the PRs 35 - func handle(data ...queue.Data) []queue.Data { 36 - for _, datum := range data { 37 - opts := datum.([]*repo_module.PushUpdateOptions) 35 + func handler(items ...[]*repo_module.PushUpdateOptions) [][]*repo_module.PushUpdateOptions { 36 + for _, opts := range items { 38 37 if err := pushUpdates(opts); err != nil { 39 38 log.Error("pushUpdate failed: %v", err) 40 39 } ··· 43 42 } 44 43 45 44 func initPushQueue() error { 46 - pushQueue = queue.CreateQueue("push_update", handle, []*repo_module.PushUpdateOptions{}) 45 + pushQueue = queue.CreateSimpleQueue("push_update", handler) 47 46 if pushQueue == nil { 48 47 return errors.New("unable to create push_update Queue") 49 48 }
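Note in the push.go hunk above that the queue item type is itself a slice, `[]*repo_module.PushUpdateOptions`: one git push carries several ref updates that must be processed as a group, so the handler's batch is a batch of batches. A standalone sketch of that shape (`pushUpdate` is a hypothetical stand-in for the Gitea type):

```go
package main

import "fmt"

// pushUpdate stands in for repo_module.PushUpdateOptions.
type pushUpdate struct{ RefName string }

// handler mirrors the push_update handler shape: each queue item is the
// full slice of ref updates from one push. Returning nil means no retries.
func handler(items ...[]pushUpdate) [][]pushUpdate {
	for _, opts := range items {
		fmt.Println("processing", len(opts), "ref updates")
	}
	return nil
}

func main() {
	// One push updating a branch and a tag arrives as a single item.
	handler([]pushUpdate{{"refs/heads/main"}, {"refs/tags/v1.0"}})
}
```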
+4 -5
services/task/task.go
··· 23 23 ) 24 24 25 25 // taskQueue is a global queue of tasks 26 - var taskQueue queue.Queue 26 + var taskQueue *queue.WorkerPoolQueue[*admin_model.Task] 27 27 28 28 // Run a task 29 29 func Run(t *admin_model.Task) error { ··· 37 37 38 38 // Init will start the service to get all unfinished tasks and run them 39 39 func Init() error { 40 - taskQueue = queue.CreateQueue("task", handle, &admin_model.Task{}) 40 + taskQueue = queue.CreateSimpleQueue("task", handler) 41 41 42 42 if taskQueue == nil { 43 43 return fmt.Errorf("Unable to create Task Queue") ··· 48 48 return nil 49 49 } 50 50 51 - func handle(data ...queue.Data) []queue.Data { 52 - for _, datum := range data { 53 - task := datum.(*admin_model.Task) 51 + func handler(items ...*admin_model.Task) []*admin_model.Task { 52 + for _, task := range items { 54 53 if err := Run(task); err != nil { 55 54 log.Error("Run task failed: %v", err) 56 55 }
+1 -1
services/webhook/deliver.go
··· 283 283 }, 284 284 } 285 285 286 - hookQueue = queue.CreateUniqueQueue("webhook_sender", handle, int64(0)) 286 + hookQueue = queue.CreateUniqueQueue("webhook_sender", handler) 287 287 if hookQueue == nil { 288 288 return fmt.Errorf("Unable to create webhook_sender Queue") 289 289 }
+5 -5
services/webhook/webhook.go
··· 77 77 } 78 78 79 79 // hookQueue is a global queue of web hooks 80 - var hookQueue queue.UniqueQueue 80 + var hookQueue *queue.WorkerPoolQueue[int64] 81 81 82 82 // getPayloadBranch returns branch for hook event, if applicable. 83 83 func getPayloadBranch(p api.Payloader) string { ··· 105 105 } 106 106 107 107 // handle delivers hook tasks 108 - func handle(data ...queue.Data) []queue.Data { 108 + func handler(items ...int64) []int64 { 109 109 ctx := graceful.GetManager().HammerContext() 110 110 111 - for _, taskID := range data { 112 - task, err := webhook_model.GetHookTaskByID(ctx, taskID.(int64)) 111 + for _, taskID := range items { 112 + task, err := webhook_model.GetHookTaskByID(ctx, taskID) 113 113 if err != nil { 114 - log.Error("GetHookTaskByID[%d] failed: %v", taskID.(int64), err) 114 + log.Error("GetHookTaskByID[%d] failed: %v", taskID, err) 115 115 continue 116 116 } 117 117
+1 -30
templates/admin/monitor.tmpl
··· 1 1 {{template "admin/layout_head" (dict "ctxData" . "pageClass" "admin monitor")}} 2 2 <div class="admin-setting-content"> 3 3 {{template "admin/cron" .}} 4 - <h4 class="ui top attached header"> 5 - {{.locale.Tr "admin.monitor.queues"}} 6 - </h4> 7 - <div class="ui attached table segment"> 8 - <table class="ui very basic striped table unstackable"> 9 - <thead> 10 - <tr> 11 - <th>{{.locale.Tr "admin.monitor.queue.name"}}</th> 12 - <th>{{.locale.Tr "admin.monitor.queue.type"}}</th> 13 - <th>{{.locale.Tr "admin.monitor.queue.exemplar"}}</th> 14 - <th>{{.locale.Tr "admin.monitor.queue.numberworkers"}}</th> 15 - <th>{{.locale.Tr "admin.monitor.queue.numberinqueue"}}</th> 16 - <th></th> 17 - </tr> 18 - </thead> 19 - <tbody> 20 - {{range .Queues}} 21 - <tr> 22 - <td>{{.Name}}</td> 23 - <td>{{.Type}}</td> 24 - <td>{{.ExemplarType}}</td> 25 - <td>{{$sum := .NumberOfWorkers}}{{if lt $sum 0}}-{{else}}{{$sum}}{{end}}</td> 26 - <td>{{$sum = .NumberInQueue}}{{if lt $sum 0}}-{{else}}{{$sum}}{{end}}</td> 27 - <td><a href="{{$.Link}}/queue/{{.QID}}" class="button">{{if lt $sum 0}}{{$.locale.Tr "admin.monitor.queue.review"}}{{else}}{{$.locale.Tr "admin.monitor.queue.review_add"}}{{end}}</a> 28 - </tr> 29 - {{end}} 30 - </tbody> 31 - </table> 32 - </div> 33 - 4 + {{template "admin/queue" .}} 34 5 {{template "admin/process" .}} 35 6 </div> 36 7
+27 -190
templates/admin/queue.tmpl
··· 1 - {{template "admin/layout_head" (dict "ctxData" . "pageClass" "admin monitor")}} 2 - <div class="admin-setting-content"> 3 - <h4 class="ui top attached header"> 4 - {{.locale.Tr "admin.monitor.queue" .Queue.Name}} 5 - </h4> 6 - <div class="ui attached table segment"> 7 - <table class="ui very basic striped table"> 8 - <thead> 9 - <tr> 10 - <th>{{.locale.Tr "admin.monitor.queue.name"}}</th> 11 - <th>{{.locale.Tr "admin.monitor.queue.type"}}</th> 12 - <th>{{.locale.Tr "admin.monitor.queue.exemplar"}}</th> 13 - <th>{{.locale.Tr "admin.monitor.queue.numberworkers"}}</th> 14 - <th>{{.locale.Tr "admin.monitor.queue.maxnumberworkers"}}</th> 15 - <th>{{.locale.Tr "admin.monitor.queue.numberinqueue"}}</th> 16 - </tr> 17 - </thead> 18 - <tbody> 19 - <tr> 20 - <td>{{.Queue.Name}}</td> 21 - <td>{{.Queue.Type}}</td> 22 - <td>{{.Queue.ExemplarType}}</td> 23 - <td>{{$sum := .Queue.NumberOfWorkers}}{{if lt $sum 0}}-{{else}}{{$sum}}{{end}}</td> 24 - <td>{{if lt $sum 0}}-{{else}}{{.Queue.MaxNumberOfWorkers}}{{end}}</td> 25 - <td>{{$sum = .Queue.NumberInQueue}}{{if lt $sum 0}}-{{else}}{{$sum}}{{end}}</td> 26 - </tr> 27 - </tbody> 28 - </table> 29 - </div> 30 - {{if lt $sum 0}} 31 - <h4 class="ui top attached header"> 32 - {{.locale.Tr "admin.monitor.queue.nopool.title"}} 33 - </h4> 34 - <div class="ui attached segment"> 35 - {{if eq .Queue.Type "wrapped"}} 36 - <p>{{.locale.Tr "admin.monitor.queue.wrapped.desc"}}</p> 37 - {{else if eq .Queue.Type "persistable-channel"}} 38 - <p>{{.locale.Tr "admin.monitor.queue.persistable-channel.desc"}}</p> 39 - {{else}} 40 - <p>{{.locale.Tr "admin.monitor.queue.nopool.desc"}}</p> 41 - {{end}} 42 - </div> 43 - {{else}} 44 - <h4 class="ui top attached header"> 45 - {{.locale.Tr "admin.monitor.queue.settings.title"}} 46 - </h4> 47 - <div class="ui attached segment"> 48 - <p>{{.locale.Tr "admin.monitor.queue.settings.desc"}}</p> 49 - <form method="POST" action="{{.Link}}/set"> 50 - {{$.CsrfTokenHtml}} 51 - <div class="ui form"> 52 - <div class="inline field"> 53 - <label for="max-number">{{.locale.Tr "admin.monitor.queue.settings.maxnumberworkers"}}</label> 54 - <input name="max-number" type="text" placeholder="{{.locale.Tr "admin.monitor.queue.settings.maxnumberworkers.placeholder" .Queue.MaxNumberOfWorkers}}"> 55 - </div> 56 - <div class="inline field"> 57 - <label for="timeout">{{.locale.Tr "admin.monitor.queue.settings.timeout"}}</label> 58 - <input name="timeout" type="text" placeholder="{{.locale.Tr "admin.monitor.queue.settings.timeout.placeholder" .Queue.BoostTimeout}}"> 59 - </div> 60 - <div class="inline field"> 61 - <label for="number">{{.locale.Tr "admin.monitor.queue.settings.numberworkers"}}</label> 62 - <input name="number" type="text" placeholder="{{.locale.Tr "admin.monitor.queue.settings.numberworkers.placeholder" .Queue.BoostWorkers}}"> 63 - </div> 64 - <div class="inline field"> 65 - <label>{{.locale.Tr "admin.monitor.queue.settings.blocktimeout"}}</label> 66 - <span>{{.locale.Tr "admin.monitor.queue.settings.blocktimeout.value" .Queue.BlockTimeout}}</span> 67 - </div> 68 - <button class="ui submit button">{{.locale.Tr "admin.monitor.queue.settings.submit"}}</button> 69 - </div> 70 - </form> 71 - </div> 72 - <h4 class="ui top attached header"> 73 - {{.locale.Tr "admin.monitor.queue.pool.addworkers.title"}} 74 - </h4> 75 - <div class="ui attached segment"> 76 - <p>{{.locale.Tr "admin.monitor.queue.pool.addworkers.desc"}}</p> 77 - <form method="POST" action="{{.Link}}/add"> 78 - {{$.CsrfTokenHtml}} 79 - <div class="ui form"> 80 - <div class="fields"> 81 - <div class="field"> 82 - <label>{{.locale.Tr "admin.monitor.queue.numberworkers"}}</label> 83 - <input name="number" type="text" placeholder="{{.locale.Tr "admin.monitor.queue.pool.addworkers.numberworkers.placeholder"}}"> 84 - </div> 85 - <div class="field"> 86 - <label>{{.locale.Tr "admin.monitor.queue.pool.timeout"}}</label> 87 - <input name="timeout" type="text" placeholder="{{.locale.Tr "admin.monitor.queue.pool.addworkers.timeout.placeholder"}}"> 88 - </div> 89 - </div> 90 - <button class="ui submit button">{{.locale.Tr "admin.monitor.queue.pool.addworkers.submit"}}</button> 91 - </div> 92 - </form> 93 - </div> 94 - {{if .Queue.Pausable}} 95 - {{if .Queue.IsPaused}} 96 - <h4 class="ui top attached header"> 97 - {{.locale.Tr "admin.monitor.queue.pool.resume.title"}} 98 - </h4> 99 - <div class="ui attached segment"> 100 - <p>{{.locale.Tr "admin.monitor.queue.pool.resume.desc"}}</p> 101 - <form method="POST" action="{{.Link}}/resume"> 102 - {{$.CsrfTokenHtml}} 103 - <div class="ui form"> 104 - <button class="ui submit button">{{.locale.Tr "admin.monitor.queue.pool.resume.submit"}}</button> 105 - </div> 106 - </form> 107 - </div> 108 - {{else}} 109 - <h4 class="ui top attached header"> 110 - {{.locale.Tr "admin.monitor.queue.pool.pause.title"}} 111 - </h4> 112 - <div class="ui attached segment"> 113 - <p>{{.locale.Tr "admin.monitor.queue.pool.pause.desc"}}</p> 114 - <form method="POST" action="{{.Link}}/pause"> 115 - {{$.CsrfTokenHtml}} 116 - <div class="ui form"> 117 - <button class="ui submit button">{{.locale.Tr "admin.monitor.queue.pool.pause.submit"}}</button> 118 - </div> 119 - </form> 120 - </div> 121 - {{end}} 1 + <h4 class="ui top attached header"> 2 + {{.locale.Tr "admin.monitor.queues"}} 3 + </h4> 4 + <div class="ui attached table segment"> 5 + <table class="ui very basic striped table unstackable"> 6 + <thead> 7 + <tr> 8 + <th>{{.locale.Tr "admin.monitor.queue.name"}}</th> 9 + <th>{{.locale.Tr "admin.monitor.queue.type"}}</th> 10 + <th>{{.locale.Tr "admin.monitor.queue.exemplar"}}</th> 11 + <th>{{.locale.Tr "admin.monitor.queue.numberworkers"}}</th> 12 + <th>{{.locale.Tr "admin.monitor.queue.numberinqueue"}}</th> 13 + <th></th> 14 + </tr> 15 + </thead> 16 + <tbody> 17 + {{range $qid, $q := .Queues}} 18 + <tr> 19 + <td>{{$q.GetName}}</td> 20 + <td>{{$q.GetType}}</td> 21 + <td>{{$q.GetItemTypeName}}</td> 22 + <td>{{$sum := $q.GetWorkerNumber}}{{if lt $sum 0}}-{{else}}{{$sum}}{{end}}</td> 23 + <td>{{$sum = $q.GetQueueItemNumber}}{{if lt $sum 0}}-{{else}}{{$sum}}{{end}}</td> 24 + <td><a href="{{$.Link}}/queue/{{$qid}}" class="button">{{if lt $sum 0}}{{$.locale.Tr "admin.monitor.queue.review"}}{{else}}{{$.locale.Tr "admin.monitor.queue.review_add"}}{{end}}</a> 25 + </tr> 122 26 {{end}} 123 - <h4 class="ui top attached header"> 124 - {{.locale.Tr "admin.monitor.queue.pool.flush.title"}} 125 - </h4> 126 - <div class="ui attached segment"> 127 - <p>{{.locale.Tr "admin.monitor.queue.pool.flush.desc"}}</p> 128 - <form method="POST" action="{{.Link}}/flush"> 129 - {{$.CsrfTokenHtml}} 130 - <div class="ui form"> 131 - <div class="fields"> 132 - <div class="field"> 133 - <label>{{.locale.Tr "admin.monitor.queue.pool.timeout"}}</label> 134 - <input name="timeout" type="text" placeholder="{{.locale.Tr "admin.monitor.queue.pool.addworkers.timeout.placeholder"}}"> 135 - </div> 136 - </div> 137 - <button class="ui submit button">{{.locale.Tr "admin.monitor.queue.pool.flush.submit"}}</button> 138 - </div> 139 - </form> 140 - </div> 141 - <h4 class="ui top attached header"> 142 - {{.locale.Tr "admin.monitor.queue.pool.workers.title"}} 143 - </h4> 144 - <div class="ui attached table segment"> 145 - <table class="ui very basic striped table"> 146 - <thead> 147 - <tr> 148 - <th>{{.locale.Tr "admin.monitor.queue.numberworkers"}}</th> 149 - <th>{{.locale.Tr "admin.monitor.start"}}</th> 150 - <th>{{.locale.Tr "admin.monitor.queue.pool.timeout"}}</th> 151 - <th></th> 152 - </tr> 153 - </thead> 154 - <tbody> 155 - {{range .Queue.Workers}} 156 - <tr> 157 - <td>{{.Workers}}{{if .IsFlusher}}<span title="{{$.locale.Tr "admin.monitor.queue.flush"}}">{{svg "octicon-sync"}}</span>{{end}}</td> 158 - <td>{{DateTime "full" .Start}}</td> 159 - <td>{{if .HasTimeout}}{{DateTime "full" .Timeout}}{{else}}-{{end}}</td> 160 - <td> 161 - <a class="delete-button" href="" data-url="{{$.Link}}/cancel/{{.PID}}" data-id="{{.PID}}" data-name="{{.Workers}}" title="{{$.locale.Tr "remove"}}">{{svg "octicon-trash"}}</a> 162 - </td> 163 - </tr> 164 - {{else}} 165 - <tr> 166 - <td colspan="4">{{.locale.Tr "admin.monitor.queue.pool.workers.none"}} 167 - </tr> 168 - {{end}} 169 - </tbody> 170 - </table> 171 - </div> 172 - {{end}} 173 - <h4 class="ui top attached header"> 174 - {{.locale.Tr "admin.monitor.queue.configuration"}} 175 - </h4> 176 - <div class="ui attached segment"> 177 - <pre>{{JsonUtils.PrettyIndent .Queue.Configuration}}</pre> 178 - </div> 179 - </div> 180 - 181 - <div class="ui g-modal-confirm delete modal"> 182 - <div class="header"> 183 - {{.locale.Tr "admin.monitor.queue.pool.cancel"}} 184 - </div> 185 - <div class="content"> 186 - <p>{{$.locale.Tr "admin.monitor.queue.pool.cancel_notices" `<span class="name"></span>` | Safe}}</p> 187 - <p>{{$.locale.Tr "admin.monitor.queue.pool.cancel_desc"}}</p> 188 - </div> 189 - {{template "base/modal_actions_confirm" .}} 27 + </tbody> 28 + </table> 190 29 </div> 191 - 192 - {{template "admin/layout_footer" .}}
+48
templates/admin/queue_manage.tmpl
```diff
+{{template "admin/layout_head" (dict "ctxData" . "pageClass" "admin monitor")}}
+<div class="admin-setting-content">
+  <h4 class="ui top attached header">
+    {{.locale.Tr "admin.monitor.queue" .Queue.GetName}}
+  </h4>
+  <div class="ui attached table segment">
+    <table class="ui very basic striped table">
+      <thead>
+        <tr>
+          <th>{{.locale.Tr "admin.monitor.queue.name"}}</th>
+          <th>{{.locale.Tr "admin.monitor.queue.type"}}</th>
+          <th>{{.locale.Tr "admin.monitor.queue.exemplar"}}</th>
+          <th>{{.locale.Tr "admin.monitor.queue.numberworkers"}}</th>
+          <th>{{.locale.Tr "admin.monitor.queue.maxnumberworkers"}}</th>
+          <th>{{.locale.Tr "admin.monitor.queue.numberinqueue"}}</th>
+        </tr>
+      </thead>
+      <tbody>
+        <tr>
+          <td>{{.Queue.GetName}}</td>
+          <td>{{.Queue.GetType}}</td>
+          <td>{{.Queue.GetItemTypeName}}</td>
+          <td>{{$sum := .Queue.GetWorkerNumber}}{{if lt $sum 0}}-{{else}}{{$sum}}{{end}}</td>
+          <td>{{if lt $sum 0}}-{{else}}{{.Queue.GetWorkerMaxNumber}}{{end}}</td>
+          <td>{{$sum = .Queue.GetQueueItemNumber}}{{if lt $sum 0}}-{{else}}{{$sum}}{{end}}</td>
+        </tr>
+      </tbody>
+    </table>
+  </div>
+
+  <h4 class="ui top attached header">
+    {{.locale.Tr "admin.monitor.queue.settings.title"}}
+  </h4>
+  <div class="ui attached segment">
+    <p>{{.locale.Tr "admin.monitor.queue.settings.desc"}}</p>
+    <form method="POST" action="{{.Link}}/set">
+      {{$.CsrfTokenHtml}}
+      <div class="ui form">
+        <div class="inline field">
+          <label for="max-number">{{.locale.Tr "admin.monitor.queue.settings.maxnumberworkers"}}</label>
+          <input name="max-number" type="text" placeholder="{{.locale.Tr "admin.monitor.queue.settings.maxnumberworkers.placeholder" .Queue.GetWorkerMaxNumber}}">
+        </div>
+        <button class="ui submit button">{{.locale.Tr "admin.monitor.queue.settings.submit"}}</button>
+      </div>
+    </form>
+  </div>
+</div>
+{{template "admin/layout_footer" .}}
```
+2 -1
tests/e2e/e2e_test.go
```diff
 	"code.gitea.io/gitea/modules/graceful"
 	"code.gitea.io/gitea/modules/log"
 	"code.gitea.io/gitea/modules/setting"
+	"code.gitea.io/gitea/modules/testlogger"
 	"code.gitea.io/gitea/modules/util"
 	"code.gitea.io/gitea/modules/web"
 	"code.gitea.io/gitea/routers"
···
 	exitVal := m.Run()

-	tests.WriterCloser.Reset()
+	testlogger.WriterCloser.Reset()

 	if err = util.RemoveAll(setting.Indexer.IssuePath); err != nil {
 		fmt.Printf("util.RemoveAll: %v\n", err)
```
-1
tests/integration/api_branch_test.go
```diff
 		},
 	}
 	for _, test := range testCases {
-		defer tests.ResetFixtures(t)
 		session := ctx.Session
 		testAPICreateBranch(t, session, "user2", "my-noo-repo", test.OldBranch, test.NewBranch, test.ExpectedHTTPStatus)
 	}
```
+6 -5
tests/integration/integration_test.go
```diff
 	"code.gitea.io/gitea/modules/json"
 	"code.gitea.io/gitea/modules/log"
 	"code.gitea.io/gitea/modules/setting"
+	"code.gitea.io/gitea/modules/testlogger"
 	"code.gitea.io/gitea/modules/util"
 	"code.gitea.io/gitea/modules/web"
 	"code.gitea.io/gitea/routers"
···
 	// integration test settings...
 	if setting.CfgProvider != nil {
 		testingCfg := setting.CfgProvider.Section("integration-tests")
-		tests.SlowTest = testingCfg.Key("SLOW_TEST").MustDuration(tests.SlowTest)
-		tests.SlowFlush = testingCfg.Key("SLOW_FLUSH").MustDuration(tests.SlowFlush)
+		testlogger.SlowTest = testingCfg.Key("SLOW_TEST").MustDuration(testlogger.SlowTest)
+		testlogger.SlowFlush = testingCfg.Key("SLOW_FLUSH").MustDuration(testlogger.SlowFlush)
 	}

 	if os.Getenv("GITEA_SLOW_TEST_TIME") != "" {
 		duration, err := time.ParseDuration(os.Getenv("GITEA_SLOW_TEST_TIME"))
 		if err == nil {
-			tests.SlowTest = duration
+			testlogger.SlowTest = duration
 		}
 	}

 	if os.Getenv("GITEA_SLOW_FLUSH_TIME") != "" {
 		duration, err := time.ParseDuration(os.Getenv("GITEA_SLOW_FLUSH_TIME"))
 		if err == nil {
-			tests.SlowFlush = duration
+			testlogger.SlowFlush = duration
 		}
 	}
···
 	// Instead, "No tests were found", last nonsense log is "According to the configuration, subsequent logs will not be printed to the console"
 	exitCode := m.Run()

-	tests.WriterCloser.Reset()
+	testlogger.WriterCloser.Reset()

 	if err = util.RemoveAll(setting.Indexer.IssuePath); err != nil {
 		fmt.Printf("util.RemoveAll: %v\n", err)
```
+1 -1
tests/mssql.ini.tmpl
```diff
 REPO_INDEXER_PATH = tests/{{TEST_TYPE}}/gitea-{{TEST_TYPE}}-mssql/indexers/repos.bleve

 [queue.issue_indexer]
-PATH = tests/{{TEST_TYPE}}/gitea-{{TEST_TYPE}}-mssql/indexers/issues.bleve
+TYPE = level
 DATADIR = tests/{{TEST_TYPE}}/gitea-{{TEST_TYPE}}-mssql/indexers/issues.queue

 [queue]
```
+3 -2
tests/mysql.ini.tmpl
```diff
 [indexer]
 REPO_INDEXER_ENABLED = true
 REPO_INDEXER_PATH = tests/{{TEST_TYPE}}/gitea-{{TEST_TYPE}}-mysql/indexers/repos.bleve
+ISSUE_INDEXER_TYPE = elasticsearch
+ISSUE_INDEXER_CONN_STR = http://elastic:changeme@elasticsearch:9200

 [queue.issue_indexer]
-TYPE = elasticsearch
-CONN_STR = http://elastic:changeme@elasticsearch:9200
+TYPE = level
 DATADIR = tests/{{TEST_TYPE}}/gitea-{{TEST_TYPE}}-mysql/indexers/issues.queue

 [queue]
```
+1 -1
tests/mysql8.ini.tmpl
```diff
 REPO_INDEXER_PATH = tests/{{TEST_TYPE}}/gitea-{{TEST_TYPE}}-mysql8/indexers/repos.bleve

 [queue.issue_indexer]
-PATH = tests/{{TEST_TYPE}}/gitea-{{TEST_TYPE}}-mysql8/indexers/issues.bleve
+TYPE = level
 DATADIR = tests/{{TEST_TYPE}}/gitea-{{TEST_TYPE}}-mysql8/indexers/issues.queue

 [queue]
```
+1 -1
tests/pgsql.ini.tmpl
```diff
 REPO_INDEXER_PATH = tests/{{TEST_TYPE}}/gitea-{{TEST_TYPE}}-pgsql/indexers/repos.bleve

 [queue.issue_indexer]
-PATH = tests/{{TEST_TYPE}}/gitea-{{TEST_TYPE}}-pgsql/indexers/issues.bleve
+TYPE = level
 DATADIR = tests/{{TEST_TYPE}}/gitea-{{TEST_TYPE}}-pgsql/indexers/issues.queue

 [queue]
```
+1 -1
tests/sqlite.ini.tmpl
```diff
 REPO_INDEXER_PATH = tests/{{TEST_TYPE}}/gitea-{{TEST_TYPE}}-sqlite/indexers/repos.bleve

 [queue.issue_indexer]
-PATH = tests/{{TEST_TYPE}}/gitea-{{TEST_TYPE}}-sqlite/indexers/issues.bleve
+TYPE = level
 DATADIR = tests/{{TEST_TYPE}}/gitea-{{TEST_TYPE}}-sqlite/indexers/issues.queue

 [queue]
```
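Taken together, the `*.ini.tmpl` changes above all follow the same migration: the issue-indexer queue moves from indexer-style keys (`PATH = …issues.bleve`) to the unified per-queue section. A minimal sketch of the resulting shape, with illustrative placeholder paths (only `[queue.issue_indexer]`, `TYPE`, and `DATADIR` are taken from the diffs; defaults for unset keys come from the base `[queue]` section):

```ini
; base defaults shared by all queues
[queue]
TYPE = level

; per-queue override section: [queue.NAME]
[queue.issue_indexer]
TYPE = level                    ; persistent on-disk queue (leveldb-backed)
DATADIR = queues/issue_indexer  ; illustrative path for the level data
```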
+14 -41
tests/test_utils.go
```diff
 	"code.gitea.io/gitea/modules/git"
 	"code.gitea.io/gitea/modules/graceful"
 	"code.gitea.io/gitea/modules/log"
-	"code.gitea.io/gitea/modules/queue"
 	repo_module "code.gitea.io/gitea/modules/repository"
 	"code.gitea.io/gitea/modules/setting"
 	"code.gitea.io/gitea/modules/storage"
+	"code.gitea.io/gitea/modules/testlogger"
 	"code.gitea.io/gitea/modules/util"
 	"code.gitea.io/gitea/routers"
···
 	_ = os.Setenv("GITEA_CONF", giteaConf)
 	fmt.Printf("Environment variable $GITEA_CONF not set, use default: %s\n", giteaConf)
 	if !setting.EnableSQLite3 {
-		exitf(`Need to enable SQLite3 for sqlite.ini testing, please set: -tags "sqlite,sqlite_unlock_notify"`)
+		exitf(`sqlite3 requires: import _ "github.com/mattn/go-sqlite3" or -tags sqlite,sqlite_unlock_notify`)
 	}
···
 	return deferFn
 }

-// ResetFixtures flushes queues, reloads fixtures and resets test repositories within a single test.
-// Most tests should call defer tests.PrepareTestEnv(t)() (or have onGiteaRun do that for them) but sometimes
-// within a single test this is required
-func ResetFixtures(t *testing.T) {
-	assert.NoError(t, queue.GetManager().FlushAll(context.Background(), -1))
+func PrintCurrentTest(t testing.TB, skip ...int) func() {
+	if len(skip) == 1 {
+		skip = []int{skip[0] + 1}
+	}
+	return testlogger.PrintCurrentTest(t, skip...)
+}

-	// load database fixtures
-	assert.NoError(t, unittest.LoadFixtures())
+// Printf takes a format and args and prints the string to os.Stdout
+func Printf(format string, args ...interface{}) {
+	testlogger.Printf(format, args...)
+}

-	// load git repo fixtures
-	assert.NoError(t, util.RemoveAll(setting.RepoRootPath))
-	assert.NoError(t, unittest.CopyDir(path.Join(filepath.Dir(setting.AppPath), "tests/gitea-repositories-meta"), setting.RepoRootPath))
-	ownerDirs, err := os.ReadDir(setting.RepoRootPath)
-	if err != nil {
-		assert.NoError(t, err, "unable to read the new repo root: %v\n", err)
-	}
-	for _, ownerDir := range ownerDirs {
-		if !ownerDir.Type().IsDir() {
-			continue
-		}
-		repoDirs, err := os.ReadDir(filepath.Join(setting.RepoRootPath, ownerDir.Name()))
-		if err != nil {
-			assert.NoError(t, err, "unable to read the new repo root: %v\n", err)
-		}
-		for _, repoDir := range repoDirs {
-			_ = os.MkdirAll(filepath.Join(setting.RepoRootPath, ownerDir.Name(), repoDir.Name(), "objects", "pack"), 0o755)
-			_ = os.MkdirAll(filepath.Join(setting.RepoRootPath, ownerDir.Name(), repoDir.Name(), "objects", "info"), 0o755)
-			_ = os.MkdirAll(filepath.Join(setting.RepoRootPath, ownerDir.Name(), repoDir.Name(), "refs", "heads"), 0o755)
-			_ = os.MkdirAll(filepath.Join(setting.RepoRootPath, ownerDir.Name(), repoDir.Name(), "refs", "tag"), 0o755)
-		}
-	}
-
-	// load LFS object fixtures
-	// (LFS storage can be on any of several backends, including remote servers, so we init it with the storage API)
-	lfsFixtures, err := storage.NewStorage("", storage.LocalStorageConfig{Path: path.Join(filepath.Dir(setting.AppPath), "tests/gitea-lfs-meta")})
-	assert.NoError(t, err)
-	assert.NoError(t, storage.Clean(storage.LFS))
-	assert.NoError(t, lfsFixtures.IterateObjects("", func(path string, _ storage.Object) error {
-		_, err := storage.Copy(storage.LFS, path, lfsFixtures, path)
-		return err
-	}))
+func init() {
+	log.Register("test", testlogger.NewTestLogger)
 }
```
+45 -34
tests/testlogger.go → modules/testlogger/testlogger.go
```diff
 // Copyright 2019 The Gitea Authors. All rights reserved.
 // SPDX-License-Identifier: MIT

-package tests
+package testlogger

 import (
 	"context"
···
 	t []*testing.TB
 }

-func (w *testLoggerWriterCloser) setT(t *testing.TB) {
+func (w *testLoggerWriterCloser) pushT(t *testing.TB) {
 	w.Lock()
 	w.t = append(w.t, t)
 	w.Unlock()
 }

 func (w *testLoggerWriterCloser) Write(p []byte) (int, error) {
+	// There was a data race problem: the logger system could still try to output logs after the runner is finished.
+	// So we must ensure that the "t" in stack is still valid.
 	w.RLock()
+	defer w.RUnlock()
+
 	var t *testing.TB
 	if len(w.t) > 0 {
 		t = w.t[len(w.t)-1]
 	}
-	w.RUnlock()
-	if t != nil && *t != nil {
-		if len(p) > 0 && p[len(p)-1] == '\n' {
-			p = p[:len(p)-1]
-		}

-		defer func() {
-			err := recover()
-			if err == nil {
-				return
-			}
-			var errString string
-			errErr, ok := err.(error)
-			if ok {
-				errString = errErr.Error()
-			} else {
-				errString, ok = err.(string)
-			}
-			if !ok {
-				panic(err)
-			}
-			if !strings.HasPrefix(errString, "Log in goroutine after ") {
-				panic(err)
-			}
-		}()
+	if len(p) > 0 && p[len(p)-1] == '\n' {
+		p = p[:len(p)-1]
+	}

-		(*t).Log(string(p))
-		return len(p), nil
+	if t == nil || *t == nil {
+		return fmt.Fprintf(os.Stdout, "??? [Unknown Test] %s\n", p)
 	}
+
+	defer func() {
+		err := recover()
+		if err == nil {
+			return
+		}
+		var errString string
+		errErr, ok := err.(error)
+		if ok {
+			errString = errErr.Error()
+		} else {
+			errString, ok = err.(string)
+		}
+		if !ok {
+			panic(err)
+		}
+		if !strings.HasPrefix(errString, "Log in goroutine after ") {
+			panic(err)
+		}
+	}()
+
+	(*t).Log(string(p))
 	return len(p), nil
 }

-func (w *testLoggerWriterCloser) Close() error {
+func (w *testLoggerWriterCloser) popT() {
 	w.Lock()
 	if len(w.t) > 0 {
 		w.t = w.t[:len(w.t)-1]
 	}
 	w.Unlock()
+}
+
+func (w *testLoggerWriterCloser) Close() error {
 	return nil
 }
···
 	} else {
 		fmt.Fprintf(os.Stdout, "=== %s (%s:%d)\n", t.Name(), strings.TrimPrefix(filename, prefix), line)
 	}
-	WriterCloser.setT(&t)
+	WriterCloser.pushT(&t)
 	return func() {
 		took := time.Since(start)
 		if took > SlowTest {
···
 			fmt.Fprintf(os.Stdout, "+++ %s ... still flushing after %v ...\n", t.Name(), SlowFlush)
 		}
 	})
-	if err := queue.GetManager().FlushAll(context.Background(), 2*time.Minute); err != nil {
+	if err := queue.GetManager().FlushAll(context.Background(), time.Minute); err != nil {
 		t.Errorf("Flushing queues failed with error %v", err)
 	}
 	timer.Stop()
···
 			fmt.Fprintf(os.Stdout, "+++ %s had a slow clean-up flush (took %v)\n", t.Name(), flushTook)
 		}
 	}
-	_ = WriterCloser.Close()
+	WriterCloser.popT()
 }
···
 }

 func init() {
-	log.Register("test", NewTestLogger)
+	const relFilePath = "modules/testlogger/testlogger.go"
 	_, filename, _, _ := runtime.Caller(0)
-	prefix = strings.TrimSuffix(filename, "tests/integration/testlogger.go")
+	if !strings.HasSuffix(filename, relFilePath) {
+		panic("source code file path doesn't match expected: " + relFilePath)
+	}
+	prefix = strings.TrimSuffix(filename, relFilePath)
 }
```
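The `testlogger.go` change above turns `setT`/`Close` into an explicit `pushT`/`popT` stack, so log output is always routed to the innermost still-running test, with an `[Unknown Test]` fallback when the stack is empty. A minimal standalone sketch of that routing idea (hypothetical stand-ins: plain strings instead of `*testing.TB`, one `sync.Mutex` instead of the embedded `RWMutex`, and output captured in a slice rather than sent to `t.Log`):

```go
package main

import (
	"fmt"
	"sync"
)

// stackWriter routes each write to the most recently pushed "test".
type stackWriter struct {
	mu    sync.Mutex
	stack []string // stand-in for []*testing.TB
	Lines []string // captured "owner: message" output
}

// pushT is called when a test starts; its logs become the write target.
func (w *stackWriter) pushT(name string) {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.stack = append(w.stack, name)
}

// popT is called when a test finishes; output falls back to its parent.
func (w *stackWriter) popT() {
	w.mu.Lock()
	defer w.mu.Unlock()
	if len(w.stack) > 0 {
		w.stack = w.stack[:len(w.stack)-1]
	}
}

// Write attributes the message to the innermost running test, or to
// "[Unknown Test]" when nothing is on the stack (mirroring the "???" path).
func (w *stackWriter) Write(p []byte) (int, error) {
	w.mu.Lock()
	defer w.mu.Unlock()
	owner := "[Unknown Test]"
	if len(w.stack) > 0 {
		owner = w.stack[len(w.stack)-1]
	}
	w.Lines = append(w.Lines, fmt.Sprintf("%s: %s", owner, string(p)))
	return len(p), nil
}

func main() {
	w := &stackWriter{}
	w.pushT("TestOuter")
	w.Write([]byte("outer log"))
	w.pushT("TestInner") // a subtest pushes on top
	w.Write([]byte("inner log"))
	w.popT() // subtest done: output routes back to TestOuter
	w.Write([]byte("outer again"))
	w.popT()
	w.Write([]byte("orphan log")) // no test running anymore
	for _, l := range w.Lines {
		fmt.Println(l)
	}
}
```

The stack (rather than a single slot) is what lets nested subtests and late goroutine logs land on the right owner instead of a finished test.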