208 Commits

Author SHA1 Message Date
18dd9c7a1f Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:29 +00:00
1c6d6b1dcc Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:29 +00:00
b3d78f3f06 Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:28 +00:00
5d5fbb835e Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:28 +00:00
f259d7da1b Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:27 +00:00
433f68b5a4 Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:27 +00:00
3729ee8cd6 Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:26 +00:00
261e8aea62 Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:25 +00:00
a1dd66ec6a Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:25 +00:00
f579fbc83f Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:24 +00:00
bd2cfbcc3d Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:24 +00:00
9c009dddb5 Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:23 +00:00
fd94630e79 Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:22 +00:00
c8274bd0a6 Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:22 +00:00
4bea3edbeb Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:21 +00:00
3aa73a29a5 Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:20 +00:00
5934b7cf4d Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:20 +00:00
39f0b6d406 Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:19 +00:00
1c1a16a55a Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:18 +00:00
991507c29a Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:18 +00:00
553f5fa25f Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:17 +00:00
8c5ef8bfd2 Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:17 +00:00
4e0580a2b4 Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:16 +00:00
451e109b7f Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:15 +00:00
fa79d8c15d Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:15 +00:00
55800608ec Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:14 +00:00
63e66334af Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:14 +00:00
4b7d2f2efc Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:13 +00:00
a7b02a742a Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:12 +00:00
825ad03236 Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:12 +00:00
484763b809 Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:11 +00:00
b0d2c5668c Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:11 +00:00
444278accb Tower: upload web_notify 16.0.3.2.0 (via marketplace) 2026-04-27 08:47:10 +00:00
c8b19a8c62 Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:47:09 +00:00
1a3285cdc4 Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:47:08 +00:00
cd55fd9f19 Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:47:08 +00:00
d75d397e6a Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:47:07 +00:00
4e95aa47de Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:47:06 +00:00
0911b0d951 Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:47:06 +00:00
1ea59d44f0 Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:47:05 +00:00
b4fcbfdf2a Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:47:04 +00:00
cca99e065a Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:47:04 +00:00
ec6e3c8fd2 Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:47:03 +00:00
2c1d9c3ef2 Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:47:03 +00:00
583dd0dd15 Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:47:02 +00:00
66ae014a38 Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:47:01 +00:00
b2f175536a Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:47:01 +00:00
6794a1b842 Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:47:00 +00:00
191f857aff Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:46:59 +00:00
bf6065aeb7 Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:46:59 +00:00
00e6ff7e78 Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:46:58 +00:00
1f5b011fce Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:46:58 +00:00
61db219e01 Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:46:57 +00:00
771994f944 Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:46:56 +00:00
def74bd656 Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:46:56 +00:00
6e4be30e3a Tower: upload rpc_helper 16.0.1.0.0 (via marketplace) 2026-04-27 08:46:55 +00:00
96a2eeda3a Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:54 +00:00
a6209db573 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:53 +00:00
bfc350252a Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:53 +00:00
64efc9b0b4 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:52 +00:00
8d4ddfb7d2 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:51 +00:00
447b8431e6 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:51 +00:00
007783c1e2 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:50 +00:00
72a4524aed Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:50 +00:00
7e37a29bee Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:49 +00:00
1f0cf23801 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:48 +00:00
999a996df8 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:48 +00:00
8966de83af Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:47 +00:00
403368df7a Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:47 +00:00
fef59e7a73 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:46 +00:00
c2285f865e Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:45 +00:00
34d8248b79 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:45 +00:00
f64852997f Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:44 +00:00
fcf45b130e Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:43 +00:00
fd4665364d Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:43 +00:00
91a344cbc2 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:43 +00:00
7b8f5090db Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:42 +00:00
e2039f54f4 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:41 +00:00
445b34f452 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:40 +00:00
c3a4151359 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:40 +00:00
c05ba71bcd Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:39 +00:00
389a32d760 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:39 +00:00
609ef99c44 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:38 +00:00
71e98f5b3f Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:37 +00:00
25052f2e2d Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:37 +00:00
a5c0f76f89 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:36 +00:00
81d2547e9d Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:36 +00:00
a0c172c649 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:35 +00:00
8a65785c52 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:34 +00:00
85fff4657e Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:34 +00:00
114449be53 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:33 +00:00
df1dabb253 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:33 +00:00
65094d2031 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:32 +00:00
9d8a226283 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:31 +00:00
7bff54cb58 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:31 +00:00
4f9f60b121 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:30 +00:00
f0cee69a24 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:30 +00:00
0d6e910d3e Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:29 +00:00
64f515e11b Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:28 +00:00
ef22709eb7 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:28 +00:00
65c6df9940 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:27 +00:00
cbc12f44b8 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:27 +00:00
45eba87eda Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:26 +00:00
510be1ffcb Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:25 +00:00
9ceb54d29c Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:25 +00:00
942da80b9c Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:24 +00:00
3da4cc2dec Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:23 +00:00
b4572fa6f1 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:23 +00:00
01f5ee1c46 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:22 +00:00
952b235888 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:22 +00:00
f98c11412d Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:21 +00:00
a8e27776d3 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:20 +00:00
6038b70592 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:20 +00:00
e259a897fe Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:19 +00:00
05027ef13c Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:19 +00:00
d65b12bc80 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:18 +00:00
905d4a6c04 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:17 +00:00
a213ef10a8 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:17 +00:00
f2b16e50a7 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:16 +00:00
4d25cf4ade Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:16 +00:00
82b2acd792 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:15 +00:00
7522999082 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:14 +00:00
f8e694b71a Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:14 +00:00
83cbdf54e9 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:13 +00:00
7744f3212d Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:13 +00:00
b55049d482 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:12 +00:00
54f981fd25 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:11 +00:00
7d753b772a Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:11 +00:00
cd8e63eb08 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:10 +00:00
29f5780312 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:10 +00:00
6dd6679e9a Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:09 +00:00
26c795216e Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:08 +00:00
5b40d83c0c Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:08 +00:00
22279e8c98 Tower: upload queue_job 16.0.2.12.0 (via marketplace) 2026-04-27 08:46:07 +00:00
09bc143899 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:46:06 +00:00
d29af3f5ad Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:46:05 +00:00
7441874199 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:46:05 +00:00
068638b20a Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:46:04 +00:00
5c65820935 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:46:03 +00:00
748b61b2f6 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:46:03 +00:00
70d359dd8d Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:46:02 +00:00
c4d093c497 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:46:01 +00:00
39ccc6bde5 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:46:01 +00:00
8df4722e8b Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:46:00 +00:00
fe3a822173 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:59 +00:00
7d9a1eefbb Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:59 +00:00
c74f5414af Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:58 +00:00
98387bc517 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:58 +00:00
a6e739601e Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:57 +00:00
e3b372f3d0 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:56 +00:00
8f8e41943a Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:56 +00:00
7af8e80303 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:55 +00:00
9f86d4807c Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:55 +00:00
1a082b425c Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:54 +00:00
48fcec14c5 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:54 +00:00
d54a6b9d08 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:53 +00:00
26e1be3a4f Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:53 +00:00
10cd0f3bc1 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:52 +00:00
d2ec4529cc Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:51 +00:00
a1bf9980cb Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:50 +00:00
42292618bb Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:50 +00:00
07d598c857 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:49 +00:00
757ec36790 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:49 +00:00
7441e29889 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:48 +00:00
c48a8ddc63 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:47 +00:00
c31ba607e5 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:46 +00:00
97eafd2fcf Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:45 +00:00
b3e06b7bbd Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:44 +00:00
ddc65dc558 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:44 +00:00
8dc88a671f Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:43 +00:00
928a2661bb Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:43 +00:00
7f9278fc8f Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:41 +00:00
bc99107f8e Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:41 +00:00
db6cbffd60 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:40 +00:00
55df443de3 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:39 +00:00
e28e930732 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:38 +00:00
2ffa038703 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:38 +00:00
5c6a987442 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:37 +00:00
5f26a8f675 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:36 +00:00
0f25bd4d77 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:35 +00:00
41a6368228 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:34 +00:00
20ec0b6fd6 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:34 +00:00
71655a3923 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:33 +00:00
06103e090a Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:32 +00:00
f78d7b8d35 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:32 +00:00
b87a626ee7 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:31 +00:00
5c587f8e7d Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:30 +00:00
14645156c6 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:29 +00:00
9af897fa59 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:29 +00:00
6b447e3364 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:28 +00:00
162e2aa3e8 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:27 +00:00
68fa068d8b Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:26 +00:00
d481df1702 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:26 +00:00
2f6ce319ba Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:25 +00:00
8093696ec8 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:24 +00:00
53d1657954 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:24 +00:00
87eae8f9c1 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:23 +00:00
1b5655d1aa Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:23 +00:00
01ec5954bb Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:22 +00:00
0d853abbc3 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:21 +00:00
fada6f30ff Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:20 +00:00
c10bbc8f8a Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:20 +00:00
492d828ca3 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:19 +00:00
343a0700b6 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:19 +00:00
c582038d23 Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:18 +00:00
4c70b26e1d Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:17 +00:00
56b120ae6f Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:16 +00:00
3ef03aea6e Tower: upload cetmix_tower_yaml 16.0.2.0.3 (via marketplace) 2026-04-27 08:45:15 +00:00
195 changed files with 28076 additions and 150 deletions

View File

@@ -7,7 +7,7 @@ Cetmix Tower YAML
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
-!! source digest: sha256:7f55d44d4d4b9239195643b7169c1a5f98ad8a36c3cc80686d357a9829beb856
+!! source digest: sha256:96e8f3f1df3ab25b952a9534d0914149740cc036b62efe2c7795f9d2d9636177
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Beta-yellow.png
@@ -50,16 +50,6 @@ instructions.
Changelog
=========
-16.0.3.1.0 (2026-03-30)
------------------------
-- Features: Deferred import of related records. (5323)
-16.0.3.0.0 (2026-03-23)
------------------------
-- Features: Jets! (4700)
16.0.2.0.1 (2025-10-29)
-----------------------

View File

@@ -3,7 +3,7 @@
{
"name": "Cetmix Tower YAML",
"summary": "Cetmix Tower YAML export/import",
"version": "16.0.3.1.0",
"version": "16.0.2.0.3",
"development_status": "Beta",
"category": "Productivity",
"website": "https://tower.cetmix.com",
@@ -28,7 +28,6 @@
"views/cx_tower_shortcut_view.xml",
"views/cx_tower_scheduled_task_view.xml",
"views/cx_tower_key_view.xml",
"views/cx_tower_jet_template_view.xml",
"views/cx_tower_yaml_manifest_template_views.xml",
"views/cx_tower_yaml_manifest_author_views.xml",
"wizards/cx_tower_yaml_export_wiz.xml",

View File

@@ -91,12 +91,6 @@ msgstr ""
msgid "Authors"
msgstr ""
-#. module: cetmix_tower_yaml
-#: model:ir.model.fields,help:cetmix_tower_yaml.field_cx_tower_scheduled_task_cv__reference
-msgid ""
-"Can contain English letters, digits and '_'. Leave blank to autogenerate"
-msgstr ""
#. module: cetmix_tower_yaml
#: model:ir.model,name:cetmix_tower_yaml.model_cx_tower_command
msgid "Cetmix Tower Command"
@@ -127,31 +121,6 @@ msgstr ""
msgid "Cetmix Tower Flight Plan Line Action"
msgstr ""
-#. module: cetmix_tower_yaml
-#: model:ir.model,name:cetmix_tower_yaml.model_cx_tower_jet_action
-msgid "Cetmix Tower Jet Action"
-msgstr ""
-#. module: cetmix_tower_yaml
-#: model:ir.model,name:cetmix_tower_yaml.model_cx_tower_jet_state
-msgid "Cetmix Tower Jet State"
-msgstr ""
-#. module: cetmix_tower_yaml
-#: model:ir.model,name:cetmix_tower_yaml.model_cx_tower_jet_template
-msgid "Cetmix Tower Jet Template"
-msgstr ""
-#. module: cetmix_tower_yaml
-#: model:ir.model,name:cetmix_tower_yaml.model_cx_tower_jet_template_dependency
-msgid "Cetmix Tower Jet Template Dependency"
-msgstr ""
-#. module: cetmix_tower_yaml
-#: model:ir.model,name:cetmix_tower_yaml.model_cx_tower_jet_waypoint_template
-msgid "Cetmix Tower Jet Waypoint Template"
-msgstr ""
#. module: cetmix_tower_yaml
#: model:ir.model,name:cetmix_tower_yaml.model_cx_tower_key
msgid "Cetmix Tower Key/Secret Storage"
@@ -307,21 +276,6 @@ msgstr ""
msgid "Custom license text when license type is Custom."
msgstr ""
-#. module: cetmix_tower_yaml
-#: model:ir.model,name:cetmix_tower_yaml.model_cx_tower_scheduled_task_cv
-msgid "Custom variable values for scheduled tasks"
-msgstr ""
-#. module: cetmix_tower_yaml
-#. odoo-python
-#: code:addons/cetmix_tower_yaml/wizards/cx_tower_yaml_import_wiz.py:0
-#: code:addons/cetmix_tower_yaml/wizards/cx_tower_yaml_import_wiz.py:0
-#, python-format
-msgid ""
-"Deferred relation resolution failed:\n"
-"%(details)s"
-msgstr ""
#. module: cetmix_tower_yaml
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_yaml_export_wiz__manifest_description
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_yaml_import_wiz__manifest_description
@@ -381,7 +335,6 @@ msgstr ""
#. module: cetmix_tower_yaml
#: model:ir.actions.act_window,name:cetmix_tower_yaml.action_cx_tower_command_export_yaml
#: model:ir.actions.act_window,name:cetmix_tower_yaml.action_cx_tower_file_template_export_yaml
-#: model:ir.actions.act_window,name:cetmix_tower_yaml.action_cx_tower_jet_template_export_yaml
#: model:ir.actions.act_window,name:cetmix_tower_yaml.action_cx_tower_key_export_yaml
#: model:ir.actions.act_window,name:cetmix_tower_yaml.action_cx_tower_os_export_yaml
#: model:ir.actions.act_window,name:cetmix_tower_yaml.action_cx_tower_plan_export_yaml
@@ -394,7 +347,6 @@ msgstr ""
#: model:ir.actions.act_window,name:cetmix_tower_yaml.action_cx_tower_variable_value_export_yaml
#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_command_view_form
#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_file_template_view_form
-#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_jet_template_view_form
#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_plan_view_form
#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_server_template_view_form
#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_server_view_form
@@ -636,7 +588,6 @@ msgid "Models to create records in"
msgstr ""
#. module: cetmix_tower_yaml
-#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_scheduled_task_cv__name
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_yaml_manifest_author__name
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_yaml_manifest_tmpl__name
#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_yaml_import_wiz_view_form
@@ -712,24 +663,6 @@ msgstr ""
msgid "Provide Custom License Text when License is set to 'Custom'."
msgstr ""
-#. module: cetmix_tower_yaml
-#. odoo-python
-#: code:addons/cetmix_tower_yaml/wizards/cx_tower_yaml_import_wiz.py:0
-#, python-format
-msgid ""
-"Record %(record_model)s '%(record_reference)s': field '%(field)s' could not "
-"resolve %(target_model)s '%(target_reference)s'"
-msgstr ""
-#. module: cetmix_tower_yaml
-#. odoo-python
-#: code:addons/cetmix_tower_yaml/wizards/cx_tower_yaml_import_wiz.py:0
-#, python-format
-msgid ""
-"Record '%(record)s': field '%(field)s' could not resolve %(target_model)s "
-"'%(target_reference)s'"
-msgstr ""
#. module: cetmix_tower_yaml
#. odoo-python
#: code:addons/cetmix_tower_yaml/tests/test_yaml_import_wizard.py:0
@@ -762,11 +695,6 @@ msgstr ""
msgid "Records of the following models were created or updated: %(models)s"
msgstr ""
-#. module: cetmix_tower_yaml
-#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_scheduled_task_cv__reference
-msgid "Reference"
-msgstr ""
#. module: cetmix_tower_yaml
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_yaml_export_wiz__remove_empty_values
msgid "Remove Empty x2m Field Values"
@@ -946,7 +874,6 @@ msgstr ""
#. module: cetmix_tower_yaml
#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_command_view_form
#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_file_template_view_form
-#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_jet_template_view_form
#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_plan_view_form
#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_server_template_view_form
#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_server_view_form
@@ -1026,11 +953,6 @@ msgstr ""
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_command__yaml_code
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_file__yaml_code
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_file_template__yaml_code
-#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_jet_action__yaml_code
-#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_jet_state__yaml_code
-#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_jet_template__yaml_code
-#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_jet_template_dependency__yaml_code
-#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_jet_waypoint_template__yaml_code
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_key__yaml_code
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_key_value__yaml_code
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_os__yaml_code
@@ -1038,7 +960,6 @@ msgstr ""
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_plan_line__yaml_code
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_plan_line_action__yaml_code
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_scheduled_task__yaml_code
-#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_scheduled_task_cv__yaml_code
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_server__yaml_code
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_server_log__yaml_code
#: model:ir.model.fields,field_description:cetmix_tower_yaml.field_cx_tower_server_template__yaml_code
@@ -1082,7 +1003,6 @@ msgstr ""
#. module: cetmix_tower_yaml
#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_command_view_form
#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_file_template_view_form
-#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_jet_template_view_form
#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_plan_view_form
#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_server_template_view_form
#: model_terms:ir.ui.view,arch_db:cetmix_tower_yaml.cx_tower_server_view_form

View File

@@ -15,13 +15,7 @@ from . import cx_tower_key_value
from . import cx_tower_server_log
from . import cx_tower_shortcut
from . import cx_tower_scheduled_task
-from . import cx_tower_scheduled_task_cv
from . import cx_tower_file
from . import cx_tower_server
from . import cx_tower_yaml_manifest_template
from . import cx_tower_yaml_manifest_author
-from . import cx_tower_jet_template
-from . import cx_tower_jet_template_dependency
-from . import cx_tower_jet_state
-from . import cx_tower_jet_action
-from . import cx_tower_jet_waypoint_template

View File

@@ -19,25 +19,13 @@ class CxTowerCommand(models.Model):
"tag_ids",
"path",
"file_template_id",
"if_file_exists",
"disconnect_file",
"flight_plan_id",
"jet_template_id",
"jet_action_id",
"waypoint_template_id",
"fly_here",
"code",
"no_split_for_sudo",
"server_status",
"variable_ids",
"secret_ids",
"no_split_for_sudo",
"if_file_exists",
"disconnect_file",
]
return res
def _get_deferred_m2o_import_fields(self):
"""Return m2o command fields resolved after the main import pass."""
return {
"jet_template_id": "cx.tower.jet.template",
"jet_action_id": "cx.tower.jet.action",
"waypoint_template_id": "cx.tower.jet.waypoint.template",
}
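The `_get_deferred_m2o_import_fields` mapping above pairs many2one field names with their target models so an importer can resolve them in a second pass, once every record exists and can be looked up by reference. A minimal sketch of how such a mapping could be consumed, in plain Python — `resolve_deferred`, the registry shape, and all sample references are hypothetical, not the module's real API:

```python
# Hypothetical sketch (not the actual wizard implementation): consuming a
# deferred-m2o mapping like the one above in a second import pass.
deferred_fields = {
    "jet_template_id": "cx.tower.jet.template",
    "jet_action_id": "cx.tower.jet.action",
}

# Records created in the first pass, keyed by (model, reference) -- the
# registry shape and sample references are made up for illustration.
created = {
    ("cx.tower.jet.template", "deploy_tpl"): 101,
    ("cx.tower.jet.action", "restart"): 202,
}

def resolve_deferred(record_vals, deferred, registry):
    """Replace textual references with database ids; report what failed."""
    unresolved = []
    for field, target_model in deferred.items():
        ref = record_vals.get(field)
        if not ref:
            continue  # nothing to resolve for this field
        target_id = registry.get((target_model, ref))
        if target_id is None:
            unresolved.append((field, target_model, ref))
        else:
            record_vals[field] = target_id
    return unresolved

vals = {"name": "Deploy", "jet_template_id": "deploy_tpl", "jet_action_id": "restart"}
assert resolve_deferred(vals, deferred_fields, created) == []
assert vals["jet_template_id"] == 101
```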

View File

@@ -21,20 +21,3 @@ class CxTowerPlan(models.Model):
"line_ids",
]
return res
-def _get_deferred_x2m_import_fields(self):
-"""Defer plan lines whose command is not resolvable during nested import.
-Deep YAML (e.g. a command's waypoint inlines a jet template whose plans
-reference that same command) creates a forward reference: plan lines are
-prepared before the command exists in the database. Queue those lines
-and create them after the main import pass when ``command_id`` can be
-resolved.
-"""
-return {
-"line_ids": {
-"child_model": "cx.tower.plan.line",
-"deferred_field": "command_id",
-"target_model": "cx.tower.command",
-}
-}
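The docstring above describes queuing plan lines whose `command_id` points at a command that does not exist yet. A simplified, hypothetical sketch of that two-pass pattern in plain Python — the function and data shapes are illustrative, not the wizard's actual code:

```python
# Simplified, hypothetical sketch of deferring x2m children (plan lines)
# whose many2one target (the command) may not exist yet during a nested
# import.
def import_plans(plans, commands_by_ref):
    deferred = []  # (plan_name, line_vals) queued for a later pass
    imported = {}
    for plan in plans:
        lines = []
        for line in plan["line_ids"]:
            cmd_id = commands_by_ref.get(line["command_id"])
            if cmd_id is None:
                # Forward reference: the command record does not exist yet.
                deferred.append((plan["name"], line))
            else:
                lines.append({**line, "command_id": cmd_id})
        imported[plan["name"]] = lines
    return imported, deferred

plans = [{"name": "boot", "line_ids": [{"command_id": "start_app"}]}]

# First pass: the command is not created yet, so its line is queued.
imported, deferred = import_plans(plans, {})
assert deferred and imported["boot"] == []

# Second pass: the command now exists, so the queued line resolves.
imported2, deferred2 = import_plans(plans, {"start_app": 7})
assert deferred2 == [] and imported2["boot"][0]["command_id"] == 7
```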

View File

@@ -19,24 +19,5 @@ class CxTowerScheduledTask(models.Model):
"interval_type",
"next_call",
"last_call",
"monday",
"tuesday",
"wednesday",
"thursday",
"friday",
"saturday",
"sunday",
"custom_variable_value_ids",
]
return res
-def _get_deferred_x2m_import_fields(self):
-"""Return scheduled-task child records resolved after import."""
-return {
-"custom_variable_value_ids": {
-"child_model": "cx.tower.scheduled.task.cv",
-"deferred_field": "variable_value_id",
-"target_model": "cx.tower.variable.value",
-"skip_empty": True,
-}
-}

View File

@@ -0,0 +1,23 @@
# Copyright (C) 2024 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
from odoo import models
class CxTowerServerLog(models.Model):
_name = "cx.tower.server.log"
_inherit = [
"cx.tower.server.log",
"cx.tower.yaml.mixin",
]
def _get_fields_for_yaml(self):
res = super()._get_fields_for_yaml()
res += [
"name",
"log_type",
"command_id",
"use_sudo",
"file_template_id",
"file_id",
]
return res
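Each model in this changeset overrides `_get_fields_for_yaml` and appends its own exportable fields to the list returned by the mixin. A stripped-down sketch of that cooperative pattern in plain Python — the class names and `yaml_values` helper are hypothetical stand-ins for the `cx.tower.yaml.mixin` behavior, not Odoo code:

```python
# Stripped-down sketch of the mixin pattern above (hypothetical names):
# each model extends _get_fields_for_yaml() and the shared mixin exports
# only those whitelisted fields.
class YamlMixin:
    def _get_fields_for_yaml(self):
        return []  # models append their own exportable fields

    def yaml_values(self):
        # Collect only the whitelisted fields for export.
        return {f: getattr(self, f) for f in self._get_fields_for_yaml()}


class ServerLog(YamlMixin):
    def __init__(self, name, log_type):
        self.name, self.log_type = name, log_type

    def _get_fields_for_yaml(self):
        # Cooperative extension, mirroring `res += [...]` in the diff above.
        return super()._get_fields_for_yaml() + ["name", "log_type"]


log = ServerLog("syslog", "command")
assert log.yaml_values() == {"name": "syslog", "log_type": "command"}
```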

View File

@@ -0,0 +1,41 @@
# Copyright (C) 2024 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
from odoo import models
class CxTowerServerTemplate(models.Model):
_name = "cx.tower.server.template"
_inherit = [
"cx.tower.server.template",
"cx.tower.yaml.mixin",
]
def _get_fields_for_yaml(self):
res = super()._get_fields_for_yaml()
res += [
"name",
"color",
"os_id",
"tag_ids",
"note",
"ssh_port",
"ssh_username",
"ssh_key_id",
"ssh_auth_mode",
"use_sudo",
"variable_value_ids",
"server_log_ids",
"shortcut_ids",
"scheduled_task_ids",
"flight_plan_id",
"plan_delete_id",
]
return res
def _get_force_x2m_resolve_models(self):
res = super()._get_force_x2m_resolve_models()
# Always try to resolve these models to existing records first.
# This avoids duplicating existing plans, shortcuts and scheduled tasks.
res += ["cx.tower.plan", "cx.tower.shortcut", "cx.tower.scheduled.task"]
return res


@@ -0,0 +1,22 @@
# Copyright (C) 2024 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
from odoo import models
class CxTowerShortcut(models.Model):
_name = "cx.tower.shortcut"
_inherit = ["cx.tower.shortcut", "cx.tower.yaml.mixin"]
def _get_fields_for_yaml(self):
res = super()._get_fields_for_yaml()
res += [
"name",
"sequence",
"access_level",
"action",
"command_id",
"use_sudo",
"plan_id",
"note",
]
return res


@@ -0,0 +1,16 @@
# Copyright (C) 2024 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
from odoo import models
class CxTowerTag(models.Model):
_name = "cx.tower.tag"
_inherit = ["cx.tower.tag", "cx.tower.yaml.mixin"]
def _get_fields_for_yaml(self):
res = super()._get_fields_for_yaml()
res += [
"name",
"color",
]
return res


@@ -0,0 +1,23 @@
# Copyright (C) 2024 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
from odoo import models
class CxTowerVariable(models.Model):
_name = "cx.tower.variable"
_inherit = ["cx.tower.variable", "cx.tower.yaml.mixin"]
def _get_fields_for_yaml(self):
res = super()._get_fields_for_yaml()
res += [
"name",
"access_level",
"variable_type",
"option_ids",
"applied_expression",
"validation_pattern",
"validation_message",
"note",
"tag_ids",
]
return res


@@ -0,0 +1,18 @@
# Copyright (C) 2024 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
from odoo import models
class CxTowerVariableOption(models.Model):
_name = "cx.tower.variable.option"
_inherit = ["cx.tower.variable.option", "cx.tower.yaml.mixin"]
def _get_fields_for_yaml(self):
res = super()._get_fields_for_yaml()
res += [
"sequence",
"access_level",
"name",
"value_char",
]
return res


@@ -0,0 +1,20 @@
# Copyright (C) 2024 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
from odoo import models
class CxTowerVariableValue(models.Model):
_name = "cx.tower.variable.value"
_inherit = ["cx.tower.variable.value", "cx.tower.yaml.mixin"]
def _get_fields_for_yaml(self):
res = super()._get_fields_for_yaml()
res += [
"sequence",
"access_level",
"variable_id",
"value_char",
"variable_ids",
"required",
]
return res


@@ -0,0 +1,23 @@
# Copyright (C) 2025 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
from odoo import fields, models
class CxTowerYamlManifestAuthor(models.Model):
"""Author of a YAML manifest (can be one or many)."""
_name = "cx.tower.yaml.manifest.author"
_sql_constraints = [
(
"yaml_manifest_author_name_uniq",
"unique(name)",
"Author name must be unique.",
)
]
_description = "YAML Manifest Author"
_order = "name"
name = fields.Char(required=True, translate=False)


@@ -0,0 +1,93 @@
# Copyright (C) 2025 Cetmix OÜ
# License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl).
import re
from odoo import _, api, fields, models
from odoo.exceptions import ValidationError
class CxTowerYamlManifestTemplate(models.Model):
"""Pre-defined YAML manifest template storing common metadata
such as authors, website, license, and currency for reuse
during YAML exports."""
_name = "cx.tower.yaml.manifest.tmpl"
_description = "YAML Manifest Template"
_order = "name"
name = fields.Char(
required=True,
help="Name of the manifest template.",
)
website = fields.Char(help="Website URL for the manifest.")
author_ids = fields.Many2many(
"cx.tower.yaml.manifest.author",
string="Authors",
help="List of author names to include in the YAML manifest.",
)
license = fields.Selection(
selection=lambda self: self._selection_license(),
help="License used for the code snippet.",
)
license_text = fields.Text(
help="Custom license text when license type is Custom.",
)
currency = fields.Selection(
selection=lambda self: self._selection_currency(),
help="Currency for pricing information.",
)
version = fields.Char(
help="Version in Major.Minor.Patch format, e.g. 1.0.0",
default="1.0.0",
)
file_prefix = fields.Char(
string="File prefix",
help="Add prefix to the exported YAML file name when this template is selected",
)
@api.model
def _selection_license(self):
"""Return available license options for manifest."""
return [
("agpl-3", "AGPL-3"),
("lgpl-3", "LGPL-3"),
("mit", "MIT"),
("custom", _("Custom")),
]
@api.model
def _selection_currency(self):
"""Return available currency options for manifest pricing."""
return [
("EUR", _("Euro")),
("USD", _("US Dollar")),
]
@api.constrains("license", "license_text")
def _check_license_text_for_custom(self):
"""Ensure that custom license text is provided when license is 'custom'."""
for rec in self:
if rec.license == "custom" and not (rec.license_text or "").strip():
raise ValidationError(
_("Provide Custom License Text when License is set to 'Custom'.")
)
@api.constrains("version")
def _check_version_format(self):
"""Ensure the template version follows the x.y.z semantic format.
The version must consist of three non-negative integers (major, minor, patch)
separated by dots, for example "1.2.3". Raises a ValidationError otherwise.
"""
semver = re.compile(r"^\d+\.\d+\.\d+$")
for rec in self:
if rec.version and not semver.match(rec.version):
raise ValidationError(
_("Version must be in the Major.Minor.Patch format, e.g. 1.2.3")
)
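The constraint's regex accepts exactly three dot-separated runs of digits, nothing more. A quick standalone check of which candidate strings pass (note that leading zeros are accepted by this pattern):

```python
import re

# Same pattern as in _check_version_format: Major.Minor.Patch only.
semver = re.compile(r"^\d+\.\d+\.\d+$")

candidates = ["1.0.0", "1.2", "16.0.3.2.0", "01.2.3", "v1.2.3"]
accepted = [v for v in candidates if semver.match(v)]
# "1.2" (too few parts), "16.0.3.2.0" (too many) and "v1.2.3" are rejected.
```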


@@ -0,0 +1,577 @@
# Copyright (C) 2024 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
import logging
import yaml
from odoo import _, api, fields, models
from odoo.exceptions import AccessError, ValidationError
_logger = logging.getLogger(__name__)
class CustomDumper(yaml.Dumper):
"""Custom dumper to ensures code
is properly dumped in YAML
"""
def represent_scalar(self, tag, value, style=None):
if isinstance(value, str) and "\n" in value:
style = "|"
return super().represent_scalar(tag, value, style)
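The practical effect of this dumper is that multiline strings (such as shell scripts stored in command code) are emitted as readable literal blocks rather than quoted one-liners. A standalone demonstration of the same override:

```python
import yaml  # PyYAML

class CustomDumper(yaml.Dumper):
    """Force literal block style ('|') for any multiline string scalar."""
    def represent_scalar(self, tag, value, style=None):
        if isinstance(value, str) and "\n" in value:
            style = "|"
        return super().represent_scalar(tag, value, style)

dumped = yaml.dump(
    {"code": "echo one\necho two\n"},
    Dumper=CustomDumper,
    default_flow_style=False,
    sort_keys=False,
)
# The multiline value comes out as:
# code: |
#   echo one
#   echo two
```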
class YamlExportCollector:
"""
Collector for YAML export.
Tracks unique records by their (model_name, reference) tuple to avoid duplicates.
"""
def __init__(self):
"""
Initialize the collector.
"""
self.added_references = set()
def add(self, key):
"""
Add a record to the collector if its reference is unique.
:param key: tuple, key of the record
"""
if key and key not in self.added_references:
self.added_references.add(key)
def is_added(self, key):
"""
Check by (model, reference) tuple.
:param key: tuple, key of the record
:return: bool
"""
return key in self.added_references
class CxTowerYamlMixin(models.AbstractModel):
"""Used to implement YAML rendering functions.
Inherit in your model in case you want to YAML instance of the records.
"""
_name = "cx.tower.yaml.mixin"
_description = "Cetmix Tower YAML rendering mixin"
# File format version in order to track compatibility
CETMIX_TOWER_YAML_VERSION = 1
# TO_YAML_* used to convert from Odoo field values to YAML
TO_YAML_ACCESS_LEVEL = {"1": "user", "2": "manager", "3": "root"}
# TO_TOWER_* used to convert from YAML field values to Tower ones
TO_TOWER_ACCESS_LEVEL = {"user": "1", "manager": "2", "root": "3"}
yaml_code = fields.Text(
compute="_compute_yaml_code",
inverse="_inverse_yaml_code",
groups="cetmix_tower_yaml.group_export,cetmix_tower_yaml.group_import",
)
def _compute_yaml_code(self):
"""Compute YAML code based on model record data"""
# This is used for the file name.
# E.g. a cx.tower.command record will have the 'command_' prefix.
for record in self:
# We are reading field list for each record
# because list of fields can differ from record to record
record.yaml_code = self._convert_dict_to_yaml(
record._prepare_record_for_yaml()
)
def _inverse_yaml_code(self):
"""Compose record based on provided YAML"""
for record in self:
if record.yaml_code:
record_yaml_dict = yaml.safe_load(record.yaml_code)
record_vals = record._post_process_yaml_dict_values(record_yaml_dict)
record.update(record_vals)
@api.constrains("yaml_code")
def _check_yaml_code_write_access(self):
"""
Check if user has access to create records from YAML.
This is checked only when user already has access to export YAML.
Otherwise, the field is not accessible due to security group.
"""
if self.env.user.has_group("cetmix_tower_yaml.group_export") and (
not self.env.user.has_group("cetmix_tower_yaml.group_import")
and not self.env.user._is_superuser()
):
raise AccessError(_("You are not allowed to create records from YAML"))
@api.model_create_multi
def create(self, vals_list):
# Handle validation error when field values are not valid
try:
return super().create(vals_list)
except ValueError as e:
raise ValidationError(str(e)) from e
def write(self, vals):
# Handle validation error when field values are not valid
try:
return super().write(vals)
except ValueError as e:
raise ValidationError(str(e)) from e
def action_open_yaml_export_wizard(self):
"""Open YAML export wizard"""
return {
"type": "ir.actions.act_window",
"res_model": "cx.tower.yaml.export.wiz",
"view_mode": "form",
"target": "new",
}
def _convert_dict_to_yaml(self, values):
"""Converts Python dictionary to YAML string.
This is a helper function that is designed to be used
by any models that need to convert a dictionary to YAML.
Args:
values (Dict): Dictionary containing data
to be converted to YAML format
Returns:
Text: YAML string
Raises:
ValidationError: If values is not a dictionary
or YAML conversion fails
"""
if not isinstance(values, dict):
raise ValidationError(_("Values must be a dictionary"))
try:
yaml_code = yaml.dump(
values,
Dumper=CustomDumper,
default_flow_style=False,
sort_keys=False,
)
return yaml_code
except (yaml.YAMLError, UnicodeEncodeError) as e:
raise ValidationError(
_(
"Failed to convert dictionary" " to YAML: %(error)s",
error=str(e),
)
) from e
def _prepare_record_for_yaml(self):
"""Reads and processes current record before converting it to YAML
Returns:
dict: values ready for YAML conversion
"""
self.ensure_one()
yaml_keys = self._get_fields_for_yaml()
record_dict = self.read(fields=yaml_keys)[0]
return self._post_process_record_values(record_dict)
def _get_fields_for_yaml(self):
"""Get ist of field to be present in YAML
Set 'no_yaml_service_fields' context key to skip
service fields creation (cetmix_tower_yaml_version, cetmix_tower_model)
Returns:
list(): list of fields to be used as YAML keys
"""
return ["reference"]
def _get_force_x2m_resolve_models(self):
"""List of models that will always try to be resolved
when referenced in x2m related fields.
This is useful for models that should always use existing records
instead of creating new ones when referenced in x2m related fields.
Such as variables or tags.
Returns:
List: list of models that will always try to be resolved
"""
return [
"cx.tower.variable",
"cx.tower.variable.option",
"cx.tower.tag",
"cx.tower.os",
"cx.tower.key",
]
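This list is consulted later in `_update_or_create_related_record`: shared models (tags, variables, OS, keys) are always matched by reference, even when the import context requests forced creation of related records. An illustrative mirror of that guard:

```python
# Illustrative mirror of the resolution guard used during import.
FORCE_RESOLVE_MODELS = {
    "cx.tower.variable",
    "cx.tower.variable.option",
    "cx.tower.tag",
    "cx.tower.os",
    "cx.tower.key",
}

def resolve_existing(model_name, force_create_related_record):
    """True when an existing record should be looked up by reference."""
    return model_name in FORCE_RESOLVE_MODELS or not force_create_related_record

checks = (
    resolve_existing("cx.tower.tag", True),       # shared model: always resolve
    resolve_existing("cx.tower.command", True),   # forced creation wins
    resolve_existing("cx.tower.command", False),  # default: try existing first
)
```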
def _post_process_record_values(self, values):
"""Post process record values
before converting them to YAML
Args:
values (dict): values returned by 'read' method
Context:
explode_related_record: if set will return entire record dictionary
not just a reference
remove_empty_values: if set will remove empty values from the record
Returns:
dict(): processed values
"""
collector = self._context.get("yaml_collector")
ref = values.get("reference")
collector_key = (self._name, ref) if ref else None
if collector and collector_key and collector.is_added(collector_key):
return {"reference": ref}
# We don't need id because we are not using it
values.pop("id", None)
# Add YAML format version and model
if not self._context.get("no_yaml_service_fields"):
model_name = self._name.replace("cx.tower.", "").replace(".", "_")
model_values = {
"cetmix_tower_model": model_name,
}
else:
model_values = {}
# Parse access level
access_level = values.pop("access_level", None)
if access_level:
model_values.update(
{"access_level": self.TO_YAML_ACCESS_LEVEL[access_level]}
)
values = {**model_values, **values}
# Copy values to avoid modifying the original values
new_values = values.copy()
# Check if we need to return a record dict or just a reference
# Use context value first, revert to the record setting if not defined
explode_related_record = self._context.get("explode_related_record")
# Check if we need to remove empty values
# Currently only x2m fields are supported
remove_empty_values = self._context.get("remove_empty_values")
# Post process m2o and x2m fields
for key, value in values.items():
# IMPORTANT: Odoo naming patterns must be followed for related fields.
# This is why we are checking for the field name ending here.
# Further checks for the field type are done
# in _process_relation_field_value()
if key.endswith("_id") or key.endswith("_ids"):
if not value and remove_empty_values:
del new_values[key]
else:
processed_value = self.with_context(
explode_related_record=explode_related_record
)._process_relation_field_value(key, value, record_mode=True)
new_values.update({key: processed_value})
if collector and collector_key:
collector.add(collector_key)
return new_values
def _post_process_yaml_dict_values(self, values):
"""Post process dictionary values generated from YAML code
Args:
values (dict): Dictionary generated from YAML
Returns:
dict(): Post-processed values
"""
# Remove model data because it is not a field
if "cetmix_tower_model" in values:
values.pop("cetmix_tower_model")
# Parse access level
if "access_level" in values:
values_access_level = values["access_level"]
access_level = self.TO_TOWER_ACCESS_LEVEL.get(values_access_level)
if access_level:
values.update({"access_level": access_level})
else:
raise ValidationError(
_(
"Wrong value for 'access_level' key: %(acv)s",
acv=values_access_level,
)
)
# Leave supported keys only
supported_keys = self._get_fields_for_yaml()
filtered_values = {k: v for k, v in values.items() if k in supported_keys}
# Post process m2o fields
for key, value in filtered_values.items():
# IMPORTANT: Odoo naming patterns must be followed for related fields.
# This is why we are checking for the field name ending here.
# Further checks for the field type are done
# in _process_relation_field_value()
if key.endswith("_id") or key.endswith("_ids"):
processed_value = self.with_context(
explode_related_record=True
)._process_relation_field_value(key, value, record_mode=False)
filtered_values.update({key: processed_value})
return filtered_values
def _process_relation_field_value(self, field, value, record_mode=False):
"""Post process One2many, Many2many or Many2one value
Args:
field (Char): Field the value belongs to
value (Char): Value to process
record_mode (Bool): If True process value as a record value
else process value as a YAML value
Context:
explode_related_record: if set will return entire record dictionary
not just a reference
Returns:
dict() or Char: record dictionary if explode_related_record else reference
"""
# Step 1: Return False if the value is not set or the field is not found
if not value:
return False
field_obj = self._fields.get(field)
if not field_obj:
return False
# Step 2: Return False if the field type doesn't match
# or comodel is not defined
field_type = field_obj.type
if (
field_type not in ["one2many", "many2many", "many2one"]
or not field_obj.comodel_name
):
return False
comodel = self.env[field_obj.comodel_name]
explode_related_record = self._context.get("explode_related_record")
# Step 3: process value based on the field type
if field_type == "many2one":
return self._process_m2o_value(
comodel, value, explode_related_record, record_mode
)
if field_type in ["one2many", "many2many"]:
return self._process_x2m_values(
comodel, field_type, value, explode_related_record, record_mode
)
# Step 4: fall back if field type is not supported
return False
def _process_m2o_value(
self, comodel, value, explode_related_record, record_mode=False
):
"""Post process many2one value
Args:
comodel (BaseClass): Model the value belongs to
value (Char): Value to process
explode_related_record (Bool): If True return entire record dict
instead of a reference
record_mode (Bool): If True process value as a record value
else process value as a YAML value
Returns:
dict() or Char: record dictionary if explode_related_record else reference
"""
# -- (Record -> YAML)
if record_mode:
# Retrieve the record based on the ID provided in the value
record = comodel.browse(value[0])
# If the context specifies to explode the related record,
# return its dictionary representation
if explode_related_record:
return (
record.with_context(
no_yaml_service_fields=True
)._prepare_record_for_yaml()
if record
else False
)
# Otherwise, return just the reference (or False if record does not exist)
return record.reference if record else False
# -- (YAML -> Record)
# Step 1: Process value in normal mode
record = False
# If the value is a string, it is treated as a reference
if isinstance(value, str):
reference = value
# If the value is a dictionary, extract the reference from it
elif isinstance(value, dict):
reference = value.get("reference")
record = self._update_or_create_related_record(
comodel, reference, value, create_immediately=True
)
else:
return False
# Step 2: Final fallback: attempt to retrieve the record by reference if set,
# return its ID or False
if not record and reference:
record = comodel.get_by_reference(reference)
return record.id if record else False
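On import, a many2one value may appear in the YAML in either of the two shapes this method branches on: a bare reference string, or an exploded record dictionary carrying a `reference` key. Both forms parse as follows (field and reference names are illustrative):

```python
import yaml  # PyYAML

# Bare reference form: the value is just a reference string.
ref_form = yaml.safe_load("command_id: my_command")

# Exploded form: the value is a full record dict with a 'reference' key.
exploded_form = yaml.safe_load(
    "command_id:\n"
    "  reference: my_command\n"
    "  name: My Command\n"
)
```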
def _process_x2m_values(
self, comodel, field_type, values, explode_related_record, record_mode=False
):
"""Post process many2many value
Args:
comodel (BaseClass): Model the value belongs to
field_type (Char): Field type
values (list()): Values to process
explode_related_record (Bool): If True return entire record dict
instead of a reference
record_mode (Bool): If True process value as a record value
else process value as a YAML value
Returns:
dict() or Char: record dictionary if explode_related_record else reference
"""
# -- (Record -> YAML)
if record_mode:
record_list = []
for value in values:
# Retrieve the record based on the ID provided in the value
record = comodel.browse(value)
# If the context specifies to explode the related record,
# return its dictionary representation
if explode_related_record:
record_list.append(
record.with_context(
no_yaml_service_fields=True
)._prepare_record_for_yaml()
if record
else False
)
# Otherwise, return just the reference
# (or False if record does not exist)
else:
record_list.append(record.reference if record else False)
return record_list
# -- (YAML -> Record)
# Step 1: Process value in normal mode
record_ids = []
for value in values:
record = False
# If the value is a string, it is treated as a reference
if isinstance(value, str):
reference = value
# If the value is a dictionary, extract the reference from it
elif isinstance(value, dict):
reference = value.get("reference")
record = self._update_or_create_related_record(
comodel,
reference,
value,
create_immediately=field_type == "many2many",
)
# Step 2: Final fallback: attempt to retrieve the record by reference
# Return record ID or False if reference is not defined
if not record and reference:
record = comodel.get_by_reference(reference)
# Save record data
if record:
record_ids.append(
record if isinstance(record, tuple) else (4, record.id)
)
return record_ids
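The `record_ids` list collected above mixes standard Odoo x2many write commands: `(4, id)` links an existing record and `(0, 0, vals)` creates a new one. A plain-Python sketch of building such a command list:

```python
# Standard Odoo ORM x2many command tuples:
#   (4, id)       -> link an existing record by database id
#   (0, 0, vals)  -> create a new child record from vals
def build_x2m_commands(existing_ids, new_vals_list):
    link = [(4, rec_id) for rec_id in existing_ids]
    create = [(0, 0, vals) for vals in new_vals_list]
    return link + create

commands = build_x2m_commands([7, 9], [{"name": "new line"}])
```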
def _update_or_create_related_record(
self, model, reference, values, create_immediately=False
):
"""Update related record with provided values or create a new one
Args:
model (BaseModel): Related record model
reference (Char): Record reference
values (dict()): Values to update existing/create new record
create_immediately (Bool): If True create a new record immediately.
Used for Many2one fields.
Context:
force_create_related_record (Bool): If True, create a new record
even if reference is provided.
Returns:
record: Existing record or new record tuple
"""
# If reference is found, retrieve the corresponding record
if reference and (
model._name in self._get_force_x2m_resolve_models()
or not self._context.get("force_create_related_record")
):
record = model.get_by_reference(reference)
# If the record exists, update it with the values from the dictionary
if record:
# Remove reference from values to avoid possible consequences
values.pop("reference", None)
record.with_context(from_yaml=True).write(
record._post_process_yaml_dict_values(values)
)
# If the record does not exist, create a new one
else:
if create_immediately:
record = model.with_context(from_yaml=True).create(
model._post_process_yaml_dict_values(values)
)
else:
# Use "Create" service command tuple
record = (0, 0, model._post_process_yaml_dict_values(values))
# If there's no reference but value is a dict, create a new record
else:
if create_immediately:
# Only 'reference' provided, no other data: do not create,
# just log warning
if set(values.keys()) == {"reference"}:
_logger.warning(
"Attempted to import a record for model '%s' with reference "
"'%s', but only the 'reference' field was provided. "
"It is possible that this record has already been imported. "
"Creation will be skipped.",
model._name,
reference,
)
return False
record = model.with_context(from_yaml=True).create(
model._post_process_yaml_dict_values(values)
)
else:
# Use "Create" service command tuple
record = (0, 0, model._post_process_yaml_dict_values(values))
# Return the record's ID if it exists, otherwise return False
return record or False


@@ -0,0 +1,3 @@
[build-system]
requires = ["whool"]
build-backend = "whool.buildapi"


@@ -0,0 +1 @@
Please refer to the [official documentation](https://cetmix.com/tower) for detailed configuration instructions.


@@ -0,0 +1,3 @@
This module implements YAML format data import/export for [Cetmix Tower](https://cetmix.com/tower).
Please refer to the [official documentation](https://cetmix.com/tower) for detailed information.


@@ -0,0 +1,69 @@
## 16.0.2.0.1 (2025-10-29)
- Features: Improve the way secrets are listed in the YAML import widget. (5010)
## 16.0.1.4.2 (2025-10-06)
- Bugfixes: Add the missing 'create' function decorator (4980)
## 16.0.1.4.1 (2025-08-26)
- Bugfixes: Make selection values lowercase to simplify their management. (4896)
## 16.0.1.3.0 (2025-07-30)
- Features: Optional behaviour when file uploaded by command already exists on the server. (4740)
## 16.0.1.1.4 (2025-07-08)
- Bugfixes: Fix missing model names in YAML exports when exporting multiple commands with flight plans (4820)
## 16.0.1.1.3 (2025-07-07)
- Bugfixes: Import servers with `Password` ssh authentication mode (4812)
## 16.0.1.1.1 (2025-06-23)
- Features: YAML code optimisation (4728)
## 16.0.1.1.0 (2025-06-20)
- Features: Export/import scheduled tasks to/from YAML. (4650)
## 16.0.1.0.5 (2025-05-21)
- Features: Export/import secret values related to Server. (4696)
## 16.0.1.0.4 (2025-05-16)
- Features: Export/import servers and files to/from YAML. (4670)
## 16.0.1.0.3 (2025-05-09)
- Bugfixes: Non-critical issues and performance improvements. (4663)
## 16.0.1.0.2 (2025-04-30)
- Features: User groups are visible without developer mode. (4642)
## 16.0.1.0.1 (2025-04-21)
- Features: Export additional fields for shortcuts, variables and options.
Add action menu to export keys/secrets. (4602)
## 16.0.1.0.0
Release for Odoo 16.0


@@ -0,0 +1 @@
Please refer to the [official documentation](https://cetmix.com/tower) for detailed usage instructions.


@@ -0,0 +1,30 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record id="ir_module_category_tower_yaml_export" model="ir.module.category">
<field name="parent_id" ref="cetmix_tower_server.ir_module_category_tower" />
<field name="name">YAML Export</field>
</record>
<record id="ir_module_category_tower_yaml_import" model="ir.module.category">
<field name="parent_id" ref="cetmix_tower_server.ir_module_category_tower" />
<field name="name">YAML Import</field>
</record>
<record id="group_export" model="res.groups">
<field name="name">Allow</field>
<field name="category_id" ref="ir_module_category_tower_yaml_export" />
<field name="comment">
Export data to YAML.
</field>
</record>
<record id="group_import" model="res.groups">
<field name="name">Allow</field>
<field name="category_id" ref="ir_module_category_tower_yaml_import" />
<field name="comment">
Import data from YAML.
</field>
</record>
</odoo>


@@ -0,0 +1,39 @@
<?xml version="1.0" encoding="UTF-8" ?>
<odoo noupdate="1">
<!-- cx.tower.yaml.export.wiz -->
<record id="rule_cx_tower_yaml_export_wiz_creator_only" model="ir.rule">
<field name="name">Creator only</field>
<field name="model_id" ref="model_cx_tower_yaml_export_wiz" />
<field name="global" eval="True" />
<field name="domain_force">[('create_uid', '=', user.id)]</field>
</record>
<!-- cx.tower.yaml.export.wiz.download -->
<record
id="rule_cx_tower_yaml_export_wiz_download_creator_only"
model="ir.rule"
>
<field name="name">Creator only</field>
<field name="model_id" ref="model_cx_tower_yaml_export_wiz_download" />
<field name="global" eval="True" />
<field name="domain_force">[('create_uid', '=', user.id)]</field>
</record>
<!-- cx.tower.yaml.import.wiz -->
<record id="rule_cx_tower_yaml_import_wiz_creator_only" model="ir.rule">
<field name="name">Creator only</field>
<field name="model_id" ref="model_cx_tower_yaml_import_wiz" />
<field name="global" eval="True" />
<field name="domain_force">[('create_uid', '=', user.id)]</field>
</record>
<!-- cx.tower.yaml.import.wiz.upload -->
<record id="rule_cx_tower_yaml_import_wiz_upload_creator_only" model="ir.rule">
<field name="name">Creator only</field>
<field name="model_id" ref="model_cx_tower_yaml_import_wiz_upload" />
<field name="global" eval="True" />
<field name="domain_force">[('create_uid', '=', user.id)]</field>
</record>
</odoo>


@@ -0,0 +1,9 @@
id,name,model_id:id,group_id:id,perm_read,perm_write,perm_create,perm_unlink
access_yaml_export_wizard,Export YAML,model_cx_tower_yaml_export_wiz,group_export,1,1,1,1
access_yaml_export_wizard_download,Export YAML File,model_cx_tower_yaml_export_wiz_download,group_export,1,1,1,1
access_yaml_import_wizard_upload,Import YAML,model_cx_tower_yaml_import_wiz_upload,group_import,1,1,1,1
access_yaml_import_wizard,Import YAML,model_cx_tower_yaml_import_wiz,group_import,1,1,1,1
access_manifest_tmpl_read_export,Manifest tmpl read (export),model_cx_tower_yaml_manifest_tmpl,cetmix_tower_yaml.group_export,1,0,0,0
access_manifest_tmpl_admin,Manifest tmpl admin,model_cx_tower_yaml_manifest_tmpl,cetmix_tower_server.group_root,1,1,1,1
access_manifest_author_read_export,Manifest author read (export),model_cx_tower_yaml_manifest_author,cetmix_tower_yaml.group_export,1,0,0,0
access_manifest_author_admin,Manifest author admin,model_cx_tower_yaml_manifest_author,cetmix_tower_server.group_root,1,1,1,1

Binary file not shown.



@@ -0,0 +1,534 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta name="generator" content="Docutils: https://docutils.sourceforge.io/" />
<title>Cetmix Tower YAML</title>
<style type="text/css">
/*
:Author: David Goodger (goodger@python.org)
:Id: $Id: html4css1.css 9511 2024-01-13 09:50:07Z milde $
:Copyright: This stylesheet has been placed in the public domain.
Default cascading style sheet for the HTML output of Docutils.
Despite the name, some widely supported CSS2 features are used.
See https://docutils.sourceforge.io/docs/howto/html-stylesheets.html for how to
customize this style sheet.
*/
/* used to remove borders from tables and images */
.borderless, table.borderless td, table.borderless th {
border: 0 }
table.borderless td, table.borderless th {
/* Override padding for "table.docutils td" with "! important".
The right padding separates the table cells. */
padding: 0 0.5em 0 0 ! important }
.first {
/* Override more specific margin styles with "! important". */
margin-top: 0 ! important }
.last, .with-subtitle {
margin-bottom: 0 ! important }
.hidden {
display: none }
.subscript {
vertical-align: sub;
font-size: smaller }
.superscript {
vertical-align: super;
font-size: smaller }
a.toc-backref {
text-decoration: none ;
color: black }
blockquote.epigraph {
margin: 2em 5em ; }
dl.docutils dd {
margin-bottom: 0.5em }
object[type="image/svg+xml"], object[type="application/x-shockwave-flash"] {
overflow: hidden;
}
/* Uncomment (and remove this text!) to get bold-faced definition list terms
dl.docutils dt {
font-weight: bold }
*/
div.abstract {
margin: 2em 5em }
div.abstract p.topic-title {
font-weight: bold ;
text-align: center }
div.admonition, div.attention, div.caution, div.danger, div.error,
div.hint, div.important, div.note, div.tip, div.warning {
margin: 2em ;
border: medium outset ;
padding: 1em }
div.admonition p.admonition-title, div.hint p.admonition-title,
div.important p.admonition-title, div.note p.admonition-title,
div.tip p.admonition-title {
font-weight: bold ;
font-family: sans-serif }
div.attention p.admonition-title, div.caution p.admonition-title,
div.danger p.admonition-title, div.error p.admonition-title,
div.warning p.admonition-title, .code .error {
color: red ;
font-weight: bold ;
font-family: sans-serif }
/* Uncomment (and remove this text!) to get reduced vertical space in
compound paragraphs.
div.compound .compound-first, div.compound .compound-middle {
margin-bottom: 0.5em }
div.compound .compound-last, div.compound .compound-middle {
margin-top: 0.5em }
*/
div.dedication {
margin: 2em 5em ;
text-align: center ;
font-style: italic }
div.dedication p.topic-title {
font-weight: bold ;
font-style: normal }
div.figure {
margin-left: 2em ;
margin-right: 2em }
div.footer, div.header {
clear: both;
font-size: smaller }
div.line-block {
display: block ;
margin-top: 1em ;
margin-bottom: 1em }
div.line-block div.line-block {
margin-top: 0 ;
margin-bottom: 0 ;
margin-left: 1.5em }
div.sidebar {
margin: 0 0 0.5em 1em ;
border: medium outset ;
padding: 1em ;
background-color: #ffffee ;
width: 40% ;
float: right ;
clear: right }
div.sidebar p.rubric {
font-family: sans-serif ;
font-size: medium }
div.system-messages {
margin: 5em }
div.system-messages h1 {
color: red }
div.system-message {
border: medium outset ;
padding: 1em }
div.system-message p.system-message-title {
color: red ;
font-weight: bold }
div.topic {
margin: 2em }
h1.section-subtitle, h2.section-subtitle, h3.section-subtitle,
h4.section-subtitle, h5.section-subtitle, h6.section-subtitle {
margin-top: 0.4em }
h1.title {
text-align: center }
h2.subtitle {
text-align: center }
hr.docutils {
width: 75% }
img.align-left, .figure.align-left, object.align-left, table.align-left {
clear: left ;
float: left ;
margin-right: 1em }
img.align-right, .figure.align-right, object.align-right, table.align-right {
clear: right ;
float: right ;
margin-left: 1em }
img.align-center, .figure.align-center, object.align-center {
display: block;
margin-left: auto;
margin-right: auto;
}
table.align-center {
margin-left: auto;
margin-right: auto;
}
.align-left {
text-align: left }
.align-center {
clear: both ;
text-align: center }
.align-right {
text-align: right }
/* reset inner alignment in figures */
div.align-right {
text-align: inherit }
/* div.align-center * { */
/* text-align: left } */
.align-top {
vertical-align: top }
.align-middle {
vertical-align: middle }
.align-bottom {
vertical-align: bottom }
ol.simple, ul.simple {
margin-bottom: 1em }
ol.arabic {
list-style: decimal }
ol.loweralpha {
list-style: lower-alpha }
ol.upperalpha {
list-style: upper-alpha }
ol.lowerroman {
list-style: lower-roman }
ol.upperroman {
list-style: upper-roman }
p.attribution {
text-align: right ;
margin-left: 50% }
p.caption {
font-style: italic }
p.credits {
font-style: italic ;
font-size: smaller }
p.label {
white-space: nowrap }
p.rubric {
font-weight: bold ;
font-size: larger ;
color: maroon ;
text-align: center }
p.sidebar-title {
font-family: sans-serif ;
font-weight: bold ;
font-size: larger }
p.sidebar-subtitle {
font-family: sans-serif ;
font-weight: bold }
p.topic-title {
font-weight: bold }
pre.address {
margin-bottom: 0 ;
margin-top: 0 ;
font: inherit }
pre.literal-block, pre.doctest-block, pre.math, pre.code {
margin-left: 2em ;
margin-right: 2em }
pre.code .ln { color: gray; } /* line numbers */
pre.code, code { background-color: #eeeeee }
pre.code .comment, code .comment { color: #5C6576 }
pre.code .keyword, code .keyword { color: #3B0D06; font-weight: bold }
pre.code .literal.string, code .literal.string { color: #0C5404 }
pre.code .name.builtin, code .name.builtin { color: #352B84 }
pre.code .deleted, code .deleted { background-color: #DEB0A1}
pre.code .inserted, code .inserted { background-color: #A3D289}
span.classifier {
font-family: sans-serif ;
font-style: oblique }
span.classifier-delimiter {
font-family: sans-serif ;
font-weight: bold }
span.interpreted {
font-family: sans-serif }
span.option {
white-space: nowrap }
span.pre {
white-space: pre }
span.problematic, pre.problematic {
color: red }
span.section-subtitle {
/* font-size relative to parent (h1..h6 element) */
font-size: 80% }
table.citation {
border-left: solid 1px gray;
margin-left: 1px }
table.docinfo {
margin: 2em 4em }
table.docutils {
margin-top: 0.5em ;
margin-bottom: 0.5em }
table.footnote {
border-left: solid 1px black;
margin-left: 1px }
table.docutils td, table.docutils th,
table.docinfo td, table.docinfo th {
padding-left: 0.5em ;
padding-right: 0.5em ;
vertical-align: top }
table.docutils th.field-name, table.docinfo th.docinfo-name {
font-weight: bold ;
text-align: left ;
white-space: nowrap ;
padding-left: 0 }
/* "booktabs" style (no vertical lines) */
table.docutils.booktabs {
border: 0px;
border-top: 2px solid;
border-bottom: 2px solid;
border-collapse: collapse;
}
table.docutils.booktabs * {
border: 0px;
}
table.docutils.booktabs th {
border-bottom: thin solid;
text-align: left;
}
h1 tt.docutils, h2 tt.docutils, h3 tt.docutils,
h4 tt.docutils, h5 tt.docutils, h6 tt.docutils {
font-size: 100% }
ul.auto-toc {
list-style-type: none }
</style>
</head>
<body>
<div class="document" id="cetmix-tower-yaml">
<h1 class="title">Cetmix Tower YAML</h1>
<!-- !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:96e8f3f1df3ab25b952a9534d0914149740cc036b62efe2c7795f9d2d9636177
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! -->
<p><a class="reference external image-reference" href="https://odoo-community.org/page/development-status"><img alt="Beta" src="https://img.shields.io/badge/maturity-Beta-yellow.png" /></a> <a class="reference external image-reference" href="http://www.gnu.org/licenses/agpl-3.0-standalone.html"><img alt="License: AGPL-3" src="https://img.shields.io/badge/license-AGPL--3-blue.png" /></a> <a class="reference external image-reference" href="https://github.com/cetmix/cetmix-tower/tree/16.0/cetmix_tower_yaml"><img alt="cetmix/cetmix-tower" src="https://img.shields.io/badge/github-cetmix%2Fcetmix--tower-lightgray.png?logo=github" /></a></p>
<p>This module implements YAML format data import/export for <a class="reference external" href="https://cetmix.com/tower">Cetmix
Tower</a>.</p>
<p>Please refer to the <a class="reference external" href="https://cetmix.com/tower">official
documentation</a> for detailed information.</p>
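<p>For illustration, an excerpt of a command exported to YAML (taken from this
module's test suite; the field values are sample data) looks like this:</p>
<pre class="literal-block">
cetmix_tower_model: command
access_level: manager
reference: test_yaml_in_tests
name: Test YAML
action: ssh_command
allow_parallel_run: false
</pre>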
<p><strong>Table of contents</strong></p>
<div class="contents local topic" id="contents">
<ul class="simple">
<li><a class="reference internal" href="#configuration" id="toc-entry-1">Configuration</a></li>
<li><a class="reference internal" href="#usage" id="toc-entry-2">Usage</a></li>
<li><a class="reference internal" href="#changelog" id="toc-entry-3">Changelog</a><ul>
<li><a class="reference internal" href="#section-1" id="toc-entry-4">16.0.2.0.1 (2025-10-29)</a></li>
<li><a class="reference internal" href="#section-2" id="toc-entry-5">16.0.1.4.2 (2025-10-06)</a></li>
<li><a class="reference internal" href="#section-3" id="toc-entry-6">16.0.1.4.1 (2025-08-26)</a></li>
<li><a class="reference internal" href="#section-4" id="toc-entry-7">16.0.1.3.0 (2025-07-30)</a></li>
<li><a class="reference internal" href="#section-5" id="toc-entry-8">16.0.1.1.4 (2025-07-08)</a></li>
<li><a class="reference internal" href="#section-6" id="toc-entry-9">16.0.1.1.3 (2025-07-07)</a></li>
<li><a class="reference internal" href="#section-7" id="toc-entry-10">16.0.1.1.1 (2025-06-23)</a></li>
<li><a class="reference internal" href="#section-8" id="toc-entry-11">16.0.1.1.0 (2025-06-20)</a></li>
<li><a class="reference internal" href="#section-9" id="toc-entry-12">16.0.1.0.5 (2025-05-21)</a></li>
<li><a class="reference internal" href="#section-10" id="toc-entry-13">16.0.1.0.4 (2025-05-16)</a></li>
<li><a class="reference internal" href="#section-11" id="toc-entry-14">16.0.1.0.3 (2025-05-09)</a></li>
<li><a class="reference internal" href="#section-12" id="toc-entry-15">16.0.1.0.2 (2025-04-30)</a></li>
<li><a class="reference internal" href="#section-13" id="toc-entry-16">16.0.1.0.1 (2025-04-21)</a></li>
<li><a class="reference internal" href="#section-14" id="toc-entry-17">16.0.1.0.0</a></li>
</ul>
</li>
<li><a class="reference internal" href="#bug-tracker" id="toc-entry-18">Bug Tracker</a></li>
<li><a class="reference internal" href="#credits" id="toc-entry-19">Credits</a><ul>
<li><a class="reference internal" href="#authors" id="toc-entry-20">Authors</a></li>
<li><a class="reference internal" href="#maintainers" id="toc-entry-21">Maintainers</a></li>
</ul>
</li>
</ul>
</div>
<div class="section" id="configuration">
<h1><a class="toc-backref" href="#toc-entry-1">Configuration</a></h1>
<p>Please refer to the <a class="reference external" href="https://cetmix.com/tower">official
documentation</a> for detailed configuration
instructions.</p>
</div>
<div class="section" id="usage">
<h1><a class="toc-backref" href="#toc-entry-2">Usage</a></h1>
<p>Please refer to the <a class="reference external" href="https://cetmix.com/tower">official
documentation</a> for detailed usage
instructions.</p>
</div>
<div class="section" id="changelog">
<h1><a class="toc-backref" href="#toc-entry-3">Changelog</a></h1>
<div class="section" id="section-1">
<h2><a class="toc-backref" href="#toc-entry-4">16.0.2.0.1 (2025-10-29)</a></h2>
<ul class="simple">
<li>Features: Improve the way secrets are listed in the YAML import
widget. (5010)</li>
</ul>
</div>
<div class="section" id="section-2">
<h2><a class="toc-backref" href="#toc-entry-5">16.0.1.4.2 (2025-10-06)</a></h2>
<ul class="simple">
<li>Bugfixes: Add the missing create function decorator (4980)</li>
</ul>
</div>
<div class="section" id="section-3">
<h2><a class="toc-backref" href="#toc-entry-6">16.0.1.4.1 (2025-08-26)</a></h2>
<ul class="simple">
<li>Bugfixes: Make selection values lowercase to simplify their
management. (4896)</li>
</ul>
</div>
<div class="section" id="section-4">
<h2><a class="toc-backref" href="#toc-entry-7">16.0.1.3.0 (2025-07-30)</a></h2>
<ul class="simple">
<li>Features: Optional behaviour when file uploaded by command already
exists on the server. (4740)</li>
</ul>
</div>
<div class="section" id="section-5">
<h2><a class="toc-backref" href="#toc-entry-8">16.0.1.1.4 (2025-07-08)</a></h2>
<ul class="simple">
<li>Bugfixes: Fix missing model names in YAML exports when exporting
multiple commands with flight plans (4820)</li>
</ul>
</div>
<div class="section" id="section-6">
<h2><a class="toc-backref" href="#toc-entry-9">16.0.1.1.3 (2025-07-07)</a></h2>
<ul class="simple">
<li>Bugfixes: Import servers with <tt class="docutils literal">Password</tt> ssh authentication mode
(4812)</li>
</ul>
</div>
<div class="section" id="section-7">
<h2><a class="toc-backref" href="#toc-entry-10">16.0.1.1.1 (2025-06-23)</a></h2>
<ul class="simple">
<li>Features: YAML code optimisation (4728)</li>
</ul>
</div>
<div class="section" id="section-8">
<h2><a class="toc-backref" href="#toc-entry-11">16.0.1.1.0 (2025-06-20)</a></h2>
<ul class="simple">
<li>Features: Export/import scheduled tasks to/from YAML. (4650)</li>
</ul>
</div>
<div class="section" id="section-9">
<h2><a class="toc-backref" href="#toc-entry-12">16.0.1.0.5 (2025-05-21)</a></h2>
<ul class="simple">
<li>Features: Export/import secret values related to Server. (4696)</li>
</ul>
</div>
<div class="section" id="section-10">
<h2><a class="toc-backref" href="#toc-entry-13">16.0.1.0.4 (2025-05-16)</a></h2>
<ul class="simple">
<li>Features: Export/import servers and files to/from YAML. (4670)</li>
</ul>
</div>
<div class="section" id="section-11">
<h2><a class="toc-backref" href="#toc-entry-14">16.0.1.0.3 (2025-05-09)</a></h2>
<ul class="simple">
<li>Bugfixes: Non-critical issues and performance improvements. (4663)</li>
</ul>
</div>
<div class="section" id="section-12">
<h2><a class="toc-backref" href="#toc-entry-15">16.0.1.0.2 (2025-04-30)</a></h2>
<ul class="simple">
<li>Features: User groups are visible without developer mode. (4642)</li>
</ul>
</div>
<div class="section" id="section-13">
<h2><a class="toc-backref" href="#toc-entry-16">16.0.1.0.1 (2025-04-21)</a></h2>
<ul class="simple">
<li>Features: Export additional fields for shortcuts, variables and
options. Add action menu to export keys/secrets. (4602)</li>
</ul>
</div>
<div class="section" id="section-14">
<h2><a class="toc-backref" href="#toc-entry-17">16.0.1.0.0</a></h2>
<p>Release for Odoo 16.0</p>
</div>
</div>
<div class="section" id="bug-tracker">
<h1><a class="toc-backref" href="#toc-entry-18">Bug Tracker</a></h1>
<p>Bugs are tracked on <a class="reference external" href="https://github.com/cetmix/cetmix-tower/issues">GitHub Issues</a>.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us smash it by providing detailed and welcome
<a class="reference external" href="https://github.com/cetmix/cetmix-tower/issues/new?body=module:%20cetmix_tower_yaml%0Aversion:%2016.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**">feedback</a>.</p>
<p>Do not contact contributors directly about support or help with technical issues.</p>
</div>
<div class="section" id="credits">
<h1><a class="toc-backref" href="#toc-entry-19">Credits</a></h1>
<div class="section" id="authors">
<h2><a class="toc-backref" href="#toc-entry-20">Authors</a></h2>
<ul class="simple">
<li>Cetmix</li>
</ul>
</div>
<div class="section" id="maintainers">
<h2><a class="toc-backref" href="#toc-entry-21">Maintainers</a></h2>
<p>This module is part of the <a class="reference external" href="https://github.com/cetmix/cetmix-tower/tree/16.0/cetmix_tower_yaml">cetmix/cetmix-tower</a> project on GitHub.</p>
<p>You are welcome to contribute.</p>
</div>
</div>
</div>
</body>
</html>


@@ -0,0 +1,8 @@
from . import test_command
from . import test_tower_yaml_mixin
from . import test_file_template
from . import test_plan
from . import test_yaml_export_wizard
from . import test_yaml_import_wizard
from . import test_server_log
from . import test_server_yaml


@@ -0,0 +1,347 @@
# Copyright (C) 2024 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
import yaml
from odoo.exceptions import ValidationError
from odoo.tests import TransactionCase
class TestTowerCommand(TransactionCase):
@classmethod
def setUpClass(cls, *args, **kwargs):
super().setUpClass(*args, **kwargs)
cls.Command = cls.env["cx.tower.command"]
# Expected YAML content of the test command
cls.command_test_yaml = """cetmix_tower_model: command
access_level: manager
reference: test_yaml_in_tests
name: Test YAML
action: ssh_command
allow_parallel_run: false
note: |-
Test YAML command conversion.
Ensure all fields are rendered properly.
os_ids: false
tag_ids: false
path: false
file_template_id: false
flight_plan_id: false
code: |-
cd /home/{{ tower.server.ssh_username }} \\
&& ls -lha
server_status: false
variable_ids: false
secret_ids: false
no_split_for_sudo: false
if_file_exists: skip
disconnect_file: false
"""
# YAML content translated into Python dict
cls.command_test_yaml_dict = yaml.safe_load(cls.command_test_yaml)
def test_yaml_from_command(self):
"""Test if YAML is generated properly from a command"""
# -- 0 --
# Create test command
command_test = self.Command.create(
{
"name": "Test YAML",
"reference": "test_yaml_in_tests",
"action": "ssh_command",
"code": """cd /home/{{ tower.server.ssh_username }} \\
&& ls -lha""",
"note": """Test YAML command conversion.
Ensure all fields are rendered properly.""",
}
)
# -- 1 --
# Check if the YAML generated from the command matches
# the expected YAML template
self.assertEqual(
command_test.yaml_code,
self.command_test_yaml,
"YAML generated from command doesn't match template file one",
)
# -- 2 --
# Check if YAML key values match Cetmix Tower ones
self.assertEqual(
command_test.access_level,
self.Command.TO_TOWER_ACCESS_LEVEL[
self.command_test_yaml_dict["access_level"]
],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command_test.action,
self.command_test_yaml_dict["action"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command_test.allow_parallel_run,
self.command_test_yaml_dict["allow_parallel_run"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command_test.code,
self.command_test_yaml_dict["code"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command_test.name,
self.command_test_yaml_dict["name"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command_test.note,
self.command_test_yaml_dict["note"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command_test.path,
self.command_test_yaml_dict["path"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command_test.reference,
self.command_test_yaml_dict["reference"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command_test.if_file_exists,
self.command_test_yaml_dict["if_file_exists"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command_test.disconnect_file,
self.command_test_yaml_dict["disconnect_file"],
"YAML value doesn't match Cetmix Tower one",
)
def test_command_from_yaml(self):
"""Test if a command is updated properly from YAML"""
def test_yaml(command):
"""Checks if yaml values are inserted correctly
Args:
command(cx.tower.command): _description_
"""
self.assertEqual(
command.access_level,
self.Command.TO_TOWER_ACCESS_LEVEL[
self.command_test_yaml_dict["access_level"]
],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command.action,
self.command_test_yaml_dict["action"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command.allow_parallel_run,
self.command_test_yaml_dict["allow_parallel_run"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command.code,
self.command_test_yaml_dict["code"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command.name,
self.command_test_yaml_dict["name"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command.note,
self.command_test_yaml_dict["note"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command.path,
self.command_test_yaml_dict["path"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command.reference,
self.command_test_yaml_dict["reference"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command.if_file_exists,
self.command_test_yaml_dict["if_file_exists"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
command.disconnect_file,
self.command_test_yaml_dict["disconnect_file"],
"YAML value doesn't match Cetmix Tower one",
)
# Create test command
command_test = self.Command.create(
{"name": "New Command", "action": "python_code"}
)
# -- 1 --
# Insert YAML into the command and
# check if YAML key values match Cetmix Tower ones
command_test.yaml_code = self.command_test_yaml
test_yaml(command_test)
# -- 2 --
# Insert some unsupported keys and ensure nothing bad happens
yaml_with_non_supported_keys = """access_level: manager
action: ssh_command
doge: wow
memes: much nice!
allow_parallel_run: false
cetmix_tower_model: command
code: |-
cd /home/{{ tower.server.ssh_username }} \\
&& ls -lha
file_template_id: false
flight_plan_id: false
name: Test YAML
note: |-
Test YAML command conversion.
Ensure all fields are rendered properly.
path: false
reference: test_yaml_in_tests
tag_ids: false
"""
command_test.yaml_code = yaml_with_non_supported_keys
test_yaml(command_test)
# -- 3 --
# Insert a non-existing selection field value and check that an exception is raised
yaml_with_non_supported_keys = """access_level: manager
action: non_existing_action
doge: wow
memes: much nice!
allow_parallel_run: false
cetmix_tower_model: command
code: |-
cd /home/{{ tower.server.ssh_username }} \\
&& ls -lha
file_template_id: false
flight_plan_id: false
name: Test YAML
note: |-
Test YAML command conversion.
Ensure all fields are rendered properly.
path: false
reference: test_yaml_in_tests
tag_ids: false
"""
with self.assertRaises(ValidationError) as e:
command_test.yaml_code = yaml_with_non_supported_keys
self.assertIn("non_existing_action", str(e.exception))
self.assertEqual(
str(e.exception),
"Wrong value for cx.tower.command.action: 'non_existing_action'",
"Exception message doesn't match",
)
def test_command_with_action_file_template(self):
"""Test command with 'File from template' action"""
yaml_with_reference = """cetmix_tower_model: command
access_level: manager
reference: such_much_test_command
name: Such Much Command
action: file_using_template
allow_parallel_run: false
note: Just a note
os_ids: false
tag_ids: false
path: false
file_template_id: my_custom_test_template
flight_plan_id: false
code: false
server_status: false
variable_ids: false
secret_ids: false
no_split_for_sudo: false
if_file_exists: skip
disconnect_file: false
"""
# Add file template
file_template = self.env["cx.tower.file.template"].create(
{
"name": "Such much demo",
"reference": "my_custom_test_template",
"file_name": "much_logs.txt",
"server_dir": "/var/log/my/files",
"source": "tower",
"file_type": "text",
"note": "Hey!",
"keep_when_deleted": False,
}
)
command_with_template = self.Command.create(
{
"name": "Such Much Command",
"reference": "such_much_test_command",
"action": "file_using_template",
"note": "Just a note",
"file_template_id": file_template.id,
}
)
# -- 1 --
# Check if final YAML composed correctly
self.assertEqual(
command_with_template.yaml_code,
yaml_with_reference,
"YAML is not composed correctly",
)
# -- 2 --
# Explode related record and check the YAML
yaml_with_reference_exploded = """cetmix_tower_model: command
access_level: manager
reference: such_much_test_command
name: Such Much Command
action: file_using_template
allow_parallel_run: false
note: Just a note
os_ids: false
tag_ids: false
path: false
file_template_id:
reference: my_custom_test_template
name: Such much demo
source: tower
file_type: text
server_dir: /var/log/my/files
file_name: much_logs.txt
keep_when_deleted: false
tag_ids: false
note: Hey!
code: false
variable_ids: false
secret_ids: false
flight_plan_id: false
code: false
server_status: false
variable_ids: false
secret_ids: false
no_split_for_sudo: false
if_file_exists: skip
disconnect_file: false
"""
command_with_template.invalidate_recordset(["yaml_code"])
self.assertEqual(
command_with_template.with_context(explode_related_record=True).yaml_code,
yaml_with_reference_exploded,
"YAML is not composed correctly",
)
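The fixtures above rely on YAML block scalars (`|-`) for multi-line `code` and `note` fields. A minimal, self-contained sketch (assuming only PyYAML, which these tests already import) of why those multi-line values round-trip exactly:

```python
# Standalone sketch (PyYAML only): "|-" block scalars keep inner newlines
# and strip the trailing one, which is why the multi-line "code" and
# "note" fixtures above compare equal after yaml.safe_load().
import yaml

fixture = """cetmix_tower_model: command
reference: test_yaml_in_tests
note: |-
  First line.
  Second line.
"""

data = yaml.safe_load(fixture)

# Inner newline preserved, trailing newline stripped by the "-" chomping
assert data["note"] == "First line.\nSecond line."
print(data["reference"])
```

The `-` chomping indicator is what keeps the generated YAML stable: without it, a trailing newline would be appended on every export/import cycle.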


@@ -0,0 +1,320 @@
import yaml
from odoo.tests import TransactionCase
class TestTowerFileTemplate(TransactionCase):
@classmethod
def setUpClass(cls, *args, **kwargs):
super().setUpClass(*args, **kwargs)
cls.FileTemplate = cls.env["cx.tower.file.template"]
# Expected YAML content of the test file template
cls.file_template_test_yaml = """cetmix_tower_model: file_template
reference: dockerfile_unit_test
name: Dockerfile Test
source: tower
file_type: text
server_dir: /opt
file_name: Dockerfile
keep_when_deleted: true
tag_ids: false
note: |-
Used to build Odoo addons image.
Depends on Odoo core image.
code: |-
FROM odoo:{{ odoo_test_version }}
# Install git-aggregator and tools for requirements generation
RUN pip3 install --upgrade pip && pip install manifestoo setuptools-odoo git-aggregator
# Let's go!
USER odoo
variable_ids: false
secret_ids: false
""" # noqa
# Expected YAML content of the test file template
# without empty x2many values
cls.file_template_test_yaml_no_empty_values = """cetmix_tower_model: file_template
reference: dockerfile_unit_test
name: Dockerfile Test
source: tower
file_type: text
server_dir: /opt
file_name: Dockerfile
keep_when_deleted: true
note: |-
Used to build Odoo addons image.
Depends on Odoo core image.
code: |-
FROM odoo:{{ odoo_test_version }}
# Install git-aggregator and tools for requirements generation
RUN pip3 install --upgrade pip && pip install manifestoo setuptools-odoo git-aggregator
# Let's go!
USER odoo
""" # noqa
# YAML content translated into Python dict
cls.file_template_test_yaml_dict = yaml.safe_load(cls.file_template_test_yaml)
cls.file_template_test_yaml_dict_no_empty_values = yaml.safe_load(
cls.file_template_test_yaml_no_empty_values
)
def test_yaml_from_file_template(self):
"""Test if YAML is generated properly from a file"""
# -- 0 --
# Create test file template
file_template_test = self.FileTemplate.create(
{
"name": "Dockerfile Test",
"reference": "dockerfile_unit_test",
"file_name": "Dockerfile",
"server_dir": "/opt",
"source": "tower",
"keep_when_deleted": True,
"file_type": "text",
"code": """FROM odoo:{{ odoo_test_version }}
# Install git-aggregator and tools for requirements generation
RUN pip3 install --upgrade pip && pip install manifestoo setuptools-odoo git-aggregator
# Let's go!
USER odoo""",
"note": """Used to build Odoo addons image.
Depends on Odoo core image.""",
}
)
# -- 1 --
# Check if the YAML generated from the file template matches
# the expected YAML template
self.assertEqual(
file_template_test.yaml_code,
self.file_template_test_yaml,
"YAML generated from file doesn't match template file one",
)
# -- 2 --
# Check if YAML key values match Cetmix Tower ones
self.assertEqual(
file_template_test.source,
self.file_template_test_yaml_dict["source"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template_test.file_name,
self.file_template_test_yaml_dict["file_name"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template_test.code,
self.file_template_test_yaml_dict["code"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template_test.name,
self.file_template_test_yaml_dict["name"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template_test.note,
self.file_template_test_yaml_dict["note"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template_test.server_dir,
self.file_template_test_yaml_dict["server_dir"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template_test.reference,
self.file_template_test_yaml_dict["reference"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template_test.file_type,
self.file_template_test_yaml_dict["file_type"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template_test.keep_when_deleted,
self.file_template_test_yaml_dict["keep_when_deleted"],
"YAML value doesn't match Cetmix Tower one",
)
def test_yaml_from_file_template_no_empty_values(self):
"""Test if YAML is generated properly from a file"""
# -- 0 --
# Create test file template
file_template_test = self.FileTemplate.with_context(
remove_empty_values=True
).create(
{
"name": "Dockerfile Test",
"reference": "dockerfile_unit_test",
"file_name": "Dockerfile",
"server_dir": "/opt",
"source": "tower",
"keep_when_deleted": True,
"file_type": "text",
"code": """FROM odoo:{{ odoo_test_version }}
# Install git-aggregator and tools for requirements generation
RUN pip3 install --upgrade pip && pip install manifestoo setuptools-odoo git-aggregator
# Let's go!
USER odoo""",
"note": """Used to build Odoo addons image.
Depends on Odoo core image.""",
}
)
# -- 1 --
# Check if the YAML generated from the file template matches
# the expected YAML template
self.assertEqual(
file_template_test.yaml_code,
self.file_template_test_yaml_no_empty_values,
"YAML generated from file doesn't match template file one",
)
# -- 2 --
# Check if YAML key values match Cetmix Tower ones
self.assertEqual(
file_template_test.source,
self.file_template_test_yaml_dict_no_empty_values["source"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template_test.file_name,
self.file_template_test_yaml_dict_no_empty_values["file_name"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template_test.code,
self.file_template_test_yaml_dict_no_empty_values["code"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template_test.name,
self.file_template_test_yaml_dict_no_empty_values["name"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template_test.note,
self.file_template_test_yaml_dict_no_empty_values["note"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template_test.server_dir,
self.file_template_test_yaml_dict_no_empty_values["server_dir"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template_test.reference,
self.file_template_test_yaml_dict_no_empty_values["reference"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template_test.file_type,
self.file_template_test_yaml_dict_no_empty_values["file_type"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template_test.keep_when_deleted,
self.file_template_test_yaml_dict_no_empty_values["keep_when_deleted"],
"YAML value doesn't match Cetmix Tower one",
)
def test_file_template_from_yaml(self):
"""Test if a file template is updated properly from YAML"""
def test_yaml(file_template):
"""Checks if yaml values are inserted correctly
Args:
file_template (cx.tower.file.template): File template
"""
self.assertEqual(
file_template.source,
self.file_template_test_yaml_dict["source"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template.file_name,
self.file_template_test_yaml_dict["file_name"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template.code,
self.file_template_test_yaml_dict["code"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template.name,
self.file_template_test_yaml_dict["name"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template.note,
self.file_template_test_yaml_dict["note"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template.server_dir,
self.file_template_test_yaml_dict["server_dir"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template.reference,
self.file_template_test_yaml_dict["reference"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template.file_type,
self.file_template_test_yaml_dict["file_type"],
"YAML value doesn't match Cetmix Tower one",
)
self.assertEqual(
file_template.keep_when_deleted,
self.file_template_test_yaml_dict["keep_when_deleted"],
"YAML value doesn't match Cetmix Tower one",
)
# Create test file template
file_template_test = self.FileTemplate.create({"name": "New file template"})
# -- 1 --
# Insert YAML into the file template and
# check if YAML key values match Cetmix Tower ones
file_template_test.yaml_code = self.file_template_test_yaml
test_yaml(file_template_test)
# -- 2 --
# Insert some unsupported keys and ensure nothing bad happens
yaml_with_non_supported_keys = """cetmix_tower_model: file_template
code: |-
FROM odoo:{{ odoo_test_version }}
# Install git-aggregator and tools for requirements generation
RUN pip3 install --upgrade pip && pip install manifestoo setuptools-odoo git-aggregator
# Let's go!
USER odoo
doge: SoMuch style!
file_name: Dockerfile
file_type: text
keep_when_deleted: true
name: Dockerfile Test
note: |-
Used to build Odoo addons image.
Depends on Odoo core image.
reference: dockerfile_unit_test
server_dir: /opt
source: tower
tag_ids: false
""" # noqa
file_template_test.yaml_code = yaml_with_non_supported_keys
test_yaml(file_template_test)
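The per-field `assertEqual` blocks above repeat the same comparison for every field; the pattern can be driven from the parsed YAML dict instead. A minimal sketch of that idea (plain Python with hypothetical field values, no Odoo dependency):

```python
# Sketch of the field-by-field comparison pattern used in the tests above,
# shown against a plain object instead of an Odoo recordset.
from types import SimpleNamespace

# Hypothetical expected values, mirroring the YAML fixture fields
expected = {
    "name": "Dockerfile Test",
    "server_dir": "/opt",
    "file_type": "text",
}
record = SimpleNamespace(**expected)

# Collect every field whose record value differs from the YAML value
mismatches = [
    field
    for field, value in expected.items()
    if getattr(record, field) != value
]
assert not mismatches, f"Fields don't match YAML values: {mismatches}"
print("all fields match")
```

Driving the assertions from a field list keeps the expected values and the checks in one place, so adding a field to the fixture cannot silently skip its assertion.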


@@ -0,0 +1,179 @@
from odoo.tests import TransactionCase
class TestTowerPlan(TransactionCase):
@classmethod
def setUpClass(cls, *args, **kwargs):
super().setUpClass(*args, **kwargs)
cls.Plan = cls.env["cx.tower.plan"]
def test_plan_create_from_yaml(self):
"""Test plan creation from YAML."""
plan_yaml = """cetmix_tower_model: plan
access_level: manager
reference: test_plan_from_yaml
name: 'Test Plan From Yaml'
allow_parallel_run: false
color: 0
tag_ids:
- reference: doge_test_plan_tag
name: Doge Test Plan Tag
color: 1
on_error_action: e
custom_exit_code: 0
line_ids:
- sequence: 5
condition: false
use_sudo: false
path: /such/much/{{ test_plan_dir }}
command_id:
access_level: manager
reference: very_much_command_test
name: Very much command
action: ssh_command
allow_parallel_run: false
note: false
code: Such much code
variable_ids:
- cetmix_tower_model: variable
reference: test_plan_dir
name: Test Plan Directory
action_ids:
- sequence: 1
condition: ==
value_char: '0'
action: n
custom_exit_code: 0
variable_value_ids:
- cetmix_tower_model: variable_value
variable_id:
cetmix_tower_yaml_version: 1
cetmix_tower_model: variable
reference: test_plan_branch
name: Test Plan Branch
value_char: production
- cetmix_tower_model: variable_value
variable_id:
cetmix_tower_yaml_version: 1
cetmix_tower_model: variable
reference: test_plan_some_unique_variable
name: Test Plan Some Unique Variable
value_char: 'Final Value'
- cetmix_tower_model: plan_line_action
access_level: manager
sequence: 2
condition: '>'
value_char: '0'
action: ec
custom_exit_code: 255
variable_value_ids: false
variable_ids: false
"""
# -- 1 --
# Create plan from YAML
plan_form_yaml = self.Plan.create(
{"name": "Name Placeholder", "yaml_code": plan_yaml}
)
self.assertEqual(
plan_form_yaml.reference,
"test_plan_from_yaml",
"Reference is not set from YAML",
)
# Name should be set from YAML
self.assertEqual(
plan_form_yaml.name, "Test Plan From Yaml", "Name is not set from YAML"
)
# -- 2 --
# Check plan tags
plan_tags = plan_form_yaml.tag_ids
self.assertEqual(len(plan_tags), 1)
self.assertEqual(plan_tags.name, "Doge Test Plan Tag")
# -- 3 --
# Check plan lines
plan_lines = plan_form_yaml.line_ids
self.assertEqual(len(plan_lines), 1, "Line count is not 1")
self.assertFalse(plan_lines.condition, "Condition is not false")
self.assertEqual(
plan_lines.path,
"/such/much/{{ test_plan_dir }}",
"Path is not set from YAML",
)
self.assertEqual(
plan_lines.command_id.reference,
"very_much_command_test",
"Command reference is not set from YAML",
)
self.assertEqual(
plan_lines.command_id.name,
"Very much command",
"Command name is not set from YAML",
)
self.assertEqual(
plan_lines.command_id.action,
"ssh_command",
"Command action is not set from YAML",
)
self.assertFalse(
plan_lines.command_id.allow_parallel_run,
"Command allow parallel run is not set from YAML",
)
self.assertFalse(
plan_lines.command_id.note, "Command note is not set from YAML"
)
self.assertEqual(
plan_lines.command_id.variable_ids.mapped("reference"),
["test_plan_dir"],
"Command variable ids is not set from YAML",
)
self.assertEqual(
plan_lines.command_id.access_level,
"2",
"Command access level is not set from YAML",
)
# -- 4 --
# Check plan line actions
plan_actions = plan_form_yaml.line_ids.action_ids
self.assertEqual(len(plan_actions), 2, "Action count is not 2")
self.assertEqual(
plan_actions[0].condition, "==", "First action condition is not equal"
)
self.assertEqual(
plan_actions[0].value_char, "0", "First action value char is not 0"
)
self.assertEqual(plan_actions[0].action, "n", "First action action is not n")
self.assertEqual(
plan_actions[0].custom_exit_code,
0,
"First action custom exit code is not 0",
)
self.assertEqual(
len(plan_actions[0].variable_value_ids),
2,
"Number of variable value ids is not correct",
)
self.assertEqual(
plan_actions[0].variable_value_ids.mapped("value_char"),
["production", "Final Value"],
"Variable value chars are not correct",
)
self.assertEqual(
plan_actions[1].condition, ">", "Second action condition is not greater"
)
self.assertEqual(
plan_actions[1].value_char, "0", "Second action value char is not 0"
)
self.assertEqual(plan_actions[1].action, "ec", "Second action action is not ec")
self.assertEqual(
plan_actions[1].custom_exit_code,
255,
"Second action custom exit code is not 255",
)
self.assertFalse(
plan_actions[1].variable_value_ids,
"Second action variable value ids is not false",
)
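The nested YAML structure this test feeds into the plan importer can be inspected outside Odoo with plain PyYAML. The sketch below uses a trimmed excerpt of the fixture above; it only shows that nested records arrive as plain dicts before the importer resolves them, and is not part of the module itself:

```python
import yaml

# Trimmed excerpt of the plan YAML used in the test above.
plan_yaml = """
cetmix_tower_model: plan
reference: test_plan_from_yaml
name: 'Test Plan From Yaml'
line_ids:
  - sequence: 5
    path: /such/much/{{ test_plan_dir }}
    command_id:
      reference: very_much_command_test
      name: Very much command
"""

data = yaml.safe_load(plan_yaml)
# Related records stay nested dicts until the importer resolves them.
assert data["reference"] == "test_plan_from_yaml"
assert data["line_ids"][0]["command_id"]["reference"] == "very_much_command_test"
```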

View File

@@ -0,0 +1,127 @@
# Copyright (C) 2025 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
"""
Tests for the cx.tower.server.log model YAML export/import.
Covers:
1. YAML export of a file-type log must include `file_id` and allow suffixes.
2. A full round-trip (export → delete → import) preserves the `file_id` relation.
3. Exporting a non-file log must include a falsy `file_id`.
4. Importing YAML with an unknown `file_id` reference creates the log with an empty `file_id`.
"""
import yaml
from odoo.tests import TransactionCase, tagged
@tagged("post_install", "-at_install")
class TestServerLog(TransactionCase):
"""YAML export/import tests for cx.tower.server.log."""
@classmethod
def setUpClass(cls):
super().setUpClass()
env = cls.env
cls.File = env["cx.tower.file"]
cls.Server = env["cx.tower.server"]
cls.ServerLog = env["cx.tower.server.log"]
# Create a file to reference from the log
cls.file = cls.File.create(
{
"name": "repos.yaml",
"reference": "reposyaml",
"source": "tower",
"file_type": "text",
"server_dir": "/tmp",
"code": "# Example\nHello, Tower!",
}
)
# Create a server (use password auth to satisfy constraints)
cls.server = cls.Server.create(
{
"name": "Srv-YAML-Test",
"reference": "srv_yaml_test",
"ip_v4_address": "127.0.0.1",
"ssh_username": "admin",
"ssh_port": 22,
"ssh_auth_mode": "p",
"ssh_password": "dummy",
"use_sudo": False,
}
)
# Create a file-type log linked to the file above
cls.log = cls.ServerLog.create(
{
"name": "Log from file",
"reference": "log_from_file",
"log_type": "file",
"file_id": cls.file.id,
"server_id": cls.server.id,
"use_sudo": False,
}
)
def test_yaml_export_contains_file_id(self):
"""Exported YAML must include a file_id starting with the file's reference."""
data = yaml.safe_load(self.log.yaml_code)
# Ensure file_id is present
self.assertIn("file_id", data, "`file_id` is missing from YAML export")
# Allow for auto-appended suffixes, so only check prefix
self.assertTrue(
data["file_id"].startswith(self.file.reference),
f"`file_id` value '{data['file_id']}' should start with "
f"'{self.file.reference}'",
)
def test_yaml_roundtrip_restores_file_id(self):
"""A full export→delete→import cycle must restore the file_id relation."""
yaml_dict = yaml.safe_load(self.log.yaml_code)
# Remove the original log
self.log.unlink()
# Recreate from YAML
vals = self.ServerLog._post_process_yaml_dict_values(yaml_dict)
restored = self.ServerLog.with_context(from_yaml=True).create(vals)
# Verify relation restored
self.assertEqual(
restored.file_id.id,
self.file.id,
"`file_id` was not restored after round-trip",
)
def test_yaml_export_without_file_id(self):
"""Logs of non-file type should not include file_id in YAML."""
cmd_log = self.ServerLog.create(
{
"name": "Log no file",
"reference": "log_no_file",
"log_type": "command",
"server_id": self.server.id,
"use_sudo": False,
}
)
data = yaml.safe_load(cmd_log.yaml_code)
# key is present, but must be falsy
self.assertIn("file_id", data, "`file_id` key is missing")
self.assertFalse(
data["file_id"],
"`file_id` for non-file log must be False/empty",
)
def test_yaml_import_with_missing_file_reference(self):
"""Missing file reference is accepted, but file_id stays empty."""
yaml_dict = yaml.safe_load(self.log.yaml_code)
yaml_dict["file_id"] = "does_not_exist"
vals = self.ServerLog._post_process_yaml_dict_values(yaml_dict)
new_log = self.ServerLog.with_context(from_yaml=True).create(vals)
# Log is created, but the relation is not resolved
self.assertFalse(
new_log.file_id,
"file_id should be empty when reference cannot be resolved",
)
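The tolerant behaviour verified in the last test (an unknown reference yields an empty relation instead of an error) can be sketched as a plain lookup. `resolve_reference` is a hypothetical stand-in for the module's reference search, not its actual implementation:

```python
def resolve_reference(registry, reference):
    """Return the record id for `reference`, or False when it cannot be resolved.

    `registry` is a plain dict standing in for an Odoo reference lookup;
    the fallback to False mirrors the behaviour the tests above expect.
    """
    if not reference:
        return False
    return registry.get(reference, False)


files = {"reposyaml": 42}
assert resolve_reference(files, "reposyaml") == 42
# An unknown reference resolves to False instead of raising.
assert resolve_reference(files, "does_not_exist") is False
```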

View File

@@ -0,0 +1,124 @@
# Copyright (C) 2025 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
"""
Tests for cx.tower.server YAML export/import covering command_ids and plan_ids.
"""
import yaml
from odoo.tests import TransactionCase, tagged
@tagged("post_install", "-at_install")
class TestServerYAML(TransactionCase):
"""YAML export/import tests for cx.tower.server with commands and plans."""
@classmethod
def setUpClass(cls):
super().setUpClass()
env = cls.env
cls.Server = env["cx.tower.server"]
cls.Command = env["cx.tower.command"]
cls.Plan = env["cx.tower.plan"]
# Create a command to attach (use defaults for access_level)
cls.command = cls.Command.create(
{
"name": "Test Command",
"reference": "test_command",
"action": "ssh_command",
"allow_parallel_run": False,
}
)
# Create a flight plan to attach
cls.plan = cls.Plan.create(
{
"name": "Test Flight Plan",
"reference": "test_plan",
"allow_parallel_run": False,
"color": 0,
}
)
# Create server and link command and plan
cls.server = cls.Server.create(
{
"name": "Server YAML Test",
"reference": "srv_yaml_test",
"ip_v4_address": "127.0.0.1",
"ssh_username": "admin",
"ssh_port": 22,
"ssh_auth_mode": "p",
"ssh_password": "dummy",
"use_sudo": False,
# Link the m2m fields
"command_ids": [(6, 0, [cls.command.id])],
"plan_ids": [(6, 0, [cls.plan.id])],
}
)
def test_yaml_export_contains_command_and_plan(self):
"""Exported YAML include command_ids and plan_ids with correct references."""
data = yaml.safe_load(self.server.yaml_code)
# Check command_ids
self.assertIn(
"command_ids",
data,
"`command_ids` is missing from YAML export",
)
self.assertIsInstance(
data["command_ids"], list, "`command_ids` should be a list in YAML"
)
self.assertTrue(
data["command_ids"],
"`command_ids` list should not be empty",
)
# Only reference should be exported
self.assertEqual(
data["command_ids"][0],
self.command.reference,
"Exported command reference does not match",
)
# Check plan_ids
self.assertIn(
"plan_ids",
data,
"`plan_ids` is missing from YAML export",
)
self.assertIsInstance(
data["plan_ids"], list, "`plan_ids` should be a list in YAML"
)
self.assertTrue(
data["plan_ids"],
"`plan_ids` list should not be empty",
)
self.assertEqual(
data["plan_ids"][0],
self.plan.reference,
"Exported plan reference does not match",
)
def test_yaml_roundtrip_restores_command_and_plan(self):
"""A full export→delete→import cycle must restore the m2m relations."""
yaml_dict = yaml.safe_load(self.server.yaml_code)
# Remove original server
self.server.unlink()
# Prepare values and import
vals = self.Server._post_process_yaml_dict_values(yaml_dict)
restored = self.Server.with_context(
from_yaml=True, skip_ssh_settings_check=True
).create(vals)
# Verify m2m links restored
self.assertEqual(
restored.command_ids.ids,
[self.command.id],
"`command_ids` were not restored correctly",
)
self.assertEqual(
restored.plan_ids.ids,
[self.plan.id],
"`plan_ids` were not restored correctly",
)
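The `(6, 0, ids)` tuples used in the server `create` above are standard Odoo x2many write commands: `(6, 0, ids)` replaces the whole relation, while `(4, id)` links one extra record. A tiny interpreter for just these two commands, purely for illustration:

```python
def apply_commands(current_ids, commands):
    """Minimal interpreter for the two Odoo x2many commands used here:
    (6, 0, ids) -> replace the relation, (4, id) -> link one record."""
    ids = list(current_ids)
    for cmd in commands:
        if cmd[0] == 6:
            ids = list(cmd[2])
        elif cmd[0] == 4:
            if cmd[1] not in ids:
                ids.append(cmd[1])
    return ids


assert apply_commands([], [(6, 0, [7])]) == [7]
assert apply_commands([7], [(4, 9)]) == [7, 9]
```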

View File

@@ -0,0 +1,525 @@
# Copyright (C) 2024 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
from odoo import _
from odoo.exceptions import AccessError, ValidationError
from odoo.tests import TransactionCase, tagged
class TestTowerYamlMixin(TransactionCase):
@classmethod
def setUpClass(cls, *args, **kwargs):
super().setUpClass(*args, **kwargs)
cls.Users = cls.env["res.users"].with_context(no_reset_password=True)
cls.YamlMixin = cls.env["cx.tower.yaml.mixin"]
TowerTag = cls.env["cx.tower.tag"]
cls.tag_doge = TowerTag.create({"name": "Doge", "reference": "doge"})
cls.tag_pepe = TowerTag.create({"name": "Pepe", "reference": "pepe"})
def test_convert_dict_to_yaml(self):
# -- 1 --
# Test regular flow
self.assertEqual(
self.YamlMixin._convert_dict_to_yaml({"a": 1, "b": 2}),
"a: 1\nb: 2\n",
"Dictionary was not converted to YAML correctly",
)
# -- 2 --
# Test flow with exception due to wrong values
with self.assertRaises(ValidationError) as e:
self.YamlMixin._convert_dict_to_yaml("not_a_dict")
self.assertEqual(
str(e.exception),
_("Values must be a dictionary"),
"Exception message doesn't match",
)
def test_yaml_field_access(self):
# Create a Root user with no access to the 'yaml_code' field
user_root = self.Users.create(
{
"name": "Root User",
"login": "root@example.com",
"groups_id": [
(4, self.env.ref("base.group_user").id),
(4, self.env.ref("cetmix_tower_server.group_root").id),
],
}
)
with self.assertRaises(AccessError):
self.tag_doge.with_user(user_root).read(["yaml_code"])
# Add user to the 'cetmix_tower_yaml.group_export' group
# and check if access is granted
user_root.write(
{"groups_id": [(4, self.env.ref("cetmix_tower_yaml.group_export").id)]}
)
yaml_code = (
self.tag_doge.with_user(user_root).read(["yaml_code"])[0].get("yaml_code")
)
# Modify YAML code and check if it's saved
yaml_code = yaml_code.replace("Doge", "WowDoge")
with self.assertRaises(AccessError):
self.tag_doge.with_user(user_root).write({"yaml_code": yaml_code})
# Add user to the 'cetmix_tower_yaml.group_import' group
# and check if access is granted
user_root.write(
{"groups_id": [(4, self.env.ref("cetmix_tower_yaml.group_import").id)]}
)
self.tag_doge.with_user(user_root).write({"yaml_code": yaml_code})
self.assertEqual(
self.tag_doge.with_user(user_root).yaml_code,
yaml_code,
"YAML code was not saved",
)
def test_post_process_record_values(self):
"""Test value post processing.
We test common fields only because this method can be overridden
in models inheriting this mixin.
"""
# Patch method to return "access_level" field too
def _get_fields_for_yaml(self):
return ["access_level", "name", "reference"]
self.YamlMixin._patch_method("_get_fields_for_yaml", _get_fields_for_yaml)
source_values = {
"access_level": "3",
"id": 22332,
"name": "Doge Much Like",
"reference": "such_much_doge",
}
result_values = self.YamlMixin._post_process_record_values(source_values.copy())
self.assertNotIn("id", result_values, "ID must be removed")
self.assertEqual(
result_values["access_level"],
self.YamlMixin.TO_YAML_ACCESS_LEVEL[source_values["access_level"]],
"Access level is not parsed correctly",
)
self.assertEqual(
result_values["name"],
source_values["name"],
"Other values should remain unchanged",
)
self.assertEqual(
result_values["reference"],
source_values["reference"],
"Other values should remain unchanged",
)
# Restore original method
self.YamlMixin._revert_method("_get_fields_for_yaml")
def test_post_process_yaml_dict_values(self):
"""Test YAML dict value post processing.
We test common fields only because this method can be overridden
in models inheriting this mixin.
"""
# Patch method to return "access_level" field too
def _get_fields_for_yaml(self):
return ["access_level", "name", "reference"]
self.YamlMixin._patch_method("_get_fields_for_yaml", _get_fields_for_yaml)
# -- 1 --
# Test regular flow
source_values = {
"access_level": "user",
"name": "Doge Much Like",
"reference": "such_much_doge",
"some_doge_field": "some_meme",
}
result_values = self.YamlMixin._post_process_yaml_dict_values(
source_values.copy()
)
self.assertNotIn(
"some_doge_field", result_values, "Non listed fields must be removed"
)
self.assertEqual(
result_values["access_level"],
self.YamlMixin.TO_TOWER_ACCESS_LEVEL[source_values["access_level"]],
"Access level is not parsed correctly",
)
self.assertEqual(
result_values["name"],
source_values["name"],
"Other values should remain unchanged",
)
self.assertEqual(
result_values["reference"],
source_values["reference"],
"Other values should remain unchanged",
)
# -- 2 --
# Submit wrong value for access level
source_values.update(
{
"access_level": "doge",
}
)
with self.assertRaises(ValidationError) as e:
result_values = self.YamlMixin._post_process_yaml_dict_values(
source_values.copy()
)
self.assertEqual(
str(e.exception),
_(
"Wrong value for 'access_level' key: %(acv)s",
acv="doge",
),
"Exception message doesn't match",
)
# Restore original method
self.YamlMixin._revert_method("_get_fields_for_yaml")
def test_process_relation_field_value_no_explode(self):
"""Test non exploded related field values.
Non exploded values represent related record with reference only.
Covers the following child functions:
- _process_m2o_value(..)
- _process_x2m_values(..)
"""
# We are using command with file template for that
file_template = self.env["cx.tower.file.template"].create(
{"name": "Test m2o", "reference": "test_m2o"}
)
command = self.env["cx.tower.command"].create(
{
"name": "Command test m2o",
"action": "file_using_template",
"file_template_id": file_template.id,
"tag_ids": [(4, self.tag_doge.id), (4, self.tag_pepe.id)],
}
)
# -- 1 --
# Record -> Yaml
# -- 1.1 --
# Many2one
result = command._process_relation_field_value(
field="file_template_id",
value=(command.file_template_id.id, command.file_template_id.name),
record_mode=True,
)
self.assertEqual(
result, file_template.reference, "Reference was not resolved correctly"
)
# -- 1.2 --
# Many2many
result = command._process_relation_field_value(
field="tag_ids",
value=[self.tag_doge.id, self.tag_pepe.id],
record_mode=True,
)
self.assertEqual(len(result), 2, "Must be 2 references")
self.assertIn(
self.tag_doge.reference, result, "Reference was not resolved correctly"
)
self.assertIn(
self.tag_pepe.reference, result, "Reference was not resolved correctly"
)
# -- 2 --
# Yaml -> Record
# -- 2.1 --
# Many2one
result = command._process_relation_field_value(
field="file_template_id", value=file_template.reference, record_mode=False
)
self.assertEqual(
result, file_template.id, "Record ID was not resolved correctly"
)
# -- 2.2 --
# Many2many
result = command._process_relation_field_value(
field="tag_ids",
value=[self.tag_doge.reference, self.tag_pepe.reference],
record_mode=False,
)
self.assertEqual(len(result), 2, "Must be 2 records")
self.assertIn(
(4, self.tag_doge.id), result, "Record ID was not resolved correctly"
)
self.assertIn(
(4, self.tag_pepe.id), result, "Record ID was not resolved correctly"
)
# -- 3 --
# Yaml with non existing reference -> Record
result = command._process_relation_field_value(
field="file_template_id", value="such_much_not_reference", record_mode=False
)
self.assertFalse(result, "Must be 'False'")
# -- 4 --
# No record -> Yaml
result = command._process_relation_field_value(
field="file_template_id",
value=self.env["cx.tower.file.template"],
record_mode=True,
)
self.assertFalse(result, "Result must be 'False'")
def test_process_relation_field_value_explode(self):
"""Test exploded related field values.
Exploded values represent the related record as a nested YAML structure.
Covers the following child functions:
- _process_m2o_value(..)
- _process_x2m_values(..)
"""
# We are using command with file template for that
file_template = self.env["cx.tower.file.template"].create(
{"name": "Test m2o", "reference": "test_m2o"}
)
file_template_values = file_template.with_context(
no_yaml_service_fields=True
)._prepare_record_for_yaml()
tag_doge_values = self.tag_doge.with_context(
no_yaml_service_fields=True
)._prepare_record_for_yaml()
tag_pepe_values = self.tag_pepe.with_context(
no_yaml_service_fields=True
)._prepare_record_for_yaml()
command = (
self.env["cx.tower.command"]
.create(
{
"name": "Command test m2o",
"action": "file_using_template",
"file_template_id": file_template.id,
"tag_ids": [(4, self.tag_doge.id), (4, self.tag_pepe.id)],
}
)
.with_context(explode_related_record=True)
) # and this is the actual trigger
# -- 1 --
# Record -> Yaml
# -- 1.1 --
# Many2one
result = command._process_relation_field_value(
field="file_template_id",
value=(command.file_template_id.id, command.file_template_id.name),
record_mode=True,
)
self.assertEqual(
result, file_template_values, "Reference was not resolved correctly"
)
# -- 1.2 --
# Many2many
result = command._process_relation_field_value(
field="tag_ids",
value=[self.tag_doge.id, self.tag_pepe.id],
record_mode=True,
)
self.assertEqual(len(result), 2, "Must be 2 records")
self.assertIn(tag_doge_values, result, "Record ID was not resolved correctly")
self.assertIn(tag_pepe_values, result, "Record ID was not resolved correctly")
# -- 2 --
# Yaml -> Record
# -- 2.1 --
# Many2one
result = command._process_relation_field_value(
field="file_template_id", value=file_template_values, record_mode=False
)
self.assertEqual(
result, file_template.id, "Record ID was not resolved correctly"
)
# -- 2.2 --
# Many2many
result = command._process_relation_field_value(
field="tag_ids", value=[tag_doge_values, tag_pepe_values], record_mode=False
)
self.assertEqual(len(result), 2, "Must be 2 records")
self.assertIn(
(4, self.tag_doge.id), result, "Record ID was not resolved correctly"
)
self.assertIn(
(4, self.tag_pepe.id), result, "Record ID was not resolved correctly"
)
# -- 3 --
# Yaml with non existing reference -> Record
file_template_values.update(
{
"name": "Very new name",
"reference": "such_much_not_reference",
"source": "server",
"file_type": "binary",
}
)
result = command._process_relation_field_value(
field="file_template_id", value=file_template_values, record_mode=False
)
# New record must be created
record = self.env["cx.tower.file.template"].browse(result)
self.assertEqual(
record.name, file_template_values["name"], "New record value doesn't match"
)
self.assertEqual(
record.reference,
file_template_values["reference"],
"New record value doesn't match",
)
self.assertEqual(
record.source,
file_template_values["source"],
"New record value doesn't match",
)
self.assertEqual(
record.file_type,
file_template_values["file_type"],
"New record value doesn't match",
)
# -- 4 --
# Yaml with no reference at all -> Record
values_with_no_references = {
"name": "Sorry no reference here",
"source": "tower",
"file_type": "binary",
}
result = command._process_relation_field_value(
field="file_template_id", value=values_with_no_references, record_mode=False
)
# New record must be created
record = self.env["cx.tower.file.template"].browse(result)
self.assertEqual(
record.name,
values_with_no_references["name"],
"New record value doesn't match",
)
self.assertEqual(
record.source,
values_with_no_references["source"],
"New record value doesn't match",
)
self.assertEqual(
record.file_type,
values_with_no_references["file_type"],
"New record value doesn't match",
)
# -- 5 --
# No record -> Yaml
result = command._process_relation_field_value(
field="file_template_id",
value=self.env["cx.tower.file.template"],
record_mode=True,
)
self.assertFalse(result, "Result must be 'False'")
def test_update_or_create_related_record(self):
"""Test if related record is updated or created correctly"""
# -- 1 --
# Update existing values
# We are using file template for that
FileTemplateModel = self.env["cx.tower.file.template"]
file_template = self.env["cx.tower.file.template"].create(
{"name": "Test m2o", "reference": "test_m2o"}
)
values_to_update = {"name": "Much new name"}
record = FileTemplateModel._update_or_create_related_record(
model=FileTemplateModel,
reference=file_template.reference,
values=values_to_update,
)
self.assertEqual(
record.name, values_to_update["name"], "Value was not updated properly"
)
self.assertEqual(record.id, file_template.id, "Same record must be updated")
# -- 2 --
# Reference not found. Must create a new record
values_to_update = {"name": "Doge file"}
record = FileTemplateModel._update_or_create_related_record(
model=FileTemplateModel,
reference="doge_file",
values=values_to_update,
create_immediately=True,
)
self.assertEqual(
record.name, values_to_update["name"], "Value was not updated properly"
)
self.assertNotEqual(record.id, file_template.id, "New record must be created")
# -- 3 --
# Reference not provided. Must create a new record
values_to_update = {"name": "Doge file"}
record = FileTemplateModel._update_or_create_related_record(
model=FileTemplateModel,
reference=False,
values=values_to_update,
create_immediately=True,
)
self.assertEqual(
record.name, values_to_update["name"], "Value was not updated properly"
)
self.assertNotEqual(record.id, file_template.id, "New record must be created")
@tagged("post_install", "-at_install")
def test_prepare_record_truncates_code_for_server_files(self):
"""Mixin must set code=False for cx.tower.file when source=='server'."""
File = self.env["cx.tower.file"]
srv_file = File.create(
{
"name": "srv.log",
"reference": "srvlog",
"source": "server",
"file_type": "text",
"server_dir": "/tmp",
"code": "BIG DATA",
}
)
rec = srv_file._prepare_record_for_yaml()
self.assertIn("code", rec)
self.assertFalse(rec["code"], "Expected code=False for server-sourced files")
@tagged("post_install", "-at_install")
def test_prepare_record_keeps_code_for_tower_files(self):
"""Mixin must keep code for cx.tower.file when source=='tower'."""
File = self.env["cx.tower.file"]
tw_file = File.create(
{
"name": "local.txt",
"reference": "localtxt",
"source": "tower",
"file_type": "text",
"server_dir": "/etc",
"code": "SMALL DATA",
}
)
rec = tw_file._prepare_record_for_yaml()
self.assertEqual(
rec["code"],
"SMALL DATA",
"Expected original code for tower-sourced files",
)
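`TO_YAML_ACCESS_LEVEL` and `TO_TOWER_ACCESS_LEVEL`, used by the mixin tests above, are complementary lookup tables. The sketch below uses assumed values inferred from the tests (where "manager" round-trips to "2"); the real tables are defined on `cx.tower.yaml.mixin`:

```python
# Assumed values, inferred from the tests above ("manager" <-> "2");
# the authoritative tables live on cx.tower.yaml.mixin.
TO_YAML_ACCESS_LEVEL = {"1": "user", "2": "manager", "3": "root"}
TO_TOWER_ACCESS_LEVEL = {v: k for k, v in TO_YAML_ACCESS_LEVEL.items()}

# The two tables must be mutual inverses for the round-trip tests to hold.
for tower_level, yaml_level in TO_YAML_ACCESS_LEVEL.items():
    assert TO_TOWER_ACCESS_LEVEL[yaml_level] == tower_level
```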

View File

@@ -0,0 +1,375 @@
# Copyright (C) 2024 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
import base64
import yaml
from odoo.exceptions import AccessError, ValidationError
from odoo.addons.base.tests.common import BaseCommon
class TestYamlExportWizard(BaseCommon):
@classmethod
def setUpClass(cls, *args, **kwargs):
super().setUpClass(*args, **kwargs)
# Used to ensure that the file header
# is present in the YAML code
cls.file_header = """
# This file is generated with Cetmix Tower.
# Details and documentation: https://cetmix.com/tower
"""
# Create a command
cls.TowerCommand = cls.env["cx.tower.command"]
cls.command_test_wizard = cls.TowerCommand.create(
{
"reference": "test_command_from_yaml",
"name": "Test Command From Yaml",
"code": "echo 'Test Command From Yaml'",
}
)
cls.command_test_wizard_2 = cls.TowerCommand.create(
{
"reference": "test_command_from_yaml_2",
"name": "Test Command From Yaml 2",
"code": "echo 'Test Command From Yaml 2'",
}
)
# Create a flight plan
cls.FlightPlan = cls.env["cx.tower.plan"]
cls.flight_plan_test_wizard = cls.FlightPlan.create(
{
"name": "Test Flight Plan From Yaml",
"line_ids": [
(
0,
0,
{
"command_id": cls.command_test_wizard.id,
},
)
],
}
)
# Create a server template
cls.ServerTemplate = cls.env["cx.tower.server.template"]
cls.server_template_test_wizard = cls.ServerTemplate.create(
{
"name": "Test Server Template From Yaml",
"flight_plan_id": cls.flight_plan_test_wizard.id,
}
)
# Create a wizard and trigger onchange
cls.YamlExportWizard = cls.env["cx.tower.yaml.export.wiz"]
cls.test_wizard = cls.YamlExportWizard.with_context(
active_model="cx.tower.server.template",
active_ids=[cls.server_template_test_wizard.id],
).create({})
cls.test_wizard.onchange_explode_child_records()
def test_user_without_export_group_cannot_export(self):
"""Test if user without export group cannot export"""
# Tower manager user without export group
self.user_yaml_export = self.env["res.users"].create(
{
"name": "No Yaml Export User",
"login": "no_yaml_export_user",
"groups_id": [
(4, self.env.ref("cetmix_tower_server.group_manager").id)
],
}
)
with self.assertRaises(AccessError):
self.test_wizard.with_user(self.user_yaml_export).read([])
def test_yaml_export_wizard_yaml_generation(self):
"""Test code generation of YAML export wizard."""
wizard_yaml = """
# This file is generated with Cetmix Tower.
# Details and documentation: https://cetmix.com/tower
cetmix_tower_yaml_version: 1
records:
- cetmix_tower_model: command
access_level: manager
reference: test_command_from_yaml
name: Test Command From Yaml
action: ssh_command
allow_parallel_run: false
note: false
path: false
code: echo 'Test Command From Yaml'
server_status: false
no_split_for_sudo: false
if_file_exists: skip
disconnect_file: false
- cetmix_tower_model: command
access_level: manager
reference: test_command_from_yaml_2
name: Test Command From Yaml 2
action: ssh_command
allow_parallel_run: false
note: false
path: false
code: echo 'Test Command From Yaml 2'
server_status: false
no_split_for_sudo: false
if_file_exists: skip
disconnect_file: false
"""
# -- 1 --
# Test with two commands
context = {
"default_explode_child_records": True,
"default_remove_empty_values": True,
"active_model": "cx.tower.command",
"active_ids": [self.command_test_wizard.id, self.command_test_wizard_2.id],
}
wizard = self.YamlExportWizard.with_context(context).create({})  # pylint: disable=context-overridden  # we need a new clean context
wizard.onchange_explode_child_records()
self.assertEqual(wizard.yaml_code, wizard_yaml)
def test_yaml_export_wizard(self):
"""Test the YAML export wizard."""
# -- 1 --
# Test wizard action
result = self.test_wizard.action_generate_yaml_file()
self.assertEqual(
result["type"], "ir.actions.act_window", "Action should be a window"
)
self.assertEqual(
result["res_model"],
"cx.tower.yaml.export.wiz.download",
"Result model should be the download wizard",
)
self.assertTrue(result["res_id"], "Wizard should have been created")
# -- 2 --
# Ensure download wizard file name is generated
# based on the record reference
download_wizard = self.env["cx.tower.yaml.export.wiz.download"].browse(
result["res_id"]
)
self.assertEqual(
download_wizard.yaml_file_name,
f"server_template_{self.server_template_test_wizard.reference}.yaml",
"YAML file name should be generated based on record reference",
)
# -- 3 --
# Decode YAML file and check if it's valid
yaml_file_content = base64.decodebytes(download_wizard.yaml_file).decode(
"utf-8"
)
self.assertEqual(
yaml_file_content,
self.test_wizard.yaml_code,
"YAML file content should be the same as the original YAML code",
)
# -- 4 --
# Test if empty YAML code is handled correctly
self.test_wizard.yaml_code = ""
with self.assertRaises(ValidationError):
self.test_wizard.action_generate_yaml_file()
def test_reference_object_uniqueness(self):
"""
Ensure each reference is exported as a full object only once
(other times only as ref).
"""
# Prepare YAML export for flight_plan with two same commands
self.flight_plan_test_wizard.line_ids = [
(0, 0, {"command_id": self.command_test_wizard.id}),
(0, 0, {"command_id": self.command_test_wizard.id}),
]
# Prepare YAML code
self.test_wizard.onchange_explode_child_records()
yaml_data = yaml.safe_load(self.test_wizard.yaml_code)
# Reference counters: full objects go into a list so duplicates
# can actually be detected; plain refs go into a set
ref_full = []
ref_refs = set()
# Recursively walk through the YAML data and collect references
def walk(obj):
if isinstance(obj, dict):
ref = obj.get("reference")
# A dict containing only "reference" is a ref; otherwise it is a full object
if ref:
if list(obj.keys()) == ["reference"]:
ref_refs.add(ref)
else:
ref_full.append(ref)
for v in obj.values():
walk(v)
elif isinstance(obj, list):
for v in obj:
walk(v)
# Walk through the YAML data
walk(yaml_data["records"])
# Each reference must appear as a full object only once
for ref in set(ref_full):
self.assertEqual(
ref_full.count(ref),
1,
f"Reference '{ref}' appears as a full object more than once",
)
# Every ref used only as a reference must also have a full object somewhere
for ref in ref_refs:
self.assertIn(
ref,
ref_full,
f"Reference '{ref}' is used only as a reference, "
"but no full object is present",
)
def test_export_required_model_name_in_yaml(self):
"""
Test that the model name is required in the YAML file for each record
"""
# create a command to run flight plan
command_run_flight_plan = self.TowerCommand.create(
{
"name": "Run Flight Plan",
"action": "plan",
"flight_plan_id": self.flight_plan_test_wizard.id,
}
)
# export 2 commands: command_run_flight_plan and command_test_wizard
wizard = self.YamlExportWizard.with_context(
active_model="cx.tower.command",
active_ids=[command_run_flight_plan.id, self.command_test_wizard.id],
).create({})
wizard.onchange_explode_child_records()
yaml_data = yaml.safe_load(wizard.yaml_code)
# check that the model name is present in the YAML file for each record
for record in yaml_data["records"]:
self.assertIn("cetmix_tower_model", record)
def test_default_yaml_file_name_is_used(self):
"""
Wizard should pre-fill `yaml_file_name` with an auto-generated
basename that contains the model prefix but no '.yaml' extension.
"""
wiz = self.YamlExportWizard.with_context(
active_model="cx.tower.command",
active_ids=[self.command_test_wizard.id],
).create({})
default_name = wiz.yaml_file_name
self.assertFalse(
default_name.endswith(".yaml"),
"Default file name must NO have .yaml suffix",
)
self.assertIn(
"command_",
default_name,
"Default file name should include model prefix",
)
def test_yaml_file_name_is_auto_fixed(self):
"""
When the user assigns an invalid name, wizard should auto-sanitise
it to a safe *basename* (lowercase, underscores, no extension).
"""
wiz = self.YamlExportWizard.with_context(
active_model="cx.tower.command",
active_ids=[self.command_test_wizard.id],
).create({})
# user enters a 'dirty' name with spaces, capitals, symbols
wiz.write({"yaml_file_name": "My File!@# .YAML"})
# write() override strips to a basename WITHOUT '.yaml'
self.assertEqual(
wiz.yaml_file_name,
"my_file",
"Wizard field must hold only the cleaned basename, without extension",
)
def test_action_generate_appends_extension(self):
"""
When generating the download record, the system must append
the `.yaml` extension to the sanitized basename.
"""
wiz = self.YamlExportWizard.with_context(
active_model="cx.tower.command",
active_ids=[self.command_test_wizard.id],
).create({})
wiz.onchange_explode_child_records()
act = wiz.action_generate_yaml_file()
download = self.env["cx.tower.yaml.export.wiz.download"].browse(act["res_id"])
self.assertTrue(download.yaml_file_name.endswith(".yaml"))
def test_custom_requires_text(self):
"""Creating a template with license 'custom' but no text must fail"""
with self.assertRaises(ValidationError):
self.env["cx.tower.yaml.manifest.tmpl"].create(
{
"name": "Bad Manifest",
"license": "custom",
}
)
tmpl_ok = self.env["cx.tower.yaml.manifest.tmpl"].create(
{
"name": "Good Manifest",
"license": "custom",
"license_text": "Custom license terms",
}
)
self.assertEqual(tmpl_ok.license, "custom")
self.assertEqual(tmpl_ok.license_text, "Custom license terms")
with self.assertRaises(ValidationError):
self.env["cx.tower.yaml.manifest.tmpl"].create(
{
"name": "Bad Manifest 2",
"license": "custom",
"license_text": " ",
}
)
def test_wizard_resets_price_on_license_change(self):
"""Wizard must reset price/currency when license changes away from 'custom'"""
wiz = self.YamlExportWizard.new(
{
"manifest_license": "custom",
"manifest_price": 42.0,
"manifest_currency": "EUR",
}
)
wiz.manifest_license = "agpl-3"
wiz._onchange_manifest_license()
self.assertEqual(wiz.manifest_price, 0.0)
self.assertFalse(wiz.manifest_currency)
wiz.manifest_price = 7.5
wiz.manifest_currency = "USD"
wiz.manifest_license = "custom"
wiz._onchange_manifest_license()
self.assertEqual(wiz.manifest_price, 7.5)
self.assertEqual(wiz.manifest_currency, "USD")
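The sanitisation behaviour these wizard tests exercise (lowercase, underscores, extension stripped on write, '.yaml' re-appended on generation) can be sketched as a standalone helper. This is a hypothetical illustration of the expected behaviour, not the wizard's actual implementation:

```python
import re


def sanitize_yaml_basename(name: str) -> str:
    """Reduce a user-supplied file name to a safe lowercase basename.

    Hypothetical helper mirroring the behaviour asserted in the tests:
    'My File!@# .YAML' -> 'my_file' (extension is not kept).
    """
    # Drop a trailing .yaml / .yml extension, case-insensitively
    base = re.sub(r"\.ya?ml\s*$", "", name.strip(), flags=re.IGNORECASE)
    # Collapse every run of non-alphanumeric characters into one underscore
    base = re.sub(r"[^a-z0-9]+", "_", base.lower())
    # Trim underscores left at the edges by the substitution
    return base.strip("_")


def yaml_download_name(name: str) -> str:
    """Append the extension, as the download record generation step does."""
    return sanitize_yaml_basename(name) + ".yaml"
```

With this sketch, `sanitize_yaml_basename("My File!@# .YAML")` yields `my_file`, matching the wizard-field assertion, while `yaml_download_name` reproduces the `.yaml` suffix checked on the download record.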


@@ -0,0 +1,703 @@
# Copyright (C) 2024 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
import base64
import yaml
from odoo import _
from odoo.exceptions import ValidationError
from odoo.tests import TransactionCase
from odoo.tools import mute_logger
class TestTowerYamlImportWizUpload(TransactionCase):
"""Test Tower YAML Import Wizard Upload"""
@classmethod
def setUpClass(cls):
super().setUpClass()
# Variables
cls.Variable = cls.env["cx.tower.variable"]
cls.variable_yaml_test = cls.Variable.create(
{"name": "YAML Test", "reference": "yaml_test"}
)
cls.variable_yaml_url = cls.Variable.create(
{"name": "YAML URL", "reference": "yaml_url"}
)
# Tags
cls.Tag = cls.env["cx.tower.tag"]
cls.tag_yaml_test = cls.Tag.create(
{"name": "YAML Test", "reference": "yaml_test"}
)
cls.tag_another_yaml_test = cls.Tag.create(
{"name": "Another YAML Test", "reference": "another_yaml_test"}
)
# Commands
cls.Command = cls.env["cx.tower.command"]
cls.command_yaml_test = cls.Command.create(
{"name": "Test Yaml Command", "reference": "test_yaml_command"}
)
# Flight Plan
cls.FlightPlan = cls.env["cx.tower.plan"]
cls.flight_plan_yaml_test = cls.FlightPlan.create(
{
"name": "Test Yaml Flight Plan",
"reference": "test_yaml_flight_plan",
"line_ids": [
(
0,
0,
{
"condition": False,
"use_sudo": False,
"command_id": cls.command_yaml_test.id,
},
),
],
}
)
# Create Server Template used for testing
cls.server_template_yaml_test = cls.env["cx.tower.server.template"].create(
{
"name": "Test Server Template",
"tag_ids": [
(4, cls.tag_yaml_test.id),
(4, cls.tag_another_yaml_test.id),
],
"variable_value_ids": [
(
0,
0,
{
"variable_id": cls.variable_yaml_test.id,
"value_char": "Some Test Value",
},
),
(
0,
0,
{
"variable_id": cls.variable_yaml_url.id,
"value_char": "https://cetmix.com",
},
),
],
"flight_plan_id": cls.flight_plan_yaml_test.id,
}
)
# Server Logs
cls.ServerLog = cls.env["cx.tower.server.log"]
cls.server_log_yaml_test = cls.ServerLog.create(
{
"name": "Test Server Log",
"reference": "test_server_log",
"command_id": cls.command_yaml_test.id,
"log_type": "command",
"server_template_id": cls.server_template_yaml_test.id,
}
)
# Create an export wizard and generate YAML code
context = {
"active_model": "cx.tower.server.template",
"active_ids": [cls.server_template_yaml_test.id],
}
cls.export_wizard = (
cls.env["cx.tower.yaml.export.wiz"].with_context(context).create({}) # pylint: disable=context-overridden # new need a new clean context
)
cls.export_wizard.onchange_explode_child_records()
cls.export_wizard.action_generate_yaml_file()
cls.yaml_code = cls.export_wizard.yaml_code
cls.yaml_file = base64.b64encode(cls.yaml_code.encode("utf-8"))
# YAML import upload wizard
cls.YamlImportWizUpload = cls.env["cx.tower.yaml.import.wiz.upload"]
cls.yaml_upload_wizard = cls.YamlImportWizUpload.create(
{"yaml_file": cls.yaml_file, "file_name": "test_yaml_file.yaml"}
)
# YAML import wizard
cls.import_wizard_action = cls.yaml_upload_wizard.action_import_yaml()
cls.import_wizard = cls.env[cls.import_wizard_action["res_model"]].browse(
cls.import_wizard_action["res_id"]
)
cls.import_wizard.if_record_exists = "update"
def test_extract_yaml_data(self):
"""Test extract YAML data from file"""
# -- 1 --
# Test if YAML file is valid
extracted_yaml_data = self.yaml_upload_wizard._extract_yaml_data()
self.assertEqual(
extracted_yaml_data,
self.yaml_code,
"YAML code is not extracted correctly",
)
# -- 2 --
# Test if invalid model is handled properly
# Replace model name with invalid model
self.invalid_yaml_code = self.yaml_code.replace(
"server_template", "invalid_model"
)
self.invalid_yaml_file = base64.b64encode(
self.invalid_yaml_code.encode("utf-8")
)
self.yaml_upload_wizard.yaml_file = self.invalid_yaml_file
with self.assertRaises(ValidationError) as e:
self.yaml_upload_wizard._extract_yaml_data()
self.assertEqual(
str(e.exception),
_("'invalid_model' is not a valid model"),
"Exception message does not match",
)
# -- 3 --
# Test if a model that doesn't support YAML import is handled properly
# Replace model name with a model that doesn't support YAML import
self.non_yaml_supported_yaml_code = self.yaml_code.replace(
"server_template", "command_run_wizard"
)
self.non_yaml_supported_yaml_file = base64.b64encode(
self.non_yaml_supported_yaml_code.encode("utf-8")
)
self.yaml_upload_wizard.yaml_file = self.non_yaml_supported_yaml_file
with self.assertRaises(ValidationError) as e:
self.yaml_upload_wizard._extract_yaml_data()
self.assertEqual(
str(e.exception),
_("Model 'command_run_wizard' does not support YAML import"),
"Exception message does not match",
)
# -- 4 --
# Test if YAML that is not a dictionary is handled properly
self.invalid_yaml_file = base64.b64encode(b"Invalid YAML file")
self.yaml_upload_wizard.yaml_file = self.invalid_yaml_file
with self.assertRaises(ValidationError) as e:
self.yaml_upload_wizard._extract_yaml_data()
self.assertEqual(
str(e.exception),
_("Yaml file doesn't contain valid data"),
"Exception message does not match",
)
# -- 5 --
# Test if TypeError is handled properly
self.non_unicode_yaml_file = base64.b64encode(b"\x80")
self.yaml_upload_wizard.yaml_file = self.non_unicode_yaml_file
with self.assertRaises(ValidationError) as e:
self.yaml_upload_wizard._extract_yaml_data()
self.assertEqual(
str(e.exception),
_("YAML file cannot be decoded properly"),
"Exception message does not match",
)
# -- 6 --
# Test if YAML file is empty
self.empty_yaml_file = ""
self.yaml_upload_wizard.yaml_file = self.empty_yaml_file
with self.assertRaises(ValidationError) as e:
self.yaml_upload_wizard._extract_yaml_data()
self.assertEqual(
str(e.exception),
_("File is empty"),
"Exception message does not match",
)
# -- 7 --
# Test if YAML file with unsupported YAML version is handled properly
yaml_with_unsupported_version = self.yaml_code.replace(
f"cetmix_tower_yaml_version: {self.FlightPlan.CETMIX_TOWER_YAML_VERSION}",
f"cetmix_tower_yaml_version: {self.FlightPlan.CETMIX_TOWER_YAML_VERSION + 1}", # noqa: E501
)
self.unsupported_yaml_version_yaml_file = base64.b64encode(
yaml_with_unsupported_version.encode("utf-8")
)
self.yaml_upload_wizard.yaml_file = self.unsupported_yaml_version_yaml_file
with self.assertRaises(ValidationError) as e:
self.yaml_upload_wizard._extract_yaml_data()
self.assertEqual(
str(e.exception),
_(
"YAML version is higher than version"
" supported by your Cetmix Tower instance."
" %(code_version)s > %(tower_version)s",
code_version=self.FlightPlan.CETMIX_TOWER_YAML_VERSION + 1,
tower_version=self.FlightPlan.CETMIX_TOWER_YAML_VERSION,
),
"Exception message does not match",
)
# -- 8 --
# Test YAML file with no records
self.import_wizard.yaml_code = "cetmix_tower_yaml_version: 1"
with self.assertRaises(ValidationError) as e:
self.import_wizard.action_import_yaml()
self.assertEqual(
str(e.exception),
_("YAML file doesn't contain any records"),
"Exception message does not match",
)
def test_action_import_yaml_skip_if_exists(self):
"""Test YAML import wizard action when skipping an existing record"""
self.import_wizard.if_record_exists = "skip"
# Run import wizard action
import_wizard_result_action = self.import_wizard.action_import_yaml()
# Test if action is composed properly
self.assertEqual(
import_wizard_result_action["type"],
"ir.actions.client",
"Import wizard action type is not correct",
)
self.assertEqual(
import_wizard_result_action["tag"],
"display_notification",
"Import wizard action tag is not correct",
)
self.assertEqual(
import_wizard_result_action["params"]["title"],
_("Record Import"),
"Import wizard action title is not correct",
)
self.assertEqual(
import_wizard_result_action["params"]["message"],
_("No records were created or updated"),
"Import wizard action message is not correct",
)
self.assertEqual(
import_wizard_result_action["params"]["sticky"],
True,
"Import wizard action sticky is not correct",
)
self.assertEqual(
import_wizard_result_action["params"]["type"],
"warning",
"Import wizard action type is not correct",
)
def test_action_import_yaml_update_existing_record(self):
"""Test YAML import wizard action when updating an existing record"""
# -- 1 --
# Test if new import wizard record is created properly
self.assertEqual(
self.import_wizard_action["res_model"],
"cx.tower.yaml.import.wiz",
"Import wizard action model is not correct",
)
self.assertEqual(
self.import_wizard_action["view_mode"],
"form",
"Import wizard action view mode is not correct",
)
# -- 2 --
# Modify Server Template name and variable value
self.import_wizard.yaml_code = self.import_wizard.yaml_code.replace(
"name: Test Server Template",
"name: Updated Test Server Template",
).replace(
"value_char: Some Test Value",
"value_char: Updated Test Value",
)
variable_value_to_update = (
self.server_template_yaml_test.variable_value_ids.filtered(
lambda v: v.value_char == "Some Test Value"
)
)
# Run import wizard action another time
import_wizard_result_action = self.import_wizard.action_import_yaml()
# -- 3 --
# Test if record is updated properly
self.assertEqual(
import_wizard_result_action["res_model"],
"cx.tower.server.template",
"Import wizard action model is not correct",
)
self.assertEqual(
import_wizard_result_action["domain"],
[("id", "in", self.server_template_yaml_test.ids)],
"ID must match existing record ID",
)
self.assertEqual(
self.server_template_yaml_test.name,
"Updated Test Server Template",
"Record is not updated properly",
)
self.assertEqual(
variable_value_to_update.value_char,
"Updated Test Value",
"Variable value is not updated properly",
)
# -- 4 --
# Test if server log remains the same
self.assertEqual(
len(self.server_template_yaml_test.server_log_ids),
1,
"Server Log must remain the same",
)
self.assertEqual(
self.server_log_yaml_test.id,
self.server_template_yaml_test.server_log_ids.id,
"Server Log must remain the same",
)
def test_action_import_yaml_create_new_record(self):
"""Test YAML import wizard action when creating a new record"""
self.import_wizard.if_record_exists = "create"
with mute_logger("odoo.addons.cetmix_tower_yaml.models.cx_tower_yaml_mixin"):
import_wizard_result_action = self.import_wizard.action_import_yaml()
# -- 1 --
# Test if new record is created instead of updating existing one
self.assertEqual(
import_wizard_result_action["res_model"],
"cx.tower.server.template",
"Import wizard action model is not correct",
)
self.assertNotEqual(
import_wizard_result_action["domain"],
[("id", "in", self.server_template_yaml_test.ids)],
"ID must not match existing record ID",
)
# -- 2 --
# Ensure that existing flight plan is used instead of creating a new one
new_server_template = self.env[import_wizard_result_action["res_model"]].search(
import_wizard_result_action["domain"]
)
self.assertEqual(
new_server_template.flight_plan_id,
self.flight_plan_yaml_test,
"Existing flight plan must be used",
)
# -- 3 --
# Ensure that existing tags are used instead of creating new ones
for tag in self.server_template_yaml_test.tag_ids:
self.assertIn(
tag,
new_server_template.tag_ids,
"Existing tag must be used",
)
# -- 4 --
# Ensure that new variable values are created
for variable_value in self.server_template_yaml_test.variable_value_ids:
self.assertNotIn(
variable_value,
new_server_template.variable_value_ids,
"New variable value must be created instead of updating existing one",
)
# -- 5 --
# Test if server log is created instead of updated
for server_log in self.server_template_yaml_test.server_log_ids:
self.assertNotIn(
server_log,
new_server_template.server_log_ids,
"New Server Log must be created instead of updating existing one",
)
def test_extract_secret_names(self):
"""Test extract secret names from YAML data"""
# NB: this is not a real model, it's just for testing
yaml_code = """cetmix_tower_yaml_version: 1
records:
- cetmix_tower_model: test_model
access_level: manager
reference: such_much_test_record
name: Such Much Command
action: file_using_template
allow_parallel_run: false
note: Just a note
os_ids: false
tag_ids: false
path: false
file_template_id: false
flight_plan_id: false
code: false
variable_ids: false
secret_ids: false
ssh_key_id:
reference: test_ssh_key
name: Test SSH Key
key_type: k
note: false
- cetmix_tower_model: another_test_model
reference: such_much_test_record_2
name: Such Much Test Record 2
note: Just a note 2
ssh_key_id:
reference: test_ssh_key
name: Test SSH Key
key_type: k
note: false
secret_ids:
- reference: secret_2
name: Secret 2
key_type: s
note: false
- reference: secret_3
name: Secret 3
key_type: s
note: false
- cetmix_tower_model: another_test_model
reference: such_much_test_record_3
name: Such Much Test Record 3
note: Just a note 3
ssh_key_id:
reference: another_ssh_key
name: Another SSH Key
sub_record:
reference: such_much_test_record_4
name: Such Much Test Record 4
note: Just a note 4
secret_ids:
- reference: secret_1
name: Secret 3
key_type: s
note: false
- reference: secret_2
name: Secret 4
key_type: s
note: false
file_template_id:
reference: my_custom_test_template
name: Such much demo
source: tower
file_type: text
server_dir: /var/log/my/files
file_name: much_logs.txt
keep_when_deleted: false
tag_ids: false
note: Hey!
code: false
variable_ids: false
secret_ids: false
flight_plan_id: false
code: false
variable_ids: false
secret_ids:
- reference: secret_1
name: Secret 1
key_type: s
note: false
- reference: secret_2
name: Secret 2
key_type: s
note: false
"""
secret_list = self.env["cx.tower.yaml.import.wiz"]._extract_secret_names(
yaml.safe_load(yaml_code)
)
# We expect 6 secrets in the list:
# 2 keys: 'Test SSH Key', 'Another SSH Key'
# 4 secrets: 'Secret 3', 'Secret 4', 'Secret 1', 'Secret 2'
self.assertEqual(len(secret_list), 6, "Secret list length is not correct")
self.assertIn("Test SSH Key", secret_list, "Key is not in the list")
self.assertIn("Another SSH Key", secret_list, "Key is not in the list")
self.assertIn("Secret 3", secret_list, "Key is not in the list")
self.assertIn("Secret 4", secret_list, "Key is not in the list")
self.assertIn("Secret 1", secret_list, "Key is not in the list")
self.assertIn("Secret 2", secret_list, "Key is not in the list")
def test_extract_secret_names_with_key_id(self):
"""Test extract secret names when secrets are nested under key_id"""
yaml_code = """cetmix_tower_yaml_version: 1
records:
- cetmix_tower_model: test_model
reference: rec_1
name: Test Record
secret_ids:
- key_id:
reference: secret_1
name: Nested Secret 1
- key_id:
reference: secret_2
name: Nested Secret 2
ssh_key_id:
name: SSH Key Nested
"""
secret_list = self.env["cx.tower.yaml.import.wiz"]._extract_secret_names(
yaml.safe_load(yaml_code)
)
# We expect 3 secrets total:
# - SSH Key Nested (from ssh_key_id)
# - Nested Secret 1
# - Nested Secret 2
self.assertCountEqual(
secret_list,
["Nested Secret 1", "Nested Secret 2", "SSH Key Nested"],
"Unexpected secrets extracted for nested structure",
)
def test_create_records_different_models(self):
"""Test create records with different models"""
yaml_code = """cetmix_tower_yaml_version: 1
records:
- cetmix_tower_model: command
access_level: manager
reference: much_much_command
name: Much Much Command
action: file_using_template
allow_parallel_run: false
note: Just a note
os_ids: false
tag_ids: false
path: false
file_template_id: false
flight_plan_id: false
code: false
variable_ids: false
secret_ids: false
ssh_key_id:
reference: test_ssh_key
name: Test SSH Key
key_type: k
note: false
- cetmix_tower_model: server_template
reference: wow_much_server_template
name: Wow Much Server Template
note: Just a note 2
- cetmix_tower_model: tag
reference: such_much_tag
name: Such Much Tag
"""
# Create a new command record
self.import_wizard.if_record_exists = "update"
self.import_wizard.yaml_code = yaml_code
action = self.import_wizard.action_import_yaml()
# Check if action is composed properly
self.assertEqual(
action["type"],
"ir.actions.client",
"Import wizard action type is not correct",
)
self.assertEqual(
action["tag"],
"display_notification",
"Import wizard action tag is not correct",
)
self.assertEqual(
action["params"]["title"],
_("Record Import"),
"Import wizard action title is not correct",
)
self.assertEqual(
action["params"]["type"],
"success",
"Import wizard action type is not correct",
)
self.assertEqual(
action["params"]["sticky"],
True,
"Import wizard action sticky is not correct",
)
# Check command
self.assertTrue(
self.Command.get_by_reference("much_much_command"),
"Command must be created",
)
# Check server template
self.assertTrue(
self.env["cx.tower.server.template"].get_by_reference(
"wow_much_server_template"
),
"Server template must be created",
)
# Check tag
self.assertTrue(
self.Tag.get_by_reference("such_much_tag"), "Tag must be created"
)
def test_yaml_import_server_without_password(self):
"""Wizard should import server without ssh_password."""
yaml_code = (
"cetmix_tower_yaml_version: 1\n"
"records:\n"
"- reference: srv_nopass\n"
" cetmix_tower_model: server\n"
" name: YAML NoPass\n"
" ssh_auth_mode: p\n"
" ssh_username: root\n"
" ip_v4_address: 10.0.0.3\n"
)
wiz = self.env["cx.tower.yaml.import.wiz"].create(
{
"yaml_code": yaml_code,
"if_record_exists": "create",
}
)
wiz.action_import_yaml()
srv = self.env["cx.tower.server"].get_by_reference("srv_nopass")
self.assertTrue(srv, "Server was not created")
self.assertFalse(
srv._get_secret_value("ssh_password"),
"ssh_password must stay empty after import",
)
def test_orm_create_server_requires_password(self):
"""Creating a server via ORM/UI must fail when ssh_password is missing."""
with self.assertRaises(ValidationError) as err:
self.env["cx.tower.server"].create(
{
"reference": "srv_ui",
"name": "UI NoPass",
"ssh_auth_mode": "p",
"ssh_username": "root",
"ip_v4_address": "10.0.0.2",
}
)
self.assertIn("Please provide SSH password", str(err.exception))
def test_yaml_import_server_with_skip_ssh_check(self):
"""Explicit skip_ssh_settings_check also bypasses password validation."""
yaml_code = (
"cetmix_tower_yaml_version: 1\n"
"records:\n"
"- reference: srv_skip\n"
" cetmix_tower_model: server\n"
" name: YAML Skip Check\n"
" ssh_auth_mode: p\n"
" ssh_username: root\n"
" ip_v4_address: 10.0.0.4\n"
)
wiz = self.env["cx.tower.yaml.import.wiz"].create(
{
"yaml_code": yaml_code,
"if_record_exists": "create",
}
)
wiz.with_context(skip_ssh_settings_check=True).action_import_yaml()
srv = self.env["cx.tower.server"].get_by_reference("srv_skip")
self.assertTrue(
srv, "Server must be created when skip_ssh_settings_check is set"
)
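The recursive secret-name collection these tests describe can be sketched as a plain function. This is a hypothetical re-implementation of the behaviour `_extract_secret_names` is expected to have — names gathered from `ssh_key_id` and `secret_ids` entries, including secrets nested under `key_id` — not the wizard's actual code:

```python
def extract_secret_names(data):
    """Collect key/secret display names from parsed YAML data, recursively."""
    names = set()

    def collect(value):
        # A secret entry may wrap its data one level deeper under 'key_id'
        if isinstance(value, dict):
            inner = value.get("key_id", value)
            if isinstance(inner, dict) and inner.get("name"):
                names.add(inner["name"])
        elif isinstance(value, list):
            for item in value:
                collect(item)

    def walk(node):
        if isinstance(node, dict):
            for key, value in node.items():
                if key in ("ssh_key_id", "secret_ids") and value:
                    collect(value)
                else:
                    # Keep descending: secrets may live on nested records
                    walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(data)
    return sorted(names)
```

Deduplicating by display name in a set is what makes the first test above expect 6 entries rather than 8: two `secret_ids` entries share names with keys collected elsewhere in the document.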


@@ -0,0 +1,35 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record id="cx_tower_command_view_form" model="ir.ui.view">
<field name="name">cx.tower.command.yaml.view.form</field>
<field name="model">cx.tower.command</field>
<field name="inherit_id" ref="cetmix_tower_server.cx_tower_command_view_form" />
<field name="arch" type="xml">
<xpath expr="//notebook" position="inside">
<page name="yaml" string="YAML">
<div groups="!cetmix_tower_yaml.group_export">
<h3
>You must be a member of the "YAML/Export" group to export data as YAML.</h3>
</div>
<button
type="object"
groups="cetmix_tower_yaml.group_export"
class="oe_highlight"
name="action_open_yaml_export_wizard"
string="Export YAML"
/>
</page>
</xpath>
</field>
</record>
<record id="action_cx_tower_command_export_yaml" model="ir.actions.act_window">
<field name="name">Export YAML</field>
<field name="res_model">cx.tower.yaml.export.wiz</field>
<field name="view_mode">form</field>
<field name="target">new</field>
<field name="binding_model_id" ref="model_cx_tower_command" />
<field name="binding_view_types">list</field>
<field name="groups_id" eval="[(4, ref('cetmix_tower_yaml.group_export'))]" />
</record>
</odoo>


@@ -0,0 +1,41 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record id="cx_tower_file_template_view_form" model="ir.ui.view">
<field name="name">cx.tower.file.template.yaml.view.form</field>
<field name="model">cx.tower.file.template</field>
<field
name="inherit_id"
ref="cetmix_tower_server.cx_tower_file_template_view_form"
/>
<field name="arch" type="xml">
<xpath expr="//notebook" position="inside">
<page name="yaml" string="YAML">
<div groups="!cetmix_tower_yaml.group_export">
<h3
>You must be a member of the "YAML/Export" group to export data as YAML.</h3>
</div>
<button
type="object"
groups="cetmix_tower_yaml.group_export"
class="oe_highlight"
name="action_open_yaml_export_wizard"
string="Export YAML"
/>
</page>
</xpath>
</field>
</record>
<record
id="action_cx_tower_file_template_export_yaml"
model="ir.actions.act_window"
>
<field name="name">Export YAML</field>
<field name="res_model">cx.tower.yaml.export.wiz</field>
<field name="view_mode">form</field>
<field name="target">new</field>
<field name="binding_model_id" ref="model_cx_tower_file_template" />
<field name="binding_view_types">list</field>
<field name="groups_id" eval="[(4, ref('cetmix_tower_yaml.group_export'))]" />
</record>
</odoo>


@@ -0,0 +1,12 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record id="action_cx_tower_key_export_yaml" model="ir.actions.act_window">
<field name="name">Export YAML</field>
<field name="res_model">cx.tower.yaml.export.wiz</field>
<field name="view_mode">form</field>
<field name="target">new</field>
<field name="binding_model_id" ref="model_cx_tower_key" />
<field name="binding_view_types">list</field>
<field name="groups_id" eval="[(4, ref('cetmix_tower_yaml.group_export'))]" />
</record>
</odoo>


@@ -0,0 +1,12 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record id="action_cx_tower_os_export_yaml" model="ir.actions.act_window">
<field name="name">Export YAML</field>
<field name="res_model">cx.tower.yaml.export.wiz</field>
<field name="view_mode">form</field>
<field name="target">new</field>
<field name="binding_model_id" ref="model_cx_tower_os" />
<field name="binding_view_types">list</field>
<field name="groups_id" eval="[(4, ref('cetmix_tower_yaml.group_export'))]" />
</record>
</odoo>


@@ -0,0 +1,35 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record id="cx_tower_plan_view_form" model="ir.ui.view">
<field name="name">cx.tower.plan.view.form</field>
<field name="model">cx.tower.plan</field>
<field name="inherit_id" ref="cetmix_tower_server.cx_tower_plan_view_form" />
<field name="arch" type="xml">
<xpath expr="//notebook" position="inside">
<page name="yaml" string="YAML">
<div groups="!cetmix_tower_yaml.group_export">
<h3
>You must be a member of the "YAML/Export" group to export data as YAML.</h3>
</div>
<button
type="object"
groups="cetmix_tower_yaml.group_export"
class="oe_highlight"
name="action_open_yaml_export_wizard"
string="Export YAML"
/>
</page>
</xpath>
</field>
</record>
<record id="action_cx_tower_plan_export_yaml" model="ir.actions.act_window">
<field name="name">Export YAML</field>
<field name="res_model">cx.tower.yaml.export.wiz</field>
<field name="view_mode">form</field>
<field name="target">new</field>
<field name="binding_model_id" ref="model_cx_tower_plan" />
<field name="binding_view_types">list</field>
<field name="groups_id" eval="[(4, ref('cetmix_tower_yaml.group_export'))]" />
</record>
</odoo>


@@ -0,0 +1,43 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record id="view_cx_tower_scheduled_task_view_form" model="ir.ui.view">
<field name="name">cx.tower.scheduled.task.view.form</field>
<field name="model">cx.tower.scheduled.task</field>
<field
name="inherit_id"
ref="cetmix_tower_server.view_cx_tower_scheduled_task_view_form"
/>
<field name="arch" type="xml">
<xpath expr="//notebook" position="inside">
<page name="yaml" string="YAML">
<div groups="!cetmix_tower_yaml.group_export">
<h3
>You must be a member of the "YAML/Export" group to export data as YAML.</h3>
</div>
<button
type="object"
groups="cetmix_tower_yaml.group_export"
class="oe_highlight"
name="action_open_yaml_export_wizard"
string="Export YAML"
/>
</page>
</xpath>
</field>
</record>
<record
id="action_cx_tower_scheduled_task_export_yaml"
model="ir.actions.act_window"
>
<field name="name">Export YAML</field>
<field name="res_model">cx.tower.yaml.export.wiz</field>
<field name="view_mode">form</field>
<field name="target">new</field>
<field name="binding_model_id" ref="model_cx_tower_scheduled_task" />
<field name="binding_view_types">list</field>
<field name="groups_id" eval="[(4, ref('cetmix_tower_yaml.group_export'))]" />
</record>
</odoo>


@@ -0,0 +1,41 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record id="cx_tower_server_template_view_form" model="ir.ui.view">
<field name="name">cx.tower.server.template.yaml.view.form</field>
<field name="model">cx.tower.server.template</field>
<field
name="inherit_id"
ref="cetmix_tower_server.cx_tower_server_template_view_form"
/>
<field name="arch" type="xml">
<xpath expr="//notebook" position="inside">
<page name="yaml" string="YAML">
<div groups="!cetmix_tower_yaml.group_export">
<h3
>You must be a member of the "YAML/Export" group to export data as YAML.</h3>
</div>
<button
type="object"
groups="cetmix_tower_yaml.group_export"
class="oe_highlight"
name="action_open_yaml_export_wizard"
string="Export YAML"
/>
</page>
</xpath>
</field>
</record>
<record
id="action_cx_tower_server_template_export_yaml"
model="ir.actions.act_window"
>
<field name="name">Export YAML</field>
<field name="res_model">cx.tower.yaml.export.wiz</field>
<field name="view_mode">form</field>
<field name="target">new</field>
<field name="binding_model_id" ref="model_cx_tower_server_template" />
<field name="binding_view_types">list</field>
<field name="groups_id" eval="[(4, ref('cetmix_tower_yaml.group_export'))]" />
</record>
</odoo>


@@ -0,0 +1,35 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record id="cx_tower_server_view_form" model="ir.ui.view">
<field name="name">cx.tower.server.yaml.view.form</field>
<field name="model">cx.tower.server</field>
<field name="inherit_id" ref="cetmix_tower_server.cx_tower_server_view_form" />
<field name="arch" type="xml">
<xpath expr="//notebook" position="inside">
<page name="yaml" string="YAML">
<div groups="!cetmix_tower_yaml.group_export">
<h3
>You must be a member of the "YAML/Export" group to export data as YAML.</h3>
</div>
<button
type="object"
groups="cetmix_tower_yaml.group_export"
class="oe_highlight"
name="action_open_yaml_export_wizard"
string="Export YAML"
/>
</page>
</xpath>
</field>
</record>
<record id="action_cx_tower_server_export_yaml" model="ir.actions.act_window">
<field name="name">Export YAML</field>
<field name="res_model">cx.tower.yaml.export.wiz</field>
<field name="view_mode">form</field>
<field name="target">new</field>
<field name="binding_model_id" ref="model_cx_tower_server" />
<field name="binding_view_types">list</field>
<field name="groups_id" eval="[(4, ref('cetmix_tower_yaml.group_export'))]" />
</record>
</odoo>


@@ -0,0 +1,39 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record id="cx_tower_shortcut_view_form" model="ir.ui.view">
<field name="name">cx.tower.shortcut.view.form</field>
<field name="model">cx.tower.shortcut</field>
<field
name="inherit_id"
ref="cetmix_tower_server.cx_tower_shortcut_view_form"
/>
<field name="arch" type="xml">
<xpath expr="//notebook" position="inside">
<page name="yaml" string="YAML">
<div groups="!cetmix_tower_yaml.group_export">
<h3
>You must be a member of the "YAML/Export" group to export data as YAML.</h3>
</div>
<button
type="object"
groups="cetmix_tower_yaml.group_export"
class="oe_highlight"
name="action_open_yaml_export_wizard"
string="Export YAML"
/>
</page>
</xpath>
</field>
</record>
<record id="action_cx_tower_shortcut_export_yaml" model="ir.actions.act_window">
<field name="name">Export YAML</field>
<field name="res_model">cx.tower.yaml.export.wiz</field>
<field name="view_mode">form</field>
<field name="target">new</field>
<field name="binding_model_id" ref="model_cx_tower_shortcut" />
<field name="binding_view_types">list</field>
<field name="groups_id" eval="[(4, ref('cetmix_tower_yaml.group_export'))]" />
</record>
</odoo>


@@ -0,0 +1,12 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record id="action_cx_tower_tag_export_yaml" model="ir.actions.act_window">
<field name="name">Export YAML</field>
<field name="res_model">cx.tower.yaml.export.wiz</field>
<field name="view_mode">form</field>
<field name="target">new</field>
<field name="binding_model_id" ref="model_cx_tower_tag" />
<field name="binding_view_types">list</field>
<field name="groups_id" eval="[(4, ref('cetmix_tower_yaml.group_export'))]" />
</record>
</odoo>


@@ -0,0 +1,15 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record
id="action_cx_tower_variable_value_export_yaml"
model="ir.actions.act_window"
>
<field name="name">Export YAML</field>
<field name="res_model">cx.tower.yaml.export.wiz</field>
<field name="view_mode">form</field>
<field name="target">new</field>
<field name="binding_model_id" ref="model_cx_tower_variable_value" />
<field name="binding_view_types">list</field>
<field name="groups_id" eval="[(4, ref('cetmix_tower_yaml.group_export'))]" />
</record>
</odoo>


@@ -0,0 +1,12 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record id="action_cx_tower_variable_export_yaml" model="ir.actions.act_window">
<field name="name">Export YAML</field>
<field name="res_model">cx.tower.yaml.export.wiz</field>
<field name="view_mode">form</field>
<field name="target">new</field>
<field name="binding_model_id" ref="model_cx_tower_variable" />
<field name="binding_view_types">list</field>
<field name="groups_id" eval="[(4, ref('cetmix_tower_yaml.group_export'))]" />
</record>
</odoo>


@@ -0,0 +1,33 @@
<odoo>
<record id="view_yaml_manifest_author_tree" model="ir.ui.view">
<field name="name">yaml.manifest.author.tree</field>
<field name="model">cx.tower.yaml.manifest.author</field>
<field name="arch" type="xml">
<tree>
<field name="name" />
</tree>
</field>
</record>
<record id="view_yaml_manifest_author_form" model="ir.ui.view">
<field name="name">yaml.manifest.author.form</field>
<field name="model">cx.tower.yaml.manifest.author</field>
<field name="arch" type="xml">
<form>
<sheet>
<group>
<field name="name" />
</group>
</sheet>
</form>
</field>
</record>
<record id="action_yaml_manifest_author" model="ir.actions.act_window">
<field name="name">YAML Manifest Authors</field>
<field name="res_model">cx.tower.yaml.manifest.author</field>
<field name="view_mode">tree,form</field>
<field name="target">current</field>
</record>
</odoo>


@@ -0,0 +1,51 @@
<odoo>
<record id="view_yaml_manifest_template_tree" model="ir.ui.view">
<field name="name">cx.tower.yaml.manifest.tmpl.tree</field>
<field name="model">cx.tower.yaml.manifest.tmpl</field>
<field name="arch" type="xml">
<tree>
<field name="name" />
<field name="file_prefix" />
<field name="author_ids" widget="many2many_tags" />
<field name="version" />
<field name="website" />
<field name="license" />
<field name="currency" />
</tree>
</field>
</record>
<record id="view_yaml_manifest_template_form" model="ir.ui.view">
<field name="name">cx.tower.yaml.manifest.tmpl.form</field>
<field name="model">cx.tower.yaml.manifest.tmpl</field>
<field name="arch" type="xml">
<form>
<sheet>
<group>
<field name="name" />
<field name="file_prefix" />
<field name="author_ids" widget="many2many_tags" />
<field name="version" />
<field name="website" />
<field name="license" />
<field
name="license_text"
attrs="{'invisible': [('license', '!=', 'custom')]}"
/>
<field
name="currency"
attrs="{'invisible': [('license', '!=', 'custom')]}"
/>
</group>
</sheet>
</form>
</field>
</record>
<record id="action_yaml_manifest_template" model="ir.actions.act_window">
<field name="name">YAML Manifest Templates</field>
<field name="type">ir.actions.act_window</field>
<field name="res_model">cx.tower.yaml.manifest.tmpl</field>
<field name="view_mode">tree,form</field>
<field name="view_id" ref="view_yaml_manifest_template_tree" />
<field name="target">current</field>
</record>
</odoo>


@@ -0,0 +1,33 @@
<odoo>
<!-- Import YAML -> Tools -->
<menuitem
id="menu_cetmix_tower_yaml_import"
name="Import YAML"
parent="cetmix_tower_server.menu_tools"
sequence="10"
groups="group_import"
action="action_cx_tower_yaml_import_wiz_upload"
/>
<!-- YAML Manifest Settings -> Settings -->
<menuitem
id="menu_yaml_settings_root"
name="YAML Export/Import"
parent="cetmix_tower_server.menu_settings"
sequence="60"
/>
<menuitem
id="menu_yaml_manifest_author_action"
name="Manifest Authors"
parent="menu_yaml_settings_root"
action="action_yaml_manifest_author"
sequence="1"
/>
<menuitem
id="menu_yaml_manifest_template"
name="Manifest Templates"
parent="menu_yaml_settings_root"
action="action_yaml_manifest_template"
sequence="2"
/>
</odoo>


@@ -0,0 +1,4 @@
from . import cx_tower_yaml_export_wiz
from . import cx_tower_yaml_export_wiz_download
from . import cx_tower_yaml_import_wiz
from . import cx_tower_yaml_import_wiz_upload


@@ -0,0 +1,367 @@
# Copyright (C) 2024 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
import base64
import re
from odoo import _, api, fields, models
from odoo.exceptions import ValidationError
from ..models.cx_tower_yaml_mixin import YamlExportCollector
FILE_HEADER = """
# This file is generated with Cetmix Tower.
# Details and documentation: https://cetmix.com/tower
"""
CLEAN_STR = re.compile(r"[^a-z0-9_]")
class CxTowerYamlExportWiz(models.TransientModel):
"""Cetmix Tower YAML Export Wizard"""
_name = "cx.tower.yaml.export.wiz"
_description = "Cetmix Tower YAML Export Wizard"
yaml_code = fields.Text()
yaml_file_name = fields.Char(
string="YAML File Name",
size=255,
default=lambda self: self._default_yaml_file_name(),
help="Snippet file name without extension, e.g. 'my_snippet'",
)
explode_child_records = fields.Boolean(
default=True,
help="Add entire child record definitions to the exported YAML file. "
"Otherwise only references to child records will be added.",
)
remove_empty_values = fields.Boolean(
string="Remove Empty x2m Field Values",
default=True,
help="Remove empty Many2one, Many2many and One2many"
" field values from the exported YAML file.",
)
preview_code = fields.Boolean()
add_manifest = fields.Boolean()
MANIFEST_FIELDS = [
"manifest_template_id",
"manifest_name",
"manifest_author_ids",
"manifest_version",
"manifest_summary",
"manifest_description",
"manifest_website",
"manifest_license",
"manifest_license_text",
"manifest_currency",
"manifest_price",
]
@api.model
def _get_manifest_license_selection(self):
return self.env["cx.tower.yaml.manifest.tmpl"]._selection_license()
@api.model
def _get_manifest_currency_selection(self):
return self.env["cx.tower.yaml.manifest.tmpl"]._selection_currency()
manifest_template_id = fields.Many2one(
"cx.tower.yaml.manifest.tmpl",
)
manifest_name = fields.Char(
compute="_compute_manifest",
readonly=False,
store=True,
string="Snippet Name",
help="Leave this field blank if you don't want to create a manifest",
)
manifest_website = fields.Char(
compute="_compute_manifest",
readonly=False,
string="Website",
store=True,
)
manifest_license = fields.Selection(
selection="_get_manifest_license_selection",
compute="_compute_manifest",
readonly=False,
string="License",
store=True,
)
manifest_author_ids = fields.Many2many(
"cx.tower.yaml.manifest.author",
compute="_compute_manifest",
readonly=False,
string="Authors",
store=True,
)
manifest_license_text = fields.Text(
compute="_compute_manifest", readonly=False, string="License Text", store=True
)
manifest_currency = fields.Selection(
selection="_get_manifest_currency_selection",
compute="_compute_manifest",
string="Currency",
readonly=False,
store=True,
)
manifest_summary = fields.Char(
string="Summary",
size=160,
help="Short summary that includes core information. 160 symbols max",
)
manifest_description = fields.Text("Description")
manifest_price = fields.Float("Price")
manifest_version = fields.Char(
compute="_compute_manifest",
readonly=False,
store=True,
string="Version",
help="Use the Major.Minor.Patch format, e.g. 1.2.3",
)
def _clean_yaml_basename(self, name: str) -> str:
"""
Return an *always-valid* basename (no extension) built from an arbitrary *name*.
"""
raw = (name or "").strip().lower()
base = raw[:-5] if raw.endswith(".yaml") else raw
base = CLEAN_STR.sub("_", base)
base = re.sub(r"_+", "_", base).strip("_") or "snippet"
return base
def _default_yaml_file_name(self):
"""
Build the *initial* file name shown to the user.
Pattern: <model>_<reference>, without “.yaml” suffix.
"""
records = self._get_model_record()
prefix = records._name.replace("cx.tower.", "").replace(".", "_")
ref = records.reference if len(records) == 1 else "selected"
return f"{prefix}_{ref}"
@api.depends("manifest_template_id")
def _compute_manifest(self):
mapping = {
"manifest_author_ids": "author_ids",
"manifest_website": "website",
"manifest_license": "license",
"manifest_license_text": "license_text",
"manifest_currency": "currency",
"manifest_version": "version",
}
for rec in self:
tmpl = rec.manifest_template_id
if not tmpl:
continue
for wiz_field, tmpl_field in mapping.items():
if not rec[wiz_field]:
rec[wiz_field] = tmpl[tmpl_field]
# prepend template's file prefix to YAML file name
prefix = (tmpl.file_prefix or "").strip()
if prefix:
# sanitize prefix without defaulting to a placeholder like "snippet"
raw = prefix.lower()
sanitized_prefix = re.sub(r"_+", "_", CLEAN_STR.sub("_", raw)).strip(
"_"
)
if sanitized_prefix:
# use current or default base name, then clean it
current = rec.yaml_file_name or rec._default_yaml_file_name()
base = rec._clean_yaml_basename(current)
# avoid double-prefixing
if not base.startswith(f"{sanitized_prefix}_"):
rec.yaml_file_name = rec._clean_yaml_basename(
f"{sanitized_prefix}_{base}"
)
@api.onchange("manifest_license")
def _onchange_manifest_license(self):
"""Drop price and currency when user switches off the 'custom' license.
If manifest_license != 'custom', reset manifest_price to 0.0 and
manifest_currency to False so they won't appear in the generated YAML.
"""
for rec in self:
if rec.manifest_license != "custom":
rec.manifest_price = 0.0
rec.manifest_currency = False
@api.onchange("explode_child_records", "remove_empty_values", *MANIFEST_FIELDS)
def onchange_explode_child_records(self):
"""Compute YAML code and file content."""
self.ensure_one()
# Get model records
records = self._get_model_record()
if not records:
raise ValidationError(_("No valid records selected"))
explode_related_record = self.explode_child_records
remove_empty_values = self.remove_empty_values
# Prepare YAML header
yaml_header = FILE_HEADER.rstrip("\n")
# Use the YAML export collector for unique records
collector = YamlExportCollector()
record_list = []
for rec in records:
record_yaml_dict = rec.with_context(
explode_related_record=explode_related_record,
remove_empty_values=remove_empty_values,
yaml_collector=collector,
)._prepare_record_for_yaml()
if not record_yaml_dict:
continue
if isinstance(record_yaml_dict, dict) and list(record_yaml_dict) == [
"reference"
]:
continue
if "cetmix_tower_model" not in record_yaml_dict:
record_yaml_dict["cetmix_tower_model"] = rec._name.replace(
"cx.tower.", ""
).replace(".", "_")
record_list.append(record_yaml_dict)
if not record_list:
self.yaml_code = f"{yaml_header}\n"
return
if not self.manifest_name:
manifest = {}
else:
lic = (self.manifest_license or "").lower()
fields_order = [
("name", self.manifest_name),
("summary", self.manifest_summary),
("description", self.manifest_description),
("author", self.manifest_author_ids.mapped("name")),
("version", self.manifest_version),
("website", self.manifest_website),
("license", self.manifest_license),
(
"license_text",
(self.manifest_license_text or "").strip()
if lic == "custom"
else None,
),
("price", self.manifest_price),
(
"currency",
self.manifest_currency if lic == "custom" else None,
),
]
manifest = {k: v for k, v in fields_order if v not in (False, None, "", [])}
result_dict = {
"cetmix_tower_yaml_version": self.env[
"cx.tower.yaml.mixin"
].CETMIX_TOWER_YAML_VERSION,
}
if manifest:
result_dict["manifest"] = manifest
result_dict["records"] = record_list
self.yaml_code = f"{yaml_header}\n{records._convert_dict_to_yaml(result_dict)}"
@api.onchange("yaml_file_name")
def _onchange_yaml_file_name(self):
"""
Live-clean the YAML file name as the user types:
- lowercase, trim whitespace
- replace invalid characters with “_”
- collapse repeated underscores
- strip a trailing “.yaml” suffix (the extension is appended on export)
"""
for rec in self:
rec.yaml_file_name = rec._clean_yaml_basename(rec.yaml_file_name)
@api.constrains("manifest_version")
def _check_manifest_version_format(self):
"""
Ensure the user types a semantic version (x.y.z) in the wizard itself.
"""
semver = re.compile(r"^\d+\.\d+\.\d+$")
for rec in self:
if rec.manifest_version and not semver.match(rec.manifest_version):
raise ValidationError(
_("Version must be in format Major.Minor.Patch, e.g. 1.2.3")
)
def _validate_manifest(self):
"""Logical cross-checks before saving YAML."""
if self.manifest_price and not self.manifest_currency:
raise ValidationError(_("Currency is required when price is specified"))
if (self.manifest_license or "").lower() == "custom" and not (
self.manifest_license_text or ""
).strip():
raise ValidationError(_("License text is required for a custom license"))
def write(self, vals):
"""
Override write to always sanitize `yaml_file_name`
before persisting, making programmatic assignments safe.
"""
if "yaml_file_name" in vals:
vals["yaml_file_name"] = self._clean_yaml_basename(vals["yaml_file_name"])
return super().write(vals)
def action_generate_yaml_file(self):
"""Save YAML file"""
self.ensure_one()
self._validate_manifest()
if not self.yaml_code:
raise ValidationError(_("No YAML code is present."))
# Generate YAML file
try:
yaml_file = base64.encodebytes(self.yaml_code.encode("utf-8"))
yaml_file_name = (
f"{self.yaml_file_name or self._default_yaml_file_name()}.yaml"
)
except Exception as exc:
raise ValidationError(
_(
"Failed to encode YAML content. Please ensure all characters are UTF-8 compatible." # noqa: E501
)
) from exc
download_wizard = self.env["cx.tower.yaml.export.wiz.download"].create(
{
"yaml_file": yaml_file,
"yaml_file_name": yaml_file_name,
}
)
return {
"type": "ir.actions.act_window",
"res_model": "cx.tower.yaml.export.wiz.download",
"res_id": download_wizard.id,
"target": "new",
"view_mode": "form",
}
def _get_model_record(self):
"""Get model records based on context values
Raises:
ValidationError: in case no model or records selected
Returns:
ModelRecords: a recordset of selected records
"""
model_name = self.env.context.get("active_model")
record_ids = self.env.context.get("active_ids")
if not model_name or not record_ids:
raise ValidationError(_("No model or records selected"))
return self.env[model_name].browse(record_ids)
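For reference, the sanitisation rules implemented by `_clean_yaml_basename` above can be exercised in isolation; the following is an illustrative standalone re-implementation of the same logic, not the wizard code itself:

```python
import re

# Mirrors the sanitisation rules of _clean_yaml_basename
# (illustrative re-implementation for testing outside Odoo).
CLEAN_STR = re.compile(r"[^a-z0-9_]")


def clean_yaml_basename(name: str) -> str:
    raw = (name or "").strip().lower()
    # Drop a trailing ".yaml" suffix; it is re-appended on export
    base = raw[:-5] if raw.endswith(".yaml") else raw
    base = CLEAN_STR.sub("_", base)  # invalid characters -> "_"
    base = re.sub(r"_+", "_", base).strip("_")  # collapse and trim "_"
    return base or "snippet"


print(clean_yaml_basename("My Snippet.yaml"))  # my_snippet
print(clean_yaml_basename("--weird!!name--"))  # weird_name
```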


@@ -0,0 +1,130 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record id="cx_tower_yaml_export_wiz_view_form" model="ir.ui.view">
<field name="name">cx.tower.yaml.export.wiz.view.form</field>
<field name="model">cx.tower.yaml.export.wiz</field>
<field name="arch" type="xml">
<form>
<group>
<field name="yaml_file_name" placeholder="my_snippet.yaml" />
</group>
<group>
<group>
<field name="explode_child_records" />
<field name="remove_empty_values" />
</group>
<group>
<field name="add_manifest" />
<field name="preview_code" />
</group>
</group>
<group
string="Manifest"
attrs="{'invisible': [('add_manifest','=',False)]}"
>
<field
name="manifest_template_id"
placeholder="Select a pre-defined template"
help="Select a template to auto-populate manifest fields"
/>
<group string="Information">
<field
name="manifest_name"
attrs="{'required': [('add_manifest','!=',False)]}"
/>
<field
name="manifest_summary"
attrs="{'required': [('manifest_name','!=',False)]}"
placeholder="Short summary, 160 symbols max"
/>
<field
name="manifest_author_ids"
widget="many2many_tags"
attrs="{'required': [('manifest_name','!=',False)]}"
/>
<field
name="manifest_version"
placeholder="Use the Major.Minor.Patch format, e.g. 1.2.3"
/>
<field name="manifest_website" />
</group>
<group string="License and pricing">
<field
name="manifest_license"
attrs="{'required': [('manifest_name','!=',False)]}"
/>
<field
name="manifest_price"
attrs="{'invisible': [('manifest_license', '!=', 'custom')]}"
/>
<field
name="manifest_currency"
attrs="{'invisible': [('manifest_price', '=', 0)]}"
/>
</group>
</group>
<notebook>
<page
string="Description"
attrs="{'invisible': [('add_manifest','=',False)]}"
>
<field
name="manifest_description"
widget="text"
nolabel="1"
colspan="4"
placeholder="Detailed description (optional)"
/>
</page>
<page
string="License text"
attrs="{'invisible': [('manifest_license', '!=', 'custom')]}"
>
<field
name="manifest_license_text"
widget="text"
nolabel="1"
colspan="4"
placeholder="License text"
attrs="{'required': [('manifest_license', '=', 'custom')]}"
/>
</page>
<page
string="Preview code"
attrs="{'invisible': [('preview_code','=',False)]}"
>
<field
name="yaml_code"
widget="ace"
options="{'mode': 'yaml'}"
force_save="1"
nolabel="1"
colspan="4"
readonly="1"
/>
</page>
</notebook>
<footer>
<button
string="Generate YAML file"
type="object"
name="action_generate_yaml_file"
class="oe_highlight"
/>
<button string="Close" special="cancel" />
</footer>
</form>
</field>
</record>
</odoo>


@@ -0,0 +1,11 @@
# Copyright (C) 2024 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
from odoo import fields, models
class CxTowerYamlExportWizDownload(models.TransientModel):
_name = "cx.tower.yaml.export.wiz.download"
_description = "Cetmix Tower YAML Export File Download"
yaml_file = fields.Binary(readonly=True, attachment=False)
yaml_file_name = fields.Char(readonly=True)


@@ -0,0 +1,20 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record id="cx_tower_yaml_export_wiz_download_view_form" model="ir.ui.view">
<field name="name">cx.tower.yaml.export.wiz.download.view.form</field>
<field name="model">cx.tower.yaml.export.wiz.download</field>
<field name="arch" type="xml">
<form>
<group>
<field name="yaml_file" filename="yaml_file_name" />
<field name="yaml_file_name" invisible="1" />
</group>
<footer>
<button string="Close" special="cancel" class="oe_highlight" />
</footer>
</form>
</field>
</record>
</odoo>


@@ -0,0 +1,314 @@
# Copyright (C) 2024 Cetmix OÜ
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
import logging
import yaml
from markupsafe import escape
from odoo import _, api, fields, models
from odoo.exceptions import ValidationError
_logger = logging.getLogger(__name__)
class CxTowerYamlImportWiz(models.TransientModel):
"""
Process YAML data and create records in Odoo.
"""
_name = "cx.tower.yaml.import.wiz"
_description = "Cetmix Tower YAML Import Wizard"
yaml_code = fields.Text(readonly=True)
model_names = fields.Char(readonly=True, help="Models to create records in")
if_record_exists = fields.Selection(
selection=[
("skip", "Skip record"),
("update", "Update existing record"),
("create", "Create a new record"),
],
default="skip",
required=True,
help="What to do if record with the same reference already exists",
)
secret_list = fields.Html(
help="List of secrets present in the YAML file (formatted as HTML list)",
compute="_compute_secret_list",
)
preview_code = fields.Boolean(
help="Toggle to show or hide YAML code preview",
)
manifest_name = fields.Char(
readonly=True, compute="_compute_yaml_data", string="Snippet Name"
)
manifest_summary = fields.Char(
readonly=True, compute="_compute_yaml_data", string="Summary"
)
manifest_description = fields.Text(
readonly=True, compute="_compute_yaml_data", string="Description"
)
manifest_author_string = fields.Char(
readonly=True,
compute="_compute_yaml_data",
help="Comma-separated list",
string="Author",
)
manifest_version = fields.Char(
readonly=True, compute="_compute_yaml_data", string="Version"
)
manifest_website = fields.Char(
readonly=True, compute="_compute_yaml_data", string="Website"
)
manifest_license = fields.Char(
readonly=True, compute="_compute_yaml_data", string="License"
)
manifest_license_text = fields.Text(
readonly=True, compute="_compute_yaml_data", string="License text"
)
manifest_price = fields.Float(
readonly=True, compute="_compute_yaml_data", string="Price"
)
manifest_currency = fields.Char(
readonly=True, compute="_compute_yaml_data", string="Currency"
)
@api.depends("yaml_code")
def _compute_secret_list(self):
"""Compute list of secrets present in the YAML file"""
for record in self:
yaml_data = yaml.safe_load(record.yaml_code or "{}")
secret_list = self._extract_secret_names(yaml_data)
if not secret_list:
record.secret_list = False
continue
# Build deterministic HTML list of secrets
items = "".join(f"<li>{escape(name)}</li>" for name in sorted(secret_list))
secrets_html = f"<ul>{items}</ul>"
record.secret_list = _(
"Following secrets are used in the code:<br/>%(secrets)s",
secrets=secrets_html,
)
@api.depends("yaml_code")
def _compute_yaml_data(self):
for record in self:
data = yaml.safe_load(record.yaml_code or "{}")
manifest = data.get("manifest", {}) if isinstance(data, dict) else {}
authors = manifest.get("author")
if isinstance(authors, (list, tuple)):
manifest_author_string = ", ".join(authors)
elif isinstance(authors, str):
manifest_author_string = authors
else:
manifest_author_string = False
record.update(
{
"manifest_name": manifest.get("name"),
"manifest_summary": manifest.get("summary"),
"manifest_description": manifest.get("description"),
"manifest_author_string": manifest_author_string,
"manifest_version": manifest.get("version"),
"manifest_website": manifest.get("website"),
"manifest_license": manifest.get("license"),
"manifest_license_text": manifest.get("license_text"),
"manifest_price": manifest.get("price"),
"manifest_currency": manifest.get("currency"),
}
)
def action_import_yaml(self):
"""Process YAML data and create records in Odoo"""
self.ensure_one()
# Parse YAML code
yaml_data = yaml.safe_load(self.yaml_code)
records = yaml_data.get("records")
if not records:
raise ValidationError(_("YAML file doesn't contain any records"))
# Cache models
model_cache = {}
odoo_record_ids = []
# Process each record
for record in records:
record_reference = record.get("reference")
if not record_reference:
raise ValidationError(_("Record reference is missing"))
model_name = record.get("cetmix_tower_model")
if not model_name:
raise ValidationError(
_("Record model is missing for record %s", record_reference)
)
# Get model from cache or create new one
model = model_cache.get(model_name)
if not model:
model = self.env[
f"cx.tower.{model_name.replace('_', '.')}"
].with_context(skip_ssh_settings_check=(model_name == "server"))
model_cache[model_name] = model
# Get existing record by reference
# NOTE: we don't validate models here because they are
# already validated in the file upload wizard.
odoo_record = model.get_by_reference(record_reference)
# Skip
if self.if_record_exists == "skip" and odoo_record:
_logger.info(
"Skipping record '%s' in model '%s' because it already exists",
record_reference,
model_name,
)
continue
# Update existing record
elif self.if_record_exists == "update" and odoo_record:
try:
record_values = model.with_context(
force_create_related_record=False,
)._post_process_yaml_dict_values(record)
odoo_record.with_context(
from_yaml=True,
).write(record_values)
odoo_record_ids.append(odoo_record.id)
except Exception as e:
raise ValidationError(
_(
"Error updating record %(reference)s: %(error)s",
reference=record_reference,
error=e,
)
) from e
_logger.info(
f"Updated record '{record_reference}' in model '{model_name}'"
)
continue
# Or create a new record
record_values = model.with_context(
force_create_related_record=self.if_record_exists == "create",
)._post_process_yaml_dict_values(record)
try:
odoo_record = model.with_context(
from_yaml=True,
).create(record_values)
odoo_record_ids.append(odoo_record.id)
except Exception as e:
raise ValidationError(
_(
"Error creating record '%(reference)s' in model"
" '%(model)s': %(error)s",
reference=record_reference,
model=model_name,
error=e,
)
) from e
_logger.info(f"Created record '{record_reference}' in model '{model_name}'")
# No records were created or updated
if not odoo_record_ids:
action = {
"type": "ir.actions.client",
"tag": "display_notification",
"params": {
"title": _("Record Import"),
"message": _("No records were created or updated"),
"sticky": True,
"type": "warning",
"next": {"type": "ir.actions.act_window_close"},
},
}
# All records from the same model
elif len(model_cache) == 1:
model = list(model_cache.values())[0]
action = {
"name": _("Import result: %(model)s", model=model._description),
"type": "ir.actions.act_window",
"res_model": model._name,
"target": "current",
"domain": [("id", "in", odoo_record_ids)],
}
if len(odoo_record_ids) == 1:
# Open single record in form view
action["res_id"] = odoo_record_ids[0]
action["view_mode"] = "form"
else:
# Open list view of all records
action["view_mode"] = "list,form"
# Records from different models
else:
model_names = ", ".join(
f"'{model._description}'" for model in model_cache.values()
)
action = {
"type": "ir.actions.client",
"tag": "display_notification",
"params": {
"title": _("Record Import"),
"message": _(
"Records of the following models were created "
"or updated: %(models)s",
models=model_names,
),
"sticky": True,
"type": "success",
"next": {"type": "ir.actions.act_window_close"},
},
}
return action
def _extract_secret_names(self, data: dict) -> list:
"""Extract names of secrets from YAML data.
Supports both formats:
- secret_ids -> [{name: ...}]
- secret_ids -> [{key_id: {name: ...}}]
"""
secret_names = set()
def _recursive_extract(node):
"""Recursively extract secret names from nested structures."""
if isinstance(node, dict):
if "secret_ids" in node and isinstance(node["secret_ids"], list):
for item in node["secret_ids"]:
if not isinstance(item, dict):
continue
# Format 1: direct name
if "name" in item:
secret_names.add(item["name"])
# Format 2: nested key_id -> name
elif (
"key_id" in item
and isinstance(item["key_id"], dict)
and "name" in item["key_id"]
):
secret_names.add(item["key_id"]["name"])
# Handle single ssh_key_id
if "ssh_key_id" in node and isinstance(node["ssh_key_id"], dict):
if "name" in node["ssh_key_id"]:
secret_names.add(node["ssh_key_id"]["name"])
# Recursively process the rest of the dictionary
for value in node.values():
_recursive_extract(value)
elif isinstance(node, list):
for item in node:
_recursive_extract(item)
_recursive_extract(data)
return list(secret_names)
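The recursive secret-name extraction above can be sketched as a standalone function; this is an illustrative re-implementation with hypothetical sample data, not the wizard method itself:

```python
# Standalone sketch of the recursive secret-name extraction used by
# _extract_secret_names (illustrative; sample data is hypothetical).
def extract_secret_names(data):
    names = set()

    def walk(node):
        if isinstance(node, dict):
            for item in node.get("secret_ids") or []:
                if not isinstance(item, dict):
                    continue
                if "name" in item:  # format 1: direct name
                    names.add(item["name"])
                elif isinstance(item.get("key_id"), dict) and "name" in item["key_id"]:
                    names.add(item["key_id"]["name"])  # format 2: nested key_id
            ssh = node.get("ssh_key_id")
            if isinstance(ssh, dict) and "name" in ssh:
                names.add(ssh["name"])
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(data)
    return sorted(names)


data = {
    "records": [
        {"secret_ids": [{"name": "api_key"}, {"key_id": {"name": "deploy_key"}}]},
        {"ssh_key_id": {"name": "root_key"}},
    ]
}
print(extract_secret_names(data))  # ['api_key', 'deploy_key', 'root_key']
```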


@@ -0,0 +1,153 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record id="cx_tower_yaml_import_wiz_view_form" model="ir.ui.view">
<field name="name">cx.tower.yaml.import.wiz.view.form</field>
<field name="model">cx.tower.yaml.import.wiz</field>
<field name="arch" type="xml">
<form>
<group>
<field name="if_record_exists" />
</group>
<div
class="alert alert-info"
role="alert"
attrs="{'invisible': [('secret_list', '=', False)]}"
>
<field name="secret_list" nolabel="1" />
</div>
<group>
<field name="preview_code" widget="boolean_toggle" />
</group>
<group
attrs="{'invisible': [
('manifest_name', '=', False),
]}"
>
<group string="Information">
<field name="manifest_name" string="Name" />
<field name="manifest_summary" string="Summary" />
<field name="manifest_author_string" string="Author" />
<field name="manifest_version" string="Version" />
<field
name="manifest_website"
string="Website"
attrs="{'invisible': [('manifest_website', '=', False)]}"
/>
</group>
<group string="License and pricing">
<field name="manifest_license" string="License" />
<field
name="manifest_price"
string="Price"
attrs="{'invisible': [('manifest_price', '=', False)]}"
/>
<field
name="manifest_currency"
string="Currency"
attrs="{'invisible': [('manifest_currency', '=', False)]}"
/>
</group>
</group>
<notebook>
<page
string="Description"
attrs="{'invisible':[('manifest_description','=',False)]}"
>
<field
name="manifest_description"
widget="text"
nolabel="1"
colspan="4"
/>
</page>
<page
string="License text"
attrs="{'invisible':[('manifest_license_text','=',False)]}"
>
<field
name="manifest_license_text"
widget="text"
nolabel="1"
colspan="4"
/>
</page>
<page
string="Code preview"
attrs="{'invisible': [('preview_code', '=', False)]}"
>
<group>
<field
name="yaml_code"
widget="ace"
options="{'mode': 'yaml'}"
force_save="1"
nolabel="1"
colspan="4"
/>
</group>
</page>
</notebook>
<div
class="alert alert-warning"
role="alert"
attrs="{'invisible': [('if_record_exists', '!=', 'create')]}"
style="margin-bottom:0px;"
>
<p>
<strong
>Important:</strong> To maintain data consistency, the following
model records will always be updated if they exist in Odoo:
</p>
<ul>
<li>Variables</li>
<li>Variable Options</li>
<li>Key/Secrets</li>
<li>Tags</li>
<li>OSs</li>
</ul>
<p>
To create new entities instead of updating existing ones, remove or modify
the <code
>reference</code> field in the YAML code for those entities.
</p>
</div>
<div
class="alert alert-warning"
role="alert"
attrs="{'invisible': [('if_record_exists', '!=', 'update')]}"
style="margin-bottom:0px;"
>
<p>
The existing record will be updated with the new data. Related records present in the YAML code will be updated too;
if any of those related records does not exist, it will be created automatically.
</p>
</div>
<footer>
<button
string="Import"
type="object"
name="action_import_yaml"
class="oe_highlight"
attrs="{'invisible': [('if_record_exists', '!=', 'update')]}"
confirm="This may overwrite existing records. Proceed?"
/>
<button
string="Import"
type="object"
name="action_import_yaml"
class="oe_highlight"
attrs="{'invisible': [('if_record_exists', '=', 'update')]}"
/>
<button string="Close" special="cancel" />
</footer>
</form>
</field>
</record>
</odoo>


@@ -0,0 +1,137 @@
import binascii
from base64 import b64decode
import yaml
from odoo import _, fields, models
from odoo.exceptions import ValidationError
class CxTowerYamlImportWizUpload(models.TransientModel):
"""
Upload YAML file and perform initial validation.
Submit YAML data to import wizard for further processing.
"""
_name = "cx.tower.yaml.import.wiz.upload"
_description = "Cetmix Tower YAML Import Wizard Upload"
file_name = fields.Char()
yaml_file = fields.Binary(required=True)
def action_import_yaml(self):
"""Parse YAML data to the import wizard
Returns:
Action Window: Action to open the import wizard
"""
decoded_file = self._extract_yaml_data()
import_wizard = self.env["cx.tower.yaml.import.wiz"].create(
{
"yaml_code": decoded_file,
}
)
return {
"type": "ir.actions.act_window",
"res_model": "cx.tower.yaml.import.wiz",
"res_id": import_wizard.id,
"view_mode": "form",
"target": "new",
}
def _extract_yaml_data(self):
"""Extract data from YAML file and validate them
Returns:
decoded_file (Text): YAML code
Raises:
ValidationError: If the YAML file is invalid
or contains unsupported data
"""
self.ensure_one()
# Decode base64 file
try:
raw_bytes = b64decode(self.yaml_file or b"")
except (TypeError, binascii.Error) as e:
# Not a valid base-64 payload
raise ValidationError(_("File is not a valid base64-encoded file")) from e
if not raw_bytes:
raise ValidationError(_("File is empty"))
try:
decoded_file = raw_bytes.decode("utf-8")
except UnicodeDecodeError as e:
raise ValidationError(_("YAML file cannot be decoded properly")) from e
# Parse YAML file
try:
yaml_data = yaml.safe_load(decoded_file)
except yaml.YAMLError as e:
raise ValidationError(_("Invalid YAML file")) from e
if not yaml_data or not isinstance(yaml_data, dict):
raise ValidationError(_("YAML file doesn't contain valid data"))
# Check Cetmix Tower YAML version
yaml_version = yaml_data.pop("cetmix_tower_yaml_version", None)
supported_version = self.env["cx.tower.yaml.mixin"].CETMIX_TOWER_YAML_VERSION
if (
yaml_version
and isinstance(yaml_version, int)
and yaml_version > supported_version
):
raise ValidationError(
_(
"YAML version is higher than version"
" supported by your Cetmix Tower instance."
" %(code_version)s > %(tower_version)s",
code_version=yaml_version,
tower_version=supported_version,
)
)
# Get records from YAML
records = yaml_data.get("records")
if not records:
raise ValidationError(_("YAML file doesn't contain any records"))
# Collect and validate all record models
ir_model_obj = self.env["ir.model"]
unique_models = {}
# First pass: check all records have models and collect unique models
for record in records:
record_model = record.get("cetmix_tower_model")
if not record_model:
raise ValidationError(
_(
"Record model is missing for record %s",
record.get("reference", ""),
)
)
if record_model not in unique_models:
odoo_model = f"cx.tower.{record_model}".replace("_", ".")
unique_models[record_model] = odoo_model
# Second pass: validate all unique models in a single query
odoo_models = list(unique_models.values())
valid_models = {
model.model: model
for model in ir_model_obj.search([("model", "in", odoo_models)])
}
# Third pass: check models exist and support YAML import
for record_model, odoo_model in unique_models.items():
if odoo_model not in valid_models:
raise ValidationError(_("'%s' is not a valid model", record_model))
if not hasattr(self.env[odoo_model], "yaml_code"):
raise ValidationError(
_("Model '%s' does not support YAML import", record_model)
)
return decoded_file
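Both wizards map a YAML `cetmix_tower_model` value to an Odoo model name by prefixing `cx.tower.` and turning underscores into dots; a one-line sketch of that conversion (model values shown are illustrative):

```python
# Maps a YAML "cetmix_tower_model" value to the Odoo model name,
# mirroring the conversion used in both import wizards above.
def to_odoo_model(record_model: str) -> str:
    return f"cx.tower.{record_model.replace('_', '.')}"


print(to_odoo_model("server"))     # cx.tower.server
print(to_odoo_model("plan_line"))  # cx.tower.plan.line
```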


@@ -0,0 +1,39 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<record id="cx_tower_yaml_import_wiz_upload_view_form" model="ir.ui.view">
<field name="name">cx.tower.yaml.import.wiz.upload.view.form</field>
<field name="model">cx.tower.yaml.import.wiz.upload</field>
<field name="arch" type="xml">
<form>
<group>
<field name="file_name" invisible="1" />
<field
name="yaml_file"
filename="file_name"
options="{'accepted_file_extensions': '.yaml,.yml'}"
/>
</group>
<footer>
<button
string="Process"
type="object"
name="action_import_yaml"
class="oe_highlight"
attrs="{'invisible': [('yaml_file', '=', False)]}"
/>
<button string="Close" special="cancel" />
</footer>
</form>
</field>
</record>
<record id="action_cx_tower_yaml_import_wiz_upload" model="ir.actions.act_window">
<field name="name">Import YAML</field>
<field name="type">ir.actions.act_window</field>
<field name="res_model">cx.tower.yaml.import.wiz.upload</field>
<field name="view_mode">form</field>
<field name="target">new</field>
</record>
</odoo>

addons/queue_job/README.rst

@@ -0,0 +1,707 @@
.. image:: https://odoo-community.org/readme-banner-image
:target: https://odoo-community.org/get-involved?utm_source=readme
:alt: Odoo Community Association
=========
Job Queue
=========
..
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! This file is generated by oca-gen-addon-readme !!
!! changes will be overwritten. !!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! source digest: sha256:b92d06dbbf161572f2bf02e0c6a59282cea11cc5e903378094bead986f0125de
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
.. |badge1| image:: https://img.shields.io/badge/maturity-Mature-brightgreen.png
:target: https://odoo-community.org/page/development-status
:alt: Mature
.. |badge2| image:: https://img.shields.io/badge/license-LGPL--3-blue.png
:target: http://www.gnu.org/licenses/lgpl-3.0-standalone.html
:alt: License: LGPL-3
.. |badge3| image:: https://img.shields.io/badge/github-OCA%2Fqueue-lightgray.png?logo=github
:target: https://github.com/OCA/queue/tree/16.0/queue_job
:alt: OCA/queue
.. |badge4| image:: https://img.shields.io/badge/weblate-Translate%20me-F47D42.png
:target: https://translation.odoo-community.org/projects/queue-16-0/queue-16-0-queue_job
:alt: Translate me on Weblate
.. |badge5| image:: https://img.shields.io/badge/runboat-Try%20me-875A7B.png
:target: https://runboat.odoo-community.org/builds?repo=OCA/queue&target_branch=16.0
:alt: Try me on Runboat
|badge1| |badge2| |badge3| |badge4| |badge5|
This addon adds an integrated Job Queue to Odoo.
It allows postponing method calls so that they are executed asynchronously.
Jobs are executed in the background by a ``Jobrunner``, in their own transaction.
Example:
.. code-block:: python
from odoo import models, fields, api
class MyModel(models.Model):
_name = 'my.model'
def my_method(self, a, k=None):
_logger.info('executed with a: %s and k: %s', a, k)
class MyOtherModel(models.Model):
_name = 'my.other.model'
def button_do_stuff(self):
self.env['my.model'].with_delay().my_method('a', k=2)
In the snippet of code above, when we call ``button_do_stuff``, a job **capturing
the method and arguments** will be postponed. It will be executed as soon as the
Jobrunner has a free bucket, which can be instantaneous if no other job is
running.
Features:
* Views for jobs, jobs are stored in PostgreSQL
* Jobrunner: execute the jobs, highly efficient thanks to PostgreSQL's NOTIFY
* Channels: give a capacity for the root channel and its sub-channels and
segregate jobs in them. This allows, for instance, restricting heavy jobs to be
executed one at a time while lighter ones are executed four at a time.
* Retries: ability to retry jobs by raising a specific type of exception
* Retry Pattern: for the first 3 tries, retry after 10 seconds; for the next 5
tries, retry after 1 minute; ...
* Job properties: priorities, estimated time of arrival (ETA), custom
description, number of retries
* Related Actions: link an action on the job view, such as open the record
concerned by the job
**Table of contents**
.. contents::
:local:
Installation
============
Be sure to have the ``requests`` library installed.
Configuration
=============
* Using environment variables and command line:
* Adjust environment variables (optional):
- ``ODOO_QUEUE_JOB_CHANNELS=root:4`` or any other channels configuration.
The default is ``root:1``
- if ``xmlrpc_port`` is not set: ``ODOO_QUEUE_JOB_PORT=8069``
* Start Odoo with ``--load=web,queue_job``
and ``--workers`` greater than 1. [1]_
* Keep in mind that the number of workers should be greater than the number of
channels. ``queue_job`` will reuse normal Odoo workers to process jobs. It
will not spawn its own workers.
* Using the Odoo configuration file:
.. code-block:: ini
[options]
(...)
workers = 6
server_wide_modules = web,queue_job
(...)
[queue_job]
channels = root:2
* Environment variables have priority over the configuration file.
* Confirm the runner is starting correctly by checking the odoo log file:
.. code-block::
...INFO...queue_job.jobrunner.runner: starting
...INFO...queue_job.jobrunner.runner: initializing database connections
...INFO...queue_job.jobrunner.runner: queue job runner ready for db <dbname>
...INFO...queue_job.jobrunner.runner: database connections ready
* Create jobs (e.g. using ``base_import_async``) and observe that they
start immediately and in parallel.
* Tip: to enable debug logging for the queue job, use
``--log-handler=odoo.addons.queue_job:DEBUG``
.. [1] It works with the threaded Odoo server too, although this way
of running Odoo is obviously not for production purposes.
* Jobs that remain in `enqueued` or `started` state (because, for instance, their worker has been killed) will be automatically re-queued.
Usage
=====
To use this module, you need to:
#. Go to ``Job Queue`` menu
Developers
~~~~~~~~~~
Delaying jobs
-------------
The fast way to enqueue a job for a method is to use ``with_delay()`` on a record
or model:
.. code-block:: python
def button_done(self):
self.with_delay().print_confirmation_document(self.state)
self.write({"state": "done"})
return True
Here, the method ``print_confirmation_document()`` will be executed asynchronously
as a job. ``with_delay()`` can take several parameters to define more precisely how
the job is executed (priority, ...).
All the arguments passed to the method being delayed are stored in the job and
passed to the method when it is executed asynchronously, including ``self``, so
the current record is maintained during the job execution (warning: the context
is not kept).
Dependencies can be expressed between jobs. To start a graph of jobs, use ``delayable()``
on a record or model. The following is the equivalent of ``with_delay()`` but using the
long form:
.. code-block:: python
def button_done(self):
delayable = self.delayable()
delayable.print_confirmation_document(self.state)
delayable.delay()
self.write({"state": "done"})
return True
Methods of Delayable objects return the Delayable itself, so they can be used as
a builder pattern, which in some cases allows building the jobs dynamically:
.. code-block:: python
def button_generate_simple_with_delayable(self):
self.ensure_one()
# Introduction of a delayable object, using a builder pattern
# allowing to chain jobs or set properties. The delay() method
# on the delayable object actually stores the delayable objects
# in the queue_job table
(
self.delayable()
.generate_thumbnail((50, 50))
.set(priority=30)
.set(description=_("generate xxx"))
.delay()
)
The simplest way to define a dependency is to use ``.on_done(job)`` on a Delayable:
.. code-block:: python
def button_chain_done(self):
self.ensure_one()
job1 = self.browse(1).delayable().generate_thumbnail((50, 50))
job2 = self.browse(1).delayable().generate_thumbnail((50, 50))
job3 = self.browse(1).delayable().generate_thumbnail((50, 50))
# job 3 is executed when job 2 is done which is executed when job 1 is done
job1.on_done(job2.on_done(job3)).delay()
Delayables can be chained to form more complex graphs using the ``chain()`` and
``group()`` primitives.
A chain represents a sequence of jobs to execute in order, while a group represents
jobs which can be executed in parallel. Using ``chain()`` has the same effect as
using several nested ``on_done()`` but is more readable. Both can be combined to
form a graph: for instance, a group [A] of jobs can block another group [B] of
jobs. When and only when all the jobs of group [A] are executed, the
jobs of group [B] are executed. The code would look like:
.. code-block:: python
from odoo.addons.queue_job.delay import group, chain
def button_done(self):
group_a = group(self.delayable().method_foo(), self.delayable().method_bar())
group_b = group(self.delayable().method_baz(1), self.delayable().method_baz(2))
chain(group_a, group_b).delay()
self.write({"state": "done"})
return True
When a failure happens in a graph of jobs, the execution of the jobs that depend on the
failed job stops. They remain in a state ``wait_dependencies`` until their "parent" job is
successful. This can happen in two ways: either the parent job retries and succeeds
on a later try, or the parent job is manually "set to done" by a user. In both
cases, the dependency is resolved and the graph continues to be processed. Alternatively,
the failed job and all its dependent jobs can be canceled by a user. The other jobs of the
graph that do not depend on the failed job continue their execution in any case.
Note: ``delay()`` must be called on the delayable, chain, or group which is at the top
of the graph. In the example above, if it was called on ``group_a``, then ``group_b``
would never be delayed (but a warning would be shown).
It is also possible to split a job into several jobs, each one processing a part of the
work. This can be useful to avoid very long jobs, parallelize some tasks and get more specific
errors. Usage is as follows:
.. code-block:: python
def button_split_delayable(self):
(
self # Can be a big recordset, let's say 1000 records
.delayable()
.generate_thumbnail((50, 50))
.set(priority=30)
.set(description=_("generate xxx"))
.split(50) # Split the job in 20 jobs of 50 records each
.delay()
)
The ``split()`` method takes a ``chain`` boolean keyword argument. If set to
True, the jobs will be chained, meaning that the next job will only start when the previous
one is done:
.. code-block:: python
def button_increment_var(self):
(
self
.delayable()
.increment_counter()
.split(1, chain=True) # Will execute the jobs one after the other
.delay()
)
Enqueuing Job Options
---------------------
* priority: default is 10; the closer it is to 0, the sooner the job is
executed
* eta: Estimated Time of Arrival of the job. It will not be executed before this
date/time
* max_retries: default is 5, the maximum number of retries before giving up and
setting the job state to 'failed'. A value of 0 means infinite retries.
* description: human-readable description of the job. If not set, the description
is computed from the function docstring or method name
* channel: the complete name of the channel to use to process the function. If
specified, it overrides the one defined on the function
* identity_key: a key uniquely identifying the job; if specified and a job with
the same key has not yet been run, the new job will not be created
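The ``identity_key`` de-duplication can be sketched in plain Python. This is an illustrative in-memory model only: ``TinyQueue`` is a hypothetical stand-in, while the real implementation checks pending/enqueued rows in the ``queue_job`` table:

.. code-block:: python

    class TinyQueue:
        """Hypothetical in-memory sketch of identity_key semantics."""

        def __init__(self):
            self._by_identity = {}  # identity_key -> job dict

        def delay(self, method_name, identity_key=None, **properties):
            if identity_key is not None and identity_key in self._by_identity:
                # a job with the same key is already queued: no new job
                return self._by_identity[identity_key]
            job = {"method": method_name, "identity_key": identity_key, **properties}
            if identity_key is not None:
                self._by_identity[identity_key] = job
            return job

    queue = TinyQueue()
    first = queue.delay("export_partners", identity_key="export-partners")
    second = queue.delay("export_partners", identity_key="export-partners")
    # first and second are the same job: the duplicate was not created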
Configure default options for jobs
----------------------------------
In earlier versions, jobs could be configured using the ``@job`` decorator.
This is now obsolete, they can be configured using optional ``queue.job.function``
and ``queue.job.channel`` XML records.
Example of channel:
.. code-block:: XML
<record id="channel_sale" model="queue.job.channel">
<field name="name">sale</field>
<field name="parent_id" ref="queue_job.channel_root" />
</record>
Example of job function:
.. code-block:: XML
<record id="job_function_sale_order_action_done" model="queue.job.function">
<field name="model_id" ref="sale.model_sale_order" />
<field name="method">action_done</field>
<field name="channel_id" ref="channel_sale" />
<field name="related_action" eval='{"func_name": "custom_related_action"}' />
<field name="retry_pattern" eval="{1: 60, 2: 180, 3: 10, 5: 300}" />
</record>
The general form for the ``name`` is: ``<model.name>.method``.
The channel, related action and retry pattern options are optional, they are
documented below.
When writing modules, if two or more modules add a job function or channel with the same
name (and parent for channels), they'll be merged in the same record, even if
they have different xmlids. On uninstall, the merged record is deleted when all
the modules using it are uninstalled.
**Job function: model**
If the function is defined in an abstract model, you cannot write
``<field name="model_id" ref="xml_id_of_the_abstract_model" />``
but have to define a function for each model that inherits from the abstract model.
**Job function: channel**
The channel where the job will be delayed. The default channel is ``root``.
**Job function: related action**
The *Related Action* appears as a button on the Job's view.
The button will execute the defined action.
The default one is to open the view of the record related to the job (form view
when there is a single record, list view for several records).
In many cases, the default related action is enough and doesn't need
customization, but it can be customized by providing a dictionary on the job
function:
.. code-block:: python
{
"enable": False,
"func_name": "related_action_partner",
"kwargs": {"name": "Partner"},
}
* ``enable``: when ``False``, the button has no effect (default: ``True``)
* ``func_name``: name of the method on ``queue.job`` that returns an action
* ``kwargs``: extra arguments to pass to the related action method
Example of related action code:
.. code-block:: python
class QueueJob(models.Model):
_inherit = 'queue.job'
def related_action_partner(self, name):
self.ensure_one()
model = self.model_name
partner = self.records
action = {
'name': name,
'type': 'ir.actions.act_window',
'res_model': model,
'view_type': 'form',
'view_mode': 'form',
'res_id': partner.id,
}
return action
**Job function: retry pattern**
When a job fails with a retryable error type, it is automatically
retried later. By default, the retry is always 10 minutes later.
A retry pattern can be configured on the job function. What a pattern represents
is "from X tries, postpone to Y seconds". It is expressed as a dictionary where
keys are tries and values are seconds to postpone as integers:
.. code-block:: python
{
1: 10,
5: 20,
10: 30,
15: 300,
}
Based on this configuration, we can tell that:
* the first 5 retries are postponed 10 seconds
* retries 5 to 10 are postponed 20 seconds
* retries 10 to 15 are postponed 30 seconds
* all subsequent retries are postponed 5 minutes
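The lookup rule ("from X tries, postpone Y seconds") can be sketched as a small pure function. This is an illustrative sketch only; the actual resolution is done inside queue_job, with the 600-second constant standing in for the default 10-minute delay:

.. code-block:: python

    DEFAULT_POSTPONE = 600  # the default retry delay of 10 minutes

    def postpone_seconds(retry_pattern, try_count):
        """Return the postpone delay for the given try (sketch, not the real code)."""
        best = None
        for from_try in sorted(retry_pattern):
            if try_count >= from_try:
                best = retry_pattern[from_try]
        return best if best is not None else DEFAULT_POSTPONE

    pattern = {1: 10, 5: 20, 10: 30, 15: 300}
    # try 3 -> 10 seconds, try 7 -> 20 seconds, try 20 -> 300 seconds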
**Job Context**
The context of the recordset of the job, or any recordset passed in arguments of
a job, is transferred to the job according to an allow-list.
The default allow-list is `("tz", "lang", "allowed_company_ids", "force_company", "active_test")`. It can
be customized in ``Base._job_prepare_context_before_enqueue_keys``.
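The filtering amounts to keeping only the allow-listed keys. A minimal sketch, assuming the default key names quoted above (the real hook is ``Base._job_prepare_context_before_enqueue_keys``):

.. code-block:: python

    DEFAULT_ALLOW_LIST = ("tz", "lang", "allowed_company_ids", "force_company", "active_test")

    def prepare_job_context(context, keys=DEFAULT_ALLOW_LIST):
        # Keep only allow-listed keys when the job is stored; everything
        # else in the context is dropped before enqueue.
        return {k: v for k, v in context.items() if k in keys}

    ctx = {"tz": "Europe/Brussels", "lang": "en_US", "some_ui_flag": True}
    # prepare_job_context(ctx) keeps only "tz" and "lang"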
**Bypass jobs on running Odoo**
When you are developing (e.g. connector modules) you might want
to bypass the queue job and run your code immediately.
To do so you can set `QUEUE_JOB__NO_DELAY=1` in your environment.
**Bypass jobs in tests**
When writing tests on job-related methods, it is always tricky to deal with
delayed recordsets. To make your testing life easier,
you can set `queue_job__no_delay=True` in the context.
Tip: you can do this at test case level like this
.. code-block:: python
@classmethod
def setUpClass(cls):
super().setUpClass()
cls.env = cls.env(context=dict(
cls.env.context,
queue_job__no_delay=True, # no jobs thanks
))
Then all your tests execute the job methods synchronously
without delaying any jobs.
Testing
-------
**Asserting enqueued jobs**
The recommended way to test jobs, rather than running them directly and
synchronously, is to split the tests in two parts:
* one test where the job is mocked (trap jobs with ``trap_jobs()``) and the test
only verifies that the job has been delayed with the expected arguments
* one test that only calls the method of the job synchronously, to validate the
proper behavior of this method only
Proceeding this way means that you can prove that jobs will be enqueued properly
at runtime, and it ensures your code does not behave differently in tests and in
production (running your jobs synchronously may behave differently, as they then
run in the same transaction / in the middle of the method).
Additionally, it gives more control on the arguments you want to pass when
calling the job's method (synchronously, this time, in the second type of
tests), and it makes tests smaller.
The best way to run such assertions on the enqueued jobs is to use
``odoo.addons.queue_job.tests.common.trap_jobs()``.
Inside this context manager, instead of being added to the database queue,
jobs are pushed to an in-memory list. The context manager then provides useful
helpers to verify that jobs have been enqueued with the expected arguments. It
can even run the jobs of its list synchronously! Details in
``odoo.addons.queue_job.tests.common.JobsTester``.
A very small example (more details in ``tests/common.py``):
.. code-block:: python
# code
def my_job_method(self, name, count):
self.write({"name": " ".join([name] * count)})
def method_to_test(self):
count = self.env["other.model"].search_count([])
self.with_delay(priority=15).my_job_method("Hi!", count=count)
return count
# tests
from odoo.addons.queue_job.tests.common import trap_jobs
# the first test only checks the expected behavior of the method and the
# proper enqueuing of jobs
def test_method_to_test(self):
with trap_jobs() as trap:
result = self.env["model"].method_to_test()
expected_count = 12
trap.assert_jobs_count(1, only=self.env["model"].my_job_method)
trap.assert_enqueued_job(
self.env["model"].my_job_method,
args=("Hi!",),
kwargs=dict(count=expected_count),
properties=dict(priority=15)
)
self.assertEqual(result, expected_count)
# the second test validates the behavior of the job in isolation
def test_my_job_method(self):
record = self.env["model"].browse(1)
record.my_job_method("Hi!", count=12)
self.assertEqual(record.name, "Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi!")
If you prefer, you can still test the whole thing in a single test, by calling
``jobs_tester.perform_enqueued_jobs()`` in your test.
.. code-block:: python
def test_method_to_test(self):
with trap_jobs() as trap:
result = self.env["model"].method_to_test()
expected_count = 12
trap.assert_jobs_count(1, only=self.env["model"].my_job_method)
trap.assert_enqueued_job(
self.env["model"].my_job_method,
args=("Hi!",),
kwargs=dict(count=expected_count),
properties=dict(priority=15)
)
self.assertEqual(result, expected_count)
trap.perform_enqueued_jobs()
record = self.env["model"].browse(1)
record.my_job_method("Hi!", count=12)
self.assertEqual(record.name, "Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi! Hi!")
**Execute jobs synchronously when running Odoo**
When you are developing (e.g. connector modules) you might want
to bypass the queue job and run your code immediately.
To do so you can set ``QUEUE_JOB__NO_DELAY=1`` in your environment.
.. WARNING:: Do not do this in production
**Execute jobs synchronously in tests**
You should use ``trap_jobs``, really, but if for any reason you could not use it,
and still need to have job methods executed synchronously in your tests, you can
do so by setting ``queue_job__no_delay=True`` in the context.
Tip: you can do this at test case level like this
.. code-block:: python
@classmethod
def setUpClass(cls):
super().setUpClass()
cls.env = cls.env(context=dict(
cls.env.context,
queue_job__no_delay=True, # no jobs thanks
))
Then all your tests execute the job methods synchronously without delaying any
jobs.
In tests you'll have to mute the logger, like:
``@mute_logger('odoo.addons.queue_job.models.base')``
.. NOTE:: in graphs of jobs, the ``queue_job__no_delay`` context key must be in at
least one job's env of the graph for the whole graph to be executed synchronously
Tips and tricks
---------------
* **Idempotency** (https://www.restapitutorial.com/lessons/idempotency.html): jobs should be idempotent, so they can be retried several times without impact on the data.
* **The job should check its relevance at the very beginning**: the moment a job is executed is unknown by design, so the first task of a job should be to check whether the related work is still relevant at the moment of execution.
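Both tips can be sketched in one hedged example; the ``order`` dict, the state values and the method name are illustrative only, not part of queue_job:

.. code-block:: python

    def ship_order(order):
        """Job body: idempotent, and checks relevance before doing any work.

        ``order`` is a plain dict standing in for a record; the states are
        hypothetical examples.
        """
        # Relevance check first: the order may have been cancelled or
        # already shipped between enqueue time and execution time.
        if order["state"] != "to_ship":
            return "nothing to do"
        order["state"] = "shipped"
        return "shipped"

    order = {"state": "to_ship"}
    ship_order(order)   # does the work once
    ship_order(order)   # a retry is harmless: the job is idempotent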
Patterns
--------
Over time, two main patterns have emerged:
1. For data exposed to users, a model should store the data and be the creator of the job. The job is kept hidden from the users.
2. For technical data that is not exposed to the users, it is generally fine to create jobs directly, with the data passed as arguments, without intermediary models.
Known issues / Roadmap
======================
* After creating a new database or installing ``queue_job`` on an
existing database, Odoo must be restarted for the runner to detect it.
* When Odoo shuts down normally, it waits for running jobs to finish.
However, when the Odoo server crashes or is otherwise force-stopped,
running jobs are interrupted while the runner has no chance to know
they have been aborted. In such situations, jobs may remain in
``started`` or ``enqueued`` state after the Odoo server is halted.
Since the runner has no way to know if they are actually running or
not, and does not know for sure if it is safe to restart the jobs,
it does not attempt to restart them automatically. Such stale jobs
therefore fill the running queue and prevent other jobs from starting.
You must therefore requeue them manually, either from the Jobs view,
or by running the following SQL statement *before starting Odoo*:
.. code-block:: sql
update queue_job set state='pending' where state in ('started', 'enqueued')
Changelog
=========
.. [ The change log. The goal of this file is to help readers
understand changes between version. The primary audience is
end users and integrators. Purely technical changes such as
code refactoring must not be mentioned here.
This file may contain ONE level of section titles, underlined
with the ~ (tilde) character. Other section markers are
forbidden and will likely break the structure of the README.rst
or other documents where this fragment is included. ]
Next
~~~~
* [ADD] Run jobrunner as a worker process instead of a thread in the main
process (when running with --workers > 0)
* [REF] ``@job`` and ``@related_action`` deprecated, any method can be delayed,
and configured using ``queue.job.function`` records
* [MIGRATION] from 13.0 branched at rev. e24ff4b
Bug Tracker
===========
Bugs are tracked on `GitHub Issues <https://github.com/OCA/queue/issues>`_.
In case of trouble, please check there if your issue has already been reported.
If you spotted it first, help us to smash it by providing a detailed and welcomed
`feedback <https://github.com/OCA/queue/issues/new?body=module:%20queue_job%0Aversion:%2016.0%0A%0A**Steps%20to%20reproduce**%0A-%20...%0A%0A**Current%20behavior**%0A%0A**Expected%20behavior**>`_.
Do not contact contributors directly about support or help with technical issues.
Credits
=======
Authors
~~~~~~~
* Camptocamp
* ACSONE SA/NV
Contributors
~~~~~~~~~~~~
* Guewen Baconnier <guewen.baconnier@camptocamp.com>
* Stéphane Bidoul <stephane.bidoul@acsone.eu>
* Matthieu Dietrich <matthieu.dietrich@camptocamp.com>
* Jos De Graeve <Jos.DeGraeve@apertoso.be>
* David Lefever <dl@taktik.be>
* Laurent Mignon <laurent.mignon@acsone.eu>
* Laetitia Gangloff <laetitia.gangloff@acsone.eu>
* Cédric Pigeon <cedric.pigeon@acsone.eu>
* Tatiana Deribina <tatiana.deribina@avoin.systems>
* Souheil Bejaoui <souheil.bejaoui@acsone.eu>
* Eric Antones <eantones@nuobit.com>
* Simone Orsi <simone.orsi@camptocamp.com>
Maintainers
~~~~~~~~~~~
This module is maintained by the OCA.
.. image:: https://odoo-community.org/logo.png
:alt: Odoo Community Association
:target: https://odoo-community.org
OCA, or the Odoo Community Association, is a nonprofit organization whose
mission is to support the collaborative development of Odoo features and
promote its widespread use.
.. |maintainer-guewen| image:: https://github.com/guewen.png?size=40px
:target: https://github.com/guewen
:alt: guewen
Current `maintainer <https://odoo-community.org/page/maintainer-role>`__:
|maintainer-guewen|
This module is part of the `OCA/queue <https://github.com/OCA/queue/tree/16.0/queue_job>`_ project on GitHub.
You are welcome to contribute. To learn how please visit https://odoo-community.org/page/Contribute.


@@ -0,0 +1,10 @@
from . import controllers
from . import fields
from . import models
from . import wizards
from . import jobrunner
from .post_init_hook import post_init_hook
from .post_load import post_load
# shortcuts
from .job import identity_exact


@@ -0,0 +1,35 @@
# License LGPL-3.0 or later (http://www.gnu.org/licenses/lgpl.html)
{
"name": "Job Queue",
"version": "16.0.2.12.0",
"author": "Camptocamp,ACSONE SA/NV,Odoo Community Association (OCA)",
"website": "https://github.com/OCA/queue",
"license": "LGPL-3",
"category": "Generic Modules",
"depends": ["mail", "base_sparse_field", "web"],
"external_dependencies": {"python": ["requests"]},
"data": [
"security/security.xml",
"security/ir.model.access.csv",
"views/queue_job_views.xml",
"views/queue_job_channel_views.xml",
"views/queue_job_function_views.xml",
"wizards/queue_jobs_to_done_views.xml",
"wizards/queue_jobs_to_cancelled_views.xml",
"wizards/queue_requeue_job_views.xml",
"views/queue_job_menus.xml",
"data/queue_data.xml",
"data/queue_job_function_data.xml",
],
"assets": {
"web.assets_backend": [
"/queue_job/static/src/views/**/*",
],
},
"installable": True,
"development_status": "Mature",
"maintainers": ["guewen"],
"post_init_hook": "post_init_hook",
"post_load": "post_load",
}


@@ -0,0 +1 @@
from . import main


@@ -0,0 +1,320 @@
# Copyright (c) 2015-2016 ACSONE SA/NV (<http://acsone.eu>)
# Copyright 2013-2016 Camptocamp SA
# License LGPL-3.0 or later (http://www.gnu.org/licenses/lgpl.html)
import logging
import random
import time
import traceback
from io import StringIO
from psycopg2 import OperationalError, errorcodes
from werkzeug.exceptions import BadRequest, Forbidden
from odoo import SUPERUSER_ID, _, api, http, registry, tools
from odoo.service.model import PG_CONCURRENCY_ERRORS_TO_RETRY
from ..delay import chain, group
from ..exception import FailedJobError, NothingToDoJob, RetryableJobError
from ..job import ENQUEUED, Job
_logger = logging.getLogger(__name__)
PG_RETRY = 5 # seconds
DEPENDS_MAX_TRIES_ON_CONCURRENCY_FAILURE = 5
class RunJobController(http.Controller):
def _try_perform_job(self, env, job):
"""Try to perform the job."""
job.set_started()
job.store()
env.cr.commit()
job.lock()
_logger.debug("%s started", job)
job.perform()
# Trigger any stored computed fields before calling 'set_done'
# so that they will be part of the 'exec_time'
env.flush_all()
job.set_done()
job.store()
env.flush_all()
env.cr.commit()
_logger.debug("%s done", job)
def _enqueue_dependent_jobs(self, env, job):
tries = 0
while True:
try:
job.enqueue_waiting()
except OperationalError as err:
# Automatically retry the typical transaction serialization
# errors
if err.pgcode not in PG_CONCURRENCY_ERRORS_TO_RETRY:
raise
if tries >= DEPENDS_MAX_TRIES_ON_CONCURRENCY_FAILURE:
_logger.info(
"%s, maximum number of tries reached to update dependencies",
errorcodes.lookup(err.pgcode),
)
raise
wait_time = random.uniform(0.0, 2**tries)
tries += 1
_logger.info(
"%s, retry %d/%d in %.04f sec...",
errorcodes.lookup(err.pgcode),
tries,
DEPENDS_MAX_TRIES_ON_CONCURRENCY_FAILURE,
wait_time,
)
time.sleep(wait_time)
else:
break
@http.route(
"/queue_job/runjob",
type="http",
auth="none",
save_session=False,
readonly=False,
)
def runjob(self, db, job_uuid, **kw):
http.request.session.db = db
env = http.request.env(user=SUPERUSER_ID)
def retry_postpone(job, message, seconds=None):
job.env.clear()
with registry(job.env.cr.dbname).cursor() as new_cr:
job.env = api.Environment(new_cr, SUPERUSER_ID, {})
job.postpone(result=message, seconds=seconds)
job.set_pending(reset_retry=False)
job.store()
# ensure the job to run is in the correct state and lock the record
env.cr.execute(
"SELECT state FROM queue_job WHERE uuid=%s AND state=%s FOR UPDATE",
(job_uuid, ENQUEUED),
)
if not env.cr.fetchone():
_logger.warning(
"was requested to run job %s, but it does not exist, "
"or is not in state %s",
job_uuid,
ENQUEUED,
)
return ""
job = Job.load(env, job_uuid)
assert job and job.state == ENQUEUED
try:
try:
self._try_perform_job(env, job)
except OperationalError as err:
# Automatically retry the typical transaction serialization
# errors
if err.pgcode not in PG_CONCURRENCY_ERRORS_TO_RETRY:
raise
_logger.debug("%s OperationalError, postponed", job)
raise RetryableJobError(
tools.ustr(err.pgerror, errors="replace"), seconds=PG_RETRY
) from err
except NothingToDoJob as err:
if str(err):
msg = str(err)
else:
msg = _("Job interrupted and set to Done: nothing to do.")
job.set_done(msg)
job.store()
env.cr.commit()
except RetryableJobError as err:
# delay the job later, requeue
retry_postpone(job, str(err), seconds=err.seconds)
_logger.debug("%s postponed", job)
# Do not trigger the error up because we don't want an exception
# traceback in the logs we should have the traceback when all
# retries are exhausted
env.cr.rollback()
return ""
except (FailedJobError, Exception) as orig_exception:
buff = StringIO()
traceback.print_exc(file=buff)
traceback_txt = buff.getvalue()
_logger.error(traceback_txt)
job.env.clear()
with registry(job.env.cr.dbname).cursor() as new_cr:
job.env = job.env(cr=new_cr)
vals = self._get_failure_values(job, traceback_txt, orig_exception)
job.set_failed(**vals)
job.store()
buff.close()
raise
_logger.debug("%s enqueue depends started", job)
self._enqueue_dependent_jobs(env, job)
_logger.debug("%s enqueue depends done", job)
return ""
def _get_failure_values(self, job, traceback_txt, orig_exception):
"""Collect relevant data from exception."""
exception_name = orig_exception.__class__.__name__
if hasattr(orig_exception, "__module__"):
exception_name = orig_exception.__module__ + "." + exception_name
exc_message = (
orig_exception.args[0] if orig_exception.args else str(orig_exception)
)
return {
"exc_info": traceback_txt,
"exc_name": exception_name,
"exc_message": exc_message,
}
# flake8: noqa: C901
@http.route("/queue_job/create_test_job", type="http", auth="user")
def create_test_job(
self,
priority=None,
max_retries=None,
channel=None,
description="Test job",
size=1,
failure_rate=0,
):
"""Create test jobs
Examples of urls:
* http://127.0.0.1:8069/queue_job/create_test_job: single job
* http://127.0.0.1:8069/queue_job/create_test_job?size=10: a graph of 10 jobs
* http://127.0.0.1:8069/queue_job/create_test_job?size=10&failure_rate=0.5:
a graph of 10 jobs, half will fail
"""
if not http.request.env.user.has_group("base.group_erp_manager"):
raise Forbidden(_("Access Denied"))
if failure_rate is not None:
try:
failure_rate = float(failure_rate)
except (ValueError, TypeError):
failure_rate = 0
if not (0 <= failure_rate <= 1):
raise BadRequest("failure_rate must be between 0 and 1")
if size is not None:
try:
size = int(size)
except (ValueError, TypeError):
size = 1
if priority is not None:
try:
priority = int(priority)
except ValueError:
priority = None
if max_retries is not None:
try:
max_retries = int(max_retries)
except ValueError:
max_retries = None
if size == 1:
return self._create_single_test_job(
priority=priority,
max_retries=max_retries,
channel=channel,
description=description,
failure_rate=failure_rate,
)
if size > 1:
return self._create_graph_test_jobs(
size,
priority=priority,
max_retries=max_retries,
channel=channel,
description=description,
failure_rate=failure_rate,
)
return ""
def _create_single_test_job(
self,
priority=None,
max_retries=None,
channel=None,
description="Test job",
size=1,
failure_rate=0,
):
delayed = (
http.request.env["queue.job"]
.with_delay(
priority=priority,
max_retries=max_retries,
channel=channel,
description=description,
)
._test_job(failure_rate=failure_rate)
)
return "job uuid: %s" % (delayed.db_record().uuid,)
TEST_GRAPH_MAX_PER_GROUP = 5
def _create_graph_test_jobs(
self,
size,
priority=None,
max_retries=None,
channel=None,
description="Test job",
failure_rate=0,
):
model = http.request.env["queue.job"]
current_count = 0
possible_grouping_methods = (chain, group)
tails = [] # we can connect new graph chains/groups to tails
root_delayable = None
while current_count < size:
jobs_count = min(
size - current_count, random.randint(1, self.TEST_GRAPH_MAX_PER_GROUP)
)
jobs = []
for __ in range(jobs_count):
current_count += 1
jobs.append(
model.delayable(
priority=priority,
max_retries=max_retries,
channel=channel,
description="%s #%d" % (description, current_count),
)._test_job(failure_rate=failure_rate)
)
grouping = random.choice(possible_grouping_methods)
delayable = grouping(*jobs)
if not root_delayable:
root_delayable = delayable
else:
tail_delayable = random.choice(tails)
tail_delayable.on_done(delayable)
tails.append(delayable)
root_delayable.delay()
return "graph uuid: %s" % (
list(root_delayable._head())[0]._generated_job.graph_uuid,
)


@@ -0,0 +1,28 @@
<?xml version="1.0" encoding="utf-8" ?>
<odoo>
<data noupdate="1">
<!-- Queue-job-related subtypes for messaging / Chatter -->
<record id="mt_job_failed" model="mail.message.subtype">
<field name="name">Job failed</field>
<field name="res_model">queue.job</field>
<field name="default" eval="True" />
</record>
<record id="ir_cron_autovacuum_queue_jobs" model="ir.cron">
<field name="name">AutoVacuum Job Queue</field>
<field ref="model_queue_job" name="model_id" />
<field eval="True" name="active" />
<field name="user_id" ref="base.user_root" />
<field name="interval_number">1</field>
<field name="interval_type">days</field>
<field name="numbercall">-1</field>
<field eval="False" name="doall" />
<field name="state">code</field>
<field name="code">model.autovacuum()</field>
</record>
</data>
<data noupdate="0">
<record model="queue.job.channel" id="channel_root">
<field name="name">root</field>
</record>
</data>
</odoo>


@@ -0,0 +1,6 @@
<odoo noupdate="1">
<record id="job_function_queue_job__test_job" model="queue.job.function">
<field name="model_id" ref="queue_job.model_queue_job" />
<field name="method">_test_job</field>
</record>
</odoo>

addons/queue_job/delay.py Normal file

@@ -0,0 +1,666 @@
# Copyright 2019 Camptocamp
# Copyright 2019 Guewen Baconnier
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html)
import itertools
import logging
import uuid
from collections import defaultdict, deque
from .job import Job
from .utils import must_run_without_delay
_logger = logging.getLogger(__name__)
def group(*delayables):
"""Return a group of delayables to form a graph
A group means that jobs can be executed concurrently.
A job or a group of jobs depending on a group can be executed only after
all the jobs of the group are done.
Shortcut to :class:`~odoo.addons.queue_job.delay.DelayableGroup`.
Example::
g1 = group(delayable1, delayable2)
g2 = group(delayable3, delayable4)
g1.on_done(g2)
g1.delay()
"""
return DelayableGroup(*delayables)
def chain(*delayables):
"""Return a chain of delayables to form a graph
A chain means that jobs must be executed sequentially.
A job or a group of jobs depending on a chain can be executed only after
the last job of the chain is done.
Shortcut to :class:`~odoo.addons.queue_job.delay.DelayableChain`.
Example::
chain1 = chain(delayable1, delayable2, delayable3)
chain2 = chain(delayable4, delayable5, delayable6)
chain1.on_done(chain2)
chain1.delay()
"""
return DelayableChain(*delayables)
class Graph:
"""Acyclic directed graph holding vertices of any hashable type
This graph is not specifically designed to hold :class:`~Delayable`
instances, although ultimately it is used for this purpose.
"""
__slots__ = "_graph"
def __init__(self, graph=None):
if graph:
self._graph = graph
else:
self._graph = {}
def add_vertex(self, vertex):
"""Add a vertex
Has no effect if called several times with the same vertex
"""
self._graph.setdefault(vertex, set())
def add_edge(self, parent, child):
"""Add an edge between a parent and a child vertex
Has no effect if called several times with the same pair of vertices
"""
self.add_vertex(child)
self._graph.setdefault(parent, set()).add(child)
def vertices(self):
"""Return the vertices (nodes) of the graph"""
return set(self._graph)
def edges(self):
"""Return the edges (links) of the graph"""
links = []
for vertex, neighbours in self._graph.items():
for neighbour in neighbours:
links.append((vertex, neighbour))
return links
# from
# https://codereview.stackexchange.com/questions/55767/finding-all-paths-from-a-given-graph
def paths(self, vertex):
"""Generate the maximal cycle-free paths in graph starting at vertex.
>>> g = {1: [2, 3], 2: [3, 4], 3: [1], 4: []}
>>> sorted(self.paths(1))
[[1, 2, 3], [1, 2, 4], [1, 3]]
>>> sorted(self.paths(3))
[[3, 1, 2, 4]]
"""
path = [vertex] # path traversed so far
seen = {vertex} # set of vertices in path
def search():
dead_end = True
for neighbour in self._graph[path[-1]]:
if neighbour not in seen:
dead_end = False
seen.add(neighbour)
path.append(neighbour)
yield from search()
path.pop()
seen.remove(neighbour)
if dead_end:
yield list(path)
yield from search()
def topological_sort(self):
"""Yields a proposed order of nodes to respect dependencies
The order is not unique and the result may vary, but it is guaranteed
that a node is never yielded before the nodes it depends on.
It assumes the graph has no cycle.
"""
depends_per_node = defaultdict(int)
for __, tail in self.edges():
depends_per_node[tail] += 1
# the queue contains only elements for which all dependencies
# are resolved
queue = deque(self.root_vertices())
while queue:
vertex = queue.popleft()
yield vertex
for node in self._graph[vertex]:
depends_per_node[node] -= 1
if not depends_per_node[node]:
queue.append(node)
def root_vertices(self):
"""Return the root vertices,
i.e. the vertices that do not depend on any other vertex.
"""
dependency_vertices = set()
for dependencies in self._graph.values():
dependency_vertices.update(dependencies)
return set(self._graph.keys()) - dependency_vertices
def __repr__(self):
paths = [path for vertex in self.root_vertices() for path in self.paths(vertex)]
lines = []
for path in paths:
lines.append("".join(repr(vertex) for vertex in path))
return "\n".join(lines)
class DelayableGraph(Graph):
"""Directed Graph for :class:`~Delayable` dependencies
It connects together the :class:`~Delayable`, :class:`~DelayableGroup` and
:class:`~DelayableChain` graphs, then creates and enqueues the jobs.
"""
def _merge_graph(self, graph):
"""Merge a graph in the current graph
It takes each vertex, which can be :class:`~Delayable`,
:class:`~DelayableChain` or :class:`~DelayableGroup`, and updates the
current graph with the edges between Delayable objects (connecting
heads and tails of the groups and chains), so that at the end, the
graph contains only Delayable objects and their links.
"""
for vertex, neighbours in graph._graph.items():
tails = vertex._tail()
for tail in tails:
# connect the tails with the heads of each node
heads = {head for n in neighbours for head in n._head()}
self._graph.setdefault(tail, set()).update(heads)
def _connect_graphs(self):
"""Visit the vertices' graphs and connect them, return the whole graph
Build a new graph, walk the vertices and their related vertices, merge
their graph in the new one, until we have visited all the vertices
"""
graph = DelayableGraph()
graph._merge_graph(self)
seen = set()
visit_stack = deque([self])
while visit_stack:
current = visit_stack.popleft()
if current in seen:
continue
vertices = current.vertices()
for vertex in vertices:
vertex_graph = vertex._graph
graph._merge_graph(vertex_graph)
visit_stack.append(vertex_graph)
seen.add(current)
return graph
def _has_to_execute_directly(self, vertices):
"""Used in tests, to execute the jobs directly instead of storing them
In tests, prefer to use
:func:`odoo.addons.queue_job.tests.common.trap_jobs`.
"""
envs = {vertex.recordset.env for vertex in vertices}
for env in envs:
if must_run_without_delay(env):
return True
return False
@staticmethod
def _ensure_same_graph_uuid(jobs):
"""Set the same graph uuid on all jobs of the same graph"""
jobs_count = len(jobs)
if jobs_count == 0:
raise ValueError("Expecting jobs")
elif jobs_count == 1:
if jobs[0].graph_uuid:
raise ValueError(
"Job %s is a single job, it should not"
" have a graph uuid" % (jobs[0],)
)
else:
graph_uuids = {job.graph_uuid for job in jobs if job.graph_uuid}
if len(graph_uuids) > 1:
raise ValueError("Jobs cannot have dependencies between several graphs")
elif len(graph_uuids) == 1:
graph_uuid = graph_uuids.pop()
else:
graph_uuid = str(uuid.uuid4())
for job in jobs:
job.graph_uuid = graph_uuid
def delay(self):
"""Build the whole graph, creates jobs and delay them"""
graph = self._connect_graphs()
vertices = graph.vertices()
for vertex in vertices:
vertex._build_job()
self._ensure_same_graph_uuid([vertex._generated_job for vertex in vertices])
if self._has_to_execute_directly(vertices):
self._execute_graph_direct(graph)
return
for vertex, neighbour in graph.edges():
neighbour._generated_job.add_depends({vertex._generated_job})
# If all the jobs of the graph have another job with the same identity,
# we do not create them. Maybe we should check that the found jobs are
# part of the same graph, but not sure it's really required...
# Also, maybe we want to check only the root jobs.
existing_mapping = {}
for vertex in vertices:
if not vertex.identity_key:
continue
generated_job = vertex._generated_job
existing = generated_job.job_record_with_same_identity_key()
if not existing:
# at least one does not exist yet, we'll delay the whole graph
existing_mapping.clear()
break
existing_mapping[vertex] = existing
# We'll replace the generated jobs by the existing ones, so callers
# can retrieve the existing job in "_generated_job".
# existing_mapping contains something only if *all* the jobs with an
# identity key have an existing one.
for vertex, existing in existing_mapping.items():
vertex._generated_job = existing
return
for vertex in vertices:
vertex._generated_job.store()
def _execute_graph_direct(self, graph):
for delayable in graph.topological_sort():
delayable._execute_direct()
class DelayableChain:
"""Chain of delayables to form a graph
Delayables can be other :class:`~Delayable`, :class:`~DelayableChain` or
:class:`~DelayableGroup` objects.
A chain means that jobs must be executed sequentially.
A job or a group of jobs depending on a chain can be executed only after
the last job of the chain is done.
Chains can be connected to other Delayable, DelayableChain or
DelayableGroup objects by using :meth:`~on_done`.
A Chain is enqueued by calling :meth:`~delay`, which delays the whole
graph.
Important: :meth:`~delay` must be called on the top-level
delayable/chain/group object of the graph.
"""
__slots__ = ("_graph", "__head", "__tail")
def __init__(self, *delayables):
self._graph = DelayableGraph()
iter_delayables = iter(delayables)
head = next(iter_delayables)
self.__head = head
self._graph.add_vertex(head)
for neighbour in iter_delayables:
self._graph.add_edge(head, neighbour)
head = neighbour
self.__tail = head
def _head(self):
return self.__head._tail()
def _tail(self):
return self.__tail._head()
def __repr__(self):
inner_graph = "\n\t".join(repr(self._graph).split("\n"))
return "DelayableChain(\n\t{}\n)".format(inner_graph)
def on_done(self, *delayables):
"""Connects the current chain to other delayables/chains/groups
The delayables/chains/groups passed in the parameters will be executed
when the current Chain is done.
"""
for delayable in delayables:
self._graph.add_edge(self.__tail, delayable)
return self
def delay(self):
"""Delay the whole graph"""
self._graph.delay()
class DelayableGroup:
"""Group of delayables to form a graph
Delayables can be other :class:`~Delayable`, :class:`~DelayableChain` or
:class:`~DelayableGroup` objects.
A group means that jobs can be executed concurrently.
A job or a group of jobs depending on a group can be executed only after
all the jobs of the group are done.
Groups can be connected to other Delayable, DelayableChain or
DelayableGroup objects by using :meth:`~on_done`.
A group is enqueued by calling :meth:`~delay`, which delays the whole
graph.
Important: :meth:`~delay` must be called on the top-level
delayable/chain/group object of the graph.
"""
__slots__ = ("_graph", "_delayables")
def __init__(self, *delayables):
self._graph = DelayableGraph()
self._delayables = set(delayables)
for delayable in delayables:
self._graph.add_vertex(delayable)
def _head(self):
return itertools.chain.from_iterable(node._head() for node in self._delayables)
def _tail(self):
return itertools.chain.from_iterable(node._tail() for node in self._delayables)
def __repr__(self):
inner_graph = "\n\t".join(repr(self._graph).split("\n"))
return "DelayableGroup(\n\t{}\n)".format(inner_graph)
def on_done(self, *delayables):
"""Connects the current group to other delayables/chains/groups
The delayables/chains/groups passed in the parameters will be executed
when the current Group is done.
"""
for parent in self._delayables:
for child in delayables:
self._graph.add_edge(parent, child)
return self
def delay(self):
"""Delay the whole graph"""
self._graph.delay()
class Delayable:
"""Unit of a graph, one Delayable will lead to an enqueued job
Delayables can have dependencies on each other, as well as dependencies on
:class:`~DelayableGroup` or :class:`~DelayableChain` objects.
This class will generally not be used directly, it is used internally
by :meth:`~odoo.addons.queue_job.models.base.Base.delayable`. Look
in the base model for more details.
Delayables can be connected to other Delayable, DelayableChain or
DelayableGroup objects by using :meth:`~on_done`.
Properties of the future job can be set using the :meth:`~set` method,
which always returns ``self``::
delayable.set(priority=15).set({"max_retries": 5, "eta": 15}).delay()
It can be used for example to set properties dynamically.
A Delayable is enqueued by calling :meth:`delay()`, which delays the whole
graph.
Important: :meth:`delay()` must be called on the top-level
delayable/chain/group object of the graph.
"""
_properties = (
"priority",
"eta",
"max_retries",
"description",
"channel",
"identity_key",
)
__slots__ = _properties + (
"recordset",
"_graph",
"_job_method",
"_job_args",
"_job_kwargs",
"_generated_job",
)
def __init__(
self,
recordset,
priority=None,
eta=None,
max_retries=None,
description=None,
channel=None,
identity_key=None,
):
self._graph = DelayableGraph()
self._graph.add_vertex(self)
self.recordset = recordset
self.priority = priority
self.eta = eta
self.max_retries = max_retries
self.description = description
self.channel = channel
self.identity_key = identity_key
self._job_method = None
self._job_args = ()
self._job_kwargs = {}
self._generated_job = None
def _head(self):
return [self]
def _tail(self):
return [self]
def __repr__(self):
return "Delayable({}.{}({}, {}))".format(
self.recordset,
self._job_method.__name__ if self._job_method else "",
self._job_args,
self._job_kwargs,
)
def __del__(self):
if not self._generated_job:
_logger.warning("Delayable %s was prepared but never delayed", self)
def _set_from_dict(self, properties):
for key, value in properties.items():
if key not in self._properties:
raise ValueError("No property %s" % (key,))
setattr(self, key, value)
def set(self, *args, **kwargs):
"""Set job properties and return self
The values can be given as a dictionary and/or as keyword args
"""
if args:
# args must be a dict
self._set_from_dict(*args)
self._set_from_dict(kwargs)
return self
def on_done(self, *delayables):
"""Connects the current Delayable to other delayables/chains/groups
The delayables/chains/groups passed in the parameters will be executed
when the current Delayable is done.
"""
for child in delayables:
self._graph.add_edge(self, child)
return self
def delay(self):
"""Delay the whole graph"""
self._graph.delay()
def split(self, size, chain=False):
"""Split the Delayable's recordset into batches of ``size`` records
Return a `DelayableGroup` (or a `DelayableChain` when `chain` is True)
containing one Delayable per batch.
"""
if not self._job_method:
raise ValueError("No method set on the Delayable")
total_records = len(self.recordset)
delayables = []
for index in range(0, total_records, size):
recordset = self.recordset[index : index + size]
delayable = Delayable(
recordset,
priority=self.priority,
eta=self.eta,
max_retries=self.max_retries,
description=self.description,
channel=self.channel,
identity_key=self.identity_key,
)
# Update the __self__
delayable._job_method = getattr(recordset, self._job_method.__name__)
delayable._job_args = self._job_args
delayable._job_kwargs = self._job_kwargs
delayables.append(delayable)
description = self.description or (
self._job_method.__doc__.splitlines()[0].strip()
if self._job_method.__doc__
else "{}.{}".format(self.recordset._name, self._job_method.__name__)
)
for index, delayable in enumerate(delayables):
delayable.set(
description="%s (split %s/%s)"
% (description, index + 1, len(delayables))
)
# Prevent warning on deletion
self._generated_job = True
return (DelayableChain if chain else DelayableGroup)(*delayables)
def _build_job(self):
if self._generated_job:
return self._generated_job
self._generated_job = Job(
self._job_method,
args=self._job_args,
kwargs=self._job_kwargs,
priority=self.priority,
max_retries=self.max_retries,
eta=self.eta,
description=self.description,
channel=self.channel,
identity_key=self.identity_key,
)
return self._generated_job
def _store_args(self, *args, **kwargs):
self._job_args = args
self._job_kwargs = kwargs
return self
def __getattr__(self, name):
if name in self.__slots__:
return super().__getattr__(name)
if name in self.recordset:
raise AttributeError(
"only methods can be delayed (%s called on %s)" % (name, self.recordset)
)
recordset_method = getattr(self.recordset, name)
self._job_method = recordset_method
return self._store_args
def _execute_direct(self):
assert self._generated_job
self._generated_job.perform()
class DelayableRecordset:
"""Allow delaying a method call on a recordset (shortcut)
Usage::
delayable = DelayableRecordset(recordset, priority=20)
delayable.method(args, kwargs)
The method call will be processed asynchronously in the job queue, with
the passed arguments.
This class will generally not be used directly, it is used internally
by :meth:`~odoo.addons.queue_job.models.base.Base.with_delay`
"""
__slots__ = ("delayable",)
def __init__(
self,
recordset,
priority=None,
eta=None,
max_retries=None,
description=None,
channel=None,
identity_key=None,
):
self.delayable = Delayable(
recordset,
priority=priority,
eta=eta,
max_retries=max_retries,
description=description,
channel=channel,
identity_key=identity_key,
)
@property
def recordset(self):
return self.delayable.recordset
def __getattr__(self, name):
def _delay_delayable(*args, **kwargs):
getattr(self.delayable, name)(*args, **kwargs).delay()
return self.delayable._generated_job
return _delay_delayable
def __str__(self):
return "DelayableRecordset(%s%s)" % (
self.delayable.recordset._name,
getattr(self.delayable.recordset, "_ids", ""),
)
__repr__ = __str__
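The dependency bookkeeping done by `Graph.topological_sort` can be exercised without Odoo. The sketch below re-implements only the counting of unresolved dependencies over a plain adjacency dict mapping each vertex to the set of vertices depending on it; it mirrors the class above but is not the shipped implementation:

```python
# Minimal standalone re-implementation of Graph's topological sort, for
# illustration only -- the real class lives in odoo.addons.queue_job.delay.
from collections import defaultdict, deque

def topological_sort(graph):
    """Yield vertices so that a vertex never comes before its parents.

    `graph` maps each vertex to the set of vertices that depend on it.
    """
    depends = defaultdict(int)
    for children in graph.values():
        for child in children:
            depends[child] += 1
    # start with the root vertices: nothing depends on them being waited for
    queue = deque(v for v in graph if depends[v] == 0)
    while queue:
        vertex = queue.popleft()
        yield vertex
        for child in graph[vertex]:
            depends[child] -= 1
            if not depends[child]:
                queue.append(child)

# "a" must run first, then "b" and "c" in any order, then "d"
order = list(topological_sort({"a": {"b", "c"}, "b": {"d"}, "c": {"d"}, "d": set()}))
```

As the docstring of the real method notes, the order is not unique: "b" and "c" may appear in either order, but "a" always comes first and "d" last.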


@@ -0,0 +1,43 @@
# Copyright 2012-2016 Camptocamp
# License LGPL-3.0 or later (http://www.gnu.org/licenses/lgpl.html)
class BaseQueueJobError(Exception):
"""Base queue job error"""
class JobError(BaseQueueJobError):
"""A job had an error"""
class NoSuchJobError(JobError):
"""The job does not exist."""
class FailedJobError(JobError):
"""A job had an error having to be resolved."""
class RetryableJobError(JobError):
"""A job had an error but can be retried.
The job will be retried after the given number of seconds. If seconds is
empty, it will be retried according to the ``retry_pattern`` of the job or
by :const:`odoo.addons.queue_job.job.RETRY_INTERVAL` if nothing is defined.
If ``ignore_retry`` is True, the retry counter will not be increased.
"""
def __init__(self, msg, seconds=None, ignore_retry=False):
super().__init__(msg)
self.seconds = seconds
self.ignore_retry = ignore_retry
# TODO: remove support of NothingToDo: too dangerous
class NothingToDoJob(JobError):
"""The Job has nothing to do."""
class ChannelNotFound(BaseQueueJobError):
"""A channel could not be found"""
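The `RetryableJobError` docstring describes how a runner reacts to it: postpone by `seconds` when given, otherwise fall back to a default interval, and skip the retry counter when `ignore_retry` is set. A minimal sketch of that handling — the exception is re-declared locally so the example is self-contained, and `run_job` plus the `RETRY_INTERVAL` value are assumed, illustrative names:

```python
# Sketch of how a runner might react to RetryableJobError; the class is
# re-declared here for self-containment (the real one lives in
# odoo.addons.queue_job.exceptions) and RETRY_INTERVAL is an assumed default.
RETRY_INTERVAL = 600  # fallback postpone delay, in seconds

class RetryableJobError(Exception):
    def __init__(self, msg, seconds=None, ignore_retry=False):
        super().__init__(msg)
        self.seconds = seconds
        self.ignore_retry = ignore_retry

def run_job(job_func, retry_count=0):
    """Return ('done', retries) or ('postponed', seconds, retries)."""
    try:
        job_func()
        return ("done", retry_count)
    except RetryableJobError as err:
        if not err.ignore_retry:
            retry_count += 1  # a normal retry consumes one attempt
        return ("postponed", err.seconds or RETRY_INTERVAL, retry_count)

def _always_busy():
    raise RetryableJobError("resource is busy", seconds=5)

result = run_job(_always_busy)
```

A job raising with `ignore_retry=True` (e.g. on a transient lock) would be postponed with the fallback interval and an unchanged retry counter.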

addons/queue_job/fields.py Normal file

@@ -0,0 +1,123 @@
# copyright 2016 Camptocamp
# license lgpl-3.0 or later (http://www.gnu.org/licenses/lgpl.html)
import json
from datetime import date, datetime
import dateutil
import lxml
from odoo import fields, models
from odoo.tools.func import lazy
class JobSerialized(fields.Field):
"""Provide the storage for job fields stored as json
A ``base_type`` must be set; the supported base types are ``dict``,
``list``, ``tuple`` and ``models.BaseModel``.
When the field is not set, the stored JSON is the corresponding
empty JSON string ("{}" or "[]").
Support for some custom types has been added to the json decoder/encoder
(see JobEncoder and JobDecoder).
"""
type = "job_serialized"
column_type = ("text", "text")
_base_type = None
# these are the default values when we convert an empty value
_default_json_mapping = {
dict: "{}",
list: "[]",
tuple: "[]",
models.BaseModel: lambda env: json.dumps(
{"_type": "odoo_recordset", "model": "base", "ids": [], "uid": env.uid}
),
}
def __init__(self, string=fields.Default, base_type=fields.Default, **kwargs):
super().__init__(string=string, _base_type=base_type, **kwargs)
def _setup_attrs(self, model, name): # pylint: disable=missing-return
super()._setup_attrs(model, name)
if self._base_type not in self._default_json_mapping:
raise ValueError("%s is not a supported base type" % (self._base_type))
def _base_type_default_json(self, env):
default_json = self._default_json_mapping.get(self._base_type)
if not isinstance(default_json, str):
default_json = default_json(env)
return default_json
def convert_to_column(self, value, record, values=None, validate=True):
return self.convert_to_cache(value, record, validate=validate)
def convert_to_cache(self, value, record, validate=True):
# cache format: json.dumps(value) or None
if isinstance(value, self._base_type):
return json.dumps(value, cls=JobEncoder)
else:
return value or None
def convert_to_record(self, value, record):
default = self._base_type_default_json(record.env)
return json.loads(value or default, cls=JobDecoder, env=record.env)
class JobEncoder(json.JSONEncoder):
"""Encode Odoo recordsets so that we can later recompose them"""
def _get_record_context(self, obj):
return obj._job_prepare_context_before_enqueue()
def default(self, obj):
if isinstance(obj, models.BaseModel):
return {
"_type": "odoo_recordset",
"model": obj._name,
"ids": obj.ids,
"uid": obj.env.uid,
"su": obj.env.su,
"context": self._get_record_context(obj),
}
elif isinstance(obj, datetime):
return {"_type": "datetime_isoformat", "value": obj.isoformat()}
elif isinstance(obj, date):
return {"_type": "date_isoformat", "value": obj.isoformat()}
elif isinstance(obj, lxml.etree._Element):
return {
"_type": "etree_element",
"value": lxml.etree.tostring(obj, encoding=str),
}
elif isinstance(obj, lazy):
return obj._value
return json.JSONEncoder.default(self, obj)
class JobDecoder(json.JSONDecoder):
"""Decode json, recomposing recordsets"""
def __init__(self, *args, **kwargs):
env = kwargs.pop("env")
super().__init__(object_hook=self.object_hook, *args, **kwargs)
assert env
self.env = env
def object_hook(self, obj):
if "_type" not in obj:
return obj
type_ = obj["_type"]
if type_ == "odoo_recordset":
model = self.env(user=obj.get("uid"), su=obj.get("su"))[obj["model"]]
if obj.get("context"):
model = model.with_context(**obj.get("context"))
return model.browse(obj["ids"])
elif type_ == "datetime_isoformat":
return dateutil.parser.parse(obj["value"])
elif type_ == "date_isoformat":
return dateutil.parser.parse(obj["value"]).date()
elif type_ == "etree_element":
return lxml.etree.fromstring(obj["value"])
return obj
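The `_type`-tagged round-trip performed by `JobEncoder`/`JobDecoder` can be demonstrated standalone for the date branches (the recordset branch needs an Odoo env and is omitted; the stdlib `datetime.fromisoformat` stands in here for the `dateutil` parser used above):

```python
# Standalone sketch of the "_type"-tagged JSON round-trip for dates only,
# mirroring JobEncoder/JobDecoder above without the Odoo-specific branches.
import json
from datetime import date, datetime

class DateAwareEncoder(json.JSONEncoder):
    def default(self, obj):
        # datetime is a subclass of date, so it must be checked first
        if isinstance(obj, datetime):
            return {"_type": "datetime_isoformat", "value": obj.isoformat()}
        if isinstance(obj, date):
            return {"_type": "date_isoformat", "value": obj.isoformat()}
        return super().default(obj)

def decode_hook(obj):
    # every decoded dict passes through here; untagged dicts are returned as-is
    if obj.get("_type") == "datetime_isoformat":
        return datetime.fromisoformat(obj["value"])
    if obj.get("_type") == "date_isoformat":
        return date.fromisoformat(obj["value"])
    return obj

payload = {"eta": datetime(2024, 1, 2, 3, 4, 5), "day": date(2024, 1, 2)}
raw = json.dumps(payload, cls=DateAwareEncoder)
restored = json.loads(raw, object_hook=decode_hook)
```

Note the ordering of the `isinstance` checks: because `datetime` subclasses `date`, testing `date` first would silently downgrade datetimes to dates, which is why the real encoder also checks `datetime` before `date`.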

addons/queue_job/i18n/ca.po Normal file

@@ -0,0 +1,964 @@
# Translation of Odoo Server.
# This file contains the translation of the following modules:
# * queue_job
#
msgid ""
msgstr ""
"Project-Id-Version: Odoo Server 16.0\n"
"Report-Msgid-Bugs-To: \n"
"PO-Revision-Date: 2025-07-29 07:25+0000\n"
"Last-Translator: Enric Tobella <etobella@creublanca.es>\n"
"Language-Team: none\n"
"Language: ca\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: \n"
"Plural-Forms: nplurals=2; plural=n != 1;\n"
"X-Generator: Weblate 5.10.4\n"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid ""
"<br/>\n"
" <span class=\"oe_grey oe_inline\"> If the max. "
"retries is 0, the number of retries is infinite.</span>"
msgstr ""
"<br/>\n"
" <span class=\"oe_grey oe_inline\"> Si el màx. de "
"reintents és 0, el nombre de reintents és infinit.</span>"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/controllers/main.py:0
#, python-format
msgid "Access Denied"
msgstr "Accés denegat"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_needaction
msgid "Action Needed"
msgstr "Acció requerida"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_ids
msgid "Activities"
msgstr "Activitats"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_exception_decoration
msgid "Activity Exception Decoration"
msgstr "Decoració de l'activitat d'excepció"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_state
msgid "Activity State"
msgstr "Estat de l'activitat"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_type_icon
msgid "Activity Type Icon"
msgstr "Icona del tipus d'activitat"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__args
msgid "Args"
msgstr "Arguments"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_attachment_count
msgid "Attachment Count"
msgstr "Nombre d'adjunts"
#. module: queue_job
#: model:ir.actions.server,name:queue_job.ir_cron_autovacuum_queue_jobs_ir_actions_server
#: model:ir.cron,cron_name:queue_job.ir_cron_autovacuum_queue_jobs
msgid "AutoVacuum Job Queue"
msgstr "Buidat automàtic de la cua de Treballs"
#. module: queue_job
#: model:ir.model,name:queue_job.model_base
msgid "Base"
msgstr "Base"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_requeue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_cancelled
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_done
msgid "Cancel"
msgstr "Cancel·lar"
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_jobs_to_cancelled
msgid "Cancel all selected jobs"
msgstr "Cancel·lar tots els treballs seleccionats"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Cancel job"
msgstr "Cancel·lar treball"
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_set_jobs_cancelled
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_cancelled
msgid "Cancel jobs"
msgstr "Cancel·lar treballs"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__cancelled
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Cancelled"
msgstr "Cancel·lat"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Cancelled by %s"
msgstr "Cancel·lat per %s"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_channel.py:0
#, python-format
msgid "Cannot change the root channel"
msgstr "No es pot canviar el canal arrel"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_channel.py:0
#, python-format
msgid "Cannot remove the root channel"
msgstr "No es pot eliminar el canal arrel"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__channel
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__channel_id
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_function_search
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Channel"
msgstr "Canal"
#. module: queue_job
#: model:ir.model.constraint,message:queue_job.constraint_queue_job_channel_name_uniq
msgid "Channel complete name must be unique"
msgstr "El nom complet del canal ha de ser únic"
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_queue_job_channel
#: model:ir.ui.menu,name:queue_job.menu_queue_job_channel
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_channel_form
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_channel_search
msgid "Channels"
msgstr "Canals"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__company_id
msgid "Company"
msgstr "Companyia"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__channel_method_name
msgid "Complete Method Name"
msgstr "Nom complet del mètode"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__complete_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__channel
msgid "Complete Name"
msgstr "Nom complet"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_created
msgid "Created Date"
msgstr "Data de creació"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__create_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__create_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__create_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__create_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__create_uid
msgid "Created by"
msgstr "Creat per"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Created date"
msgstr "Data de creació"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__create_date
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__create_date
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__create_date
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__create_date
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__create_date
msgid "Created on"
msgstr "Creat el"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__retry
msgid "Current try"
msgstr "Intent actual"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Current try / max. retries"
msgstr "Intent actual / reintents màx."
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_cancelled
msgid "Date Cancelled"
msgstr "Data de cancel·lació"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_done
msgid "Date Done"
msgstr "Data de realització"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__dependencies
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Dependencies"
msgstr "Dependències"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__dependency_graph
msgid "Dependency Graph"
msgstr "Gràfic de dependència"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__name
msgid "Description"
msgstr "Descripció"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__display_name
msgid "Display Name"
msgstr "Nom a mostrar"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__done
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Done"
msgstr "Realitzat"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_enqueued
msgid "Enqueue Time"
msgstr "Hora d'encuament"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__enqueued
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Enqueued"
msgstr "Encuat"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__exc_name
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Exception"
msgstr "Excepció"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__exc_info
msgid "Exception Info"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Exception Information"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__exc_message
msgid "Exception Message"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Exception message"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Exception:"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__eta
msgid "Execute only after"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__exec_time
msgid "Execution Time (avg)"
msgstr ""
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__failed
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Failed"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_ir_model_fields__ttype
msgid "Field Type"
msgstr ""
#. module: queue_job
#: model:ir.model,name:queue_job.model_ir_model_fields
msgid "Fields"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_follower_ids
msgid "Followers"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_partner_ids
msgid "Followers (Partners)"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__activity_type_icon
msgid "Font awesome icon e.g. fa-tasks"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Graph"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Graph Jobs"
msgstr "Gràfic dels treballs"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__graph_jobs_count
msgid "Graph Jobs Count"
msgstr "Nombre de treballs del gràfic"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__graph_uuid
msgid "Graph UUID"
msgstr "UUID del gràfic"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_function_search
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Group By"
msgstr "Agrupar per"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__has_message
msgid "Has Message"
msgstr "Té missatges"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__id
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__id
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__id
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__id
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__id
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__id
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__id
msgid "ID"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_exception_icon
msgid "Icon"
msgstr "Icona"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__activity_exception_icon
msgid "Icon to indicate an exception activity."
msgstr "Icona per indicar una activitat d'excepció."
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__identity_key
msgid "Identity Key"
msgstr "Clau identificadora"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__message_needaction
msgid "If checked, new messages require your attention."
msgstr "Si està marcat, hi ha missatges nous que requereixen la teva atenció."
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__message_has_error
msgid "If checked, some messages have a delivery error."
msgstr "Si està marcat, alguns missatges tenen error d'enviament."
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_function.py:0
#, python-format
msgid "Invalid job function: {}"
msgstr "Funció de treball invàlida: {}"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_is_follower
msgid "Is Follower"
msgstr "És seguidor"
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_job_channel
msgid "Job Channels"
msgstr "Canals de treball"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__job_function_id
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Job Function"
msgstr "Funció de treball"
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_queue_job_function
#: model:ir.model,name:queue_job.model_queue_job_function
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__job_function_ids
#: model:ir.ui.menu,name:queue_job.menu_queue_job_function
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_function_form
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_function_search
msgid "Job Functions"
msgstr "Funcions de treball"
#. module: queue_job
#: model:ir.module.category,name:queue_job.module_category_queue_job
#: model:ir.ui.menu,name:queue_job.menu_queue_job_root
msgid "Job Queue"
msgstr "Cua de treballs"
#. module: queue_job
#: model:res.groups,name:queue_job.group_queue_job_manager
msgid "Job Queue Manager"
msgstr "Gestor de la Cua de treballs"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__ir_model_fields__ttype__job_serialized
msgid "Job Serialized"
msgstr "Treball serialitzat"
#. module: queue_job
#: model:mail.message.subtype,name:queue_job.mt_job_failed
msgid "Job failed"
msgstr "Treball fallit"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/controllers/main.py:0
#, python-format
msgid "Job interrupted and set to Done: nothing to do."
msgstr "Treball interromput i marcat com a realitzat: res a realitzar."
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__job_ids
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__job_ids
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__job_ids
#: model:ir.ui.menu,name:queue_job.menu_queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_graph
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_pivot
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Jobs"
msgstr "Treballs"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Jobs for graph %s"
msgstr "Treballs del gràfic %s"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__kwargs
msgid "Kwargs"
msgstr "Kwargs"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Last 24 hours"
msgstr "Últimes 24 hores"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Last 30 days"
msgstr "Últims 30 dies"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Last 7 days"
msgstr "Últims 7 dies"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job____last_update
msgid "Last Modified on"
msgstr "Última modificació el"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__write_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__write_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__write_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__write_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__write_uid
msgid "Last Updated by"
msgstr "Última actualització per"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__write_date
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__write_date
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__write_date
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__write_date
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__write_date
msgid "Last Updated on"
msgstr "Última actualització el"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_main_attachment_id
msgid "Main Attachment"
msgstr "Adjunt principal"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Manually set to done by %s"
msgstr "Marcat manualment com realitzat per %s"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__max_retries
msgid "Max. retries"
msgstr "Màx. reintents"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_has_error
msgid "Message Delivery error"
msgstr "Error d'enviament del missatge"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_ids
msgid "Messages"
msgstr "Missatges"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__method
msgid "Method"
msgstr "Mètode"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__method_name
msgid "Method Name"
msgstr "Nom del mètode"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__model_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__model_id
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Model"
msgstr "Model"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_function.py:0
#, python-format
msgid "Model {} not found"
msgstr "Model {} no trobat"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__my_activity_date_deadline
msgid "My Activity Deadline"
msgstr "Data límit de la meva activitat"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__name
msgid "Name"
msgstr "Nom"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_date_deadline
msgid "Next Activity Deadline"
msgstr "Data límit de la següent activitat"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_summary
msgid "Next Activity Summary"
msgstr "Resum de la següent activitat"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_type_id
msgid "Next Activity Type"
msgstr "Tipus de la següent activitat"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "No action available for this job"
msgstr "No hi ha accions disponibles per aquest treball"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Not allowed to change field(s): {}"
msgstr "No està permès canviar els camps: {}"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_needaction_counter
msgid "Number of Actions"
msgstr "Nombre d'accions"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_has_error_counter
msgid "Number of errors"
msgstr "Nombre d'errors"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__message_needaction_counter
msgid "Number of messages requiring action"
msgstr "Nombre de missatges que requereixen accions"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__message_has_error_counter
msgid "Number of messages with delivery error"
msgstr "Nombre de missatges amb error d'enviament"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__parent_id
msgid "Parent Channel"
msgstr "Canal pare"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_channel.py:0
#, python-format
msgid "Parent channel required."
msgstr "Canal pare obligatori."
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job_function__edit_retry_pattern
msgid ""
"Pattern expressing from the count of retries on retryable errors, the number "
"of of seconds to postpone the next execution. Setting the number of seconds "
"to a 2-element tuple or list will randomize the retry interval between the 2 "
"values.\n"
"Example: {1: 10, 5: 20, 10: 30, 15: 300}.\n"
"Example: {1: (1, 10), 5: (11, 20), 10: (21, 30), 15: (100, 300)}.\n"
"See the module description for details."
msgstr ""
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__pending
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Pending"
msgstr "Pendent"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__priority
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Priority"
msgstr "Prioritat"
#. module: queue_job
#: model:ir.ui.menu,name:queue_job.menu_queue
msgid "Queue"
msgstr "Cua"
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__queue_job_id
msgid "Queue Job"
msgstr "Treball en cua"
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_job_lock
msgid "Queue Job Lock"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Queue jobs must be created by calling 'with_delay()'."
msgstr "Els treballs en cua s'han de crear cridant 'with_delay()'."
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__record_ids
msgid "Record"
msgstr "Registre"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__records
msgid "Record(s)"
msgstr "Registre(s)"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Related"
msgstr "Relacionat"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__edit_related_action
msgid "Related Action"
msgstr "Acció relacionada"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__related_action
msgid "Related Action (serialized)"
msgstr "Acció relacionada (serialitzada)"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Related Record"
msgstr "Registre relacionat"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Related Records"
msgstr "Registres relacionats"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_tree
msgid "Remaining days to execute"
msgstr "Dies restants per executar"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__removal_interval
msgid "Removal Interval"
msgstr "Interval d'eliminació"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_requeue_job
msgid "Requeue"
msgstr "Reencua"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Requeue Job"
msgstr "Reencua el treball"
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_requeue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_requeue_job
msgid "Requeue Jobs"
msgstr "Reencua els treballs"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_user_id
msgid "Responsible User"
msgstr "Usuari responsable"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__result
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Result"
msgstr "Resultat"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Results"
msgstr "Resultats"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__edit_retry_pattern
msgid "Retry Pattern"
msgstr "Patró de reintents"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__retry_pattern
msgid "Retry Pattern (serialized)"
msgstr "Patró de reintents (serialitzat)"
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_jobs_to_done
msgid "Set all selected jobs to done"
msgstr "Marcar tots els treballs seleccionats com realitzats"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_done
msgid "Set jobs done"
msgstr "Marcar treballs com realitzats"
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_set_jobs_done
msgid "Set jobs to done"
msgstr "Marcar treballs com realitzats"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Set to 'Done'"
msgstr "Marcar com 'Realitzat'"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_done
msgid "Set to done"
msgstr "Marcar com realitzat"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__graph_uuid
msgid "Single shared identifier of a Graph. Empty for a single job."
msgstr "Identificador únic del gràfic. Buit per a un únic treball."
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid ""
"Something bad happened during the execution of job %s. More details in the "
"'Exception Information' section."
msgstr ""
"Ha sorgit un problema durant l'execució del treball %s. Més detalls a la "
"secció 'Informació sobre l'excepció'."
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_started
msgid "Start Date"
msgstr "Data d'inici"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__started
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Started"
msgstr "Iniciat"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__state
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "State"
msgstr "Estat"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__activity_state
msgid ""
"Status based on activities\n"
"Overdue: Due date is already passed\n"
"Today: Activity date is today\n"
"Planned: Future activities."
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__func_string
msgid "Task"
msgstr "Tasca"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job_function__edit_related_action
msgid ""
"The action when the button *Related Action* is used on a job. The default "
"action is to open the view of the record related to the job. Configured as a "
"dictionary with optional keys: enable, func_name, kwargs.\n"
"See the module description for details."
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__max_retries
msgid ""
"The job will fail if the number of tries reach the max. retries.\n"
"Retries are infinite when empty."
msgstr ""
"El treball fallarà si arriba al nombre de reintents màxim.\n"
"Els reintents són infinits quan es deixa buit."
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_cancelled
msgid "The selected jobs will be cancelled."
msgstr "Els treballs seleccionats seran cancel·lats."
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_requeue_job
msgid "The selected jobs will be requeued."
msgstr "Els treballs seleccionats es tornaran a posar en cua."
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_done
msgid "The selected jobs will be set to done."
msgstr "Els treballs seleccionats es marcaran com realitzats."
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Time (s)"
msgstr "Temps (s)"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__exec_time
msgid "Time required to execute this job in seconds. Average when grouped."
msgstr ""
"Temps necessari per executar el treball en segons. Mitjana quan s'agrupa."
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Tried many times"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__activity_exception_decoration
msgid "Type of the exception activity on record."
msgstr "Tipus d'activitat de l'excepció en el registre."
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__uuid
msgid "UUID"
msgstr "UUID"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_function.py:0
#, python-format
msgid ""
"Unexpected format of Related Action for {}.\n"
"Example of valid format:\n"
"{{\"enable\": True, \"func_name\": \"related_action_foo\", "
"\"kwargs\" {{\"limit\": 10}}}}"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_function.py:0
#, python-format
msgid ""
"Unexpected format of Retry Pattern for {}.\n"
"Example of valid formats:\n"
"{{1: 300, 5: 600, 10: 1200, 15: 3000}}\n"
"{{1: (1, 10), 5: (11, 20), 10: (21, 30), 15: (100, 300)}}"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__user_id
msgid "User ID"
msgstr "ID de l'usuari"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__wait_dependencies
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Wait Dependencies"
msgstr "Esperant dependències"
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_requeue_job
msgid "Wizard to requeue a selection of jobs"
msgstr "Assistent per tornar a posar en cua una selecció de treballs"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__worker_pid
msgid "Worker Pid"
msgstr "PID del treballador"

# Translation of Odoo Server.
# This file contains the translation of the following modules:
# * queue_job
#
msgid ""
msgstr ""
"Project-Id-Version: Odoo Server 16.0\n"
"Report-Msgid-Bugs-To: \n"
"Last-Translator: \n"
"Language-Team: \n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: \n"
"Plural-Forms: \n"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid ""
"<br/>\n"
" <span class=\"oe_grey oe_inline\"> If the max. retries is 0, the number of retries is infinite.</span>"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/controllers/main.py:0
#, python-format
msgid "Access Denied"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_needaction
msgid "Action Needed"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_ids
msgid "Activities"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_exception_decoration
msgid "Activity Exception Decoration"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_state
msgid "Activity State"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_type_icon
msgid "Activity Type Icon"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__args
msgid "Args"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_attachment_count
msgid "Attachment Count"
msgstr ""
#. module: queue_job
#: model:ir.actions.server,name:queue_job.ir_cron_autovacuum_queue_jobs_ir_actions_server
#: model:ir.cron,cron_name:queue_job.ir_cron_autovacuum_queue_jobs
msgid "AutoVacuum Job Queue"
msgstr ""
#. module: queue_job
#: model:ir.model,name:queue_job.model_base
msgid "Base"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_requeue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_cancelled
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_done
msgid "Cancel"
msgstr ""
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_jobs_to_cancelled
msgid "Cancel all selected jobs"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Cancel job"
msgstr ""
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_set_jobs_cancelled
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_cancelled
msgid "Cancel jobs"
msgstr ""
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__cancelled
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Cancelled"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Cancelled by %s"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_channel.py:0
#, python-format
msgid "Cannot change the root channel"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_channel.py:0
#, python-format
msgid "Cannot remove the root channel"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__channel
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__channel_id
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_function_search
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Channel"
msgstr ""
#. module: queue_job
#: model:ir.model.constraint,message:queue_job.constraint_queue_job_channel_name_uniq
msgid "Channel complete name must be unique"
msgstr ""
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_queue_job_channel
#: model:ir.ui.menu,name:queue_job.menu_queue_job_channel
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_channel_form
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_channel_search
msgid "Channels"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__company_id
msgid "Company"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__channel_method_name
msgid "Complete Method Name"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__complete_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__channel
msgid "Complete Name"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_created
msgid "Created Date"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__create_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__create_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__create_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__create_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__create_uid
msgid "Created by"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Created date"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__create_date
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__create_date
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__create_date
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__create_date
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__create_date
msgid "Created on"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__retry
msgid "Current try"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Current try / max. retries"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_cancelled
msgid "Date Cancelled"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_done
msgid "Date Done"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__dependencies
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Dependencies"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__dependency_graph
msgid "Dependency Graph"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__name
msgid "Description"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__display_name
msgid "Display Name"
msgstr ""
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__done
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Done"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_enqueued
msgid "Enqueue Time"
msgstr ""
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__enqueued
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Enqueued"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__exc_name
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Exception"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__exc_info
msgid "Exception Info"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Exception Information"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__exc_message
msgid "Exception Message"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Exception message"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Exception:"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__eta
msgid "Execute only after"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__exec_time
msgid "Execution Time (avg)"
msgstr ""
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__failed
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Failed"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_ir_model_fields__ttype
msgid "Field Type"
msgstr ""
#. module: queue_job
#: model:ir.model,name:queue_job.model_ir_model_fields
msgid "Fields"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_follower_ids
msgid "Followers"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_partner_ids
msgid "Followers (Partners)"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__activity_type_icon
msgid "Font awesome icon e.g. fa-tasks"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Graph"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Graph Jobs"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__graph_jobs_count
msgid "Graph Jobs Count"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__graph_uuid
msgid "Graph UUID"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_function_search
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Group By"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__has_message
msgid "Has Message"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__id
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__id
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__id
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__id
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__id
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__id
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__id
msgid "ID"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_exception_icon
msgid "Icon"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__activity_exception_icon
msgid "Icon to indicate an exception activity."
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__identity_key
msgid "Identity Key"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__message_needaction
msgid "If checked, new messages require your attention."
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__message_has_error
msgid "If checked, some messages have a delivery error."
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_function.py:0
#, python-format
msgid "Invalid job function: {}"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_is_follower
msgid "Is Follower"
msgstr ""
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_job_channel
msgid "Job Channels"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__job_function_id
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Job Function"
msgstr ""
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_queue_job_function
#: model:ir.model,name:queue_job.model_queue_job_function
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__job_function_ids
#: model:ir.ui.menu,name:queue_job.menu_queue_job_function
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_function_form
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_function_search
msgid "Job Functions"
msgstr ""
#. module: queue_job
#: model:ir.module.category,name:queue_job.module_category_queue_job
#: model:ir.ui.menu,name:queue_job.menu_queue_job_root
msgid "Job Queue"
msgstr ""
#. module: queue_job
#: model:res.groups,name:queue_job.group_queue_job_manager
msgid "Job Queue Manager"
msgstr ""
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__ir_model_fields__ttype__job_serialized
msgid "Job Serialized"
msgstr ""
#. module: queue_job
#: model:mail.message.subtype,name:queue_job.mt_job_failed
msgid "Job failed"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/controllers/main.py:0
#, python-format
msgid "Job interrupted and set to Done: nothing to do."
msgstr ""
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__job_ids
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__job_ids
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__job_ids
#: model:ir.ui.menu,name:queue_job.menu_queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_graph
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_pivot
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Jobs"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Jobs for graph %s"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__kwargs
msgid "Kwargs"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Last 24 hours"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Last 30 days"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Last 7 days"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job____last_update
msgid "Last Modified on"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__write_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__write_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__write_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__write_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__write_uid
msgid "Last Updated by"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__write_date
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__write_date
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__write_date
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__write_date
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__write_date
msgid "Last Updated on"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_main_attachment_id
msgid "Main Attachment"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Manually set to done by %s"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__max_retries
msgid "Max. retries"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_has_error
msgid "Message Delivery error"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_ids
msgid "Messages"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__method
msgid "Method"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__method_name
msgid "Method Name"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__model_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__model_id
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Model"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_function.py:0
#, python-format
msgid "Model {} not found"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__my_activity_date_deadline
msgid "My Activity Deadline"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__name
msgid "Name"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_date_deadline
msgid "Next Activity Deadline"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_summary
msgid "Next Activity Summary"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_type_id
msgid "Next Activity Type"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "No action available for this job"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Not allowed to change field(s): {}"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_needaction_counter
msgid "Number of Actions"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_has_error_counter
msgid "Number of errors"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__message_needaction_counter
msgid "Number of messages requiring action"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__message_has_error_counter
msgid "Number of messages with delivery error"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__parent_id
msgid "Parent Channel"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_channel.py:0
#, python-format
msgid "Parent channel required."
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job_function__edit_retry_pattern
msgid ""
"Pattern expressing from the count of retries on retryable errors, the number of of seconds to postpone the next execution. Setting the number of seconds to a 2-element tuple or list will randomize the retry interval between the 2 values.\n"
"Example: {1: 10, 5: 20, 10: 30, 15: 300}.\n"
"Example: {1: (1, 10), 5: (11, 20), 10: (21, 30), 15: (100, 300)}.\n"
"See the module description for details."
msgstr ""
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__pending
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Pending"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__priority
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Priority"
msgstr ""
#. module: queue_job
#: model:ir.ui.menu,name:queue_job.menu_queue
msgid "Queue"
msgstr ""
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__queue_job_id
msgid "Queue Job"
msgstr ""
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_job_lock
msgid "Queue Job Lock"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Queue jobs must be created by calling 'with_delay()'."
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__record_ids
msgid "Record"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__records
msgid "Record(s)"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Related"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__edit_related_action
msgid "Related Action"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__related_action
msgid "Related Action (serialized)"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Related Record"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Related Records"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_tree
msgid "Remaining days to execute"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__removal_interval
msgid "Removal Interval"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_requeue_job
msgid "Requeue"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Requeue Job"
msgstr ""
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_requeue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_requeue_job
msgid "Requeue Jobs"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_user_id
msgid "Responsible User"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__result
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Result"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Results"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__edit_retry_pattern
msgid "Retry Pattern"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__retry_pattern
msgid "Retry Pattern (serialized)"
msgstr ""
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_jobs_to_done
msgid "Set all selected jobs to done"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_done
msgid "Set jobs done"
msgstr ""
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_set_jobs_done
msgid "Set jobs to done"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Set to 'Done'"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_done
msgid "Set to done"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__graph_uuid
msgid "Single shared identifier of a Graph. Empty for a single job."
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid ""
"Something bad happened during the execution of job %s. More details in the "
"'Exception Information' section."
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_started
msgid "Start Date"
msgstr ""
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__started
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Started"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__state
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "State"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__activity_state
msgid ""
"Status based on activities\n"
"Overdue: Due date is already passed\n"
"Today: Activity date is today\n"
"Planned: Future activities."
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__func_string
msgid "Task"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job_function__edit_related_action
msgid ""
"The action when the button *Related Action* is used on a job. The default action is to open the view of the record related to the job. Configured as a dictionary with optional keys: enable, func_name, kwargs.\n"
"See the module description for details."
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__max_retries
msgid ""
"The job will fail if the number of tries reach the max. retries.\n"
"Retries are infinite when empty."
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_cancelled
msgid "The selected jobs will be cancelled."
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_requeue_job
msgid "The selected jobs will be requeued."
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_done
msgid "The selected jobs will be set to done."
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Time (s)"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__exec_time
msgid "Time required to execute this job in seconds. Average when grouped."
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Tried many times"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__activity_exception_decoration
msgid "Type of the exception activity on record."
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__uuid
msgid "UUID"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_function.py:0
#, python-format
msgid ""
"Unexpected format of Related Action for {}.\n"
"Example of valid format:\n"
"{{\"enable\": True, \"func_name\": \"related_action_foo\", \"kwargs\" {{\"limit\": 10}}}}"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_function.py:0
#, python-format
msgid ""
"Unexpected format of Retry Pattern for {}.\n"
"Example of valid formats:\n"
"{{1: 300, 5: 600, 10: 1200, 15: 3000}}\n"
"{{1: (1, 10), 5: (11, 20), 10: (21, 30), 15: (100, 300)}}"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__user_id
msgid "User ID"
msgstr ""
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__wait_dependencies
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Wait Dependencies"
msgstr ""
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_requeue_job
msgid "Wizard to requeue a selection of jobs"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__worker_pid
msgid "Worker Pid"
msgstr ""

addons/queue_job/i18n/tr.po (new file, 995 lines)
# Translation of Odoo Server.
# This file contains the translation of the following modules:
# * queue_job
#
msgid ""
msgstr ""
"Project-Id-Version: Odoo Server 16.0\n"
"Report-Msgid-Bugs-To: \n"
"PO-Revision-Date: 2025-06-13 15:27+0000\n"
"Last-Translator: Betül Öğmen <betulo@eska.biz>\n"
"Language-Team: none\n"
"Language: tr\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: \n"
"Plural-Forms: nplurals=2; plural=n != 1;\n"
"X-Generator: Weblate 5.10.4\n"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid ""
"<br/>\n"
" <span class=\"oe_grey oe_inline\"> If the max. "
"retries is 0, the number of retries is infinite.</span>"
msgstr ""
"<br/>\n"
" <span class=\"oe_grey oe_inline\"> Eğer maks. "
"deneme sayısı 0 ise, deneme sayısı sonsuz olur.</span>"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/controllers/main.py:0
#, python-format
msgid "Access Denied"
msgstr "Erişim Engellendi"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_needaction
msgid "Action Needed"
msgstr "Eylem Gerekli"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_ids
msgid "Activities"
msgstr "Aktiviteler"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_exception_decoration
msgid "Activity Exception Decoration"
msgstr "Aktivite İstisna Dekorasyonu"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_state
msgid "Activity State"
msgstr "Aktivite Durumu"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_type_icon
msgid "Activity Type Icon"
msgstr "Aktivite Türü Simgesi"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__args
msgid "Args"
msgstr "Argümanlar"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_attachment_count
msgid "Attachment Count"
msgstr "Ek Sayısı"
#. module: queue_job
#: model:ir.actions.server,name:queue_job.ir_cron_autovacuum_queue_jobs_ir_actions_server
#: model:ir.cron,cron_name:queue_job.ir_cron_autovacuum_queue_jobs
msgid "AutoVacuum Job Queue"
msgstr "Otomatik Temizleme İş Kuyruğu"
#. module: queue_job
#: model:ir.model,name:queue_job.model_base
msgid "Base"
msgstr "Temel"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_requeue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_cancelled
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_done
msgid "Cancel"
msgstr "İptal"
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_jobs_to_cancelled
msgid "Cancel all selected jobs"
msgstr "Tüm seçili işleri iptal et"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Cancel job"
msgstr "İşi iptal et"
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_set_jobs_cancelled
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_cancelled
msgid "Cancel jobs"
msgstr "İşleri iptal et"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__cancelled
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Cancelled"
msgstr "İptal Edildi"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Cancelled by %s"
msgstr "%s tarafından iptal edildi"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_channel.py:0
#, python-format
msgid "Cannot change the root channel"
msgstr "Kök kanal değiştirilemez"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_channel.py:0
#, python-format
msgid "Cannot remove the root channel"
msgstr "Kök kanal silinemez"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__channel
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__channel_id
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_function_search
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Channel"
msgstr "Kanal"
#. module: queue_job
#: model:ir.model.constraint,message:queue_job.constraint_queue_job_channel_name_uniq
msgid "Channel complete name must be unique"
msgstr "Kanal tam adı benzersiz olmalıdır"
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_queue_job_channel
#: model:ir.ui.menu,name:queue_job.menu_queue_job_channel
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_channel_form
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_channel_search
msgid "Channels"
msgstr "Kanallar"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__company_id
msgid "Company"
msgstr "Şirket"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__channel_method_name
msgid "Complete Method Name"
msgstr "Tam Metot Adı"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__complete_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__channel
msgid "Complete Name"
msgstr "Tam Adı"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_created
msgid "Created Date"
msgstr "Oluşturulma Tarihi"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__create_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__create_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__create_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__create_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__create_uid
msgid "Created by"
msgstr "Oluşturan"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Created date"
msgstr "Oluşturulma tarihi"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__create_date
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__create_date
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__create_date
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__create_date
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__create_date
msgid "Created on"
msgstr "Oluşturuldu"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__retry
msgid "Current try"
msgstr "Şu anki deneme"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Current try / max. retries"
msgstr "Şu anki deneme / maks. deneme"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_cancelled
msgid "Date Cancelled"
msgstr "İptal Edilme Tarihi"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_done
msgid "Date Done"
msgstr "Tamamlanma Tarihi"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__dependencies
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Dependencies"
msgstr "Bağımlılıklar"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__dependency_graph
msgid "Dependency Graph"
msgstr "Bağımlılık Grafiği"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__name
msgid "Description"
msgstr "Açıklama"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__display_name
msgid "Display Name"
msgstr "Görünüm Adı"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__done
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Done"
msgstr "Tamamlandı"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_enqueued
msgid "Enqueue Time"
msgstr "Sıraya Alınma Zamanı"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__enqueued
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Enqueued"
msgstr "Sıraya Alındı"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__exc_name
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Exception"
msgstr "İstisna"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__exc_info
msgid "Exception Info"
msgstr "İstisna Bilgisi"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Exception Information"
msgstr "İstisna Bilgisi"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__exc_message
msgid "Exception Message"
msgstr "İstisna Mesajı"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Exception message"
msgstr "İstisna mesajı"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Exception:"
msgstr "İstisna:"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__eta
msgid "Execute only after"
msgstr "Bundan sonra çalıştır"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__exec_time
msgid "Execution Time (avg)"
msgstr "Çalıştırma Zamanı (ort)"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__failed
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Failed"
msgstr "Başarısız"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_ir_model_fields__ttype
msgid "Field Type"
msgstr "Alan Tipi"
#. module: queue_job
#: model:ir.model,name:queue_job.model_ir_model_fields
msgid "Fields"
msgstr "Alanlar"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_follower_ids
msgid "Followers"
msgstr "Takipçiler"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_partner_ids
msgid "Followers (Partners)"
msgstr "Takipçiler (İş Ortakları)"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__activity_type_icon
msgid "Font awesome icon e.g. fa-tasks"
msgstr "Font awesome simgesi, ör. fa-tasks"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Graph"
msgstr "Grafik"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Graph Jobs"
msgstr "Grafiğin İşleri"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__graph_jobs_count
msgid "Graph Jobs Count"
msgstr "Grafiğin İş Sayısı"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__graph_uuid
msgid "Graph UUID"
msgstr "Grafik UUID"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_function_search
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Group By"
msgstr "Grupla"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__has_message
msgid "Has Message"
msgstr "Mesajı Var"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__id
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__id
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__id
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__id
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__id
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__id
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__id
msgid "ID"
msgstr "ID"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_exception_icon
msgid "Icon"
msgstr "Simge"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__activity_exception_icon
msgid "Icon to indicate an exception activity."
msgstr "İstisna etkinliğini gösteren simge."
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__identity_key
msgid "Identity Key"
msgstr "Benzersiz Anahtar"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__message_needaction
msgid "If checked, new messages require your attention."
msgstr "İşaretlenirse, sizi bekleyen mesajlar var."
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__message_has_error
msgid "If checked, some messages have a delivery error."
msgstr "İşaretlenirse, bazı mesajlar teslimat hatası içerir."
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_function.py:0
#, python-format
msgid "Invalid job function: {}"
msgstr "Geçersiz iş fonksiyonu: {}"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_is_follower
msgid "Is Follower"
msgstr "Takipçi"
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_job_channel
msgid "Job Channels"
msgstr "İş Kanalları"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__job_function_id
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Job Function"
msgstr "İş Fonksiyonu"
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_queue_job_function
#: model:ir.model,name:queue_job.model_queue_job_function
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__job_function_ids
#: model:ir.ui.menu,name:queue_job.menu_queue_job_function
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_function_form
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_function_search
msgid "Job Functions"
msgstr "İş Fonksiyonları"
#. module: queue_job
#: model:ir.module.category,name:queue_job.module_category_queue_job
#: model:ir.ui.menu,name:queue_job.menu_queue_job_root
msgid "Job Queue"
msgstr "İş Kuyruğu"
#. module: queue_job
#: model:res.groups,name:queue_job.group_queue_job_manager
msgid "Job Queue Manager"
msgstr "İş Kuyruğu Yöneticisi"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__ir_model_fields__ttype__job_serialized
msgid "Job Serialized"
msgstr "Serileştirilmiş İş"
#. module: queue_job
#: model:mail.message.subtype,name:queue_job.mt_job_failed
msgid "Job failed"
msgstr "İş başarısız oldu"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/controllers/main.py:0
#, python-format
msgid "Job interrupted and set to Done: nothing to do."
msgstr "İş yarıda kesildi ve bitti olarak ayarlandı: yapılacak bir şey yok."
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__job_ids
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__job_ids
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__job_ids
#: model:ir.ui.menu,name:queue_job.menu_queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_graph
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_pivot
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Jobs"
msgstr "İşler"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Jobs for graph %s"
msgstr "%s grafiğinin işleri"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__kwargs
msgid "Kwargs"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Last 24 hours"
msgstr "Son 24 saat"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Last 30 days"
msgstr "Son 30 gün"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Last 7 days"
msgstr "Son 7 gün"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job____last_update
msgid "Last Modified on"
msgstr "Son Değiştirme"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__write_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__write_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__write_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__write_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__write_uid
msgid "Last Updated by"
msgstr "Son Güncelleyen"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__write_date
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__write_date
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__write_date
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__write_date
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__write_date
msgid "Last Updated on"
msgstr "Son Güncelleme"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_main_attachment_id
msgid "Main Attachment"
msgstr "Ana Ek"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Manually set to done by %s"
msgstr "%s tarafından manuel olarak tamamlandı olarak ayarlandı"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__max_retries
msgid "Max. retries"
msgstr "Maks. deneme"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_has_error
msgid "Message Delivery error"
msgstr "Mesaj Teslimat hatası"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_ids
msgid "Messages"
msgstr "Mesajlar"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__method
msgid "Method"
msgstr "Yöntem"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__method_name
msgid "Method Name"
msgstr "Yöntem Adı"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__model_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__model_id
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Model"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_function.py:0
#, python-format
msgid "Model {} not found"
msgstr "Model {} bulunamadı"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__my_activity_date_deadline
msgid "My Activity Deadline"
msgstr "Aktivite Son Tarihim"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__name
msgid "Name"
msgstr "Ad"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_date_deadline
msgid "Next Activity Deadline"
msgstr "Sonraki Aktivite Son Tarihi"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_summary
msgid "Next Activity Summary"
msgstr "Sonraki Aktivite Özeti"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_type_id
msgid "Next Activity Type"
msgstr "Sonraki Aktivite Türü"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "No action available for this job"
msgstr "Bu iş için uygun bir eylem yok"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Not allowed to change field(s): {}"
msgstr "Alan(lar)ı değiştirme izni yok: {}"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_needaction_counter
msgid "Number of Actions"
msgstr "Eylem Sayısı"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_has_error_counter
msgid "Number of errors"
msgstr "Hata sayısı"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__message_needaction_counter
msgid "Number of messages requiring action"
msgstr "Eylem gerektiren mesaj sayısı"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__message_has_error_counter
msgid "Number of messages with delivery error"
msgstr "Teslimat hatası içeren mesaj sayısı"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__parent_id
msgid "Parent Channel"
msgstr "Üst Kanal"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_channel.py:0
#, python-format
msgid "Parent channel required."
msgstr "Üst kanal gerekli."
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job_function__edit_retry_pattern
msgid ""
"Pattern expressing from the count of retries on retryable errors, the number "
"of of seconds to postpone the next execution. Setting the number of seconds "
"to a 2-element tuple or list will randomize the retry interval between the 2 "
"values.\n"
"Example: {1: 10, 5: 20, 10: 30, 15: 300}.\n"
"Example: {1: (1, 10), 5: (11, 20), 10: (21, 30), 15: (100, 300)}.\n"
"See the module description for details."
msgstr ""
"Yeniden denenebilir hatalarda, yeniden deneme sayısına göre bir sonraki "
"çalıştırmanın kaç saniye erteleneceğini belirten desen. Saniye sayısı 2 "
"elemanlı bir demet veya liste olarak ayarlanırsa, yeniden deneme aralığı bu "
"2 değer arasında rastgele seçilir.\n"
"Örneğin: {1: 10, 5: 20, 10: 30, 15: 300}.\n"
"Örneğin: {1: (1, 10), 5: (11, 20), 10: (21, 30), 15: (100, 300)}.\n"
"Ayrıntılar için modül açıklamasına bakınız."
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__pending
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Pending"
msgstr "Beklemede"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__priority
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Priority"
msgstr "Öncelik"
#. module: queue_job
#: model:ir.ui.menu,name:queue_job.menu_queue
msgid "Queue"
msgstr "Kuyruk"
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__queue_job_id
msgid "Queue Job"
msgstr "Kuyruk İşi"
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_job_lock
msgid "Queue Job Lock"
msgstr "Kuyruk İş Kilidi"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Queue jobs must be created by calling 'with_delay()'."
msgstr "Kuyruk işleri 'with_delay()' çağrılarak oluşturulmalıdır."
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__record_ids
msgid "Record"
msgstr "Kayıt"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__records
msgid "Record(s)"
msgstr "Kayıt(lar)"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Related"
msgstr "İlgili"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__edit_related_action
msgid "Related Action"
msgstr "İlgili Eylem"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__related_action
msgid "Related Action (serialized)"
msgstr "İlgili Eylem (serileştirilmiş)"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Related Record"
msgstr "İlgili Kayıt"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Related Records"
msgstr "İlgili Kayıtlar"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_tree
msgid "Remaining days to execute"
msgstr "Çalıştırılacak kalan günler"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__removal_interval
msgid "Removal Interval"
msgstr "Kaldırma Aralığı"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_requeue_job
msgid "Requeue"
msgstr "Yeniden sıraya al"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Requeue Job"
msgstr "İşi yeniden sıraya al"
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_requeue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_requeue_job
msgid "Requeue Jobs"
msgstr "İşleri Tekrar Sıraya Al"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_user_id
msgid "Responsible User"
msgstr "Sorumlu Kullanıcı"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__result
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Result"
msgstr "Sonuç"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Results"
msgstr "Sonuçlar"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__edit_retry_pattern
msgid "Retry Pattern"
msgstr "Tekrar Şablonu"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__retry_pattern
msgid "Retry Pattern (serialized)"
msgstr "Tekrar Şablonu (serileştirilmiş)"
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_jobs_to_done
msgid "Set all selected jobs to done"
msgstr "Seçili tüm işleri tamamlandı olarak işaretle"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_done
msgid "Set jobs done"
msgstr "İşleri tamamlandı olarak işaretle"
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_set_jobs_done
msgid "Set jobs to done"
msgstr "İşleri tamamlandı olarak işaretle"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Set to 'Done'"
msgstr "'Tamamlandı' olarak işaretle"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_done
msgid "Set to done"
msgstr "Tamamlandı olarak işaretle"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__graph_uuid
msgid "Single shared identifier of a Graph. Empty for a single job."
msgstr "Bir grafiğin paylaşılan tanımlayıcısı. Tek işler için değeri boştur."
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid ""
"Something bad happened during the execution of job %s. More details in the "
"'Exception Information' section."
msgstr ""
"%s işinin çalıştırılması sırasında bir şeyler ters gitti. Daha fazla "
"ayrıntı 'İstisna Bilgisi' bölümünde bulunabilir."
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_started
msgid "Start Date"
msgstr "Başlama Tarihi"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__started
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Started"
msgstr "Başladı"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__state
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "State"
msgstr "Durum"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__activity_state
msgid ""
"Status based on activities\n"
"Overdue: Due date is already passed\n"
"Today: Activity date is today\n"
"Planned: Future activities."
msgstr ""
"Aktivitelere dayalı durum\n"
"Gecikmiş: Son tarih zaten geçti\n"
"Bugün: Aktivite tarihi bugün\n"
"Planlandı: Gelecekteki aktiviteler."
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__func_string
msgid "Task"
msgstr "Görev"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job_function__edit_related_action
msgid ""
"The action when the button *Related Action* is used on a job. The default "
"action is to open the view of the record related to the job. Configured as a "
"dictionary with optional keys: enable, func_name, kwargs.\n"
"See the module description for details."
msgstr ""
"İşteki *İlgili Eylem* düğmesi kullanıldığında gerçekleşen eylem. Varsayılan "
"eylem, işle ilişkili kaydın görünümünü açmaktır. İsteğe bağlı anahtarlar ile "
"yapılandırılmış bir sözlük: enable, func_name, kwargs.\n"
"Ayrıntılar için modül açıklamasına bakınız."
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__max_retries
msgid ""
"The job will fail if the number of tries reach the max. retries.\n"
"Retries are infinite when empty."
msgstr ""
"Eğer tekrar sayısı maks. deneme sayısına ulaşırsa iş başarısız olacaktır.\n"
"Boş olduğunda sonsuz tekrar yapar."
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_cancelled
msgid "The selected jobs will be cancelled."
msgstr "Seçilen işler iptal edilecektir."
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_requeue_job
msgid "The selected jobs will be requeued."
msgstr "Seçilen işler tekrar kuyruğa alınacaktır."
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_done
msgid "The selected jobs will be set to done."
msgstr "Seçilen işler tamamlandı olarak ayarlanacaktır."
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Time (s)"
msgstr "Süre (sn)"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__exec_time
msgid "Time required to execute this job in seconds. Average when grouped."
msgstr ""
"Saniye cinsinden bu işi yapmak için gereken süre. Gruplandığında ortalaması "
"alınır."
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Tried many times"
msgstr "Çok kez denendi"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__activity_exception_decoration
msgid "Type of the exception activity on record."
msgstr "Kayıttaki istisna aktivitesinin türü."
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__uuid
msgid "UUID"
msgstr "UUID"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_function.py:0
#, python-format
msgid ""
"Unexpected format of Related Action for {}.\n"
"Example of valid format:\n"
"{{\"enable\": True, \"func_name\": \"related_action_foo\", "
"\"kwargs\" {{\"limit\": 10}}}}"
msgstr ""
"{} için İlgili Eylem'in beklenmeyen biçimi.\n"
"Doğru biçim örneği:\n"
"{{\"enable\": True, \"func_name\": \"related_action_foo\", "
"\"kwargs\" {{\"limit\": 10}}}}"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_function.py:0
#, python-format
msgid ""
"Unexpected format of Retry Pattern for {}.\n"
"Example of valid formats:\n"
"{{1: 300, 5: 600, 10: 1200, 15: 3000}}\n"
"{{1: (1, 10), 5: (11, 20), 10: (21, 30), 15: (100, 300)}}"
msgstr ""
"{} için beklenmeyen tekrarlama şablonu.\n"
"Geçerli kullanım örnekleri:\n"
"{{1: 300, 5: 600, 10: 1200, 15: 3000}}\n"
"{{1: (1, 10), 5: (11, 20), 10: (21, 30), 15: (100, 300)}}"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__user_id
msgid "User ID"
msgstr "Kullanıcı ID"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__wait_dependencies
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Wait Dependencies"
msgstr "Bağımlılıklar Bekleniyor"
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_requeue_job
msgid "Wizard to requeue a selection of jobs"
msgstr "Seçilen işleri tekrar sıraya alan sihirbaz"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__worker_pid
msgid "Worker Pid"
msgstr "Çalışan Pid"
#, python-format
#~ msgid "If both parameters are 0, ALL jobs will be requeued!"
#~ msgstr "Her iki parametre de 0 ise, TÜM işler tekrar sıraya alınacaktır!"
#~ msgid "Jobs Garbage Collector"
#~ msgstr "İş Çöp Toplayıcısı"

# Translation of Odoo Server.
# This file contains the translation of the following modules:
# * queue_job
#
msgid ""
msgstr ""
"Project-Id-Version: Odoo Server 12.0\n"
"Report-Msgid-Bugs-To: \n"
"PO-Revision-Date: 2022-08-13 08:07+0000\n"
"Last-Translator: Dong <dong@freshoo.cn>\n"
"Language-Team: none\n"
"Language: zh_CN\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: \n"
"Plural-Forms: nplurals=1; plural=0;\n"
"X-Generator: Weblate 4.3.2\n"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid ""
"<br/>\n"
" <span class=\"oe_grey oe_inline\"> If the max. "
"retries is 0, the number of retries is infinite.</span>"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/controllers/main.py:0
#, python-format
msgid "Access Denied"
msgstr "拒绝访问"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_needaction
msgid "Action Needed"
msgstr "需要采取行动"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_ids
msgid "Activities"
msgstr "活动"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_exception_decoration
msgid "Activity Exception Decoration"
msgstr "活动异常装饰"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_state
msgid "Activity State"
msgstr "活动状态"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_type_icon
msgid "Activity Type Icon"
msgstr "活动类型图标"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__args
msgid "Args"
msgstr "位置参数"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_attachment_count
msgid "Attachment Count"
msgstr "附件数量"
#. module: queue_job
#: model:ir.actions.server,name:queue_job.ir_cron_autovacuum_queue_jobs_ir_actions_server
#: model:ir.cron,cron_name:queue_job.ir_cron_autovacuum_queue_jobs
msgid "AutoVacuum Job Queue"
msgstr "自动清理作业队列"
#. module: queue_job
#: model:ir.model,name:queue_job.model_base
msgid "Base"
msgstr "基础"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_requeue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_cancelled
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_done
msgid "Cancel"
msgstr "取消"
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_jobs_to_cancelled
msgid "Cancel all selected jobs"
msgstr "取消所有选定的作业"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Cancel job"
msgstr "取消作业"
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_set_jobs_cancelled
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_cancelled
msgid "Cancel jobs"
msgstr "取消作业"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__cancelled
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Cancelled"
msgstr "已取消"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Cancelled by %s"
msgstr "由 %s 取消"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_channel.py:0
#, python-format
msgid "Cannot change the root channel"
msgstr "无法更改根频道"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_channel.py:0
#, python-format
msgid "Cannot remove the root channel"
msgstr "无法删除根频道"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__channel
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__channel_id
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_function_search
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Channel"
msgstr "频道"
#. module: queue_job
#: model:ir.model.constraint,message:queue_job.constraint_queue_job_channel_name_uniq
msgid "Channel complete name must be unique"
msgstr "频道完整名称必须是唯一的"
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_queue_job_channel
#: model:ir.ui.menu,name:queue_job.menu_queue_job_channel
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_channel_form
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_channel_search
msgid "Channels"
msgstr "频道"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__company_id
msgid "Company"
msgstr "公司"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__channel_method_name
msgid "Complete Method Name"
msgstr "完整方法名称"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__complete_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__channel
msgid "Complete Name"
msgstr "完整名称"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_created
msgid "Created Date"
msgstr "创建日期"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__create_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__create_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__create_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__create_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__create_uid
msgid "Created by"
msgstr "创建者"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Created date"
msgstr "创建日期"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__create_date
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__create_date
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__create_date
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__create_date
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__create_date
msgid "Created on"
msgstr "创建时间"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__retry
msgid "Current try"
msgstr "当前尝试"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Current try / max. retries"
msgstr "当前尝试/最大重试次数"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_cancelled
msgid "Date Cancelled"
msgstr "取消日期"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_done
msgid "Date Done"
msgstr "完成日期"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__dependencies
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Dependencies"
msgstr "依赖项"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__dependency_graph
msgid "Dependency Graph"
msgstr "依赖关系图"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__name
msgid "Description"
msgstr "说明"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__display_name
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__display_name
msgid "Display Name"
msgstr "显示名称"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__done
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Done"
msgstr "完成"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_enqueued
msgid "Enqueue Time"
msgstr "排队时间"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__enqueued
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Enqueued"
msgstr "已入队"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__exc_name
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Exception"
msgstr "异常"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__exc_info
msgid "Exception Info"
msgstr "异常信息"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Exception Information"
msgstr "异常信息"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__exc_message
msgid "Exception Message"
msgstr "异常消息"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Exception message"
msgstr "异常消息"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Exception:"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__eta
msgid "Execute only after"
msgstr "仅在此之后执行"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__exec_time
msgid "Execution Time (avg)"
msgstr ""
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__failed
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Failed"
msgstr "失败"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_ir_model_fields__ttype
msgid "Field Type"
msgstr "字段类型"
#. module: queue_job
#: model:ir.model,name:queue_job.model_ir_model_fields
msgid "Fields"
msgstr "字段"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_follower_ids
msgid "Followers"
msgstr "关注者"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_partner_ids
msgid "Followers (Partners)"
msgstr "关注者(业务伙伴)"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__activity_type_icon
msgid "Font awesome icon e.g. fa-tasks"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Graph"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Graph Jobs"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__graph_jobs_count
msgid "Graph Jobs Count"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__graph_uuid
msgid "Graph UUID"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_function_search
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Group By"
msgstr "分组"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__has_message
msgid "Has Message"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__id
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__id
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__id
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__id
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__id
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__id
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__id
msgid "ID"
msgstr "ID"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_exception_icon
msgid "Icon"
msgstr "图标"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__activity_exception_icon
msgid "Icon to indicate an exception activity."
msgstr "指示异常活动的图标。"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__identity_key
msgid "Identity Key"
msgstr "身份密钥"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__message_needaction
msgid "If checked, new messages require your attention."
msgstr "如果勾选,则有新消息需要您注意。"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__message_has_error
msgid "If checked, some messages have a delivery error."
msgstr "如果勾选此项, 某些消息将会产生传递错误。"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_function.py:0
#, python-format
msgid "Invalid job function: {}"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_is_follower
msgid "Is Follower"
msgstr "是关注者"
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_job_channel
msgid "Job Channels"
msgstr "作业频道"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__job_function_id
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Job Function"
msgstr "作业函数"
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_queue_job_function
#: model:ir.model,name:queue_job.model_queue_job_function
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__job_function_ids
#: model:ir.ui.menu,name:queue_job.menu_queue_job_function
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_function_form
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_function_search
msgid "Job Functions"
msgstr "作业函数"
#. module: queue_job
#: model:ir.module.category,name:queue_job.module_category_queue_job
#: model:ir.ui.menu,name:queue_job.menu_queue_job_root
msgid "Job Queue"
msgstr "作业队列"
#. module: queue_job
#: model:res.groups,name:queue_job.group_queue_job_manager
msgid "Job Queue Manager"
msgstr "作业队列管理员"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__ir_model_fields__ttype__job_serialized
msgid "Job Serialized"
msgstr "作业序列化"
#. module: queue_job
#: model:mail.message.subtype,name:queue_job.mt_job_failed
msgid "Job failed"
msgstr "作业失败"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/controllers/main.py:0
#, python-format
msgid "Job interrupted and set to Done: nothing to do."
msgstr "作业中断并设置为已完成:无需执行任何操作。"
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__job_ids
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__job_ids
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__job_ids
#: model:ir.ui.menu,name:queue_job.menu_queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_graph
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_pivot
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Jobs"
msgstr "作业"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Jobs for graph %s"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__kwargs
msgid "Kwargs"
msgstr "关键字参数"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Last 24 hours"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Last 30 days"
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Last 7 days"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done____last_update
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job____last_update
msgid "Last Modified on"
msgstr "最后修改日"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__write_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__write_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__write_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__write_uid
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__write_uid
msgid "Last Updated by"
msgstr "最后更新者"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__write_date
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__write_date
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_cancelled__write_date
#: model:ir.model.fields,field_description:queue_job.field_queue_jobs_to_done__write_date
#: model:ir.model.fields,field_description:queue_job.field_queue_requeue_job__write_date
msgid "Last Updated on"
msgstr "最后更新时间"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_main_attachment_id
msgid "Main Attachment"
msgstr "主要附件"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Manually set to done by %s"
msgstr "由%s手动设置为完成"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__max_retries
msgid "Max. retries"
msgstr "最大重试次数"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_has_error
msgid "Message Delivery error"
msgstr "消息递送错误"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_ids
msgid "Messages"
msgstr "消息"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__method
msgid "Method"
msgstr "方法"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__method_name
msgid "Method Name"
msgstr "方法名称"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__model_name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__model_id
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Model"
msgstr "模型"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_function.py:0
#, python-format
msgid "Model {} not found"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__my_activity_date_deadline
msgid "My Activity Deadline"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__name
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__name
msgid "Name"
msgstr "名称"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_date_deadline
msgid "Next Activity Deadline"
msgstr "下一活动截止日期"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_summary
msgid "Next Activity Summary"
msgstr "下一活动摘要"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_type_id
msgid "Next Activity Type"
msgstr "下一活动类型"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "No action available for this job"
msgstr "此作业无法执行任何操作"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Not allowed to change field(s): {}"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_needaction_counter
msgid "Number of Actions"
msgstr "操作次数"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__message_has_error_counter
msgid "Number of errors"
msgstr "错误数量"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__message_needaction_counter
msgid "Number of messages requiring action"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__message_has_error_counter
msgid "Number of messages with delivery error"
msgstr "递送错误消息数量"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__parent_id
msgid "Parent Channel"
msgstr "父频道"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_channel.py:0
#, python-format
msgid "Parent channel required."
msgstr "父频道必填。"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job_function__edit_retry_pattern
msgid ""
"Pattern expressing from the count of retries on retryable errors, the number "
"of of seconds to postpone the next execution. Setting the number of seconds "
"to a 2-element tuple or list will randomize the retry interval between the 2 "
"values.\n"
"Example: {1: 10, 5: 20, 10: 30, 15: 300}.\n"
"Example: {1: (1, 10), 5: (11, 20), 10: (21, 30), 15: (100, 300)}.\n"
"See the module description for details."
msgstr ""
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__pending
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Pending"
msgstr "等待"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__priority
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Priority"
msgstr "优先级"
#. module: queue_job
#: model:ir.ui.menu,name:queue_job.menu_queue
msgid "Queue"
msgstr "队列"
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_lock__queue_job_id
msgid "Queue Job"
msgstr "队列作业"
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_job_lock
msgid "Queue Job Lock"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Queue jobs must be created by calling 'with_delay()'."
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__record_ids
msgid "Record"
msgstr "记录"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__records
msgid "Record(s)"
msgstr "记录"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Related"
msgstr "相关的"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__edit_related_action
msgid "Related Action"
msgstr "相关操作"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__related_action
msgid "Related Action (serialized)"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Related Record"
msgstr "相关记录"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid "Related Records"
msgstr "相关记录"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_tree
msgid "Remaining days to execute"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_channel__removal_interval
msgid "Removal Interval"
msgstr "清除间隔"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_requeue_job
msgid "Requeue"
msgstr "重新排队"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Requeue Job"
msgstr "重新排队作业"
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_requeue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_requeue_job
msgid "Requeue Jobs"
msgstr "重新排队作业"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__activity_user_id
msgid "Responsible User"
msgstr "负责的用户"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__result
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Result"
msgstr "结果"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Results"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__edit_retry_pattern
msgid "Retry Pattern"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job_function__retry_pattern
msgid "Retry Pattern (serialized)"
msgstr ""
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_jobs_to_done
msgid "Set all selected jobs to done"
msgstr "将所有选定的作业设置为完成"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_done
msgid "Set jobs done"
msgstr "设置作业完成"
#. module: queue_job
#: model:ir.actions.act_window,name:queue_job.action_set_jobs_done
msgid "Set jobs to done"
msgstr "将作业设置为完成"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Set to 'Done'"
msgstr "设置为“完成”"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_done
msgid "Set to done"
msgstr "设置为完成"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__graph_uuid
msgid "Single shared identifier of a Graph. Empty for a single job."
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job.py:0
#, python-format
msgid ""
"Something bad happened during the execution of job %s. More details in the "
"'Exception Information' section."
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__date_started
msgid "Start Date"
msgstr "开始日期"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__started
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Started"
msgstr "开始"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__state
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "State"
msgstr "状态"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__activity_state
msgid ""
"Status based on activities\n"
"Overdue: Due date is already passed\n"
"Today: Activity date is today\n"
"Planned: Future activities."
msgstr ""
"基于活动的状态\n"
"逾期:已经超过截止日期\n"
"现今:活动日期是当天\n"
"计划:未来的活动。"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__func_string
msgid "Task"
msgstr "任务"
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job_function__edit_related_action
msgid ""
"The action when the button *Related Action* is used on a job. The default "
"action is to open the view of the record related to the job. Configured as a "
"dictionary with optional keys: enable, func_name, kwargs.\n"
"See the module description for details."
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__max_retries
msgid ""
"The job will fail if the number of tries reach the max. retries.\n"
"Retries are infinite when empty."
msgstr ""
"如果尝试次数达到最大重试次数,作业将失败。\n"
"空的时候重试是无限的。"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_cancelled
msgid "The selected jobs will be cancelled."
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_requeue_job
msgid "The selected jobs will be requeued."
msgstr "所选作业将重新排队。"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_set_jobs_done
msgid "The selected jobs will be set to done."
msgstr "所选作业将设置为完成。"
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_form
msgid "Time (s)"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__exec_time
msgid "Time required to execute this job in seconds. Average when grouped."
msgstr ""
#. module: queue_job
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Tried many times"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,help:queue_job.field_queue_job__activity_exception_decoration
msgid "Type of the exception activity on record."
msgstr "记录的异常活动的类型。"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__uuid
msgid "UUID"
msgstr "UUID"
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_function.py:0
#, python-format
msgid ""
"Unexpected format of Related Action for {}.\n"
"Example of valid format:\n"
"{{\"enable\": True, \"func_name\": \"related_action_foo\", "
"\"kwargs\" {{\"limit\": 10}}}}"
msgstr ""
#. module: queue_job
#. odoo-python
#: code:addons/queue_job/models/queue_job_function.py:0
#, python-format
msgid ""
"Unexpected format of Retry Pattern for {}.\n"
"Example of valid formats:\n"
"{{1: 300, 5: 600, 10: 1200, 15: 3000}}\n"
"{{1: (1, 10), 5: (11, 20), 10: (21, 30), 15: (100, 300)}}"
msgstr ""
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__user_id
msgid "User ID"
msgstr "用户"
#. module: queue_job
#: model:ir.model.fields.selection,name:queue_job.selection__queue_job__state__wait_dependencies
#: model_terms:ir.ui.view,arch_db:queue_job.view_queue_job_search
msgid "Wait Dependencies"
msgstr ""
#. module: queue_job
#: model:ir.model,name:queue_job.model_queue_requeue_job
msgid "Wizard to requeue a selection of jobs"
msgstr "重新排队所选作业的向导"
#. module: queue_job
#: model:ir.model.fields,field_description:queue_job.field_queue_job__worker_pid
msgid "Worker Pid"
msgstr ""
#, fuzzy, python-format
#~ msgid "If both parameters are 0, ALL jobs will be requeued!"
#~ msgstr "所选作业将重新排队。"
#, python-format
#~ msgid ""
#~ "Something bad happened during the execution of the job. More details in "
#~ "the 'Exception Information' section."
#~ msgstr ""
#~ "在执行作业期间发生了一些不好的事情。有关详细信息,请参见“异常信息”部分。"
#~ msgid "SMS Delivery error"
#~ msgstr "短信传递错误"
#~ msgid "Number of messages which requires an action"
#~ msgstr "需要操作消息数量"
#~ msgid ""
#~ "<span class=\"oe_grey oe_inline\"> If the max. retries is 0, the number "
#~ "of retries is infinite.</span>"
#~ msgstr ""
#~ "<span class=\"oe_grey oe_inline\">如果最大重试次数是0则重试次数是无限"
#~ "的。</span>"
#~ msgid "Override Channel"
#~ msgstr "覆盖频道"
#~ msgid "Number of unread messages"
#~ msgstr "未读消息数量"

addons/queue_job/job.py
# Copyright 2013-2020 Camptocamp
# License LGPL-3.0 or later (http://www.gnu.org/licenses/lgpl.html)
import hashlib
import inspect
import logging
import os
import sys
import uuid
import weakref
from datetime import datetime, timedelta
from random import randint
import odoo
from .exception import FailedJobError, NoSuchJobError, RetryableJobError
WAIT_DEPENDENCIES = "wait_dependencies"
PENDING = "pending"
ENQUEUED = "enqueued"
CANCELLED = "cancelled"
DONE = "done"
STARTED = "started"
FAILED = "failed"
STATES = [
(WAIT_DEPENDENCIES, "Wait Dependencies"),
(PENDING, "Pending"),
(ENQUEUED, "Enqueued"),
(STARTED, "Started"),
(DONE, "Done"),
(CANCELLED, "Cancelled"),
(FAILED, "Failed"),
]
DEFAULT_PRIORITY = 10 # used by the PriorityQueue to sort the jobs
DEFAULT_MAX_RETRIES = 5
RETRY_INTERVAL = 10 * 60 # seconds
_logger = logging.getLogger(__name__)
# TODO remove in 15.0 or 16.0, kept for compatibility as the
# class has been moved to the 'delay' module.
def DelayableRecordset(*args, **kwargs):
# prevent circular import
from .delay import DelayableRecordset as dr
_logger.warning(
"DelayableRecordset moved from the queue_job.job"
" to the queue_job.delay python module"
)
return dr(*args, **kwargs)
def identity_exact(job_):
"""Identity function using the model, method and all arguments as key
    When used, this identity key has the effect that when a job is about to
    be created and a pending job with the exact same recordset and arguments
    already exists, the second job is not created.
It should be used with the ``identity_key`` argument:
    .. code-block:: python
from odoo.addons.queue_job.job import identity_exact
# [...]
delayable = self.with_delay(identity_key=identity_exact)
delayable.export_record(force=True)
Alternative identity keys can be built using the various fields of the job.
For example, you could compute a hash using only some arguments of
the job.
    .. code-block:: python
        def identity_example(job_):
            hasher = hashlib.sha1()
            # hashlib requires bytes, so each value is encoded first
            hasher.update(job_.model_name.encode("utf-8"))
            hasher.update(job_.method_name.encode("utf-8"))
            hasher.update(str(sorted(job_.recordset.ids)).encode("utf-8"))
            hasher.update(str(job_.args[1]).encode("utf-8"))
            hasher.update(str(job_.kwargs.get('foo', '')).encode("utf-8"))
            return hasher.hexdigest()
    Usually you will want to include at least the name of the model and
    method.
"""
hasher = identity_exact_hasher(job_)
return hasher.hexdigest()
def identity_exact_hasher(job_):
"""Prepare hasher object for identity_exact."""
hasher = hashlib.sha1()
hasher.update(job_.model_name.encode("utf-8"))
hasher.update(job_.method_name.encode("utf-8"))
hasher.update(str(sorted(job_.recordset.ids)).encode("utf-8"))
hasher.update(str(job_.args).encode("utf-8"))
hasher.update(str(sorted(job_.kwargs.items())).encode("utf-8"))
return hasher
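For illustration, the deduplication behaviour of `identity_exact` can be sketched standalone: jobs whose model, method, sorted record ids, args and kwargs all match hash to the same key, so a second identical pending job would be skipped. `identity_key_for` and `fake_job` below are hypothetical stand-ins mirroring the hashing recipe above, not the real Job API.

```python
import hashlib
from types import SimpleNamespace


def identity_key_for(job_):
    # Same recipe as identity_exact_hasher above: model, method, sorted
    # record ids, args and kwargs all feed the SHA1 digest.
    hasher = hashlib.sha1()
    hasher.update(job_.model_name.encode("utf-8"))
    hasher.update(job_.method_name.encode("utf-8"))
    hasher.update(str(sorted(job_.recordset.ids)).encode("utf-8"))
    hasher.update(str(job_.args).encode("utf-8"))
    hasher.update(str(sorted(job_.kwargs.items())).encode("utf-8"))
    return hasher.hexdigest()


def fake_job(ids, args=(), kwargs=None):
    # Hypothetical stand-in exposing only the fields the hasher reads.
    return SimpleNamespace(
        model_name="res.partner",
        method_name="export_record",
        recordset=SimpleNamespace(ids=ids),
        args=args,
        kwargs=kwargs or {},
    )


key_a = identity_key_for(fake_job([3, 1, 2]))
key_b = identity_key_for(fake_job([1, 2, 3]))
key_c = identity_key_for(fake_job([1, 2, 3], args=("force",)))
# key_a == key_b: record order does not matter; key_c differs on args
```

Because the record ids are sorted before hashing, the order in which records were delayed does not defeat deduplication.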
class Job:
"""A Job is a task to execute. It is the in-memory representation of a job.
Jobs are stored in the ``queue.job`` Odoo Model, but they are handled
through this class.
.. attribute:: uuid
Id (UUID) of the job.
.. attribute:: graph_uuid
Shared UUID of the job's graph. Empty if the job is a single job.
.. attribute:: state
        State of the job; it can be pending, enqueued, started, done or
        failed. The start state is pending and the final state is done.
.. attribute:: retry
The current try, starts at 0 and each time the job is executed,
it increases by 1.
.. attribute:: max_retries
The maximum number of retries allowed before the job is
considered as failed.
.. attribute:: args
Arguments passed to the function when executed.
.. attribute:: kwargs
Keyword arguments passed to the function when executed.
.. attribute:: description
Human description of the job.
.. attribute:: func
The python function itself.
.. attribute:: model_name
Odoo model on which the job will run.
.. attribute:: priority
        Priority of the job, 0 being the highest priority.
.. attribute:: date_created
Date and time when the job was created.
.. attribute:: date_enqueued
Date and time when the job was enqueued.
.. attribute:: date_started
Date and time when the job was started.
.. attribute:: date_done
Date and time when the job was done.
.. attribute:: result
A description of the result (for humans).
.. attribute:: exc_name
Exception error name when the job failed.
.. attribute:: exc_message
Exception error message when the job failed.
.. attribute:: exc_info
Exception information (traceback) when the job failed.
.. attribute:: user_id
Odoo user id which created the job
.. attribute:: eta
Estimated Time of Arrival of the job. It will not be executed
before this date/time.
.. attribute:: recordset
Model recordset when we are on a delayed Model method
    .. attribute:: channel

        The complete name of the channel to use to process the job. If
        provided, it overrides the one defined on the job's function.
    .. attribute:: identity_key

        A key referencing the job; multiple jobs with the same key will not
        be added to a channel if an existing job with the same key has not
        yet been started or executed.
"""
@classmethod
def load(cls, env, job_uuid):
"""Read a single job from the Database
Raise an error if the job is not found.
"""
stored = cls.db_records_from_uuids(env, [job_uuid])
if not stored:
raise NoSuchJobError(
                "Job %s no longer exists in the storage." % job_uuid
)
return cls._load_from_db_record(stored)
@classmethod
def load_many(cls, env, job_uuids):
"""Read jobs in batch from the Database
Jobs not found are ignored.
"""
recordset = cls.db_records_from_uuids(env, job_uuids)
return {cls._load_from_db_record(record) for record in recordset}
def add_lock_record(self):
"""
Create row in db to be locked while the job is being performed.
"""
self.env.cr.execute(
"""
INSERT INTO
queue_job_lock (id, queue_job_id)
SELECT
id, id
FROM
queue_job
WHERE
uuid = %s
ON CONFLICT(id)
DO NOTHING;
""",
[self.uuid],
)
def lock(self):
"""
Lock row of job that is being performed
        If a job cannot be locked, it means that the job wasn't started;
        a RetryableJobError is raised.
"""
self.env.cr.execute(
"""
SELECT
*
FROM
queue_job_lock
WHERE
queue_job_id in (
SELECT
id
FROM
queue_job
WHERE
uuid = %s
AND state='started'
)
FOR UPDATE;
""",
[self.uuid],
)
# 1 job should be locked
if 1 != len(self.env.cr.fetchall()):
raise RetryableJobError(
f"Trying to lock job that wasn't started, uuid: {self.uuid}"
)
@classmethod
def _load_from_db_record(cls, job_db_record):
stored = job_db_record
args = stored.args
kwargs = stored.kwargs
method_name = stored.method_name
recordset = stored.records
method = getattr(recordset, method_name)
eta = None
if stored.eta:
eta = stored.eta
job_ = cls(
method,
args=args,
kwargs=kwargs,
priority=stored.priority,
eta=eta,
job_uuid=stored.uuid,
description=stored.name,
channel=stored.channel,
identity_key=stored.identity_key,
)
if stored.date_created:
job_.date_created = stored.date_created
if stored.date_enqueued:
job_.date_enqueued = stored.date_enqueued
if stored.date_started:
job_.date_started = stored.date_started
if stored.date_done:
job_.date_done = stored.date_done
if stored.date_cancelled:
job_.date_cancelled = stored.date_cancelled
job_.state = stored.state
job_.graph_uuid = stored.graph_uuid if stored.graph_uuid else None
job_.result = stored.result if stored.result else None
job_.exc_info = stored.exc_info if stored.exc_info else None
job_.retry = stored.retry
job_.max_retries = stored.max_retries
if stored.company_id:
job_.company_id = stored.company_id.id
job_.identity_key = stored.identity_key
job_.worker_pid = stored.worker_pid
job_.__depends_on_uuids.update(stored.dependencies.get("depends_on", []))
job_.__reverse_depends_on_uuids.update(
stored.dependencies.get("reverse_depends_on", [])
)
return job_
def job_record_with_same_identity_key(self):
"""Check if a job to be executed with the same key exists."""
existing = (
self.env["queue.job"]
.sudo()
.search(
[
("identity_key", "=", self.identity_key),
("state", "in", [WAIT_DEPENDENCIES, PENDING, ENQUEUED]),
],
limit=1,
)
)
return existing
@staticmethod
def db_record_from_uuid(env, job_uuid):
# TODO remove in 15.0 or 16.0
        _logger.debug("deprecated, use 'db_records_from_uuids'")
return Job.db_records_from_uuids(env, [job_uuid])
@staticmethod
def db_records_from_uuids(env, job_uuids):
model = env["queue.job"].sudo()
record = model.search([("uuid", "in", tuple(job_uuids))])
return record.with_env(env).sudo()
def __init__(
self,
func,
args=None,
kwargs=None,
priority=None,
eta=None,
job_uuid=None,
max_retries=None,
description=None,
channel=None,
identity_key=None,
):
"""Create a Job
:param func: function to execute
:type func: function
:param args: arguments for func
:type args: tuple
        :param kwargs: keyword arguments for func
:type kwargs: dict
        :param priority: priority of the job;
            the smaller the number, the higher the priority
:type priority: int
:param eta: the job can be executed only after this datetime
(or now + timedelta)
:type eta: datetime or timedelta
:param job_uuid: UUID of the job
:param max_retries: maximum number of retries before giving up and set
the job state to 'failed'. A value of 0 means infinite retries.
:param description: human description of the job. If None, description
is computed from the function doc or name
:param channel: The complete channel name to use to process the job.
:param identity_key: A hash to uniquely identify a job, or a function
that returns this hash (the function takes the job
as argument)
"""
if args is None:
args = ()
if isinstance(args, list):
args = tuple(args)
assert isinstance(args, tuple), "%s: args are not a tuple" % args
if kwargs is None:
kwargs = {}
assert isinstance(kwargs, dict), "%s: kwargs are not a dict" % kwargs
if not _is_model_method(func):
raise TypeError("Job accepts only methods of Models")
recordset = func.__self__
env = recordset.env
self.method_name = func.__name__
self.recordset = recordset
self.env = env
self.job_model = self.env["queue.job"]
self.job_model_name = "queue.job"
self.job_config = (
self.env["queue.job.function"].sudo().job_config(self.job_function_name)
)
self.state = PENDING
self.retry = 0
if max_retries is None:
self.max_retries = DEFAULT_MAX_RETRIES
else:
self.max_retries = max_retries
self._uuid = job_uuid
self.graph_uuid = None
self.args = args
self.kwargs = kwargs
self.__depends_on_uuids = set()
self.__reverse_depends_on_uuids = set()
self._depends_on = set()
self._reverse_depends_on = weakref.WeakSet()
self.priority = priority
if self.priority is None:
self.priority = DEFAULT_PRIORITY
self.date_created = datetime.now()
self._description = description
if isinstance(identity_key, str):
self._identity_key = identity_key
self._identity_key_func = None
else:
# we'll compute the key on the fly when called
# from the function
self._identity_key = None
self._identity_key_func = identity_key
self.date_enqueued = None
self.date_started = None
self.date_done = None
self.date_cancelled = None
self.result = None
self.exc_name = None
self.exc_message = None
self.exc_info = None
if "company_id" in env.context:
company_id = env.context["company_id"]
else:
company_id = env.company.id
self.company_id = company_id
self._eta = None
self.eta = eta
self.channel = channel
self.worker_pid = None
def add_depends(self, jobs):
if self in jobs:
raise ValueError("job cannot depend on itself")
self.__depends_on_uuids |= {j.uuid for j in jobs}
self._depends_on.update(jobs)
for parent in jobs:
parent.__reverse_depends_on_uuids.add(self.uuid)
parent._reverse_depends_on.add(self)
if any(j.state != DONE for j in jobs):
self.state = WAIT_DEPENDENCIES
def perform(self):
"""Execute the job.
The job is executed with the user who initiated it.
"""
self.retry += 1
try:
self.result = self.func(*tuple(self.args), **self.kwargs)
except RetryableJobError as err:
if err.ignore_retry:
self.retry -= 1
raise
elif not self.max_retries: # infinite retries
raise
elif self.retry >= self.max_retries:
type_, value, traceback = sys.exc_info()
# change the exception type but keep the original
# traceback and message:
# http://blog.ianbicking.org/2007/09/12/re-raising-exceptions/
new_exc = FailedJobError(
"Max. retries (%d) reached: %s" % (self.max_retries, value or type_)
)
raise new_exc from err
raise
return self.result
def _get_common_dependent_jobs_query(self):
return """
UPDATE queue_job
SET state = %s
FROM (
SELECT child.id, array_agg(parent.state) as parent_states
FROM queue_job job
JOIN LATERAL
json_array_elements_text(
job.dependencies::json->'reverse_depends_on'
) child_deps ON true
JOIN queue_job child
ON child.graph_uuid = job.graph_uuid
AND child.uuid = child_deps
JOIN LATERAL
json_array_elements_text(
child.dependencies::json->'depends_on'
) parent_deps ON true
JOIN queue_job parent
ON parent.graph_uuid = job.graph_uuid
AND parent.uuid = parent_deps
WHERE job.uuid = %s
GROUP BY child.id
) jobs
WHERE
queue_job.id = jobs.id
AND %s = ALL(jobs.parent_states)
AND state = %s;
"""
def enqueue_waiting(self):
sql = self._get_common_dependent_jobs_query()
self.env.cr.execute(sql, (PENDING, self.uuid, DONE, WAIT_DEPENDENCIES))
self.env["queue.job"].invalidate_model(["state"])
def cancel_dependent_jobs(self):
sql = self._get_common_dependent_jobs_query()
self.env.cr.execute(sql, (CANCELLED, self.uuid, CANCELLED, WAIT_DEPENDENCIES))
self.env["queue.job"].invalidate_model(["state"])
def store(self):
"""Store the Job"""
job_model = self.env["queue.job"]
# The sentinel is used to prevent edition of sensitive fields (such as
# method_name) from RPC methods.
edit_sentinel = job_model.EDIT_SENTINEL
db_record = self.db_record()
if db_record:
db_record.with_context(_job_edit_sentinel=edit_sentinel).write(
self._store_values()
)
else:
job_model.with_context(_job_edit_sentinel=edit_sentinel).sudo().create(
self._store_values(create=True)
)
def _store_values(self, create=False):
vals = {
"state": self.state,
"priority": self.priority,
"retry": self.retry,
"max_retries": self.max_retries,
"exc_name": self.exc_name,
"exc_message": self.exc_message,
"exc_info": self.exc_info,
"company_id": self.company_id,
"result": str(self.result) if self.result else False,
"date_enqueued": False,
"date_started": False,
"date_done": False,
"exec_time": False,
"date_cancelled": False,
"eta": False,
"identity_key": False,
"worker_pid": self.worker_pid,
"graph_uuid": self.graph_uuid,
}
if self.date_enqueued:
vals["date_enqueued"] = self.date_enqueued
if self.date_started:
vals["date_started"] = self.date_started
if self.date_done:
vals["date_done"] = self.date_done
if self.exec_time:
vals["exec_time"] = self.exec_time
if self.date_cancelled:
vals["date_cancelled"] = self.date_cancelled
if self.eta:
vals["eta"] = self.eta
if self.identity_key:
vals["identity_key"] = self.identity_key
dependencies = {
"depends_on": [parent.uuid for parent in self.depends_on],
"reverse_depends_on": [
children.uuid for children in self.reverse_depends_on
],
}
vals["dependencies"] = dependencies
if create:
vals.update(
{
"user_id": self.env.uid,
"channel": self.channel,
# The following values must never be modified after the
# creation of the job
"uuid": self.uuid,
"name": self.description,
"func_string": self.func_string,
"date_created": self.date_created,
"model_name": self.recordset._name,
"method_name": self.method_name,
"job_function_id": self.job_config.job_function_id,
"channel_method_name": self.job_function_name,
"records": self.recordset,
"args": self.args,
"kwargs": self.kwargs,
}
)
vals_from_model = self._store_values_from_model()
# Sanitize values: make sure you cannot screw core values
vals_from_model = {k: v for k, v in vals_from_model.items() if k not in vals}
vals.update(vals_from_model)
return vals
def _store_values_from_model(self):
vals = {}
value_handlers_candidates = (
"_job_store_values_for_" + self.method_name,
"_job_store_values",
)
for candidate in value_handlers_candidates:
handler = getattr(self.recordset, candidate, None)
if handler is not None:
vals = handler(self)
return vals
@property
def func_string(self):
model = repr(self.recordset)
args = [repr(arg) for arg in self.args]
kwargs = ["{}={!r}".format(key, val) for key, val in self.kwargs.items()]
all_args = ", ".join(args + kwargs)
return "{}.{}({})".format(model, self.method_name, all_args)
def __eq__(self, other):
return self.uuid == other.uuid
def __hash__(self):
return self.uuid.__hash__()
def db_record(self):
return self.db_records_from_uuids(self.env, [self.uuid])
@property
def func(self):
recordset = self.recordset.with_context(job_uuid=self.uuid)
return getattr(recordset, self.method_name)
@property
def job_function_name(self):
func_model = self.env["queue.job.function"].sudo()
return func_model.job_function_name(self.recordset._name, self.method_name)
@property
def identity_key(self):
if self._identity_key is None:
if self._identity_key_func:
self._identity_key = self._identity_key_func(self)
return self._identity_key
@identity_key.setter
def identity_key(self, value):
if isinstance(value, str):
self._identity_key = value
self._identity_key_func = None
else:
# we'll compute the key on the fly when called
# from the function
self._identity_key = None
self._identity_key_func = value
@property
def depends_on(self):
if not self._depends_on:
self._depends_on = Job.load_many(self.env, self.__depends_on_uuids)
return self._depends_on
@property
def reverse_depends_on(self):
if not self._reverse_depends_on:
self._reverse_depends_on = Job.load_many(
self.env, self.__reverse_depends_on_uuids
)
return set(self._reverse_depends_on)
@property
def description(self):
if self._description:
return self._description
elif self.func.__doc__:
return self.func.__doc__.splitlines()[0].strip()
else:
return "{}.{}".format(self.model_name, self.func.__name__)
@property
def uuid(self):
"""Job ID, this is an UUID"""
if self._uuid is None:
self._uuid = str(uuid.uuid4())
return self._uuid
@property
def model_name(self):
return self.recordset._name
@property
def user_id(self):
return self.recordset.env.uid
@property
def eta(self):
return self._eta
@eta.setter
def eta(self, value):
if not value:
self._eta = None
elif isinstance(value, timedelta):
self._eta = datetime.now() + value
elif isinstance(value, int):
self._eta = datetime.now() + timedelta(seconds=value)
else:
self._eta = value
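The ``eta`` setter above accepts a falsy value, a ``timedelta``, an integer number of seconds, or an absolute ``datetime``. A self-contained sketch of the same normalization (the function name is illustrative, not part of the module):

```python
from datetime import datetime, timedelta

def normalize_eta(value, now=None):
    """Normalize an ETA the way the Job.eta setter does (sketch)."""
    now = now or datetime.now()
    if not value:
        return None  # falsy values clear the ETA
    if isinstance(value, timedelta):
        return now + value  # relative delay
    if isinstance(value, int):
        return now + timedelta(seconds=value)  # seconds from now
    return value  # assumed to be an absolute datetime
```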
@property
def channel(self):
return self._channel or self.job_config.channel
@channel.setter
def channel(self, value):
self._channel = value
@property
def exec_time(self):
if self.date_done and self.date_started:
return (self.date_done - self.date_started).total_seconds()
return None
def set_pending(self, result=None, reset_retry=True):
if any(j.state != DONE for j in self.depends_on):
self.state = WAIT_DEPENDENCIES
else:
self.state = PENDING
self.date_enqueued = None
self.date_started = None
self.date_done = None
self.worker_pid = None
self.date_cancelled = None
if reset_retry:
self.retry = 0
if result is not None:
self.result = result
def set_enqueued(self):
self.state = ENQUEUED
self.date_enqueued = datetime.now()
self.date_started = None
self.worker_pid = None
def set_started(self):
self.state = STARTED
self.date_started = datetime.now()
self.worker_pid = os.getpid()
self.add_lock_record()
def set_done(self, result=None):
self.state = DONE
self.exc_name = None
self.exc_info = None
self.date_done = datetime.now()
if result is not None:
self.result = result
def set_cancelled(self, result=None):
self.state = CANCELLED
self.date_cancelled = datetime.now()
if result is not None:
self.result = result
def set_failed(self, **kw):
self.state = FAILED
for k, v in kw.items():
if v is not None:
setattr(self, k, v)
def __repr__(self):
return "<Job %s, priority:%d>" % (self.uuid, self.priority)
def _get_retry_seconds(self, seconds=None):
retry_pattern = self.job_config.retry_pattern
if not seconds and retry_pattern:
# ordered from lower to higher count of retries
patt = sorted(retry_pattern.items(), key=lambda t: t[0])
seconds = RETRY_INTERVAL
for retry_count, postpone_seconds in patt:
if self.retry >= retry_count:
seconds = postpone_seconds
else:
break
elif not seconds:
seconds = RETRY_INTERVAL
if isinstance(seconds, (list, tuple)):
seconds = randint(seconds[0], seconds[1])
return seconds
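The retry pattern consulted above maps a "from this retry count onward" threshold to a postponement in seconds (or to a range, picked at random). A standalone sketch of the same selection rule (the function name and default are illustrative; the dict shape follows ``queue.job.function``'s retry pattern):

```python
from random import randint

def retry_seconds(retry, retry_pattern, default=600):
    """Pick the postponement for the given retry count (sketch).

    e.g. {1: 10, 5: 20, 10: (30, 60)}: retries 1-4 wait 10s,
    retries 5-9 wait 20s, and from the 10th retry onward a random
    delay between 30 and 60 seconds is used.
    """
    seconds = default
    for retry_count, postpone_seconds in sorted(retry_pattern.items()):
        if retry >= retry_count:
            seconds = postpone_seconds
        else:
            break
    if isinstance(seconds, (list, tuple)):
        seconds = randint(seconds[0], seconds[1])
    return seconds
```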
def postpone(self, result=None, seconds=None):
"""Postpone the job
Write an estimated time arrival to n seconds
later than now. Used when an retryable exception
want to retry a job later.
"""
eta_seconds = self._get_retry_seconds(seconds)
self.eta = timedelta(seconds=eta_seconds)
self.exc_name = None
self.exc_info = None
if result is not None:
self.result = result
def related_action(self):
record = self.db_record()
if not self.job_config.related_action_enable:
return None
funcname = self.job_config.related_action_func_name
if not funcname:
funcname = record._default_related_action
if not isinstance(funcname, str):
raise ValueError(
"related_action must be the name of the "
"method on queue.job as string"
)
action = getattr(record, funcname)
action_kwargs = self.job_config.related_action_kwargs
return action(**action_kwargs)
def _is_model_method(func):
return inspect.ismethod(func) and isinstance(
func.__self__.__class__, odoo.models.MetaModel
)
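Putting the pieces above together: jobs are normally not instantiated directly but created through queue_job's ``with_delay()`` API, which forwards the same parameters (``priority``, ``eta``, ``max_retries``, ``identity_key``, ...) to the ``Job`` constructor. A hedged usage sketch; the model method and identity key below are made up for illustration:

```python
# Hypothetical usage of the delayed-job API; action_sync_to_crm and the
# identity key are illustrative, not part of queue_job itself.
def enqueue_partner_sync(env):
    partners = env["res.partner"].search([("customer_rank", ">", 0)])
    partners.with_delay(
        priority=5,      # smaller number = higher priority
        eta=60,          # run at the earliest 60 seconds from now
        max_retries=3,   # set the job to 'failed' after 3 retries
        identity_key="sync-customers",  # skip if an identical job is pending
    ).action_sync_to_crm()
```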

@@ -0,0 +1,163 @@
# Copyright (c) 2015-2016 ACSONE SA/NV (<http://acsone.eu>)
# Copyright 2016 Camptocamp SA
# License LGPL-3.0 or later (http://www.gnu.org/licenses/lgpl.html)
import logging
from threading import Thread
import time
from odoo.service import server
from odoo.tools import config
try:
from odoo.addons.server_environment import serv_config
if serv_config.has_section("queue_job"):
queue_job_config = serv_config["queue_job"]
else:
queue_job_config = {}
except ImportError:
queue_job_config = config.misc.get("queue_job", {})
from .runner import QueueJobRunner, _channels
_logger = logging.getLogger(__name__)
START_DELAY = 5
# Here we monkey patch the Odoo server to start the job runner thread
# in the main server process (and not in forked workers). This is
# very easy to deploy as we don't need another startup script.
class QueueJobRunnerThread(Thread):
def __init__(self):
Thread.__init__(self)
self.daemon = True
self.runner = QueueJobRunner.from_environ_or_config()
def run(self):
# sleep a bit to let the workers start at ease
time.sleep(START_DELAY)
self.runner.run()
def stop(self):
self.runner.stop()
class WorkerJobRunner(server.Worker):
"""Jobrunner workers"""
def __init__(self, multi):
super().__init__(multi)
self.watchdog_timeout = None
self.runner = QueueJobRunner.from_environ_or_config()
self._recover = False
def sleep(self):
pass
def signal_handler(self, sig, frame): # pylint: disable=missing-return
_logger.debug("WorkerJobRunner (%s) received signal %s", self.pid, sig)
super().signal_handler(sig, frame)
self.runner.stop()
def process_work(self):
if self._recover:
_logger.info("WorkerJobRunner (%s) runner is reinitialized", self.pid)
self.runner = QueueJobRunner.from_environ_or_config()
self._recover = False
_logger.debug("WorkerJobRunner (%s) starting up", self.pid)
time.sleep(START_DELAY)
self.runner.run()
def signal_time_expired_handler(self, n, stack):
_logger.info(
"Worker (%d) CPU time limit (%s) reached.Stop gracefully and recover",
self.pid,
config["limit_time_cpu"],
)
self._recover = True
self.runner.stop()
runner_thread = None
def _is_runner_enabled():
return not _channels().strip().startswith("root:0")
def _start_runner_thread(server_type):
global runner_thread
if not config["stop_after_init"]:
if _is_runner_enabled():
_logger.info("starting jobrunner thread (in %s)", server_type)
runner_thread = QueueJobRunnerThread()
runner_thread.start()
else:
_logger.info(
"jobrunner thread (in %s) NOT started, "
"because the root channel's capacity is set to 0",
server_type,
)
orig_prefork__init__ = server.PreforkServer.__init__
orig_prefork_process_spawn = server.PreforkServer.process_spawn
orig_prefork_worker_pop = server.PreforkServer.worker_pop
orig_threaded_start = server.ThreadedServer.start
orig_threaded_stop = server.ThreadedServer.stop
def prefork__init__(server, app):
res = orig_prefork__init__(server, app)
server.jobrunner = {}
return res
def prefork_process_spawn(server):
orig_prefork_process_spawn(server)
if not hasattr(server, "jobrunner"):
# if 'queue_job' is not in server wide modules, PreforkServer is
# not initialized with a 'jobrunner' attribute, skip this
return
if not server.jobrunner and _is_runner_enabled():
server.worker_spawn(WorkerJobRunner, server.jobrunner)
def prefork_worker_pop(server, pid):
res = orig_prefork_worker_pop(server, pid)
if not hasattr(server, "jobrunner"):
# if 'queue_job' is not in server wide modules, PreforkServer is
# not initialized with a 'jobrunner' attribute, skip this
return res
if pid in server.jobrunner:
server.jobrunner.pop(pid)
return res
def threaded_start(server, *args, **kwargs):
res = orig_threaded_start(server, *args, **kwargs)
_start_runner_thread("threaded server")
return res
def threaded_stop(server):
global runner_thread
if runner_thread:
runner_thread.stop()
res = orig_threaded_stop(server)
if runner_thread:
runner_thread.join()
runner_thread = None
return res
server.PreforkServer.__init__ = prefork__init__
server.PreforkServer.process_spawn = prefork_process_spawn
server.PreforkServer.worker_pop = prefork_worker_pop
server.ThreadedServer.start = threaded_start
server.ThreadedServer.stop = threaded_stop

@@ -0,0 +1,13 @@
import odoo
from .runner import QueueJobRunner
def main():
odoo.tools.config.parse_config()
runner = QueueJobRunner.from_environ_or_config()
runner.run()
if __name__ == "__main__":
main()

@@ -0,0 +1,629 @@
# Copyright (c) 2015-2016 ACSONE SA/NV (<http://acsone.eu>)
# Copyright 2015-2016 Camptocamp SA
# License LGPL-3.0 or later (http://www.gnu.org/licenses/lgpl.html)
"""
What is the job runner?
-----------------------
The job runner is the main process managing the dispatch of delayed jobs to
available Odoo workers.
How does it work?
-----------------
* It starts as a thread in the Odoo main process or as a new worker.
* It receives postgres NOTIFY messages each time jobs are
added or updated in the queue_job table.
* It maintains an in-memory priority queue of jobs that
is populated from the queue_job tables in all databases.
* It does not run jobs itself, but asks Odoo to run them through an
anonymous ``/queue_job/runjob`` HTTP request. [1]_
How to use it?
--------------
* Optionally adjust your configuration through environment variables:
- ``ODOO_QUEUE_JOB_CHANNELS=root:4`` (or any other channels
configuration), default ``root:1``.
- ``ODOO_QUEUE_JOB_SCHEME=https``, default ``http``.
- ``ODOO_QUEUE_JOB_HOST=load-balancer``, default ``http_interface``
or ``localhost`` if unset.
- ``ODOO_QUEUE_JOB_PORT=443``, default ``http_port`` or 8069 if unset.
- ``ODOO_QUEUE_JOB_HTTP_AUTH_USER=jobrunner``, default empty.
- ``ODOO_QUEUE_JOB_HTTP_AUTH_PASSWORD=s3cr3t``, default empty.
- ``ODOO_QUEUE_JOB_JOBRUNNER_DB_HOST=master-db``, default ``db_host``
or ``False`` if unset.
- ``ODOO_QUEUE_JOB_JOBRUNNER_DB_PORT=5432``, default ``db_port``
or ``False`` if unset.
- ``ODOO_QUEUE_JOB_JOBRUNNER_DB_USER=userdb``, default ``db_user``
or ``False`` if unset.
- ``ODOO_QUEUE_JOB_JOBRUNNER_DB_PASSWORD=passdb``, default ``db_password``
or ``False`` if unset.
* Alternatively, configure the channels through the Odoo configuration
file, like:
.. code-block:: ini
[queue_job]
channels = root:4
scheme = https
host = load-balancer
port = 443
http_auth_user = jobrunner
http_auth_password = s3cr3t
jobrunner_db_host = master-db
jobrunner_db_port = 5432
jobrunner_db_user = userdb
jobrunner_db_password = passdb
* Or, if using ``anybox.recipe.odoo``, add this to your buildout configuration:
.. code-block:: ini
[odoo]
recipe = anybox.recipe.odoo
(...)
queue_job.channels = root:4
queue_job.scheme = https
queue_job.host = load-balancer
queue_job.port = 443
queue_job.http_auth_user = jobrunner
queue_job.http_auth_password = s3cr3t
* Start Odoo with ``--load=web,web_kanban,queue_job``
and ``--workers`` greater than 1 [2]_, or set the ``server_wide_modules``
option in the Odoo configuration file:
.. code-block:: ini
[options]
(...)
workers = 4
server_wide_modules = web,web_kanban,queue_job
(...)
* Or, if using ``anybox.recipe.odoo``:
.. code-block:: ini
[odoo]
recipe = anybox.recipe.odoo
(...)
options.workers = 4
options.server_wide_modules = web,web_kanban,queue_job
* Confirm the runner is starting correctly by checking the odoo log file:
.. code-block:: none
...INFO...queue_job.jobrunner.runner: starting
...INFO...queue_job.jobrunner.runner: initializing database connections
...INFO...queue_job.jobrunner.runner: queue job runner ready for db <dbname>
...INFO...queue_job.jobrunner.runner: database connections ready
* Create jobs (e.g. using base_import_async) and observe that they
start immediately and in parallel.
* Tip: to enable debug logging for the queue job, use
``--log-handler=odoo.addons.queue_job:DEBUG``
Caveat
------
* After creating a new database or installing queue_job on an
existing database, Odoo must be restarted for the runner to detect it.
.. rubric:: Footnotes
.. [1] From a security standpoint, it is safe to have an anonymous HTTP
request because this request will only run jobs that are already
enqueued.
.. [2] It works with the threaded Odoo server too, although this way
of running Odoo is obviously not for production purposes.
"""
import datetime
import logging
import os
import selectors
import threading
import time
from contextlib import closing, contextmanager
import psycopg2
import requests
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
import odoo
from odoo.tools import config
from . import queue_job_config
from .channels import ENQUEUED, NOT_DONE, ChannelManager
SELECT_TIMEOUT = 60
ERROR_RECOVERY_DELAY = 5
PG_ADVISORY_LOCK_ID = 2293787760715711918
_logger = logging.getLogger(__name__)
select = selectors.DefaultSelector
class MasterElectionLost(Exception):
pass
# Unfortunately, it is not possible to extend the Odoo
# server command line arguments, so we resort to environment variables
# to configure the runner (channels mostly).
#
# On the other hand, the odoo configuration file can be extended at will,
# so we check it in addition to the environment variables.
def _channels():
return (
os.environ.get("ODOO_QUEUE_JOB_CHANNELS")
or queue_job_config.get("channels")
or "root:1"
)
def _datetime_to_epoch(dt):
# important: this must return the same as postgresql
# EXTRACT(EPOCH FROM TIMESTAMP dt)
return (dt - datetime.datetime(1970, 1, 1)).total_seconds()
def _odoo_now():
dt = datetime.datetime.utcnow()
return _datetime_to_epoch(dt)
def _connection_info_for(db_name):
db_or_uri, connection_info = odoo.sql_db.connection_info_for(db_name)
for p in ("host", "port", "user", "password"):
cfg = os.environ.get(
"ODOO_QUEUE_JOB_JOBRUNNER_DB_%s" % p.upper()
) or queue_job_config.get("jobrunner_db_" + p)
if cfg:
connection_info[p] = cfg
return connection_info
def _async_http_get(scheme, host, port, user, password, db_name, job_uuid):
# TODO: better way to HTTP GET asynchronously (grequest, ...)?
# this could be done with asyncio, aiohttp and aiopg
def urlopen():
url = "{}://{}:{}/queue_job/runjob?db={}&job_uuid={}".format(
scheme, host, port, db_name, job_uuid
)
# pylint: disable=except-pass
try:
auth = None
if user:
auth = (user, password)
# we are not interested in the result, so we set a short timeout
# but not too short so we trap and log hard configuration errors
response = requests.get(url, timeout=1, auth=auth)
# raise_for_status will result in either nothing, a Client Error
# for HTTP Response codes between 400 and 500 or a Server Error
# for codes between 500 and 600
response.raise_for_status()
except requests.Timeout:
# A timeout is normal behaviour; it shouldn't be logged as an exception
pass
except Exception:
_logger.exception("exception in GET %s", url)
thread = threading.Thread(target=urlopen)
thread.daemon = True
thread.start()
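The helper above is a fire-and-forget pattern: spawn a daemon thread, issue the request, and swallow the expected timeout. The thread-spawning part is generic; a minimal standalone sketch (the function name is illustrative):

```python
import threading

def fire_and_forget(target, *args):
    """Run target(*args) in a daemon thread, ignoring its result (sketch)."""
    thread = threading.Thread(target=target, args=args, daemon=True)
    thread.start()
    return thread
```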
class Database:
def __init__(self, db_name):
self.db_name = db_name
connection_info = _connection_info_for(db_name)
self.conn = psycopg2.connect(**connection_info)
try:
self.conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
self.has_queue_job = self._has_queue_job()
if self.has_queue_job:
self._acquire_master_lock()
self._initialize()
except BaseException:
self.close()
raise
def close(self):
# pylint: disable=except-pass
# if close fails, it is either because the connection is already closed
# (and we don't care), or for another reason, in which case it will be
# closed on del anyway
try:
self.conn.close()
except Exception:
pass
self.conn = None
def _acquire_master_lock(self):
"""Acquire the master runner lock or raise MasterElectionLost"""
with closing(self.conn.cursor()) as cr:
cr.execute("SELECT pg_try_advisory_lock(%s)", (PG_ADVISORY_LOCK_ID,))
if not cr.fetchone()[0]:
msg = f"could not acquire master runner lock on {self.db_name}"
raise MasterElectionLost(msg)
def _has_queue_job(self):
with closing(self.conn.cursor()) as cr:
cr.execute(
"SELECT 1 FROM pg_tables WHERE tablename=%s", ("ir_module_module",)
)
if not cr.fetchone():
_logger.debug("%s doesn't seem to be an odoo db", self.db_name)
return False
cr.execute(
"SELECT 1 FROM ir_module_module WHERE name=%s AND state=%s",
("queue_job", "installed"),
)
if not cr.fetchone():
_logger.debug("queue_job is not installed for db %s", self.db_name)
return False
cr.execute(
"""SELECT COUNT(1)
FROM information_schema.triggers
WHERE event_object_table = %s
AND trigger_name = %s""",
("queue_job", "queue_job_notify"),
)
if cr.fetchone()[0] != 3: # INSERT, DELETE, UPDATE
_logger.error(
"queue_job_notify trigger is missing in db %s", self.db_name
)
return False
return True
def _initialize(self):
with closing(self.conn.cursor()) as cr:
cr.execute("LISTEN queue_job")
@contextmanager
def select_jobs(self, where, args):
# pylint: disable=sql-injection
# the checker thinks we are injecting values but we are not, we are
# adding the where conditions, values are added later properly with
# parameters
query = (
"SELECT channel, uuid, id as seq, date_created, "
"priority, EXTRACT(EPOCH FROM eta), state "
"FROM queue_job WHERE %s" % (where,)
)
with closing(self.conn.cursor("select_jobs", withhold=True)) as cr:
cr.execute(query, args)
yield cr
def keep_alive(self):
query = "SELECT 1"
with closing(self.conn.cursor()) as cr:
cr.execute(query)
def set_job_enqueued(self, uuid):
with closing(self.conn.cursor()) as cr:
cr.execute(
"UPDATE queue_job SET state=%s, "
"date_enqueued=date_trunc('seconds', "
" now() at time zone 'utc') "
"WHERE uuid=%s",
(ENQUEUED, uuid),
)
def _query_requeue_dead_jobs(self):
return """
UPDATE
queue_job
SET
state=(
CASE
WHEN
max_retries IS NOT NULL AND
max_retries != 0 AND -- infinite retries if max_retries is 0
retry IS NOT NULL AND
retry>max_retries
THEN 'failed'
ELSE 'pending'
END),
retry=(CASE WHEN state='started' THEN COALESCE(retry,0)+1 ELSE retry END),
exc_name=(
CASE
WHEN
max_retries IS NOT NULL AND
max_retries != 0 AND -- infinite retries if max_retries is 0
retry IS NOT NULL AND
retry>max_retries
THEN 'JobFoundDead'
ELSE exc_name
END),
exc_info=(
CASE
WHEN
max_retries IS NOT NULL AND
max_retries != 0 AND -- infinite retries if max_retries is 0
retry IS NOT NULL AND
retry>max_retries
THEN 'Job found dead after too many retries'
ELSE exc_info
END)
WHERE
id in (
SELECT
queue_job_id
FROM
queue_job_lock
WHERE
queue_job_id in (
SELECT
id
FROM
queue_job
WHERE
state IN ('enqueued','started')
AND date_enqueued <
(now() AT TIME ZONE 'utc' - INTERVAL '10 sec')
)
FOR UPDATE SKIP LOCKED
)
RETURNING uuid
"""
def requeue_dead_jobs(self):
"""
Set started and enqueued jobs but not locked to pending
A job is locked when it's being executed
When a job is killed, it releases the lock
If the number of retries exceeds the number of max retries,
the job is set as 'failed' with the error 'JobFoundDead'.
Adding a buffer on 'date_enqueued' to check
that it has been enqueued for more than 10sec.
This prevents from requeuing jobs before they are actually started.
When Odoo shuts down normally, it waits for running jobs to finish.
However, when the Odoo server crashes or is otherwise force-stopped,
running jobs are interrupted while the runner has no chance to know
they have been aborted.
"""
with closing(self.conn.cursor()) as cr:
query = self._query_requeue_dead_jobs()
cr.execute(query)
for (uuid,) in cr.fetchall():
_logger.warning("Re-queued dead job with uuid: %s", uuid)
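The three CASE expressions in the query above all apply one rule, evaluated against the pre-update row (SQL UPDATE expressions see the old values, so the failed/pending decision uses the old retry counter). Expressed as a standalone Python predicate (the function name is illustrative):

```python
def requeue_decision(state, retry, max_retries):
    """Mirror the dead-job UPDATE: return (new_state, new_retry) (sketch).

    max_retries == 0 (or NULL/None) means infinite retries; a 'started'
    job gets its retry counter incremented because the interrupted
    attempt counts as one try.
    """
    exceeded = (
        max_retries is not None and max_retries != 0
        and retry is not None and retry > max_retries
    )
    new_state = "failed" if exceeded else "pending"
    new_retry = (retry or 0) + 1 if state == "started" else retry
    return new_state, new_retry
```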
class QueueJobRunner:
def __init__(
self,
scheme="http",
host="localhost",
port=8069,
user=None,
password=None,
channel_config_string=None,
):
self.scheme = scheme
self.host = host
self.port = port
self.user = user
self.password = password
self.channel_manager = ChannelManager()
if channel_config_string is None:
channel_config_string = _channels()
self.channel_manager.simple_configure(channel_config_string)
self.db_by_name = {}
self._stop = False
self._stop_pipe = os.pipe()
def __del__(self):
# pylint: disable=except-pass
try:
os.close(self._stop_pipe[0])
except OSError:
pass
try:
os.close(self._stop_pipe[1])
except OSError:
pass
@classmethod
def from_environ_or_config(cls):
scheme = os.environ.get("ODOO_QUEUE_JOB_SCHEME") or queue_job_config.get(
"scheme"
)
host = (
os.environ.get("ODOO_QUEUE_JOB_HOST")
or queue_job_config.get("host")
or config["http_interface"]
)
port = (
os.environ.get("ODOO_QUEUE_JOB_PORT")
or queue_job_config.get("port")
or config["http_port"]
)
user = os.environ.get("ODOO_QUEUE_JOB_HTTP_AUTH_USER") or queue_job_config.get(
"http_auth_user"
)
password = os.environ.get(
"ODOO_QUEUE_JOB_HTTP_AUTH_PASSWORD"
) or queue_job_config.get("http_auth_password")
runner = cls(
scheme=scheme or "http",
host=host or "localhost",
port=port or 8069,
user=user,
password=password,
)
return runner
def get_db_names(self):
if config["db_name"]:
db_names = config["db_name"].split(",")
else:
db_names = odoo.service.db.list_dbs(True)
return db_names
def close_databases(self, remove_jobs=True):
for db_name, db in self.db_by_name.items():
try:
if remove_jobs:
self.channel_manager.remove_db(db_name)
db.close()
except Exception:
_logger.warning("error closing database %s", db_name, exc_info=True)
self.db_by_name = {}
def initialize_databases(self):
for db_name in sorted(self.get_db_names()):
# sorting is important to avoid deadlocks in acquiring the master lock
db = Database(db_name)
if db.has_queue_job:
self.db_by_name[db_name] = db
with db.select_jobs("state in %s", (NOT_DONE,)) as cr:
for job_data in cr:
self.channel_manager.notify(db_name, *job_data)
_logger.info("queue job runner ready for db %s", db_name)
else:
db.close()
def requeue_dead_jobs(self):
for db in self.db_by_name.values():
if db.has_queue_job:
db.requeue_dead_jobs()
def run_jobs(self):
now = _odoo_now()
for job in self.channel_manager.get_jobs_to_run(now):
if self._stop:
break
_logger.info("asking Odoo to run job %s on db %s", job.uuid, job.db_name)
self.db_by_name[job.db_name].set_job_enqueued(job.uuid)
_async_http_get(
self.scheme,
self.host,
self.port,
self.user,
self.password,
job.db_name,
job.uuid,
)
def process_notifications(self):
for db in self.db_by_name.values():
if not db.conn.notifies:
# If there is no activity in the queue_job table it seems that
# TCP keepalives are not sent (in that very specific scenario),
# causing some intermediaries (such as haproxy) to close the
# connection, making the jobrunner restart on a socket error
db.keep_alive()
while db.conn.notifies:
if self._stop:
break
notification = db.conn.notifies.pop()
uuid = notification.payload
with db.select_jobs("uuid = %s", (uuid,)) as cr:
job_datas = cr.fetchone()
if job_datas:
self.channel_manager.notify(db.db_name, *job_datas)
else:
self.channel_manager.remove_job(uuid)
def wait_notification(self):
for db in self.db_by_name.values():
if db.conn.notifies:
# something is going on in the queue, no need to wait
return
# wait for something to happen in the queue_job tables
# we'll select() on database connections and the stop pipe
conns = [db.conn for db in self.db_by_name.values()]
conns.append(self._stop_pipe[0])
# look if the channels specify a wakeup time
wakeup_time = self.channel_manager.get_wakeup_time()
if not wakeup_time:
# this could very well be no timeout at all, because
# any activity in the job queue will wake us up, but
# let's have a timeout anyway, just to be safe
timeout = SELECT_TIMEOUT
else:
timeout = wakeup_time - _odoo_now()
# wait for a notification or a timeout;
# if timeout is negative (ie wakeup time in the past),
# do not wait; this should rarely happen
# because of how get_wakeup_time is designed; actually
# if timeout remains a large negative number, it is most
# probably a bug
_logger.debug("select() timeout: %.2f sec", timeout)
if timeout > 0:
if conns and not self._stop:
with select() as sel:
for conn in conns:
sel.register(conn, selectors.EVENT_READ)
events = sel.select(timeout=timeout)
for key, _mask in events:
if key.fileobj == self._stop_pipe[0]:
# stop-pipe is not a conn so doesn't need poll()
continue
key.fileobj.poll()
def stop(self):
_logger.info("graceful stop requested")
self._stop = True
# wakeup the select() in wait_notification
os.write(self._stop_pipe[1], b".")
def run(self):
_logger.info("starting")
while not self._stop:
# outer loop does exception recovery
try:
_logger.debug("initializing database connections")
# TODO: how to detect new databases or databases
# on which queue_job is installed after server start?
self.initialize_databases()
_logger.info("database connections ready")
# inner loop does the normal processing
while not self._stop:
self.requeue_dead_jobs()
self.process_notifications()
self.run_jobs()
self.wait_notification()
except KeyboardInterrupt:
self.stop()
except InterruptedError:
# Interrupted system call, i.e. KeyboardInterrupt during select
self.stop()
except MasterElectionLost as e:
_logger.debug(
"master election lost: %s, sleeping %ds and retrying",
e,
ERROR_RECOVERY_DELAY,
)
self.close_databases()
time.sleep(ERROR_RECOVERY_DELAY)
except Exception:
_logger.exception(
"exception: sleeping %ds and retrying", ERROR_RECOVERY_DELAY
)
self.close_databases()
time.sleep(ERROR_RECOVERY_DELAY)
self.close_databases(remove_jobs=False)
_logger.info("stopped")

@@ -0,0 +1,47 @@
# License LGPL-3.0 or later (http://www.gnu.org/licenses/lgpl.html)
import logging
from odoo import SUPERUSER_ID, api
_logger = logging.getLogger(__name__)
def migrate(cr, version):
with api.Environment.manage():
env = api.Environment(cr, SUPERUSER_ID, {})
_logger.info("Computing exception name for failed jobs")
_compute_jobs_new_values(env)
def _compute_jobs_new_values(env):
for job in env["queue.job"].search(
[("state", "=", "failed"), ("exc_info", "!=", False)]
):
exception_details = _get_exception_details(job)
if exception_details:
job.update(exception_details)
def _get_exception_details(job):
for line in reversed(job.exc_info.splitlines()):
if _find_exception(line):
name, msg = line.split(":", 1)
return {
"exc_name": name.strip(),
"exc_message": msg.strip("()', \""),
}
def _find_exception(line):
# Just a list of common errors.
# If you want to target others, add your own migration step for your db.
exceptions = (
"Error:", # catch all well named exceptions
# other live instance errors found
"requests.exceptions.MissingSchema",
"botocore.errorfactory.NoSuchKey",
)
for exc in exceptions:
if exc in line:
return exc
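The migration above scans each traceback bottom-up for a known exception marker and splits the last matching line into a name and a message. The same parsing, self-contained (the sample traceback text is made up for illustration):

```python
KNOWN_MARKERS = (
    "Error:",  # catches all well-named exception classes
    "requests.exceptions.MissingSchema",
)

def exception_details(exc_info):
    """Return {'exc_name', 'exc_message'} from the last matching line."""
    for line in reversed(exc_info.splitlines()):
        if any(marker in line for marker in KNOWN_MARKERS):
            name, msg = line.split(":", 1)
            return {
                "exc_name": name.strip(),
                "exc_message": msg.strip("()', \""),
            }
    return None

sample = (
    "Traceback (most recent call last):\n"
    '  File "job.py", line 10, in perform\n'
    "ValueError: invalid literal for int()"
)
details = exception_details(sample)
print(details["exc_name"])     # ValueError
print(details["exc_message"])  # invalid literal for int
```

Note the `strip("()', \"")` also trims the trailing `()` that repr-style messages often carry, exactly as the migration does.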


@@ -0,0 +1,33 @@
# License LGPL-3.0 or later (http://www.gnu.org/licenses/lgpl.html)
from odoo.tools.sql import column_exists, table_exists
def migrate(cr, version):
if table_exists(cr, "queue_job") and not column_exists(
cr, "queue_job", "exec_time"
):
# Disable trigger otherwise the update takes ages.
cr.execute(
"""
ALTER TABLE queue_job DISABLE TRIGGER queue_job_notify;
"""
)
cr.execute(
"""
ALTER TABLE queue_job ADD COLUMN exec_time double precision DEFAULT 0;
"""
)
cr.execute(
"""
UPDATE
queue_job
SET
exec_time = EXTRACT(EPOCH FROM (date_done - date_started));
"""
)
cr.execute(
"""
ALTER TABLE queue_job ENABLE TRIGGER queue_job_notify;
"""
)
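The SQL backfill computes `exec_time` as `EXTRACT(EPOCH FROM (date_done - date_started))`, i.e. the interval between the two timestamps expressed in seconds. The same arithmetic in Python, for intuition (sample timestamps are made up):

```python
from datetime import datetime

date_started = datetime(2020, 1, 1, 12, 0, 0)
date_done = datetime(2020, 1, 1, 12, 1, 30)

# EXTRACT(EPOCH FROM (date_done - date_started)) == interval in seconds
exec_time = (date_done - date_started).total_seconds()
print(exec_time)  # 90.0
```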


@@ -0,0 +1,11 @@
# License LGPL-3.0 or later (http://www.gnu.org/licenses/lgpl.html)
from openupgradelib import openupgrade
@openupgrade.migrate()
def migrate(env, version):
# Remove cron garbage collector
openupgrade.delete_records_safely_by_xml_id(
env,
["queue_job.ir_cron_queue_job_garbage_collector"],
)


@@ -0,0 +1,10 @@
# License LGPL-3.0 or later (http://www.gnu.org/licenses/lgpl.html)
from odoo.tools.sql import table_exists
def migrate(cr, version):
if table_exists(cr, "queue_job"):
# Drop index 'queue_job_identity_key_state_partial_index',
# it will be recreated during the update
cr.execute("DROP INDEX IF EXISTS queue_job_identity_key_state_partial_index;")


@@ -0,0 +1,6 @@
from . import base
from . import ir_model_fields
from . import queue_job
from . import queue_job_channel
from . import queue_job_function
from . import queue_job_lock


@@ -0,0 +1,266 @@
# Copyright 2016 Camptocamp
# License LGPL-3.0 or later (http://www.gnu.org/licenses/lgpl.html)
import functools
from odoo import api, models
from ..delay import Delayable, DelayableRecordset
from ..utils import must_run_without_delay
class Base(models.AbstractModel):
"""The base model, which is implicitly inherited by all models.
A new :meth:`~with_delay` method is added on all Odoo Models, allowing
one to postpone the execution of a job method in an asynchronous process.
"""
_inherit = "base"
def with_delay(
self,
priority=None,
eta=None,
max_retries=None,
description=None,
channel=None,
identity_key=None,
):
"""Return a ``DelayableRecordset``
It is a shortcut for the longer form as shown below::
self.with_delay(priority=20).action_done()
# is equivalent to:
self.delayable().set(priority=20).action_done().delay()
``with_delay()`` accepts job properties which specify how the job will
be executed.
Usage with job properties::
env['a.model'].with_delay(priority=30, eta=60*60*5).action_done()
# => the job will be executed with a low priority and not before a
# delay of 5 hours from now
When using :meth:`with_delay`, the final ``delay()`` is implicit.
See the documentation of :meth:`delayable` for more details.
:return: instance of a DelayableRecordset
:rtype: :class:`odoo.addons.queue_job.job.DelayableRecordset`
"""
return DelayableRecordset(
self,
priority=priority,
eta=eta,
max_retries=max_retries,
description=description,
channel=channel,
identity_key=identity_key,
)
def delayable(
self,
priority=None,
eta=None,
max_retries=None,
description=None,
channel=None,
identity_key=None,
):
"""Return a ``Delayable``
The returned instance allows enqueuing any method of the recordset's
Model.
Usage::
delayable = self.env["res.users"].browse(10).delayable(priority=20)
delayable.do_work(name="test").delay()
In this example, the ``do_work`` method will not be executed directly.
It will be executed in an asynchronous job.
Method calls on a Delayable generally return themselves, so calls can
be chained together::
delayable.set(priority=15).do_work(name="test").delay()
The order of the calls that build the job is not relevant, besides
the call to ``delay()`` that must happen at the very end. This is
equivalent to the example above::
delayable.do_work(name="test").set(priority=15).delay()
Very importantly, ``delay()`` must be called on the top-most parent
of a chain of jobs, so if you have this::
job1 = record1.delayable().do_work()
job2 = record2.delayable().do_work()
job1.on_done(job2)
The ``delay()`` call must be made on ``job1``, otherwise ``job2`` will
be delayed, but ``job1`` will never be. When done on ``job1``, the
``delay()`` call will traverse the graph of jobs and delay all of
them::
job1.delay()
For more details on the graph dependencies, read the documentation of
:mod:`~odoo.addons.queue_job.delay`.
:param priority: Priority of the job, 0 being the highest priority.
Default is 10.
:param eta: Estimated Time of Arrival of the job. It will not be
executed before this date/time.
:param max_retries: maximum number of retries before giving up and
setting the job state to 'failed'. A value of 0
means infinite retries. Default is 5.
:param description: human description of the job. If None, description
is computed from the function doc or name
:param channel: the complete name of the channel to use to process
the function. If specified it overrides the one
defined on the function
:param identity_key: key uniquely identifying the job; if specified
and a job with the same key has not yet been run,
the new job will not be added. It is either a
string or a function that takes the job as
argument (see :py:func:`..job.identity_exact`).
:return: instance of a Delayable
:rtype: :class:`odoo.addons.queue_job.job.Delayable`
"""
return Delayable(
self,
priority=priority,
eta=eta,
max_retries=max_retries,
description=description,
channel=channel,
identity_key=identity_key,
)
def _patch_job_auto_delay(self, method_name, context_key=None):
"""Patch a method to be automatically delayed as job method when called
This patch method has to be called in ``_register_hook`` (example
below).
When a method is patched, any call to the method will not directly
execute the method's body, but will instead enqueue a job.
When a ``context_key`` is set when calling ``_patch_job_auto_delay``,
the patched method is automatically delayed only when this key is
``True`` in the caller's context. It is advised to patch the method
with a ``context_key``, because making the automatic delay *in any
case* can produce nasty and unexpected side effects (e.g. another
module calls the method and expects it to be computed before doing
something else, expecting a result, ...).
A typical use case is when a method in a module we don't control is
called synchronously in the middle of another method, and we'd like all
the calls to this method to become asynchronous.
The options of the job usually passed to ``with_delay()`` (priority,
description, identity_key, ...) can be returned in a dictionary by a
method named after the name of the method suffixed by ``_job_options``
which takes the same parameters as the initial method.
It is still possible to force synchronous execution of the method by
setting a key ``_job_force_sync`` to True in the environment context.
Example patching the "foo" method to be automatically delayed as job
(the job options method is optional):
.. code-block:: python
# original method:
def foo(self, arg1):
print("hello", arg1)
def large_method(self):
# doing a lot of things
self.foo("world")
# doing a lot of other things
def button_x(self):
self.with_context(auto_delay_foo=True).large_method()
# auto delay patch:
def foo_job_options(self, arg1):
return {
"priority": 100,
"description": "Saying hello to {}".format(arg1)
}
def _register_hook(self):
self._patch_method(
"foo",
self._patch_job_auto_delay("foo", context_key="auto_delay_foo")
)
return super()._register_hook()
The result when ``button_x`` is called, is that a new job for ``foo``
is delayed.
"""
def auto_delay_wrapper(self, *args, **kwargs):
# when no context_key is set, we delay in any case (warning, can be
# dangerous)
context_delay = self.env.context.get(context_key) if context_key else True
if (
self.env.context.get("job_uuid")
or not context_delay
or must_run_without_delay(self.env)
):
# we are in the job execution
return auto_delay_wrapper.origin(self, *args, **kwargs)
else:
# replace the synchronous call by a job on itself
method_name = auto_delay_wrapper.origin.__name__
job_options_method = getattr(
self, "{}_job_options".format(method_name), None
)
job_options = {}
if job_options_method:
job_options.update(job_options_method(*args, **kwargs))
delayed = self.with_delay(**job_options)
return getattr(delayed, method_name)(*args, **kwargs)
origin = getattr(self, method_name)
return functools.update_wrapper(auto_delay_wrapper, origin)
@api.model
def _job_store_values(self, job):
"""Hook for manipulating job stored values.
You can define a more specific hook for a job function
by defining a method name with this pattern:
`_queue_job_store_values_${func_name}`
NOTE: values will be stored only if they match stored fields on `queue.job`.
:param job: current queue_job.job.Job instance.
:return: dictionary for setting job values.
"""
return {}
@api.model
def _job_prepare_context_before_enqueue_keys(self):
"""Keys to keep in context of stored jobs
Empty by default for backward compatibility.
"""
return ("tz", "lang", "allowed_company_ids", "force_company", "active_test")
def _job_prepare_context_before_enqueue(self):
"""Return the context to store in the jobs
Can be used to keep only safe keys.
"""
return {
key: value
for key, value in self.env.context.items()
if key in self._job_prepare_context_before_enqueue_keys()
}
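`_patch_job_auto_delay` reroutes a method call to a job unless the gating context key is absent or falsy. Outside Odoo, the same gating shape can be sketched with plain `functools` (the `Record` class, `context` dict, and `queued` list are illustrative stand-ins for the environment context and the job queue, not queue_job APIs):

```python
import functools

def auto_delay(method, context_key):
    """Wrap `method`; enqueue instead of executing when context[key] is true."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        if not self.context.get(context_key):
            # gate is off: run synchronously, like the original method
            return method(self, *args, **kwargs)
        # gate is on: record a "job" instead of executing the body
        self.queued.append((method.__name__, args, kwargs))
        return "enqueued"
    return wrapper

class Record:
    def __init__(self, context=None):
        self.context = context or {}
        self.queued = []

    def foo(self, arg):
        return "ran %s" % arg

# patch in place, as _register_hook would
Record.foo = auto_delay(Record.foo, "auto_delay_foo")

sync = Record()
print(sync.foo("now"))       # ran now

delayed = Record({"auto_delay_foo": True})
print(delayed.foo("later"))  # enqueued
print(delayed.queued)        # [('foo', ('later',), {})]
```

The real wrapper adds two more escape hatches the sketch omits: it runs synchronously when a `job_uuid` is already in the context (we are *inside* the job's execution) or when `_job_force_sync` is set.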


@@ -0,0 +1,13 @@
# Copyright 2020 Camptocamp
# License LGPL-3.0 or later (http://www.gnu.org/licenses/lgpl.html)
from odoo import fields, models
class IrModelFields(models.Model):
_inherit = "ir.model.fields"
ttype = fields.Selection(
selection_add=[("job_serialized", "Job Serialized")],
ondelete={"job_serialized": "cascade"},
)


@@ -0,0 +1,463 @@
# Copyright 2013-2020 Camptocamp SA
# License LGPL-3.0 or later (http://www.gnu.org/licenses/lgpl.html)
import logging
import random
from datetime import datetime, timedelta
from odoo import _, api, exceptions, fields, models
from odoo.tools import config, html_escape, index_exists
from odoo.addons.base_sparse_field.models.fields import Serialized
from ..delay import Graph
from ..exception import JobError
from ..fields import JobSerialized
from ..job import (
CANCELLED,
DONE,
FAILED,
PENDING,
STARTED,
STATES,
WAIT_DEPENDENCIES,
Job,
)
_logger = logging.getLogger(__name__)
class QueueJob(models.Model):
"""Model storing the jobs to be executed."""
_name = "queue.job"
_description = "Queue Job"
_inherit = ["mail.thread", "mail.activity.mixin"]
_log_access = False
_order = "date_created DESC, date_done DESC"
_removal_interval = 30 # days
_default_related_action = "related_action_open_record"
# This must be passed in a context key "_job_edit_sentinel" to write on
# protected fields. It protects against crafting "queue.job" records from
# RPC (e.g. on internal methods). When ``with_delay`` is used, the sentinel
# is set.
EDIT_SENTINEL = object()
_protected_fields = (
"uuid",
"name",
"date_created",
"model_name",
"method_name",
"func_string",
"channel_method_name",
"job_function_id",
"records",
"args",
"kwargs",
)
uuid = fields.Char(string="UUID", readonly=True, index=True, required=True)
graph_uuid = fields.Char(
string="Graph UUID",
readonly=True,
index=True,
help="Single shared identifier of a Graph. Empty for a single job.",
)
user_id = fields.Many2one(comodel_name="res.users", string="User ID")
company_id = fields.Many2one(
comodel_name="res.company", string="Company", index=True
)
name = fields.Char(string="Description", readonly=True)
model_name = fields.Char(string="Model", readonly=True)
method_name = fields.Char(readonly=True)
# record_ids field is only for backward compatibility (e.g. used in related
# actions), can be removed (replaced by "records") in 14.0
record_ids = JobSerialized(compute="_compute_record_ids", base_type=list)
records = JobSerialized(
string="Record(s)",
readonly=True,
base_type=models.BaseModel,
)
dependencies = Serialized(readonly=True)
# dependency graph as expected by the field widget
dependency_graph = Serialized(compute="_compute_dependency_graph")
graph_jobs_count = fields.Integer(compute="_compute_graph_jobs_count")
args = JobSerialized(readonly=True, base_type=tuple)
kwargs = JobSerialized(readonly=True, base_type=dict)
func_string = fields.Char(string="Task", readonly=True)
state = fields.Selection(STATES, readonly=True, required=True, index=True)
priority = fields.Integer(group_operator=False)
exc_name = fields.Char(string="Exception", readonly=True)
exc_message = fields.Char(string="Exception Message", readonly=True, tracking=True)
exc_info = fields.Text(string="Exception Info", readonly=True)
result = fields.Text(readonly=True)
date_created = fields.Datetime(string="Created Date", readonly=True)
date_started = fields.Datetime(string="Start Date", readonly=True)
date_enqueued = fields.Datetime(string="Enqueue Time", readonly=True)
date_done = fields.Datetime(readonly=True)
exec_time = fields.Float(
string="Execution Time (avg)",
group_operator="avg",
help="Time required to execute this job in seconds. Average when grouped.",
)
date_cancelled = fields.Datetime(readonly=True)
eta = fields.Datetime(string="Execute only after")
retry = fields.Integer(string="Current try")
max_retries = fields.Integer(
string="Max. retries",
help="The job will fail if the number of tries reaches the "
"max. retries.\n"
"Retries are infinite when empty.",
)
# FIXME the name of this field is very confusing
channel_method_name = fields.Char(string="Complete Method Name", readonly=True)
job_function_id = fields.Many2one(
comodel_name="queue.job.function",
string="Job Function",
readonly=True,
)
channel = fields.Char(index=True)
identity_key = fields.Char(readonly=True)
worker_pid = fields.Integer(readonly=True)
def init(self):
index_1 = "queue_job_identity_key_state_partial_index"
index_2 = "queue_job_channel_date_done_date_created_index"
if not index_exists(self._cr, index_1):
# Used by Job.job_record_with_same_identity_key
self._cr.execute(
"CREATE INDEX queue_job_identity_key_state_partial_index "
"ON queue_job (identity_key) WHERE state in ('pending', "
"'enqueued', 'wait_dependencies') AND identity_key IS NOT NULL;"
)
if not index_exists(self._cr, index_2):
# Used by <queue.job>.autovacuum
self._cr.execute(
"CREATE INDEX queue_job_channel_date_done_date_created_index "
"ON queue_job (channel, date_done, date_created);"
)
@api.depends("records")
def _compute_record_ids(self):
for record in self:
record.record_ids = record.records.ids
@api.depends("dependencies")
def _compute_dependency_graph(self):
jobs_groups = self.env["queue.job"].read_group(
[
(
"graph_uuid",
"in",
[uuid for uuid in self.mapped("graph_uuid") if uuid],
)
],
["graph_uuid", "ids:array_agg(id)"],
["graph_uuid"],
)
ids_per_graph_uuid = {
group["graph_uuid"]: group["ids"] for group in jobs_groups
}
for record in self:
if not record.graph_uuid:
record.dependency_graph = {}
continue
graph_jobs = self.browse(ids_per_graph_uuid.get(record.graph_uuid) or [])
if not graph_jobs:
record.dependency_graph = {}
continue
graph_ids = {graph_job.uuid: graph_job.id for graph_job in graph_jobs}
graph_jobs_by_ids = {graph_job.id: graph_job for graph_job in graph_jobs}
graph = Graph()
for graph_job in graph_jobs:
graph.add_vertex(graph_job.id)
for parent_uuid in graph_job.dependencies["depends_on"]:
parent_id = graph_ids.get(parent_uuid)
if not parent_id:
continue
graph.add_edge(parent_id, graph_job.id)
for child_uuid in graph_job.dependencies["reverse_depends_on"]:
child_id = graph_ids.get(child_uuid)
if not child_id:
continue
graph.add_edge(graph_job.id, child_id)
record.dependency_graph = {
# list of ids
"nodes": [
graph_jobs_by_ids[graph_id]._dependency_graph_vis_node()
for graph_id in graph.vertices()
],
# list of tuples (from, to)
"edges": graph.edges(),
}
def _dependency_graph_vis_node(self):
"""Return the node as expected by the JobDirectedGraph widget"""
default = ("#D2E5FF", "#2B7CE9")
colors = {
DONE: ("#C2FABC", "#4AD63A"),
FAILED: ("#FB7E81", "#FA0A10"),
STARTED: ("#FFFF00", "#FFA500"),
}
return {
"id": self.id,
"title": "<strong>%s</strong><br/>%s"
% (
html_escape(self.display_name),
html_escape(self.func_string),
),
"color": colors.get(self.state, default)[0],
"border": colors.get(self.state, default)[1],
"shadow": True,
}
def _compute_graph_jobs_count(self):
jobs_groups = self.env["queue.job"].read_group(
[
(
"graph_uuid",
"in",
[uuid for uuid in self.mapped("graph_uuid") if uuid],
)
],
["graph_uuid"],
["graph_uuid"],
)
count_per_graph_uuid = {
group["graph_uuid"]: group["graph_uuid_count"] for group in jobs_groups
}
for record in self:
record.graph_jobs_count = count_per_graph_uuid.get(record.graph_uuid) or 0
@api.model_create_multi
def create(self, vals_list):
if self.env.context.get("_job_edit_sentinel") is not self.EDIT_SENTINEL:
# Prevent creating a queue.job record "raw" from RPC.
# ``with_delay()`` must be used.
raise exceptions.AccessError(
_("Queue jobs must be created by calling 'with_delay()'.")
)
return super(
QueueJob,
self.with_context(mail_create_nolog=True, mail_create_nosubscribe=True),
).create(vals_list)
def write(self, vals):
if self.env.context.get("_job_edit_sentinel") is not self.EDIT_SENTINEL:
write_on_protected_fields = [
fieldname for fieldname in vals if fieldname in self._protected_fields
]
if write_on_protected_fields:
raise exceptions.AccessError(
_("Not allowed to change field(s): {}").format(
write_on_protected_fields
)
)
different_user_jobs = self.browse()
if vals.get("user_id"):
different_user_jobs = self.filtered(
lambda records: records.env.user.id != vals["user_id"]
)
if vals.get("state") == "failed":
self._message_post_on_failure()
result = super().write(vals)
for record in different_user_jobs:
# the user is stored in the env of the record, but we still want to
# have a stored user_id field to be able to search/groupby, so
# synchronize the env of records with user_id
super(QueueJob, record).write(
{"records": record.records.with_user(vals["user_id"])}
)
return result
def open_related_action(self):
"""Open the related action associated to the job"""
self.ensure_one()
job = Job.load(self.env, self.uuid)
action = job.related_action()
if action is None:
raise exceptions.UserError(_("No action available for this job"))
return action
def open_graph_jobs(self):
"""Return action that opens all jobs of the same graph"""
self.ensure_one()
jobs = self.env["queue.job"].search([("graph_uuid", "=", self.graph_uuid)])
action = self.env["ir.actions.act_window"]._for_xml_id(
"queue_job.action_queue_job"
)
action.update(
{
"name": _("Jobs for graph %s") % (self.graph_uuid),
"context": {},
"domain": [("id", "in", jobs.ids)],
}
)
return action
def _change_job_state(self, state, result=None):
"""Change the state of the `Job` object
Changing the state of the Job will automatically change some fields
(date, result, ...).
"""
for record in self:
job_ = Job.load(record.env, record.uuid)
if state == DONE:
job_.set_done(result=result)
job_.store()
record.env["queue.job"].flush_model()
job_.enqueue_waiting()
elif state == PENDING:
job_.set_pending(result=result)
job_.store()
elif state == CANCELLED:
job_.set_cancelled(result=result)
job_.store()
record.env["queue.job"].flush_model()
job_.cancel_dependent_jobs()
else:
raise ValueError("State not supported: %s" % state)
def button_done(self):
result = _("Manually set to done by %s") % self.env.user.name
self._change_job_state(DONE, result=result)
return True
def button_cancelled(self):
result = _("Cancelled by %s") % self.env.user.name
self._change_job_state(CANCELLED, result=result)
return True
def requeue(self):
jobs_to_requeue = self.filtered(lambda job_: job_.state != WAIT_DEPENDENCIES)
jobs_to_requeue._change_job_state(PENDING)
return jobs_to_requeue
def _message_post_on_failure(self):
# subscribe the users now to avoid subscribing them
# at every job creation
domain = self._subscribe_users_domain()
base_users = self.env["res.users"].search(domain)
for record in self:
users = base_users | record.user_id
record.message_subscribe(partner_ids=users.mapped("partner_id").ids)
msg = record._message_failed_job()
if msg:
record.message_post(body=msg, subtype_xmlid="queue_job.mt_job_failed")
def _subscribe_users_domain(self):
"""Subscribe all users having the 'Queue Job Manager' group"""
group = self.env.ref("queue_job.group_queue_job_manager")
if not group:
return None
companies = self.mapped("company_id")
domain = [("groups_id", "=", group.id)]
if companies:
domain.append(("company_id", "in", companies.ids))
return domain
def _message_failed_job(self):
"""Return a message which will be posted on the job when it is failed.
It can be inherited to allow more precise messages based on the
exception information.
If nothing is returned, no message will be posted.
"""
self.ensure_one()
return _(
"Something bad happened during the execution of job %s. "
"More details in the 'Exception Information' section.",
self.uuid,
)
def _needaction_domain_get(self):
"""Returns the domain to filter records that require an action
:return: domain or False if no action
"""
return [("state", "=", "failed")]
def autovacuum(self):
"""Delete all jobs done based on the removal interval defined on the
channel
Called from a cron.
"""
for channel in self.env["queue.job.channel"].search([]):
deadline = datetime.now() - timedelta(days=int(channel.removal_interval))
while True:
jobs = self.search(
[
"|",
("date_done", "<=", deadline),
("date_cancelled", "<=", deadline),
("channel", "=", channel.complete_name),
],
order="date_done, date_created",
limit=1000,
)
if jobs:
jobs.unlink()
if not config["test_enable"]:
self.env.cr.commit() # pylint: disable=E8102
else:
break
return True
def related_action_open_record(self):
"""Open a form view with the record(s) of the job.
For instance, for a job on a ``product.product``, it will open a
``product.product`` form view with the product record(s) concerned by
the job. If the job concerns more than one record, it opens them in a
list.
This is the default related action.
"""
self.ensure_one()
records = self.records.exists()
if not records:
return None
action = {
"name": _("Related Record"),
"type": "ir.actions.act_window",
"view_mode": "form",
"res_model": records._name,
}
if len(records) == 1:
action["res_id"] = records.id
else:
action.update(
{
"name": _("Related Records"),
"view_mode": "tree,form",
"domain": [("id", "in", records.ids)],
}
)
return action
def _test_job(self, failure_rate=0):
_logger.info("Running test job.")
if random.random() <= failure_rate:
raise JobError("Job failed")
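`autovacuum` deletes finished jobs in batches of 1000, committing between batches so a large backlog never holds one long transaction or trips the trigger storm. The loop shape, with a plain list standing in for the `search()`/`unlink()` pair (batch size reduced for illustration; names are hypothetical):

```python
def vacuum(jobs, deadline, batch_size=1000):
    """Delete jobs finished on or before `deadline`, batch by batch."""
    deleted = 0
    while True:
        batch = [j for j in jobs if j["date_done"] <= deadline][:batch_size]
        if not batch:
            break
        for job in batch:
            jobs.remove(job)  # stands in for jobs.unlink()
        deleted += len(batch)
        # the real implementation commits here between batches
    return deleted

jobs = [{"id": i, "date_done": i} for i in range(5)]
print(vacuum(jobs, deadline=2, batch_size=2))  # 3
print([j["id"] for j in jobs])                 # [3, 4]
```

The partial index on `(channel, date_done, date_created)` created in `init()` exists precisely to make each of these batched searches cheap.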


@@ -0,0 +1,94 @@
# Copyright 2013-2020 Camptocamp SA
# License LGPL-3.0 or later (http://www.gnu.org/licenses/lgpl.html)
from odoo import _, api, exceptions, fields, models
class QueueJobChannel(models.Model):
_name = "queue.job.channel"
_description = "Job Channels"
name = fields.Char()
complete_name = fields.Char(
compute="_compute_complete_name", store=True, readonly=True, recursive=True
)
parent_id = fields.Many2one(
comodel_name="queue.job.channel", string="Parent Channel", ondelete="restrict"
)
job_function_ids = fields.One2many(
comodel_name="queue.job.function",
inverse_name="channel_id",
string="Job Functions",
)
removal_interval = fields.Integer(
default=lambda self: self.env["queue.job"]._removal_interval, required=True
)
_sql_constraints = [
("name_uniq", "unique(complete_name)", "Channel complete name must be unique")
]
@api.depends("name", "parent_id.complete_name")
def _compute_complete_name(self):
for record in self:
if not record.name:
complete_name = "" # new record
elif record.parent_id:
complete_name = ".".join([record.parent_id.complete_name, record.name])
else:
complete_name = record.name
record.complete_name = complete_name
@api.constrains("parent_id", "name")
def parent_required(self):
for record in self:
if record.name != "root" and not record.parent_id:
raise exceptions.ValidationError(_("Parent channel required."))
@api.model_create_multi
def create(self, vals_list):
records = self.browse()
if self.env.context.get("install_mode"):
# installing a module that creates a channel: rebinds the channel
# to an existing one (likely we already had the channel created by
# the @job decorator previously)
new_vals_list = []
for vals in vals_list:
name = vals.get("name")
parent_id = vals.get("parent_id")
if name and parent_id:
existing = self.search(
[("name", "=", name), ("parent_id", "=", parent_id)]
)
if existing:
if not existing.get_metadata()[0].get("noupdate"):
existing.write(vals)
records |= existing
continue
new_vals_list.append(vals)
vals_list = new_vals_list
records |= super().create(vals_list)
return records
def write(self, values):
for channel in self:
if (
not self.env.context.get("install_mode")
and channel.name == "root"
and ("name" in values or "parent_id" in values)
):
raise exceptions.UserError(_("Cannot change the root channel"))
return super().write(values)
def unlink(self):
for channel in self:
if channel.name == "root":
raise exceptions.UserError(_("Cannot remove the root channel"))
return super().unlink()
def name_get(self):
result = []
for record in self:
result.append((record.id, record.complete_name))
return result
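`_compute_complete_name` builds the dotted path from the root channel down, following `parent_id` links. The recursion, standalone (a dict of plain records stands in for the ORM; the shape is illustrative):

```python
def complete_name(channel, by_id):
    """Dotted name like 'root.sub.leaf', following parent links upward."""
    if not channel["name"]:
        return ""  # new, unnamed record
    parent_id = channel["parent_id"]
    if parent_id is not None:
        parent_name = complete_name(by_id[parent_id], by_id)
        return "%s.%s" % (parent_name, channel["name"])
    return channel["name"]

channels = {
    1: {"name": "root", "parent_id": None},
    2: {"name": "sale", "parent_id": 1},
    3: {"name": "export", "parent_id": 2},
}
print(complete_name(channels[3], channels))  # root.sale.export
```

The `recursive=True` flag on the stored field tells the ORM that the compute depends on the same field of the parent, so renaming a parent cascades down the tree.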


@@ -0,0 +1,273 @@
# Copyright 2013-2020 Camptocamp SA
# License LGPL-3.0 or later (http://www.gnu.org/licenses/lgpl.html)
import ast
import logging
import re
from collections import namedtuple
from odoo import _, api, exceptions, fields, models, tools
from ..fields import JobSerialized
_logger = logging.getLogger(__name__)
regex_job_function_name = re.compile(r"^<([0-9a-z_\.]+)>\.([0-9a-zA-Z_]+)$")
class QueueJobFunction(models.Model):
_name = "queue.job.function"
_description = "Job Functions"
_log_access = False
JobConfig = namedtuple(
"JobConfig",
"channel "
"retry_pattern "
"related_action_enable "
"related_action_func_name "
"related_action_kwargs "
"job_function_id ",
)
def _default_channel(self):
return self.env.ref("queue_job.channel_root")
name = fields.Char(
compute="_compute_name",
inverse="_inverse_name",
index=True,
store=True,
)
# model and method should be required, but the required flag would
# prevent _inverse_name from being executed
model_id = fields.Many2one(
comodel_name="ir.model", string="Model", ondelete="cascade"
)
method = fields.Char()
channel_id = fields.Many2one(
comodel_name="queue.job.channel",
string="Channel",
required=True,
default=lambda r: r._default_channel(),
)
channel = fields.Char(related="channel_id.complete_name", store=True, readonly=True)
retry_pattern = JobSerialized(string="Retry Pattern (serialized)", base_type=dict)
edit_retry_pattern = fields.Text(
string="Retry Pattern",
compute="_compute_edit_retry_pattern",
inverse="_inverse_edit_retry_pattern",
help="Pattern expressing, from the count of retries on retryable errors,"
" the number of seconds to postpone the next execution. Setting the "
"number of seconds to a 2-element tuple or list will randomize the "
"retry interval between the 2 values.\n"
"Example: {1: 10, 5: 20, 10: 30, 15: 300}.\n"
"Example: {1: (1, 10), 5: (11, 20), 10: (21, 30), 15: (100, 300)}.\n"
"See the module description for details.",
)
related_action = JobSerialized(string="Related Action (serialized)", base_type=dict)
edit_related_action = fields.Text(
string="Related Action",
compute="_compute_edit_related_action",
inverse="_inverse_edit_related_action",
help="The action when the button *Related Action* is used on a job. "
"The default action is to open the view of the record related "
"to the job. Configured as a dictionary with optional keys: "
"enable, func_name, kwargs.\n"
"See the module description for details.",
)
@api.depends("model_id.model", "method")
def _compute_name(self):
for record in self:
if not (record.model_id and record.method):
record.name = ""
continue
record.name = self.job_function_name(record.model_id.model, record.method)
def _inverse_name(self):
groups = regex_job_function_name.match(self.name)
if not groups:
raise exceptions.UserError(_("Invalid job function: {}").format(self.name))
model_name = groups[1]
method = groups[2]
model = (
self.env["ir.model"].sudo().search([("model", "=", model_name)], limit=1)
)
if not model:
raise exceptions.UserError(_("Model {} not found").format(model_name))
self.model_id = model.id
self.method = method
@api.depends("retry_pattern")
def _compute_edit_retry_pattern(self):
for record in self:
retry_pattern = record._parse_retry_pattern()
record.edit_retry_pattern = str(retry_pattern)
def _inverse_edit_retry_pattern(self):
try:
edited = (self.edit_retry_pattern or "").strip()
if edited:
self.retry_pattern = ast.literal_eval(edited)
else:
self.retry_pattern = {}
except (ValueError, TypeError, SyntaxError) as ex:
raise exceptions.UserError(
self._retry_pattern_format_error_message()
) from ex
@api.depends("related_action")
def _compute_edit_related_action(self):
for record in self:
record.edit_related_action = str(record.related_action)
def _inverse_edit_related_action(self):
try:
edited = (self.edit_related_action or "").strip()
if edited:
self.related_action = ast.literal_eval(edited)
else:
self.related_action = {}
except (ValueError, TypeError, SyntaxError) as ex:
raise exceptions.UserError(
self._related_action_format_error_message()
) from ex
@staticmethod
def job_function_name(model_name, method_name):
return "<{}>.{}".format(model_name, method_name)
def job_default_config(self):
return self.JobConfig(
channel="root",
retry_pattern={},
related_action_enable=True,
related_action_func_name=None,
related_action_kwargs={},
job_function_id=None,
)
def _parse_retry_pattern(self):
try:
# as json can't have integers as keys and the field is stored
# as json, convert back to int
retry_pattern = {}
for try_count, postpone_value in self.retry_pattern.items():
if isinstance(postpone_value, int):
retry_pattern[int(try_count)] = postpone_value
else:
retry_pattern[int(try_count)] = tuple(postpone_value)
except ValueError:
_logger.error(
"Invalid retry pattern for job function %s,"
" keys could not be parsed as integers, fallback"
" to the default retry pattern.",
self.name,
)
retry_pattern = {}
return retry_pattern
@tools.ormcache("name")
def job_config(self, name):
config = self.search([("name", "=", name)], limit=1)
if not config:
return self.job_default_config()
retry_pattern = config._parse_retry_pattern()
return self.JobConfig(
channel=config.channel,
retry_pattern=retry_pattern,
related_action_enable=config.related_action.get("enable", True),
related_action_func_name=config.related_action.get("func_name"),
related_action_kwargs=config.related_action.get("kwargs", {}),
job_function_id=config.id,
)
def _retry_pattern_format_error_message(self):
return _(
"Unexpected format of Retry Pattern for {}.\n"
"Example of valid formats:\n"
"{{1: 300, 5: 600, 10: 1200, 15: 3000}}\n"
"{{1: (1, 10), 5: (11, 20), 10: (21, 30), 15: (100, 300)}}"
).format(self.name)
@api.constrains("retry_pattern")
def _check_retry_pattern(self):
for record in self:
retry_pattern = record.retry_pattern
if not retry_pattern:
continue
all_values = list(retry_pattern) + list(retry_pattern.values())
for value in all_values:
try:
self._retry_value_type_check(value)
except ValueError as ex:
raise exceptions.UserError(
record._retry_pattern_format_error_message()
) from ex
def _retry_value_type_check(self, value):
if isinstance(value, (tuple, list)):
if len(value) != 2:
raise ValueError
for element in value:
    self._retry_value_type_check(element)
return
int(value)
def _related_action_format_error_message(self):
return _(
"Unexpected format of Related Action for {}.\n"
"Example of valid format:\n"
'{{"enable": True, "func_name": "related_action_foo",'
' "kwargs": {{"limit": 10}}}}'
).format(self.name)
@api.constrains("related_action")
def _check_related_action(self):
valid_keys = ("enable", "func_name", "kwargs")
for record in self:
related_action = record.related_action
if not related_action:
continue
if any(key not in valid_keys for key in related_action):
raise exceptions.UserError(
record._related_action_format_error_message()
)
@api.model_create_multi
def create(self, vals_list):
records = self.browse()
if self.env.context.get("install_mode"):
# installing a module that creates a job function: rebinds the record
# to an existing one (likely we already had the job function created by
# the @job decorator previously)
new_vals_list = []
for vals in vals_list:
name = vals.get("name")
if name:
existing = self.search([("name", "=", name)], limit=1)
if existing:
if not existing.get_metadata()[0].get("noupdate"):
existing.write(vals)
records |= existing
continue
new_vals_list.append(vals)
vals_list = new_vals_list
records |= super().create(vals_list)
self.clear_caches()
return records
def write(self, values):
res = super().write(values)
self.clear_caches()
return res
def unlink(self):
res = super().unlink()
self.clear_caches()
return res
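The key conversion performed by ``_parse_retry_pattern`` above can be illustrated standalone: JSON object keys are always strings, so a retry pattern stored in a JSON field loses its integer keys (and tuples become lists) on the round trip. The sketch below, with a hypothetical ``parse_retry_pattern`` helper, shows the same restoration outside of Odoo:

```python
import json


def parse_retry_pattern(raw):
    """Convert a JSON-decoded retry pattern back to int keys.

    A pattern written as {1: 10, 5: (11, 20)} comes back from JSON as
    {"1": 10, "5": [11, 20]}; restore int keys and tuple ranges.
    """
    pattern = {}
    for try_count, postpone in raw.items():
        if isinstance(postpone, int):
            pattern[int(try_count)] = postpone
        else:
            pattern[int(try_count)] = tuple(postpone)
    return pattern


stored = json.loads(json.dumps({1: 10, 5: (11, 20)}))
print(parse_retry_pattern(stored))  # {1: 10, 5: (11, 20)}
```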

View File

@@ -0,0 +1,16 @@
# Copyright 2025 ACSONE SA/NV
# License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl).
from odoo import fields, models
class QueueJobLock(models.Model):
_name = "queue.job.lock"
_description = "Queue Job Lock"
queue_job_id = fields.Many2one(
comodel_name="queue.job",
required=True,
ondelete="cascade",
index=True,
)

View File

@@ -0,0 +1,33 @@
# Copyright 2020 ACSONE SA/NV
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
import logging
logger = logging.getLogger(__name__)
def post_init_hook(cr, registry):
# this is the trigger that sends notifications when jobs change
logger.info("Create queue_job_notify trigger")
cr.execute(
"""
DROP TRIGGER IF EXISTS queue_job_notify ON queue_job;
CREATE OR REPLACE
FUNCTION queue_job_notify() RETURNS trigger AS $$
BEGIN
IF TG_OP = 'DELETE' THEN
IF OLD.state != 'done' THEN
PERFORM pg_notify('queue_job', OLD.uuid);
END IF;
ELSE
PERFORM pg_notify('queue_job', NEW.uuid);
END IF;
RETURN NULL;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER queue_job_notify
AFTER INSERT OR UPDATE OR DELETE
ON queue_job
FOR EACH ROW EXECUTE PROCEDURE queue_job_notify();
"""
)
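The trigger's branching can be summarized in plain Python. This is only an illustration of the SQL above, not code from the addon: deleting an already-``done`` job is silent, while every other insert, update, or delete emits a ``pg_notify`` on the ``queue_job`` channel.

```python
def should_notify(op, old_state=None):
    """Mirror the queue_job_notify trigger's decision.

    op: 'INSERT', 'UPDATE' or 'DELETE' (TG_OP in the trigger);
    old_state: the row's state before a DELETE (OLD.state).
    """
    if op == "DELETE":
        # deletions of finished jobs need no wake-up of the runner
        return old_state != "done"
    # inserts and updates always notify
    return True
```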

View File

@@ -0,0 +1,25 @@
import logging
from odoo import http
_logger = logging.getLogger(__name__)
def post_load():
_logger.info(
"Apply Request._get_session_and_dbname monkey patch to capture db"
" from request with multiple databases"
)
_get_session_and_dbname_orig = http.Request._get_session_and_dbname
def _get_session_and_dbname(self):
session, dbname = _get_session_and_dbname_orig(self)
if (
not dbname
and self.httprequest.path == "/queue_job/runjob"
and self.httprequest.args.get("db")
):
dbname = self.httprequest.args["db"]
return session, dbname
http.Request._get_session_and_dbname = _get_session_and_dbname
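The fallback the patch adds — reading the database name from the ``db`` query parameter of a ``/queue_job/runjob`` request — can be sketched with the standard library alone. ``dbname_from_url`` is a hypothetical helper for illustration, not part of the addon:

```python
from urllib.parse import parse_qs, urlsplit


def dbname_from_url(url):
    """Extract the ``db`` query parameter from a /queue_job/runjob URL,
    the same information the patched _get_session_and_dbname falls back to
    when the session carries no database name."""
    parts = urlsplit(url)
    if parts.path != "/queue_job/runjob":
        return None
    return parse_qs(parts.query).get("db", [None])[0]


print(dbname_from_url("/queue_job/runjob?db=prod&job_uuid=abc"))  # prod
```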

View File

@@ -0,0 +1,50 @@
* Using environment variables and command line:
* Adjust environment variables (optional):
- ``ODOO_QUEUE_JOB_CHANNELS=root:4`` or any other channels configuration.
The default is ``root:1``
- if ``xmlrpc_port`` is not set: ``ODOO_QUEUE_JOB_PORT=8069``
* Start Odoo with ``--load=web,queue_job``
and ``--workers`` greater than 1. [1]_
* Keep in mind that the number of workers should be greater than the number of
channels. ``queue_job`` will reuse normal Odoo workers to process jobs. It
will not spawn its own workers.
* Using the Odoo configuration file:
.. code-block:: ini
[options]
(...)
workers = 6
server_wide_modules = web,queue_job
(...)
[queue_job]
channels = root:2
* Environment variables have priority over the configuration file.
* Confirm the runner is starting correctly by checking the Odoo log file:
.. code-block::
...INFO...queue_job.jobrunner.runner: starting
...INFO...queue_job.jobrunner.runner: initializing database connections
...INFO...queue_job.jobrunner.runner: queue job runner ready for db <dbname>
...INFO...queue_job.jobrunner.runner: database connections ready
* Create jobs (e.g. using ``base_import_async``) and observe that they
start immediately and in parallel.
* Tip: to enable debug logging for the queue job, use
``--log-handler=odoo.addons.queue_job:DEBUG``
.. [1] It works with the threaded Odoo server too, although this way
of running Odoo is obviously not for production purposes.
* Jobs that remain in ``enqueued`` or ``started`` state (because, for instance, their worker has been killed) will be automatically re-queued.
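The ``root:4`` syntax used by ``ODOO_QUEUE_JOB_CHANNELS`` and the ``channels`` option maps channel names to capacities, with a comma separating multiple channels. The sketch below illustrates the format only; it is not the addon's actual parser, and the default capacity of 1 is an assumption taken from the ``root:1`` default above:

```python
def parse_channels(config):
    """Parse a channels string such as 'root:4,root.heavy:1'
    into a {channel_name: capacity} mapping (illustrative sketch)."""
    channels = {}
    for part in config.split(","):
        name, _, capacity = part.strip().partition(":")
        # a bare channel name falls back to capacity 1 (assumed default)
        channels[name] = int(capacity) if capacity else 1
    return channels


print(parse_channels("root:4,root.heavy:1"))  # {'root': 4, 'root.heavy': 1}
```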

View File

@@ -0,0 +1,12 @@
* Guewen Baconnier <guewen.baconnier@camptocamp.com>
* Stéphane Bidoul <stephane.bidoul@acsone.eu>
* Matthieu Dietrich <matthieu.dietrich@camptocamp.com>
* Jos De Graeve <Jos.DeGraeve@apertoso.be>
* David Lefever <dl@taktik.be>
* Laurent Mignon <laurent.mignon@acsone.eu>
* Laetitia Gangloff <laetitia.gangloff@acsone.eu>
* Cédric Pigeon <cedric.pigeon@acsone.eu>
* Tatiana Deribina <tatiana.deribina@avoin.systems>
* Souheil Bejaoui <souheil.bejaoui@acsone.eu>
* Eric Antones <eantones@nuobit.com>
* Simone Orsi <simone.orsi@camptocamp.com>

View File

@@ -0,0 +1,46 @@
This addon adds an integrated Job Queue to Odoo.
It allows method calls to be postponed and executed asynchronously.
Jobs are executed in the background by a ``Jobrunner``, in their own transaction.
Example:
.. code-block:: python
import logging
from odoo import models
_logger = logging.getLogger(__name__)
class MyModel(models.Model):
_name = 'my.model'
def my_method(self, a, k=None):
_logger.info('executed with a: %s and k: %s', a, k)
class MyOtherModel(models.Model):
_name = 'my.other.model'
def button_do_stuff(self):
self.env['my.model'].with_delay().my_method('a', k=2)
In the snippet of code above, when we call ``button_do_stuff``, a job **capturing
the method and arguments** will be postponed. It will be executed as soon as the
Jobrunner has a free bucket, which can be instantaneous if no other job is
running.
Features:
* Views for jobs; jobs are stored in PostgreSQL
* Jobrunner: executes the jobs, highly efficient thanks to PostgreSQL's NOTIFY
* Channels: give a capacity to the root channel and its sub-channels and
segregate jobs between them. Allows, for instance, heavy jobs to be
executed one at a time while lighter ones run four at a time.
* Retries: ability to retry jobs by raising a specific type of exception
* Retry Pattern: for the first 3 tries, retry after 10 seconds; for the next 5
tries, retry after 1 minute; ...
* Job properties: priorities, estimated time of arrival (ETA), custom
description, number of retries
* Related Actions: link an action on the job view, such as open the record
concerned by the job
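The retry-pattern semantics described in the feature list — different delays depending on how many tries have already happened — can be sketched as a lookup that picks the entry with the largest configured try number not exceeding the current try. This is an illustration of the semantics, with an assumed default delay, not queue_job's actual implementation:

```python
def postpone_seconds(retry_pattern, try_count):
    """Return the delay (in seconds) before the given try.

    Uses the value of the largest configured try number <= try_count;
    the 10-second fallback when no entry applies is an assumption.
    """
    delay = 10  # assumed default
    for try_threshold in sorted(retry_pattern):
        if try_count >= try_threshold:
            delay = retry_pattern[try_threshold]
    return delay


pattern = {1: 10, 4: 60}  # tries 1-3: wait 10s; from try 4 on: wait 60s
```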
