Compare commits

...

559 commits

Author SHA1 Message Date
Peter Bieringer
8cdf262560 revert back to ubuntu-latest for https://github.com/Kozea/Radicale/issues/1594 2025-03-31 07:00:18 +02:00
Peter Bieringer
69587d3f5c
Merge pull request #1751 from pbiering/fix-1515
Fix 1515
2025-03-29 08:57:33 +01:00
Peter Bieringer
f41533cca7 fix for https://github.com/Kozea/Radicale/issues/1515 2025-03-29 08:38:16 +01:00
Peter Bieringer
393a26814b extend copyright 2025-03-29 08:37:57 +01:00
Peter Bieringer
3bdcbbdc56
Merge pull request #1750 from pbiering/lock-location-review
Lock location review
2025-03-29 07:11:04 +01:00
Peter Bieringer
9ca82a8aa2 base folder log+create 2025-03-29 07:04:32 +01:00
Peter Bieringer
ffe5fcc6f3 lock review changelog 2025-03-29 07:03:29 +01:00
Peter Bieringer
ecaed3188c change location of main lock file back to original 2025-03-29 06:57:10 +01:00
Peter Bieringer
c23821ad0c conditionally create missing collection* folders on startup 2025-03-28 07:36:17 +01:00
Peter Bieringer
b744e9658c use collection-root location for lock instead of base directory 2025-03-28 07:35:43 +01:00
Peter Bieringer
3ee5433397 use proper cache location for lock 2025-03-28 07:35:15 +01:00
Peter Bieringer
29915b20c8 add clarification about external auth methods 2025-03-28 06:19:03 +01:00
Peter Bieringer
c91b8e49d5
Merge pull request #1747 from pbiering/hook-ext-path-upload-file
Hook extension for path of uploaded file
2025-03-27 08:39:45 +01:00
Peter Bieringer
14fb50954c extend changelog 2025-03-27 08:36:27 +01:00
Peter Bieringer
312e26977b storage hook extend changelog 2025-03-27 08:32:35 +01:00
Peter Bieringer
3bdc438283 storage hook extend doc 2025-03-27 08:32:23 +01:00
Peter Bieringer
3eb61a82a6 add support for cwd+path 2025-03-27 08:30:48 +01:00
Peter Bieringer
fb986ea02e fix for flake8 2025-03-27 08:30:22 +01:00
Peter Bieringer
af09d532c3 catch unsupported placeholder 2025-03-27 07:57:28 +01:00
Peter Bieringer
70b66ddfe2 extend copyright 2025-03-27 07:57:13 +01:00
Peter Bieringer
6b83c409d4
Merge pull request #1742 from BastelBaus/patch-2
Update ldap.py
2025-03-26 05:50:28 +01:00
Peter Bieringer
7fcf473662 ldap_ignore_attribute_create_modify_timestamp changelog + cosmetics in description 2025-03-25 07:11:36 +01:00
Peter Bieringer
d25786c190
Merge pull request #1740 from BastelBaus/master
added configuration to enable radicale LDAP with Authentik
2025-03-25 07:08:15 +01:00
BastelBaus@gmail.com
5d5b12c124 fixed flake8 errors 2025-03-24 22:14:29 +01:00
BastelBaus
23387fa2f3
Merge branch 'Kozea:master' into master 2025-03-24 21:48:53 +01:00
Peter Bieringer
e0a24b14b4 add requirement for requests module 2025-03-24 21:38:24 +01:00
BastelBaus
2439266d0e
Update ldap.py
Bugfix: user_entry['attributes'][self._ldap_user_attr] is already a string, so user_entry['attributes'][self._ldap_user_attr][0] would yield only the first character instead of the full user attribute
2025-03-24 20:25:51 +01:00
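The bugfix above is a classic Python pitfall: indexing a string returns its first character, so code written for a list of attribute values silently breaks when the library already returns a plain string. A minimal sketch of the defensive handling (the shape of the `attributes` dict and the helper name are assumptions based on the commit message, not Radicale's actual code):

```python
def extract_user(attributes: dict, attr: str) -> str:
    # An LDAP library may return an attribute either as a plain string
    # or as a list of values, depending on the schema and server.
    value = attributes[attr]
    if isinstance(value, list):
        # List of values: take the first one.
        return value[0]
    # Already a string: indexing with [0] here would return only
    # the first character -- the bug described in the commit.
    return value
```

Handling both shapes avoids the truncated-username failure regardless of what the server returns.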
BastelBaus
9f7941d428
Update DOCUMENTATION.md 2025-03-24 20:19:28 +01:00
BastelBaus
3af690fcb6
Update ldap.py 2025-03-24 20:13:38 +01:00
BastelBaus
0d1dcec61a
Update config 2025-03-24 20:12:45 +01:00
BastelBaus
98152062df
Update ldap.py 2025-03-24 20:11:40 +01:00
BastelBaus
bcbf0918a9
Update ldap.py 2025-03-24 20:10:53 +01:00
BastelBaus
f40c4d6e9b
Update config.py 2025-03-24 20:10:10 +01:00
BastelBaus
633dfbc875
Update config.py 2025-03-24 20:09:35 +01:00
BastelBaus
34f51033b7
Update config
added in default config file
2025-03-23 18:10:27 +01:00
BastelBaus
94ad295124
Update config.py
added ldap_authentik_timestamp_hack to config file
2025-03-23 18:08:00 +01:00
BastelBaus
7399286ec9
Update ldap.py
timestamp hack
2025-03-23 18:04:53 +01:00
Peter Bieringer
7d351d6692 bugfix 2025-03-22 18:24:15 +01:00
Peter Bieringer
d4e23e6731 add additional links 2025-03-22 18:24:05 +01:00
Peter Bieringer
de527632e0 add note about authentication 2025-03-22 07:14:24 +01:00
Peter Bieringer
217978e9d5 review, add hint about virtualenv 2025-03-22 07:10:16 +01:00
Peter Bieringer
2772305dde
Merge pull request #1737 from pbiering/pytest-brcrpt-fix
Fix: auth/htpasswd related to detection and use of bcrypt
2025-03-19 06:26:25 +01:00
Peter Bieringer
2ef99e5e85 Fix: auth/htpasswd related to detection and use of bcrypt 2025-03-19 06:17:34 +01:00
Peter Bieringer
26eab43f40 update version to 3.5.0 2025-03-16 06:54:54 +01:00
Peter Bieringer
a3880480a9
Merge pull request #1733 from pbiering/auth-default-denyall
Auth default type changed to denyall
2025-03-15 14:43:06 +01:00
Peter Bieringer
9f8ac21130 reflect change of default for auth.type 2025-03-15 14:36:40 +01:00
Peter Bieringer
e8c974a72a add versions when option was introduced 2025-03-15 14:35:30 +01:00
Peter Bieringer
be43ce5161 change default of authentication type to "denyall" for secure-by-default 2025-03-15 14:34:51 +01:00
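With the secure-by-default change above, a configuration without an explicit `[auth]` section now denies all logins instead of accepting anyone. A hedged example of what an explicit opt-in looks like (key names as used in Radicale's config; the htpasswd path is a placeholder):

```ini
[auth]
# Since this change the default is "denyall"; earlier releases effectively
# allowed any user, which was insecure when exposed unintentionally.
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = autodetect
```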
Peter Bieringer
7bb4beeae2 add note when was introduced 2025-03-15 14:33:55 +01:00
Peter Bieringer
c9ffde27d8 add forgotten entry 2025-03-15 14:32:10 +01:00
Peter Bieringer
dc56d67c33
Merge pull request #1731 from pbiering/add-remote-auth-warn-if-not-loopback
Add remote auth warn if not loopback
2025-03-14 21:55:42 +01:00
Peter Bieringer
081b8a7fcc extension related to https://github.com/Kozea/Radicale/issues/1529 2025-03-14 21:39:20 +01:00
Peter Bieringer
76753d271a extend changelog 2025-03-14 21:35:31 +01:00
Peter Bieringer
69f85a0bdf only display warning if not started as wsgi 2025-03-14 21:33:36 +01:00
Peter Bieringer
820691ca53 set env to be honored later 2025-03-14 21:32:38 +01:00
Peter Bieringer
358ae55540 add warning in case authentication based on environment is selected and server is not listening on loopback addresses only 2025-03-13 21:48:14 +01:00
Peter Bieringer
e22fbe282b centralize format_address 2025-03-13 21:47:44 +01:00
Peter Bieringer
b0d649f8b9 adjust copyright 2025-03-13 21:31:50 +01:00
Peter Bieringer
8f2099baf8 add note about unpatched htpasswd related to https://github.com/Kozea/Radicale/issues/1721 2025-03-13 06:43:27 +01:00
Peter Bieringer
3a13ffbc51 forgotten bcrypt pattern extension 2025-03-10 06:00:30 +01:00
Peter Bieringer
0f67336987
Merge pull request #1729 from pbiering/htpasswd-cosmetic-bcrypt-extensions
Htpasswd cosmetic bcrypt extensions
2025-03-09 08:59:24 +01:00
Peter Bieringer
cf727101f8 update related to htpasswd auth 2025-03-09 08:53:49 +01:00
Peter Bieringer
9f0385fd67 add some autodetect cases, add 2 additional bcrypt algo 2025-03-09 08:51:20 +01:00
Peter Bieringer
3963bb4d82 extend logging, adjust loglevel for hash error 2025-03-09 08:50:53 +01:00
Peter Bieringer
cffb2aaae3 add support for additional bcrypt algo on autodetect, improve autodetect logic and log not matching hash length 2025-03-09 08:49:30 +01:00
Peter Bieringer
4f0e607583
Merge pull request #1728 from pbiering/catch-invalid-salt
Catch invalid salt
2025-03-08 17:35:56 +01:00
Peter Bieringer
2f1db01083 update 2025-03-08 17:29:47 +01:00
Peter Bieringer
95a8899002 quote error message 2025-03-08 17:28:35 +01:00
Peter Bieringer
41ab96e142 catch ValueError on verify, adjust log level for failed logins 2025-03-08 17:27:02 +01:00
Peter Bieringer
a284d18c16 make encryption visible to other functions 2025-03-08 17:26:28 +01:00
Peter Bieringer
30664f9346
Merge pull request #1726 from pbiering/extend-https-info
Extend https info in log
2025-03-08 17:04:43 +01:00
Peter Bieringer
36aba7a8b9 update related to SSL logging 2025-03-08 17:00:01 +01:00
Peter Bieringer
914320826f extend request log with HTTPS info 2025-03-08 16:50:35 +01:00
Peter Bieringer
9372344bb1 extend header information with HTTPS info 2025-03-08 16:49:28 +01:00
Peter Bieringer
c4a48828d3 extend copyright 2025-03-08 16:48:59 +01:00
Peter Bieringer
ebe0418a4c extend changelog regarding https://github.com/Kozea/Radicale/pull/1725 2025-03-07 07:40:39 +01:00
Peter Bieringer
c3c78db8ae
Merge pull request #1724 from pbiering/support-for-bundled-InfCloud-client
Support for bundled InfCloud client
2025-03-07 07:38:54 +01:00
Peter Bieringer
0fa50210c9
Merge pull request #1725 from przemub/imap-auth-plain
Use AUTHENTICATE PLAIN instead of LOGIN
2025-03-06 18:31:50 +01:00
Przemysław Buczkowski
25402ab641 Use AUTHENTICATE PLAIN instead of LOGIN
Makes imaplib use the more modern AUTHENTICATE verb rather than LOGIN.
The immediate benefit is that now the credentials
can be non-ASCII.
In the future, it may be used to add other
authentication methods, such as OAuth.

References:
* https://datatracker.ietf.org/doc/html/rfc6855.html#page-5
* https://bugs.python.org/issue13700
2025-03-06 13:08:51 +00:00
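The commit body above explains the switch from IMAP `LOGIN` to `AUTHENTICATE PLAIN`. With `imaplib`, the PLAIN mechanism sends a SASL initial response of the form `[authzid] NUL authcid NUL password` (RFC 4616), which is base64-encoded on the wire and therefore tolerates non-ASCII credentials. A sketch of the authentication callback (the helper name and hostname are illustrative, not Radicale's actual code):

```python
import imaplib


def plain_auth_string(user: str, password: str) -> bytes:
    # RFC 4616 SASL PLAIN message: empty authzid, NUL, authcid, NUL, password.
    # imaplib base64-encodes this, so UTF-8 credentials survive intact,
    # unlike the legacy LOGIN command.
    return b"\0" + user.encode("utf-8") + b"\0" + password.encode("utf-8")


# Usage against a live server (hostname is a placeholder):
# conn = imaplib.IMAP4_SSL("imap.example.com")
# conn.authenticate("PLAIN", lambda challenge: plain_auth_string("jürgen", "pässwörd"))
```

As the commit notes, the same `authenticate()` entry point later allows adding other SASL mechanisms such as OAuth without changing the call site.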
Peter Bieringer
76281ad1ff tox fixes 2025-03-06 08:52:54 +01:00
Peter Bieringer
1d0ff9e84a tox fix 2025-03-06 08:51:56 +01:00
Peter Bieringer
e52056dea3 InfCloud: update related to support of bundled package 2025-03-06 08:32:27 +01:00
Peter Bieringer
75711b46dc add specific version 2025-03-06 08:32:09 +01:00
Peter Bieringer
45df5a3b94 InfCloud support 2025-03-06 08:23:50 +01:00
Peter Bieringer
2ae1762daa Infcloud: on-the-fly link activation (if available) and default content adjustment 2025-03-06 08:22:34 +01:00
Peter Bieringer
7839ac5783 InfCloud: conditional display of link 2025-03-06 08:22:34 +01:00
Peter Bieringer
4086665d16 InfCloud: extension for link 2025-03-06 08:22:34 +01:00
Peter Bieringer
78dccbdc92 fix for lint 2025-03-05 20:59:41 +01:00
Peter Bieringer
63b98913e0 add client IP in case of SSL error 2025-03-05 19:57:25 +01:00
Peter Bieringer
b729a4c192
Merge pull request #1720 from pbiering/improvements-2
Adjustments related to reverse proxy
2025-03-02 10:35:22 +01:00
Peter Bieringer
a3eb754967 fix typo 2025-03-02 10:28:46 +01:00
Peter Bieringer
d89ada0c17 Review: Apache reverse proxy config example 2025-03-02 09:14:13 +01:00
Peter Bieringer
7afff7ad2b Review: Apache reverse proxy config example 2025-03-02 09:14:02 +01:00
Peter Bieringer
451712d01d push version 2025-03-02 09:10:29 +01:00
Peter Bieringer
d7013ce726 update changelog 2025-03-02 09:09:11 +01:00
Peter Bieringer
280968e694 remove base prefix from path (required for proper handling if called by reverse proxy) 2025-03-02 09:06:30 +01:00
Peter Bieringer
7b4da3a128 detect called by reverse proxy 2025-03-02 09:05:41 +01:00
Peter Bieringer
c6bd129fa2 script_name: config check 2025-03-02 09:05:12 +01:00
Peter Bieringer
bc2444bb9a script_name: DOC 2025-03-02 09:04:36 +01:00
Peter Bieringer
dc35d4d0ad script_name: config example 2025-03-02 09:02:37 +01:00
Peter Bieringer
68f0eafe7d script_name: add config option, fixes https://github.com/Kozea/Radicale/issues/1275 2025-03-02 09:02:10 +01:00
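The `script_name` work above lets Radicale strip a URL prefix added by a reverse proxy. A hedged nginx fragment in the spirit of the documented examples (the `/radicale/` prefix and backend port are assumptions; Radicale's documentation uses the `X-Script-Name` header for this purpose):

```nginx
location /radicale/ {
    proxy_pass http://127.0.0.1:5232/;
    proxy_set_header X-Script-Name /radicale;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
}
```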
Peter Bieringer
aa248f2b97 move: catch OSError 2025-03-02 07:54:52 +01:00
Peter Bieringer
a2cd430f64 move: log error in case cache file cannot be moved 2025-03-02 07:54:25 +01:00
Peter Bieringer
36e33ffee1 extend copyright 2025-03-02 07:53:07 +01:00
Peter Bieringer
b8c2bc29ec display internal authentication types in online help 2025-03-01 13:16:57 +01:00
Peter Bieringer
65ce0c57e5
Merge pull request #1719 from pbiering/webui-fixes-2
Webui fixes 2
2025-02-27 21:09:32 +01:00
Peter Bieringer
2958201454 update for WebUI 2025-02-27 21:03:01 +01:00
Peter Bieringer
73681a7767 extend copyright 2025-02-27 20:13:10 +01:00
Peter Bieringer
cdbad007b6 catch OSError on metadata update 2025-02-27 20:08:19 +01:00
Peter Bieringer
78b94b1d4d add html to display error 2025-02-27 19:40:49 +01:00
Peter Bieringer
e3ae7b3ab5 add copyright 2025-02-27 19:40:37 +01:00
Peter Bieringer
4419aa2285 display error 2025-02-27 19:40:24 +01:00
Peter Bieringer
eb8dc61952 extend copyright 2025-02-27 19:40:11 +01:00
Peter Bieringer
3a4ec11733
Merge pull request #1718 from pbiering/webui-adjustments
Webui adjustments
2025-02-27 08:52:24 +01:00
Peter Bieringer
7318f592c8 use basename of uploaded file as default collection name, support https://github.com/Kozea/Radicale/issues/1633 2025-02-27 08:32:26 +01:00
Peter Bieringer
3910457a8d remove double / for PUT 2025-02-27 08:26:19 +01:00
Peter Bieringer
fcaee51ceb remove double / for MKCOL 2025-02-27 08:09:05 +01:00
Peter Bieringer
c2013ec901 permit dot inside collection name, but not as first char, fixes https://github.com/Kozea/Radicale/issues/1632 2025-02-27 07:50:41 +01:00
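The rule in the commit above — dots are allowed inside a collection name but not as the first character (the hidden-file convention) — can be sketched as a validation check. The actual WebUI check lives in JavaScript; this Python version only illustrates the rule, and the exact allowed character set is an assumption:

```python
import re

# Hypothetical mirror of the rule from the commit: dots permitted,
# but not as the leading character.
_COLLECTION_NAME = re.compile(r"^[^.][\w.\- ]*$")


def is_valid_collection_name(name: str) -> bool:
    return bool(_COLLECTION_NAME.match(name))
```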
Peter Bieringer
29b1da4652 takeover hints from https://github.com/Kozea/Radicale/issues/1168 2025-02-25 06:33:50 +01:00
Peter Bieringer
36a0501484
Merge pull request #1717 from pbiering/extend-module-info-list
Extend module info list
2025-02-25 06:25:45 +01:00
Peter Bieringer
0b5dd82109 extend changelog 2025-02-25 06:21:15 +01:00
Peter Bieringer
9b671beceb extend module list to display version on start 2025-02-25 06:20:14 +01:00
Peter Bieringer
50f5d2e5ef extend copyright 2025-02-25 06:20:03 +01:00
Peter Bieringer
8218081f58 fix loglevel 2025-02-25 06:19:51 +01:00
Peter Bieringer
16ece44faf pam: extend changelog 2025-02-22 18:00:23 +01:00
Peter Bieringer
5302863f53
Merge pull request #1708 from pbiering/merge-pam-auth-from-v1
Merge pam auth from v1
2025-02-22 17:56:58 +01:00
Peter Bieringer
6518f1b63a pam: add to config selector 2025-02-22 17:51:06 +01:00
Peter Bieringer
7f3fedc048 log used python version 2025-02-22 17:50:42 +01:00
Peter Bieringer
0759673e67 pam: config parser 2025-02-22 17:50:24 +01:00
Peter Bieringer
855e3743ca pam: merge+adjust module from v1 2025-02-22 17:50:07 +01:00
Peter Bieringer
c8f650bc2c extend copyright 2025-02-22 17:49:52 +01:00
Peter Bieringer
046d39b1bd pam: add support 2025-02-22 17:49:36 +01:00
Peter Bieringer
954ddea006 pam: config 2025-02-22 17:49:13 +01:00
Peter Bieringer
6683775c81 pam: doc 2025-02-22 17:48:51 +01:00
Peter Bieringer
9791a4db0f pam: doc 2025-02-22 17:48:31 +01:00
Peter Bieringer
970d4ba468 add oauth2 to example 2025-02-22 16:22:18 +01:00
Peter Bieringer
809e35689b
Merge pull request #1707 from pbiering/mtime-check-location-change
Mtime check location change
2025-02-21 07:47:12 +01:00
Peter Bieringer
c3c61c692e update changelog 2025-02-21 07:41:54 +01:00
Peter Bieringer
53251231d4 change mtime test file location to collection-root 2025-02-21 07:41:01 +01:00
Peter Bieringer
63e414850e
Merge pull request #1706 from pbiering/relax-mtime-resolution-check
Relax mtime resolution check
2025-02-20 21:22:00 +01:00
Peter Bieringer
18338b3c6e flake8 fix 2025-02-20 21:17:34 +01:00
Peter Bieringer
d5cb05f817 extend copyright 2025-02-20 21:17:24 +01:00
Peter Bieringer
4ab1cedee3 update changelog 2025-02-20 21:13:43 +01:00
Peter Bieringer
13a78d7365 relax mtime check 2025-02-20 21:12:58 +01:00
Peter Bieringer
93970a1001
Merge pull request #1700 from pbiering/skip-bcrypt-test-if-missing
Skip bcrypt test if missing
2025-02-18 06:20:16 +01:00
Peter Bieringer
c60627141f
Merge pull request #1696 from pbiering/nginx-well-known-rewrite
extend example config for nginx reverse proxy related to well-known
2025-02-18 06:15:36 +01:00
Peter Bieringer
f6b5cb8a1e make flake8 happy 2025-02-18 06:14:59 +01:00
Peter Bieringer
3914735ec0 skip bcrypt related tests if module is missing 2025-02-18 06:13:39 +01:00
Peter Bieringer
48a634af9f check whether bcrypt module is available 2025-02-18 06:13:19 +01:00
Peter Bieringer
3d50ae4a70 update changelog 2025-02-18 06:13:11 +01:00
Peter Bieringer
018978edd8 update from https://github.com/Kozea/Radicale/issues/740 2025-02-11 17:00:05 +01:00
Peter Bieringer
aa35c678ce
Merge pull request #1695 from pbiering/issue-1693-fix-http-return-codes
Issue 1693 fix http return codes
2025-02-11 16:54:41 +01:00
Peter Bieringer
19a47158bd extend changelog 2025-02-11 16:48:48 +01:00
Peter Bieringer
a62da71aa2 fix loglevel 2025-02-11 16:44:30 +01:00
Peter Bieringer
67bbc9a31b catch os error 2025-02-11 16:42:45 +01:00
Peter Bieringer
dc83c6d7d0 extend copyright 2025-02-11 16:42:45 +01:00
Peter Bieringer
484616f363 catch os error 2025-02-11 16:42:45 +01:00
Peter Bieringer
718089e3bf extend copyright 2025-02-11 16:42:45 +01:00
Peter Bieringer
b078a8f002 catch os errors 2025-02-11 16:42:45 +01:00
Peter Bieringer
fde0ecb9b2 change loglevel 2025-02-11 16:42:45 +01:00
Peter Bieringer
803763729a extend copyright 2025-02-11 16:42:45 +01:00
Peter Bieringer
37b18cf5a2 catch error during create_collection 2025-02-11 16:42:45 +01:00
Peter Bieringer
cd51581f38 extend copyright 2025-02-11 16:42:45 +01:00
Peter Bieringer
88accdb672 catch server errors and return proper message 2025-02-11 16:42:45 +01:00
Peter Bieringer
c157dd7d19 extend copyright 2025-02-11 16:42:45 +01:00
Peter Bieringer
605fc65584 improve coding 2025-02-11 16:42:45 +01:00
Peter Bieringer
f0d06cbc7d catch server errors on put 2025-02-11 16:42:45 +01:00
Peter Bieringer
77f69f2b1e add new error code 2025-02-11 16:42:45 +01:00
Peter Bieringer
b011fa4e61 extend copyright year 2025-02-11 16:42:45 +01:00
Peter Bieringer
dcaec20681 extend copyright year 2025-02-11 16:42:45 +01:00
Peter Bieringer
d79abc2b7a
Merge pull request #1692 from raguilar127/master
Fix incorrect method override in authentication plugin example
2025-02-07 06:45:48 +00:00
Rob Aguilar
938f6a97fd
Update DOCUMENTATION.md
Corrected the method override in the authentication plugin example. The original example suggested overriding login(), but BaseAuth expects _login() to be implemented instead. Overriding login() causes a "Too many values to unpack" error.
2025-02-06 21:56:03 -05:00
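The documentation fix above concerns Radicale's plugin contract: custom auth plugins implement `_login()`, while the public `login()` wrapper in `BaseAuth` (which handles normalization, caching, etc.) must not be overridden. A self-contained sketch with a stand-in base class — the real one lives in `radicale.auth` and has more responsibilities:

```python
class BaseAuth:
    """Stand-in for radicale.auth.BaseAuth (illustration only)."""

    def login(self, login: str, password: str) -> str:
        # The real wrapper also applies lc_username/uc_username,
        # caching, and constant-time failure delays.
        return self._login(login, password)

    def _login(self, login: str, password: str) -> str:
        raise NotImplementedError


class Auth(BaseAuth):
    # Plugins override _login(), never login().
    def _login(self, login: str, password: str) -> str:
        # Return the authenticated user name, or "" to reject.
        return login if password == "secret" else ""
```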
Peter Bieringer
c2def71ce6
Merge pull request #1689 from pbiering/auth-oauth2
migrate oauth2 into upstream
2025-02-02 08:17:05 +00:00
Peter Bieringer
6f68a64855 oauth2 doc 2025-02-02 09:14:04 +01:00
Peter Bieringer
f3a7641baa 3.4.2.dev 2025-02-02 09:09:08 +01:00
Peter Bieringer
cfcfbbd231 oauth2 changelog 2025-02-02 09:08:57 +01:00
Peter Bieringer
e0d20edbcd oauth2 do not throw exception in case server is not reachable 2025-02-02 09:04:42 +01:00
Peter Bieringer
d2be086cd1 oauth2 adjustments to radicale changes in the past 2025-02-02 09:04:20 +01:00
Peter Bieringer
7b6146405f make tox happy 2025-02-02 09:04:06 +01:00
Peter Bieringer
04523e5087 oauth2 config option 2025-02-02 09:03:42 +01:00
Peter Bieringer
23a68b2fb1 extend mypy options 2025-02-02 09:03:25 +01:00
Peter Bieringer
87dc5538d2 oauth2 module enabling 2025-02-02 09:01:58 +01:00
Peter Bieringer
e28b719233 oauth2 example config 2025-02-02 09:01:40 +01:00
Peter Bieringer
937acf38f7 oauth2 config check improvement 2025-02-02 08:33:49 +01:00
Peter Bieringer
063883797c add copyright 2025-02-02 08:32:42 +01:00
Peter Bieringer
30389f4525 initial from https://gitlab.mim-libre.fr/alphabet/radicale_oauth/-/blob/dev/oauth2/radicale_auth_oauth2/__init__.py 2025-02-02 08:29:02 +01:00
Peter Bieringer
780aaa7e3e clarify quick installation guides 2025-01-26 12:10:37 +01:00
Peter Bieringer
98e65d88a4 release 3.4.1 2025-01-26 08:15:14 +01:00
Peter Bieringer
10a79b9483
Merge pull request #1682 from pbiering/minor-changes
add logging entries for dovecot, adjust for imap
2025-01-20 06:19:09 +00:00
Peter Bieringer
26637a1240 add logging entries for dovecot, adjust for imap 2025-01-20 06:31:56 +01:00
Peter Bieringer
f9457f00f7
Merge pull request #1681 from pbiering/merge-auth-imap
Merge auth imap
2025-01-16 05:17:54 +00:00
Peter Bieringer
3df5d28432 imap: mypy fix 2025-01-16 06:11:57 +01:00
Peter Bieringer
e80bf58901 imap: flake8 fixes 2025-01-16 06:05:14 +01:00
Peter Bieringer
bc939522dc imap: migrate from https://github.com/Unrud/RadicaleIMAP/ 2025-01-16 06:02:22 +01:00
Peter Bieringer
50b76f7114 imap: config parse 2025-01-16 06:02:06 +01:00
Peter Bieringer
72c7d32e44 dovecot: extend doc 2025-01-16 06:01:29 +01:00
Peter Bieringer
c24659c5ec imap: doc and default config 2025-01-16 06:01:01 +01:00
Peter Bieringer
3e18644423 imap: changelog 2025-01-16 05:59:52 +01:00
Peter Bieringer
a93af6f177 update changelog and doc and config for https://github.com/Kozea/Radicale/pull/1678 2025-01-14 08:57:35 +01:00
Peter Bieringer
ed6a5a834e add proper default for dovecot_host 2025-01-14 08:57:15 +01:00
Peter Bieringer
dd9bb2beff 3.4.1.dev 2025-01-14 08:48:58 +01:00
Peter Bieringer
0713041929
Merge pull request #1678 from HmBMvXXiSivMcLGFWoqc/dovecot-auth-ip
Add support for Dovecot auth over network
2025-01-14 07:47:27 +00:00
HmBMvXXiSivMcLGFWoqc
3f04914de4 Add support for Dovecot auth over network 2025-01-13 23:31:13 -08:00
Peter Bieringer
1c77fd819f add missing dovecot option 2025-01-12 06:09:45 +01:00
Peter Bieringer
08a35b19c8 doc bugfix 2025-01-10 07:21:26 +01:00
Peter Bieringer
1634ce9498 add note about install 2025-01-09 23:08:01 +01:00
Peter Bieringer
be64e57ae8 fix topic level 2025-01-09 22:50:51 +01:00
Peter Bieringer
8172b87077 3.4.0 2025-01-09 20:06:57 +01:00
Peter Bieringer
c853ec4a74
Merge pull request #1669 from marschap/more-LDAPauth-patches
More LDAP auth patches
2025-01-04 07:33:47 +00:00
Peter Marschall
5ebaf4ef1c changelog for https://github.com/Kozea/Radicale/pull/1669 2025-01-03 21:56:50 +01:00
Peter Marschall
d6c4e6487a LDAP auth: flexibilize parsing of 'ldap_groups_attribute'
Use helper methods from the LDAP modules to get individual elements
(like in our case the RDN value) out of attributes with DN syntax
in a standard-compliant way instead of fiddling around ourselves.

If these methods fail, fall back to using the whole attribute value,
which allows us to also use attributes with non-DN syntax for groups
and permissions.
2025-01-03 20:47:36 +01:00
Peter Marschall
f9dd3efc3a LDAP auth: remove config option 'ldap_load_groups'
The same effect can be achieved using the option 'ldap_groups_attribute' alone,
if its default becomes unset instead of 'memberOf'.

Benefit: one config option less to deal with.

While at it, also fix header level for 'ldap_user_attribute' in documentation.
2025-01-03 20:47:31 +01:00
Peter Marschall
6c1445d8db LDAP auth: introduce config option 'ldap_groups_attribute'
This attribute is supposed to hold the group membership information
if the config option 'ldap_load_groups' is True.
If not given, it defaults to 'memberOf' for Active Directory.

Introducing this option allows one to use Radicale's LDAP auth with groups
even on LDAP servers that keep their group memberships in a different attribute
than 'memberOf', e.g. Novell eDirectory which uses 'groupMembership'.
2025-01-03 20:27:21 +01:00
Peter Marschall
1ca41e2128 LDAP auth: only ask for memberOf if ldap_load_groups = True
Ask for the 'memberOf' attribute to be returned in the user query only
if 'ldap_load_groups' is set to True.

This fixes the issue that currently LDAP authentication can only be used on
LDAP servers that know this non-standard (it's an Active Directory extension)
attribute.
Other LDAP servers either do not necessarily have the group memberships
stored in the user object (e.g. OpenLDAP), or use different attributes for
this purpose (e.g. Novell eDirectory uses 'groupMembership').
2025-01-03 14:34:51 +01:00
Peter Marschall
607b3af67b LDAP auth: calculate attributes to query in __init__()
Remove code duplication by factoring out the calculation of the
LDAP query attributes out of _login2() resp. _login3() into __init__().
2025-01-03 13:09:59 +01:00
Peter Bieringer
841df09312 changelog for https://github.com/Kozea/Radicale/pull/1666 2025-01-03 09:16:22 +01:00
Peter Bieringer
c81e19616c bump dev version 2025-01-03 09:14:01 +01:00
Peter Bieringer
b0d56f898b
Merge pull request #1668 from pbiering/login-cache
add optional cache for login result and htpasswd + fixes

final version will be updated to 3.4.0 next
2025-01-03 07:51:06 +00:00
Peter Bieringer
73f8f950d0 add content from https://github.com/Kozea/Radicale/pull/1073 2025-01-03 07:19:33 +01:00
Peter Bieringer
976dfe4a3f drop Python 3.8 changelog 2025-01-03 00:42:08 +01:00
Peter Bieringer
b122002077 drop support of python 3.8, fixes https://github.com/Kozea/Radicale/issues/1628 2025-01-03 00:41:26 +01:00
Peter Bieringer
ad94acddf1 update changelog 2025-01-02 23:19:58 +01:00
Peter Bieringer
2442a794ae tox fixes 2025-01-02 23:17:34 +01:00
Peter Bieringer
a9f2e6fe7b improve code/adjustments 2025-01-03 07:14:32 +01:00
Peter Bieringer
5a00baab3f cosmetics 2025-01-03 07:11:51 +01:00
Peter Bieringer
cf914450ee remove obsolete code and comment as constant execution time is now done by __init__.py 2025-01-03 07:02:29 +01:00
Peter Bieringer
0d43a49ffb add variable sleep to have a constant execution time on failed login 2025-01-02 22:33:54 +01:00
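The "variable sleep" commit above describes a timing-side-channel mitigation: pad a failed login so it always takes a fixed wall-clock time, no matter how quickly the hash check rejected. A sketch of the idea (the one-second budget is an illustrative value, not Radicale's actual setting):

```python
import time


def remaining_delay(elapsed: float, budget: float = 1.0) -> float:
    # Whatever is left of the fixed time budget; never negative.
    return max(0.0, budget - elapsed)


def constant_time_reject(started_at: float, budget: float = 1.0) -> None:
    # Sleep out the rest of the budget so fast and slow rejections
    # are indistinguishable from the client's point of view.
    time.sleep(remaining_delay(time.monotonic() - started_at, budget))
```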
Peter Bieringer
234be74b87
Merge pull request #1666 from marschap/LDAPauth-patches
LDAP auth patches - thank you!
2025-01-02 21:11:32 +00:00
Peter Bieringer
45f2a4cc0e
Merge pull request #1667 from jackwilsdon/fix-ipv6-test
Fix test failing on systems without IPv6 support - thank you very much!
2025-01-02 21:05:45 +00:00
Jack Wilsdon
532fad9ba6 Fix test failing on systems without IPv6 support 2025-01-02 12:18:53 +00:00
Peter Marschall
99f5ec389d LDAP auth: introduce config option 'ldap_user_attribute'
This option gives us
- flexible authentication options where the name used for logging on
  does not have to be the account name
  e.g. use ldap_filter = (&(objectclass=inetOrgPerson)(|(cn={0})(mail={0})))
  to allow logging on using the cn or the mail address
- automatically consistent / canonicalized username values
  (i.e. exactly the way the LDAP server returns them)
2025-01-02 12:05:39 +01:00
Peter Marschall
0253682c00 LDAP auth: do not blindly assume groups have a 2-letter naming attribute
Instead, strip away everything before (and including) the '=' sign of the RDN.
2025-01-02 12:05:39 +01:00
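The RDN fix above replaces a hard-coded assumption (a two-letter naming attribute like `cn`) with generic string handling: take the first RDN of the group DN and drop everything up to and including the `=`. A later commit (d6c4e6487a) moves this to proper LDAP helper methods; this string version only mirrors the interim fix described here:

```python
def group_name_from_dn(dn: str) -> str:
    # First RDN of the DN, e.g. "cn=admins" from
    # "cn=admins,ou=groups,dc=example,dc=org".
    rdn = dn.split(",", 1)[0]
    # Strip the attribute name and '=' regardless of its length.
    return rdn.split("=", 1)[1]
```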
Peter Marschall
8c2feb4726 LDAP auth: escape values used in LDAP filters to avoid possible injection of malicious code. 2025-01-02 12:05:39 +01:00
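Escaping values interpolated into LDAP search filters (commit 8c2feb4726) prevents a crafted login name from altering the filter's structure. A minimal RFC 4515-style escaping sketch — real code should prefer a library helper such as `ldap3.utils.conv.escape_filter_chars`; this standalone version covers only the special characters listed in the RFC:

```python
def escape_filter_value(value: str) -> str:
    out = []
    for ch in value:
        if ch in "\\*()\0":
            # RFC 4515: special characters become backslash + two hex digits.
            out.append("\\%02x" % ord(ch))
        else:
            out.append(ch)
    return "".join(out)


# An attacker-supplied login can no longer break out of the filter:
# "(&(objectClass=person)(uid=%s))" % escape_filter_value("*)(uid=*")
```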
Peter Marschall
c243ae4ebf LDAP auth: require exactly one result when searching for the LDAP user DN
This makes sure we fail securely when the query returns multiple entries.

- correct grammar in some cases
- we're doing authentication here, not authorization
- uppercase LDAP in messages & comments
- rename variable _ldap_version to _ldap_module_version
  to avoid misunderstanding it as LDAP's protocol version
- align formatting & messages better between _login2() and _login3()
2025-01-02 12:05:39 +01:00
Peter Marschall
6f82333ff7 LDAP auth: harmonize _login2() and _login3() methods 2025-01-02 12:05:32 +01:00
Peter Bieringer
6f0ac545f0 code fix 2025-01-02 08:08:22 +01:00
Peter Bieringer
70c4a34eb8 fix/extend changelog 2025-01-01 17:36:33 +01:00
Peter Bieringer
3763f28ae4 tox fixes 2025-01-01 17:36:15 +01:00
Peter Bieringer
0a5ae5b0b4 extend startup logging for htpasswd 2025-01-01 17:31:16 +01:00
Peter Bieringer
5d48ba5d1e add test cases 2025-01-01 17:28:09 +01:00
Peter Bieringer
5a591b6471 use different token 2025-01-01 16:41:11 +01:00
Peter Bieringer
8604dacad0 fix typing 2025-01-01 16:40:55 +01:00
Peter Bieringer
ca665c4849 add a dummy delay action 2025-01-01 16:32:07 +01:00
Peter Bieringer
8fdbd0dbf6 log cosmetics 2025-01-01 16:31:47 +01:00
Peter Bieringer
46fe98f60b make htpasswd cache optional 2025-01-01 16:31:31 +01:00
Peter Bieringer
c10ce7ae46 add support for login info log 2025-01-01 16:30:34 +01:00
Peter Bieringer
6ebca08423 extend copyright 2025-01-01 15:47:22 +01:00
Peter Bieringer
c1be04abd1 fixes suggested by tox 2024-12-31 18:26:43 +01:00
Peter Bieringer
c00ab76c83 [auth] htpasswd: module 'bcrypt' is no longer mandatory in case digest method not used in file / changelog 2024-12-31 17:09:29 +01:00
Peter Bieringer
5357e692d9 [auth] htpasswd: module 'bcrypt' is no longer mandatory in case digest method not used in file 2024-12-31 17:09:21 +01:00
Peter Bieringer
9cac3008b7 extend changelog 2024-12-31 16:15:51 +01:00
Peter Bieringer
2489356dda implement htpasswd file caching 2024-12-31 16:14:38 +01:00
Peter Bieringer
5ce0cee8bf add cache cleanup and locking 2024-12-31 16:13:52 +01:00
Peter Bieringer
79ba07e16b change default cache times 2024-12-31 16:13:05 +01:00
Peter Bieringer
c0acbd4402 update changelog 2024-12-31 08:12:49 +01:00
Peter Bieringer
b75e303556 reorg code, disable caching on not required types 2024-12-31 08:11:19 +01:00
Peter Bieringer
a794a51885 fix failed_login cache, improve coding 2024-12-31 07:57:54 +01:00
Peter Bieringer
4f2990342d add additional debug line 2024-12-31 07:57:13 +01:00
Peter Bieringer
ac8abbd12c 3.3.4.dev 2024-12-30 08:15:55 +01:00
Peter Bieringer
9af15e6656 fixes triggered by tox 2024-12-30 05:25:10 +01:00
Peter Bieringer
30e2ab490e cache_logins+htpasswd 2024-12-30 08:19:20 +01:00
Peter Bieringer
ddd099accd debug log which password hash method was used 2024-12-30 08:17:59 +01:00
Peter Bieringer
8e97b709bf implement cache_logins* option 2024-12-30 08:17:59 +01:00
Peter Bieringer
74311560c9 add cache_logins* options 2024-12-30 08:17:59 +01:00
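The `cache_logins*` commits above introduce caching of login results with separate lifetimes for successful and failed attempts, so expensive hash verifications (e.g. bcrypt) are not repeated on every request. A toy sketch of the idea — Radicale additionally hashes the cached credentials; this version stores them as-is and is for illustration only, with TTL values chosen arbitrarily:

```python
import time


class LoginCache:
    def __init__(self, ttl_success: float = 15.0, ttl_failure: float = 90.0):
        # Failed attempts are typically cached longer to slow brute force.
        self._ttl = {True: ttl_success, False: ttl_failure}
        self._entries = {}  # (user, password) -> (ok, stored_at)

    def get(self, user, password, now=None):
        now = time.monotonic() if now is None else now
        entry = self._entries.get((user, password))
        if entry is None:
            return None
        ok, stored_at = entry
        if now - stored_at > self._ttl[ok]:
            del self._entries[(user, password)]  # expired
            return None
        return ok

    def put(self, user, password, ok, now=None):
        now = time.monotonic() if now is None else now
        self._entries[(user, password)] = (ok, now)
```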
Peter Marschall
b22038c746 LDAP auth: a little bit of cleanup
- correct grammar in some cases
- we're doing authentication here, not authorization
- uppercase LDAP in messages & comments
- rename variable _ldap_version to _ldap_module_version
  to avoid misunderstanding it as LDAP's protocol version
2024-12-29 17:36:01 +01:00
Peter Bieringer
c2b2274dad update release 2024-12-28 08:05:39 +01:00
Peter Bieringer
2674f9a382 enhance and fix logwatch 2024-12-28 07:56:10 +01:00
Peter Bieringer
51960bcab8 extend doc related to config options used 2024-12-27 09:04:47 +01:00
Peter Bieringer
a5dd4d8a7d
Merge pull request #1665 from itglob/master
Disable overloading BaseAuth login method
2024-12-26 16:19:15 +00:00
IM
94898ef6c1 flake8 E302 2024-12-25 22:28:01 +03:00
IM
7df2fb35a7 Disable overloading BaseAuth login method 2024-12-25 21:56:04 +03:00
Peter Bieringer
a4266c9690
Merge pull request #1664 from pbiering/issue-1133
Fix for Issue 1133
2024-12-25 10:36:08 +00:00
Peter Bieringer
1e8d9eda50 fix found by mypy 2024-12-24 12:10:47 +01:00
Peter Bieringer
0b00218d75 log precondition result on PUT request / changelog 2024-12-24 12:04:09 +01:00
Peter Bieringer
7e23c603c1 log precondition result on PUT request 2024-12-24 12:04:05 +01:00
Peter Bieringer
6569e481df
Merge pull request #1663 from pbiering/logwatch
Contrib: logwatch config and script
2024-12-24 07:29:46 +00:00
Peter Bieringer
b19418f43c update 2024-12-24 08:25:31 +01:00
Peter Bieringer
e2934a12c0 Contrib: logwatch config and script 2024-12-24 08:24:13 +01:00
Peter Bieringer
c8010fa4be fix https://github.com/Kozea/Radicale/issues/1647 2024-12-23 07:07:43 +01:00
Peter Bieringer
b784f476b4
Merge pull request #1660 from pbiering/show-mtime-info-by-default
Show mtime info by default
2024-12-18 21:34:28 +00:00
Peter Bieringer
335584a6b7 make tox happy 2024-12-18 22:28:02 +01:00
Peter Bieringer
9e9d036387 always display mtime result 2024-12-18 22:18:38 +01:00
Peter Bieringer
006c2d2bc0
Merge pull request #1659 from pbiering/suppress-duplicate-log-on-startup
Improve: suppress duplicate log lines on startup
2024-12-18 20:43:07 +00:00
Peter Bieringer
b356edd6be Improve: suppress duplicate log lines on startup 2024-12-18 20:51:33 +01:00
Peter Bieringer
59450e8c2d add additional ReadWritePaths entry, fix existing one 2024-12-18 20:14:56 +01:00
Peter Bieringer
1a76e1ad50 cosmetics 2024-12-18 19:40:32 +01:00
Peter Bieringer
6ebe9aee76
Merge pull request #1657 from pbiering/storage-verify-nosync-cache-precision
Storage verify nosync / show cache precision
2024-12-16 20:03:25 +00:00
Peter Bieringer
6214111f4f make tox happy 2024-12-16 20:58:59 +01:00
Peter Bieringer
0f6dcb7192 disable fsync during storage verification 2024-12-16 20:43:10 +01:00
Peter Bieringer
4b1183ae00 disable fsync during storage verification 2024-12-16 20:43:10 +01:00
Peter Bieringer
c1c8ab2887 remove test code 2024-12-16 20:43:10 +01:00
Peter Bieringer
836827ac8f remove test code 2024-12-16 20:43:10 +01:00
Peter Bieringer
3d4cd7f034 Add: display mtime_ns precision of storage folder with conditional warning if too low 2024-12-16 20:43:06 +01:00
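The mtime_ns precision check above matters because Radicale's cache invalidation compares modification times: on a filesystem with coarse timestamp resolution (e.g. one second), two quick successive changes can share an mtime and a stale cache goes undetected. A sketch of one way to probe this, counting trailing zero digits of `st_mtime_ns` (the probe approach and helper name are assumptions, not Radicale's actual implementation):

```python
import os
import tempfile


def mtime_ns_precision_digits(mtime_ns: int) -> int:
    # Count trailing zero decimal digits:
    # 0 suggests nanosecond precision, 9 suggests one-second precision.
    digits = 0
    while mtime_ns and mtime_ns % 10 == 0:
        mtime_ns //= 10
        digits += 1
    return digits


# Probe the filesystem hosting the temp directory:
with tempfile.NamedTemporaryFile() as f:
    os.utime(f.name)
    print("trailing zeros:", mtime_ns_precision_digits(os.stat(f.name).st_mtime_ns))
```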
Peter Bieringer
a606477e3f update for 3.3.2 2024-12-15 13:08:59 +01:00
Peter Bieringer
c33e96c5a3
Merge pull request #1655 from pbiering/item-cache-mtime-size
Item cache mtime size option
2024-12-15 11:59:33 +00:00
Peter Bieringer
dc51a74e5a add test case 2024-12-15 12:51:02 +01:00
Peter Bieringer
5f79b089c8 fix option name 2024-12-15 12:21:39 +01:00
Peter Bieringer
fc7c50b4cb add note about storage verification 2024-12-15 12:20:24 +01:00
Peter Bieringer
11dad85404 fix types (mypy) 2024-12-15 11:45:38 +01:00
Peter Bieringer
dc20f518dd item-cache-mtime-size: changelog 2024-12-15 11:41:08 +01:00
Peter Bieringer
62bdfeab40 item-cache-mtime-size: feature 2024-12-15 11:40:58 +01:00
Peter Bieringer
ff3f2fc3de item-cache-mtime-size new option doc 2024-12-15 11:40:20 +01:00
Peter Bieringer
4bb00e6070 item-cache-mtime-size: add new option 2024-12-15 11:40:02 +01:00
Peter Bieringer
b7ae6b378b
Merge pull request #1654 from pbiering/set-prodid-on-collection-upload
Set prodid on collection upload
2024-12-15 07:43:49 +00:00
Peter Bieringer
7597c7d4a5 make tox happy 2024-12-15 08:37:35 +01:00
Peter Bieringer
855e983ae2 Fix: set PRODID on collection upload (instead of vobject is inserting default one) 2024-12-15 08:29:53 +01:00
Peter Bieringer
0a5773a844 Extend copyright 2024-12-15 08:29:09 +01:00
Peter Bieringer
f1d007a51e Fix: set PRODID on collection upload 2024-12-15 08:28:37 +01:00
Peter Bieringer
4d04c85f2d
Merge pull request #1653 from pbiering/collection-cache-logging
Collection cache logging
2024-12-14 16:07:11 +00:00
Peter Bieringer
f7d6f6442f make tox happy 2024-12-14 17:02:31 +01:00
Peter Bieringer
a7ce8f032c Add: option [debug] storage_cache_action for conditional logging 2024-12-14 16:49:54 +01:00
Peter Bieringer
05b8172f8f
Merge pull request #1652 from pbiering/auth-uc-username-support
Add: option [auth] uc_username
2024-12-14 08:33:10 +00:00
Peter Bieringer
3ebe51a4cb Add: option [auth] uc_username for uppercase conversion (similar to existing lc_username) 2024-12-14 09:25:36 +01:00
Peter Bieringer
0d29de6db9
Merge pull request #1651 from pbiering/show_ldap_config_on_startup
Show ldap config on startup
2024-12-14 08:24:16 +00:00
Peter Bieringer
886f4ee8d0 make tox happy 2024-12-14 09:09:36 +01:00
Peter Bieringer
46acbfd987 Improve: auth.ldap config shown on startup, terminate in case no password is supplied for bind user 2024-12-14 09:04:15 +01:00
Peter Bieringer
0e0592e3b8 extend copyright 2024-12-14 09:02:36 +01:00
Peter Bieringer
be5eab8671
Merge pull request #1650 from pbiering/no-cache-invalidation-on-upgrade
No cache invalidation on upgrade
2024-12-14 07:42:28 +00:00
Peter Bieringer
9787f87cc7 make tox happy 2024-12-14 08:36:03 +01:00
Peter Bieringer
1e318c81cf fix copyright 2024-12-14 08:22:51 +01:00
Peter Bieringer
119cefce34 Improve: log important module versions on startup 2024-12-14 08:22:22 +01:00
Peter Bieringer
3983b5c887 Improve: avoid automatically invalid cache on upgrade in case no change on cache structure 2024-12-14 08:21:54 +01:00
Peter Bieringer
778f56cc4d
Merge pull request #1648 from pbiering/cache-extension-umask-fixes
Cache extension umask fixes
2024-12-10 08:11:04 +00:00
Peter Bieringer
2bb2d6385b default for filesystem_cache_folder is filesystem_folder 2024-12-10 08:52:51 +01:00
Peter Bieringer
b3d0c16407 fix code 2024-12-10 08:52:31 +01:00
Peter Bieringer
e1ee3d4529 also remove 'item' from cache on delete 2024-12-10 08:26:32 +01:00
Peter Bieringer
644548c866 rename function 2024-12-10 08:25:14 +01:00
Peter Bieringer
05d4e91856 implement umask feature 2024-12-10 08:24:41 +01:00
Peter Bieringer
99b6889d91 implement new options 2024-12-10 08:24:12 +01:00
Peter Bieringer
2d8903dc44 add new options 2024-12-10 08:23:32 +01:00
Peter Bieringer
5681b45298 update with latest changes 2024-12-10 08:23:00 +01:00
Peter Bieringer
5515d1e790 fix typos 2024-12-08 15:34:33 +01:00
Peter Bieringer
eef33f76d1
Merge pull request #1646 from TownCube/fix-issue-1611
Fix 'Replacement index 0 out of range for positional args tuple'
2024-12-08 08:51:27 +00:00
Peter Bieringer
05c349a15f
Update DOCUMENTATION.md
fix typo
2024-12-08 09:46:50 +01:00
TownCube
916c9db3c8 Skip group collection match when groups are not used 2024-12-07 18:24:29 +00:00
Peter Bieringer
ff5fae1663
Merge pull request #1644 from pbiering/fix-python3.14
Fixes for Python 3.14
2024-12-06 05:17:15 +00:00
Peter Bieringer
d9e15dd7c6 fix for https://github.com/Kozea/Radicale/issues/1516 2024-12-06 06:13:50 +01:00
Peter Bieringer
675c5ce8cf
Merge pull request #1643 from pbiering/fix-issue-1635
ignore RRULESET if empty in item
2024-12-05 07:20:40 +00:00
Peter Bieringer
b85c0758d8 extend copyright 2024-12-05 08:14:21 +01:00
Peter Bieringer
3232b34392 fix-issue-1635: add test case 2024-12-05 08:14:06 +01:00
Peter Bieringer
873bf80131 make flake8 happy 2024-12-05 08:03:00 +01:00
Peter Bieringer
38c236aa02 fix-issue-1635: code 2024-12-05 07:54:52 +01:00
Peter Bieringer
f725ee780f fix-issue-1635: update copyright 2024-12-05 07:54:32 +01:00
Peter Bieringer
804170a4d5 fix-issue-1635: extend changelog 2024-12-05 07:54:03 +01:00
Peter Bieringer
2a5b12e21c update version to next dev 2024-12-04 06:48:43 +01:00
Peter Bieringer
6943eb659f
Merge pull request #1642 from pbiering/storage-cache-separation
Storage cache separation
2024-12-03 21:21:13 +00:00
Peter Bieringer
24f5f9b98e make flake8 happy 2024-12-03 21:42:50 +01:00
Peter Bieringer
edd6d0a513 use_cache_subfolder_for_item: add test case 2024-12-03 21:34:11 +01:00
Peter Bieringer
92ce13e348 update copyrights 2024-12-03 21:34:00 +01:00
Peter Bieringer
0fe53e62db use_cache_subfolder_for_item: feature 2024-12-03 21:32:57 +01:00
Peter Bieringer
f754f28518 use_cache_subfolder_for_item: config option 2024-12-03 21:31:57 +01:00
Peter Bieringer
1d241d9e2f use_cache_subfolder_for_item: config 2024-12-03 21:31:28 +01:00
Peter Bieringer
d6bacc9047 use_cache_subfolder_for_item: doc 2024-12-03 21:31:12 +01:00
Peter Bieringer
43466078e7 use_cache_subfolder_for_item: extend changelog 2024-12-03 21:29:57 +01:00
Peter Bieringer
8f80e0eb92 update copyright 2024-12-03 21:20:44 +01:00
Peter Bieringer
a54fb10e17 Fix: debug logging in rights/from_file 2024-12-03 21:19:12 +01:00
Peter Bieringer
166d4ed27b
Merge pull request #1641 from claman/patch-1
Update DOCUMENTATION.md
2024-12-02 21:00:45 +00:00
Alex Claman
2c234b97d1
Update DOCUMENTATION.md
remove unnecessary call for `proxy_ssl_trusted_certificate` in example SSL-enabled reverse proxy config
2024-12-02 15:51:39 -05:00
Peter Bieringer
64acfe27f4 add reference to donation page 2024-11-27 06:06:44 +01:00
Peter Bieringer
48bab4b033 update version 2024-11-24 19:02:54 +01:00
Peter Bieringer
e07a248451 log if destination directory is not a collection 2024-11-24 18:53:00 +01:00
Peter Bieringer
62e6aad2d2 align logging of path 2024-11-24 18:30:59 +01:00
Peter Bieringer
37f7df2786 remove not supported option 2024-11-24 17:57:47 +01:00
Peter Bieringer
f26facba3e cosmetics related to logging doc 2024-11-24 17:10:07 +01:00
Peter Bieringer
4696d252f4 extend changelog 2024-11-24 16:56:00 +01:00
Peter Bieringer
287c0e7171
Merge pull request #1631 from pbiering/improve-hook
Improve storage hook
2024-11-24 15:50:57 +00:00
Peter Bieringer
fbb6b1684a replace eol URL 2024-11-24 16:46:19 +01:00
Peter Bieringer
82064f823a add doc hint from https://github.com/Kozea/Radicale/pull/913 2024-11-24 16:39:40 +01:00
Peter Bieringer
19f5aa0edd extend doc as partially suggested by https://github.com/Kozea/Radicale/pull/914 2024-11-24 16:37:35 +01:00
Peter Bieringer
92e5032278 fix result code 2024-11-24 16:30:13 +01:00
Peter Bieringer
6fa15dae4a extend hook doc in config 2024-11-24 16:29:48 +01:00
Peter Bieringer
5b64ef9fe7 extend hook doc 2024-11-24 16:29:14 +01:00
Peter Bieringer
69780dd0ee adjust test: verify that a request succeeded if the hook still fails (anyhow no rollback implemented) 2024-11-24 15:53:53 +01:00
Peter Bieringer
4781b48a1c review storage hook git part 2024-11-23 21:35:58 +01:00
Peter Bieringer
6f2c1037d5 catch errors during execution of hook, do not raise exception but log error 2024-11-23 21:34:07 +01:00
Peter Bieringer
e4daddc186
Merge pull request #1627 from pbiering/remove-unused-dateutil-references
remove unused dateutil references
2024-11-21 07:09:10 +00:00
Peter Bieringer
f7e46ebf39 change from 3.13.0.beta to final 2024-11-21 08:05:35 +01:00
Peter Bieringer
1ea782e3b2 drop test on pytest-3.8, anyhow EOL since 2024-10 2024-11-21 08:02:18 +01:00
Peter Bieringer
c13e0e60fd remove unused dateutil references https://github.com/Kozea/Radicale/issues/1626 2024-11-21 07:51:20 +01:00
Peter Bieringer
8fea1f907e
Merge pull request #1625 from Fang-/doc-need-ssl
warn of possible client-side SSL requirement
2024-11-18 06:27:18 +00:00
fang
a6b1e000e7
warn of possible client-side SSL requirement
MacOS's Calendar.app may not send auth credentials when connecting to a CalDAV
server over unsecured HTTP. Attempting to set up an account that way will give
error messages about authentication, even though the credentials the user
entered are correct.

Here, we point out this case in the documentation, prompting users to try
enabling SSL if they encounter this.
2024-11-17 14:25:55 +01:00
Peter Bieringer
a64f0e1093 update related to nginx/apache proxy configs 2024-11-17 07:12:20 +01:00
Peter Bieringer
18e8ab1ccc add X-Forwarded-Proto 2024-11-17 07:08:52 +01:00
Peter Bieringer
7b0d3ed29d simplify X-Forwarded-Proto 2024-11-17 07:08:23 +01:00
Peter Bieringer
0baf67147e add X-Forwarded-Host/Port 2024-11-14 19:25:08 +01:00
Peter Bieringer
0f9bf4c063
Merge pull request #1624 from pbiering/ssl-config
SSL socket protocol + ciphersuite option
2024-11-14 06:43:19 +00:00
Peter Bieringer
df5ca97442 update changelog 2024-11-14 07:38:03 +01:00
Peter Bieringer
416081a81f review, calculate also max TLS version 2024-11-14 07:38:03 +01:00
Peter Bieringer
07b7d28323 extend with OpenSSL hint 2024-11-14 07:38:03 +01:00
Peter Bieringer
5380629bda extend doc for SSL protocol/ciphersuite 2024-11-14 07:38:03 +01:00
Peter Bieringer
243b888c8e fix unsupported log level 2024-11-14 07:38:03 +01:00
Peter Bieringer
9ecb95ce37 feedback from isort 2024-11-14 07:38:03 +01:00
Peter Bieringer
6929f3d0b3 ignore: E261 at least two spaces before inline comment 2024-11-14 07:38:03 +01:00
Peter Bieringer
00dac0c030 add logging for ssl cert/key and cafile 2024-11-14 07:38:03 +01:00
Peter Bieringer
fb904320d2 add support for ssl protocol and ciphersuite 2024-11-14 07:38:03 +01:00
Peter Bieringer
1d07d72946 update 2024-11-14 07:28:57 +01:00
Peter Bieringer
d7840b8bff
Merge pull request #1622 from pieterhijma/expand-with-timezone
Expand taking timezone into account
2024-11-12 06:04:08 +00:00
Pieter Hijma
cfc1e94ad8 Expand taking timezone into account 2024-11-08 15:59:34 +01:00
Peter Bieringer
bf77844d34
Merge pull request #1619 from pieterhijma/expand-overridden
Expand overridden
2024-11-08 07:36:54 +00:00
Pieter Hijma
6a6fec5bdd Refactor test_expand 2024-11-07 21:02:19 +01:00
Pieter Hijma
b0d1ccc0f6 Factor expand tests out of base 2024-11-07 21:01:56 +01:00
Pieter Hijma
2d5dc5186b Expand overridden recurring events 2024-11-07 21:01:11 +01:00
Peter Bieringer
36ef753b0e
Merge pull request #1617 from pieterhijma/honor-start-end-expand
Honor start end expand
2024-11-07 17:00:01 +00:00
Pieter Hijma
74f4412761 Honor start and end times expand 2024-11-07 15:36:16 +01:00
Pieter Hijma
ae274911d5 Fix timezone in test file 2024-11-07 15:36:16 +01:00
Peter Bieringer
1ee93f32b2
Merge pull request #1613 from bishtawi/bishtawi/ldap-password-file
Support loading ldap secret from file
2024-11-05 08:57:22 +00:00
Bishtawi
ee2af306d7 Support loading ldap secret from file 2024-11-05 00:35:36 -08:00
Peter Bieringer
687624a403 fix spelling 2024-11-02 13:23:41 +01:00
Peter Bieringer
19cca41a43 update changelog related to auth/type=dovecot 2024-11-01 21:17:40 +01:00
Peter Bieringer
56c375fca2
Merge pull request #1610 from sysnux/master
Rebase galaxy4public patch on top of bf4f5834
2024-10-31 21:08:12 +01:00
Jean-Denis Girard
a1b8c65def Document Dovecot auth 2024-10-31 09:29:03 -10:00
Jean-Denis Girard
c6cc7f3486 Skip Dovecot auth tests on Windows (try again...) 2024-10-31 09:28:35 -10:00
Jean-Denis Girard
652e768650 Skip Dovecot auth tests on Windows 2024-10-31 06:36:47 -10:00
Jean-Denis Girard
f25a5fbc79 Rebase galaxy4public patch on top of bf4f5834 2024-10-30 10:33:10 -10:00
Peter Bieringer
bf4f5834af
Merge pull request #1609 from pbiering/improve-1186
log content in case of multiple main components error
2024-10-29 07:24:12 +01:00
Peter Bieringer
f7c731e189 changelog: log content in case of multiple main components error 2024-10-29 07:19:54 +01:00
Peter Bieringer
059afef35d log content in case of multiple main components error 2024-10-29 07:19:45 +01:00
Peter Bieringer
e0c04f2ae3 update version to 3.3.1.dev 2024-10-29 07:17:31 +01:00
Peter Bieringer
5cafd29d7f initial nginx config file 2024-10-18 08:23:16 +02:00
Peter Bieringer
0badab86a6
Merge pull request #1595 from individual-it/fixCasing
fix casing in docker file
2024-10-15 13:49:19 +02:00
Artur Neumann
b6fa3c47c3
fix casing in docker file 2024-10-15 14:50:08 +05:45
Peter Bieringer
c63d00a550 update minimum python version 2024-10-15 08:25:46 +02:00
Peter Bieringer
8bfed78926 pin ubuntu version to 22.04 (try to fix https://github.com/Kozea/Radicale/issues/1594) 2024-10-15 08:14:54 +02:00
Peter Bieringer
1670e4a793 release 3.3.0 2024-10-13 17:58:37 +02:00
Peter Bieringer
8e9fdf391a
Merge pull request #1593 from pbiering/legacy
Keep legacy setup.* files for packaging for EL8/EL9

relates to https://github.com/Kozea/Radicale/pull/1546
2024-10-12 20:04:31 +02:00
Peter Bieringer
ccddf877ee add legacy setup.* files, required for RPM build on EL8/EL9 2024-10-12 19:59:56 +02:00
Peter Bieringer
48e4203856 add legacy setup.* files, required for RPM build on EL8/EL9 2024-10-12 19:58:58 +02:00
Peter Bieringer
bd001fe1d5 add missing parts 2024-10-12 07:40:05 +02:00
Peter Bieringer
dbc939aff2 cosmetic additional new lines 2024-10-12 07:34:23 +02:00
Peter Bieringer
5ec34ed163 add lost option 2024-10-12 07:34:08 +02:00
Peter Bieringer
372e62bb54
Merge pull request #1592 from pbiering/fix-issue-1591
Fix: only expand VEVENT on REPORT request
2024-10-12 07:29:20 +02:00
Peter Bieringer
59c638461b Fix: only expand VEVENT on REPORT request 2024-10-12 07:25:29 +02:00
Peter Bieringer
a8baea9b19
Merge pull request #1589 from pbiering/fix-webcal-xml-url-encoding-issue-1582
Fix webcal xml url encoding issue 1582
2024-10-08 08:12:22 +02:00
Peter Bieringer
c438ccb215 fix #1852 2024-10-08 08:06:06 +02:00
Peter Bieringer
d7c09e218f extend copyright 2024-10-08 08:05:52 +02:00
Peter Bieringer
37148b7124
Merge pull request #1546 from deronnax/migrate-to-pyproject-toml
Migrate to pyproject.toml
2024-10-07 20:32:54 +02:00
Mathieu Dupuy
2c15b1b8f4
native toml config for tox 2024-10-07 10:45:17 +02:00
Mathieu Dupuy
6e103b9c7e
migrate setup.cfg to pyproject.toml 2024-10-07 10:09:12 +02:00
Peter Bieringer
a78e32de4d
Merge pull request #1553 from deronnax/pyupgrade-py38
pyupgrade --py38-plus
2024-10-05 22:55:36 +02:00
Mathieu Dupuy
9faf89880b
migrate setup.py to setup.cfg 2024-10-04 10:40:02 +02:00
Mathieu Dupuy
a01e53616e
empty commit 2024-10-04 10:11:41 +02:00
Peter Bieringer
e59e4d3aff
Merge pull request #1585 from pbiering/add-permit_overwrite_collection
Add option permit_overwrite_collection
2024-09-30 21:52:13 +02:00
Peter Bieringer
67362189f5 fix according to latest default change 2024-09-30 21:45:43 +02:00
Peter Bieringer
ba9776d688 change default, remove leftover 2024-09-30 21:43:50 +02:00
Peter Bieringer
0505b7b603 update changelog permit_overwrite_collection 2024-09-30 21:35:11 +02:00
Peter Bieringer
eed6bcee01 add collection rights for dedicated test cases 2024-09-30 21:33:29 +02:00
Peter Bieringer
110ee9d247 document new permissions 2024-09-30 21:28:42 +02:00
Peter Bieringer
457af284e1 whitelist new permissions 2024-09-30 21:28:29 +02:00
Peter Bieringer
e0594d5b33 permit_overwrite_collection doc + rights + test cases 2024-09-30 21:26:24 +02:00
Peter Bieringer
d41aa60d61 cosmetics 2024-09-30 21:26:24 +02:00
Peter Bieringer
973b26b2e9 add new option rights/permit_overwrite_collection 2024-09-30 21:26:24 +02:00
Peter Bieringer
bfe0ccc463
Merge pull request #1584 from pbiering/change-default-permit_delete_collection
permit_delete_collection per collection control
2024-09-30 21:25:58 +02:00
Peter Bieringer
77749cbbb9 update changelog 2024-09-30 21:21:33 +02:00
Peter Bieringer
fc77cf9d66 revert 0f87897e 2024-09-30 21:16:36 +02:00
Peter Bieringer
06a9cf2886 extend whitelisted permission chars 2024-09-30 21:15:10 +02:00
Peter Bieringer
53bc6167d3 add support for dedicated forbid/permit permission 2024-09-30 21:14:39 +02:00
Peter Bieringer
72e4c4fadd add explicit permit/forbid test cases 2024-09-30 21:14:29 +02:00
Peter Bieringer
3e478ee6da update doc 2024-09-30 21:14:14 +02:00
Peter Bieringer
0ab99d4e8f update doc related to changed default 2024-09-29 18:16:54 +02:00
Peter Bieringer
4ef5cad20f add test case 2024-09-29 18:10:53 +02:00
Peter Bieringer
a449d8774b enforce default for tests 2024-09-29 18:10:29 +02:00
Peter Bieringer
0f87897eb7 change default rights/permit_delete_collection from True to False (failsafe) 2024-09-29 17:44:44 +02:00
Peter Bieringer
40c8b3d038 log in case delete of collection is prevented 2024-09-29 17:44:21 +02:00
Peter Bieringer
d15e836079 extend copyright 2024-09-29 17:44:05 +02:00
Peter Bieringer
fce3f0b1df update related to auth/type=ldap 2024-09-29 17:29:28 +02:00
Peter Bieringer
499b37fd2f
Merge pull request #1579 from petervarkoly/master
Implementing ssl connection for ldap auth
2024-09-23 19:42:08 +02:00
Dipl. Ing. Péter Varkoly
e887b06d21 Fix syntax 2024-09-23 15:49:58 +02:00
Dipl. Ing. Péter Varkoly
b1c682de57 Enhance documentation.
Fix imports
2024-09-23 15:46:08 +02:00
Dipl. Ing. Péter Varkoly
c000408429 Merge remote-tracking branch 'upstream/master' 2024-09-23 13:52:02 +02:00
Dipl. Ing. Péter Varkoly
0feca04086 Implementing ssl connection for ldap auth 2024-09-23 10:19:50 +02:00
Peter Bieringer
fdb014d068
Merge pull request #1576 from petervarkoly/master
Implementing group calendars and increase perfomance
2024-09-22 20:42:05 +02:00
Dipl. Ing. Péter Varkoly
ccb59444c3 Remove trailing whitespaces and unsused import. 2024-09-22 19:01:09 +02:00
Dipl. Ing. Péter Varkoly
97479190e8 Adapt imports. 2024-09-22 18:57:48 +02:00
Dipl. Ing. Péter Varkoly
d1ceb620e4 Adapt function template discovery to the implementation 2024-09-22 18:38:21 +02:00
Dipl. Ing. Péter Varkoly
040a433696 Merge branch 'master' of github.com:petervarkoly/Radicale 2024-09-22 18:15:43 +02:00
Dipl. Ing. Péter Varkoly
187886e797 Merge remote-tracking branch 'upstream/master' 2024-09-22 18:15:22 +02:00
Dipl Ing. Péter Varkoly
3cb9b73a16
Merge branch 'Kozea:master' into master 2024-09-22 18:12:44 +02:00
Dipl. Ing. Péter Varkoly
a272d3039e Implement using group calendars.
Based on the LDAP groups the user is a member of, group calendar usage is implemented.
The group calendars must be placed in the GROUPS directory under collection_root_folder.
The name of a group calendar directory is the base64-encoded group name, to avoid trouble with spaces and special characters in names.
If the directory does not exist, the group is ignored.
2024-09-22 16:56:53 +02:00
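The directory-name mapping described in the commit above can be sketched as follows. This is an illustrative sketch only: the function name is hypothetical, and Radicale's exact base64 variant (standard vs. URL-safe) is an assumption.

```python
import base64

def group_collection_dirname(group_name: str) -> str:
    # Encode the group name so spaces and special characters become
    # filesystem-safe (sketch; the exact encoding variant used by
    # Radicale may differ).
    return base64.b64encode(group_name.encode("utf-8")).decode("ascii")

print(group_collection_dirname("My Group"))  # TXkgR3JvdXA=
```

A collection for the LDAP group "My Group" would then live in `GROUPS/TXkgR3JvdXA=` under the collection root.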
Dipl. Ing. Péter Varkoly
98c5ffdc87 Increase performance: open and parse the rights file only at startup.
Handle rights sections without a user.
2024-09-21 18:39:39 +02:00
Dipl. Ing. Péter Varkoly
9945a9f65a Enhance comments.
Remove duplicate entry
2024-09-21 18:37:04 +02:00
Peter Bieringer
7fbc0e70e9
Merge pull request #1570 from petervarkoly/master
Fix issue #197
2024-09-17 20:28:40 +02:00
Dipl. Ing. Péter Varkoly
15ed41fa09 Merge remote-tracking branch 'origin/master' 2024-09-17 09:35:27 +02:00
Dipl. Ing. Péter Varkoly
a92a621b9b Merge remote-tracking branch 'upstream/master' 2024-09-17 09:34:53 +02:00
Dipl. Ing. Péter Varkoly
645619bac8 Fix format string 2024-09-17 09:33:31 +02:00
Dipl Ing. Péter Varkoly
b30cdbbabf
Merge branch 'Kozea:master' into master 2024-09-17 09:26:14 +02:00
Dipl. Ing. Péter Varkoly
b081b3ea06 Fix issue #197 [ERROR] An exception occurred during GET request on '/.web/': string indices must be integers, not 'str' when using LDAP
Enhance logging
2024-09-17 09:25:38 +02:00
Peter Bieringer
da844f48e6
Merge pull request #1218 from petervarkoly/master
Initial version of ldap authentication plugin.
2024-09-13 18:13:15 +02:00
Dipl. Ing. Péter Varkoly
a7f33c8795 Reorder imports. 2024-09-12 12:17:34 +02:00
Dipl. Ing. Péter Varkoly
b47c76e9ca Fix definition of _user_groups in rights 2024-09-12 00:59:26 +02:00
Dipl. Ing. Péter Varkoly
da04d95b75 Fixing type definition error. 2024-09-11 14:13:06 +02:00
Dipl. Ing. Péter Varkoly
e05fbeb950 Apply suggestions of mypy 2024-09-11 09:13:26 +02:00
Dipl. Ing. Péter Varkoly
d75b071fec Fix the problems found by flake8. 2024-09-11 08:12:08 +02:00
Dipl. Ing. Péter Varkoly
5cb16a3a2d Fix syntax 2024-09-09 09:42:30 +02:00
Dipl. Ing. Péter Varkoly
606bd30514 Rebase 2024-09-05 10:44:28 +02:00
Peter Bieringer
6a78466af4
Merge pull request #1562 from pbiering/fix-1561-md5-no-longer-default
Adjustment: option [auth] htpasswd_encryption change default from "md5" to "autodetect"
2024-09-01 17:25:00 +02:00
Peter Bieringer
c63dee71ec Adjustment: option [auth] htpasswd_encryption change default from "md5" to "autodetect" 2024-09-01 17:19:53 +02:00
Peter Bieringer
b1ce69882c revert to 3.dev 2024-09-01 17:13:08 +02:00
Peter Bieringer
e70486900d Version 3.2.3 2024-08-30 06:19:51 +02:00
Peter Bieringer
368c43137a
Merge pull request #1559 from pbiering/fixes-3.2.2
Fixes for 3.2.2
2024-08-28 09:13:23 +02:00
Peter Bieringer
3f62982e1d fix 2024-08-28 09:00:41 +02:00
Peter Bieringer
a79c2ad83e align option name 2024-08-28 08:59:32 +02:00
Peter Bieringer
6d11738243 fix/enhance Apache template for file authentication 2024-08-28 08:39:16 +02:00
Peter Bieringer
e852c887d7 Enhancement: add option to toggle debug log of right with doesn't match 2024-08-28 08:03:16 +02:00
Peter Bieringer
107fe1bc53 fix missing newline at the end 2024-08-28 08:02:54 +02:00
Peter Bieringer
4f1e8ce889 add overseen conditional request_content debug log 2024-08-28 07:49:48 +02:00
Peter Bieringer
39662fc680 fix config section info 2024-08-28 07:48:45 +02:00
Peter Bieringer
336972316e add missing entries 2024-08-27 21:44:57 +02:00
Peter Bieringer
7da46f392e fix logger.warn->warning 2024-08-27 21:34:52 +02:00
Dipl. Ing. Péter Varkoly
d7fa90a976 Adapt based on additional comments. 2024-08-27 17:06:11 +02:00
Dipl. Ing. Péter Varkoly
13d56f0918 Adapt based on additional comments. 2024-08-27 17:04:15 +02:00
Dipl. Ing. Péter Varkoly
8b8d7729a2 Now ldap auth can use ldap and ldap3 also. 2024-08-26 14:16:40 +02:00
Dipl. Ing. Péter Varkoly
5167f12624 Rebase rights/from_file.py.
Apply proposed/asked changes.
2024-08-26 11:21:53 +02:00
Dipl. Ing. Péter Varkoly
19e5972b4f Fix merge conflicts. 2024-08-25 14:11:48 +02:00
Peter Bieringer
bd66d58540
Merge pull request #1337 from react0r-com/react0r
Add basic free/busy reporting
2024-08-18 07:27:10 +02:00
Ray
408a03a3c0 Better freebusy occurrence limit handling and added documentation 2024-08-16 14:57:30 -06:00
ray-react0r
3cba4b32a3
Merge branch 'master' into react0r 2024-08-15 15:07:49 -06:00
Ray
906d391fe3 Drop VERSION change and move changelog inline with releases 2024-08-15 15:03:31 -06:00
Ray
d6c0a05771 Style fixes for tox linting 2024-08-14 11:15:30 -06:00
Ray
29b7cd8d54 Fix for free-busy fbtype statuses 2024-08-14 05:43:53 -06:00
Ray
204623d656 Package housekeeping 2024-08-14 05:43:53 -06:00
Ray
b0f131cac2 Improve free-busy report 2024-08-14 05:43:53 -06:00
Ray
4c1d295e81 Fix bug in free busy report serializing a datetime tzinfo 2024-08-14 05:43:53 -06:00
Ray
7b0d88ff0d Add basic free-busy report 2024-08-14 05:43:53 -06:00
Peter Bieringer
2d0496b888
Merge pull request #1554 from henning-schild/henning/staging0
hook: gracefully ignore non functional hooks and fall back to none
2024-08-07 08:09:58 +02:00
Henning Schild
773f09fe74 hook: gracefully ignore non functional hooks and fall back to none
In case a hook fails to load for some reason, fall back to the default
hook "none" and treat errors as warnings in the log.

This will gracefully ignore typos in hook names without crashing the
server, and it will also allow configuration of "rabbitmq" where i.e.
"pika" is missing.

Closes: #1490
Signed-off-by: Henning Schild <henning@hennsch.de>
2024-08-06 19:39:37 +02:00
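The graceful-fallback behavior described in the commit above can be sketched like this. The function name and the `registry` mapping are illustrative assumptions, not Radicale's actual hook-loading API.

```python
import logging

def load_hook(name, registry):
    # On any failure loading the configured hook (typo in the name,
    # missing dependency such as "pika" for rabbitmq), log a warning
    # and fall back to the "none" hook instead of crashing the server.
    try:
        return registry[name]()
    except Exception as exc:
        logging.warning("hook %r not functional (%s), falling back to 'none'",
                        name, exc)
        return registry["none"]()
```

With this shape, a misconfigured `hook = rabbitmqq` degrades to a no-op hook with a warning in the log.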
Mathieu Dupuy
34b449f27f
chore: pyupgrade --py38-plus 2024-08-06 13:49:23 +02:00
Peter Bieringer
45f0b8809b
Merge pull request #1517 from pbiering/python-3.13
Python 3.13
2024-08-04 08:39:22 +02:00
Peter Bieringer
7388a095f5 remove unused requirement "typeguard" 2024-08-04 08:34:15 +02:00
Peter Bieringer
5ffaf6e837
Merge pull request #1552 from pbiering/remove-typeguard
remove unused requirement "typeguard"
2024-08-04 08:32:09 +02:00
Peter Bieringer
0f505222d9 remove unused requirement "typeguard" 2024-08-04 08:23:11 +02:00
Peter Bieringer
b1cf1f2e28 add Python 3.13 2024-07-25 16:07:41 +02:00
Peter Bieringer
01d4851581 add Python 3.13 2024-07-25 16:07:39 +02:00
Peter Bieringer
5019a3e974
Merge pull request #1549 from pbiering/fix-logger-warn-leftovers
fix logger-warn-leftovers
2024-07-25 15:54:17 +02:00
Peter Bieringer
c046c6ae34 fix logger-warn-leftovers 2024-07-25 15:48:24 +02:00
Peter Bieringer
897a679c1c
Merge pull request #1548 from deronnax/split-tox-jobs
split tox jobs
2024-07-25 15:30:44 +02:00
Mathieu Dupuy
b47a253ccb
pin flake, isort and mypy versions 2024-07-25 14:00:22 +02:00
Mathieu Dupuy
c499c313c2
split tox jobs 2024-07-25 00:22:11 +02:00
Peter Bieringer
1dceaf5385
Merge pull request #1547 from deronnax/fix-misspellings
fix misspellings
2024-07-24 15:40:09 +02:00
Mathieu Dupuy
47bc966a13
fix misspellings 2024-07-24 12:29:13 +02:00
Peter Bieringer
61be51e9f3
Merge pull request #1545 from deronnax/remove-setuptools-from-runtime-deps
remove setuptools from runtime dependencies
2024-07-23 18:13:19 +02:00
Mathieu Dupuy
e5096d31af
remove setuptools from runtime dependencies 2024-07-23 17:40:07 +02:00
Peter Bieringer
e5e80ebbe6
Merge pull request #1544 from pbiering/option-strip-domain-name-from-login
Option strip domain name from login
2024-07-18 06:56:25 +02:00
Peter Bieringer
13b1aaed39 add auth/strip_domain option 2024-07-18 06:50:29 +02:00
Peter Bieringer
f117fd06af add missing test for auth/lc_username 2024-07-18 06:49:10 +02:00
Peter Bieringer
055489f79c
Merge pull request #1541 from mldytech/master
Modified Reverse-Proxy Examples for Caddy
2024-07-12 17:59:54 +02:00
mldytech
53befe72db Modified Reverse-Proxy Examples for Caddy:
- Moved existing Caddy example with basicauth to the right section
- Added basic Caddy example without using basicauth
2024-07-12 10:29:35 +02:00
Peter Bieringer
7fd7ec7f7a
Merge pull request #1538 from pbiering/workflow-test
Workflow test
2024-07-12 05:54:43 +02:00
Peter Bieringer
9809fbcba4 pin workflow to python 3.12.3 as 3.12.4 causes issues 2024-07-12 05:41:13 +02:00
Peter Bieringer
fe3d9d3f48 update 2024-07-12 05:22:13 +02:00
Peter Bieringer
bb112784fd
Merge pull request #1537 from willsowerbutts/master
Remove unexpected control codes from ICS files
2024-07-12 05:18:20 +02:00
Will Sowerbutts
f1d84cea35 Remove unexpected control codes from ICS files 2024-07-11 13:38:48 +01:00
Peter Bieringer
fe33d79eb1 fix item level 2024-07-04 09:37:25 +02:00
Peter Bieringer
dd8b62eef5 continue with dev 2024-06-18 20:24:59 +02:00
Peter Bieringer
3094bc3936 fix version for release 2024-06-18 20:24:12 +02:00
Peter Varkoly
b0f8d37294 Use the built-in intersection function of set to make the code more readable. 2022-02-24 10:45:45 +01:00
Dipl. Ing. Péter Varkoly
c5b5910de4 Adapt base configuration to use with cranix-server. Only the certificate must be adapted 2022-02-22 11:35:46 +01:00
Peter Varkoly
8d19fd7a64 Now rights can be add to user groups too. 2022-02-21 17:15:21 +01:00
Dipl. Ing. Péter Varkoly
eda8309a04 Implementing group based collection matching.
Optimize rights evaluation.
2022-02-21 08:36:10 +01:00
Peter Varkoly
2dc0fd29dc Initial version of ldap authentication backend. 2022-02-19 11:57:58 +01:00
82 changed files with 4984 additions and 890 deletions

.github/FUNDING.yml vendored Normal file

@@ -0,0 +1 @@
custom: https://github.com/Kozea/Radicale/wiki/Donations


@@ -6,10 +6,8 @@ jobs:
strategy:
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: ['3.8', '3.9', '3.10', '3.11', '3.12', pypy-3.8, pypy-3.9]
python-version: ['3.9', '3.10', '3.11', '3.12.3', '3.13.0', pypy-3.9]
exclude:
- os: windows-latest
python-version: pypy-3.8
- os: windows-latest
python-version: pypy-3.9
runs-on: ${{ matrix.os }}
@@ -21,7 +19,7 @@ jobs:
- name: Install Test dependencies
run: pip install tox
- name: Test
run: tox
run: tox -e py
- name: Install Coveralls
if: github.event_name == 'push'
run: pip install coveralls
@@ -46,3 +44,15 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: coveralls --service=github --finish
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Install tox
run: pip install tox
- name: Lint
run: tox -e flake8,mypy,isort


@@ -1,6 +1,106 @@
# Changelog
## 3.dev
## 3.5.1.dev
* Fix: auth/htpasswd related to detection and use of bcrypt
* Add: option [auth] ldap_ignore_attribute_create_modify_timestamp for support of Authentik LDAP server
* Extend: [storage] hook now supports placeholders for "cwd" and "path" (and catches unsupported placeholders)
* Fix: location of lock file in case a dedicated cache folder is activated
* Extend: log and create missing base folders during startup
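The placeholder handling mentioned for the storage hook can be sketched as below. The placeholder names and `str.format` syntax are assumptions for illustration; Radicale's actual hook template syntax may differ.

```python
def expand_hook(template: str, user: str, cwd: str, path: str) -> str:
    # Expand the placeholders a hook command may contain; an
    # unsupported placeholder raises KeyError, which is turned into
    # a clear error instead of crashing the request handler.
    # (Sketch only, not Radicale's actual implementation.)
    try:
        return template.format(user=user, cwd=cwd, path=path)
    except (KeyError, IndexError) as exc:
        raise RuntimeError(f"unsupported placeholder in hook command: {exc}")
```

For example, `expand_hook("git -C {cwd} add {path}", "bob", "/data", "col/item.ics")` yields a runnable command line, while `{nope}` is rejected with a descriptive error.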
## 3.5.0
* Add: option [auth] type oauth2 by code migration from https://gitlab.mim-libre.fr/alphabet/radicale_oauth/-/blob/dev/oauth2/
* Fix: catch OS errors on PUT MKCOL MKCALENDAR MOVE PROPPATCH (insufficient storage, access denied, internal server error)
* Test: skip bcrypt related tests if module is missing
* Improve: relax mtime check on storage filesystem, change test file location to "collection-root" directory
* Add: option [auth] type pam by code migration from v1, add new option pam_service
* Cosmetics: extend list of used modules with their version on startup
* Improve: WebUI
* Add: option [server] script_name for reverse proxy base_prefix handling
* Fix: proper base_prefix stripping if running behind reverse proxy
* Review: Apache reverse proxy config example
* Add: on-the-fly link activation and default content adjustment in case of bundled InfCloud (tested with 0.13.1)
* Adjust: [auth] imap: use AUTHENTICATE PLAIN instead of LOGIN towards remote IMAP server
* Improve: log client IP on SSL error and SSL protocol+cipher if successful
* Improve: catch htpasswd hash verification errors
* Improve: add support for more bcrypt algos on autodetection, extend logging for autodetection fallback to PLAIN in case the hash length does not match
* Add: warning in case started standalone, not listening on a loopback interface, but trusting external authentication
* Adjust: Change default [auth] type from "none" to "denyall" for secure-by-default
## 3.4.1
* Add: option [auth] dovecot_connection_type / dovecot_host / dovecot_port
* Add: option [auth] type imap by code migration from https://github.com/Unrud/RadicaleIMAP/
## 3.4.0
* Add: option [auth] cache_logins/cache_successful_logins_expiry/cache_failed_logins for caching logins
* Improve: [auth] log used hash method and result on debug for htpasswd authentication
* Improve: [auth] htpasswd file now read and verified on start
* Add: option [auth] htpasswd_cache for automatic re-read triggered on change (mtime or size) instead of reading on each request
* Improve: [auth] htpasswd: module 'bcrypt' is no longer mandatory in case digest method not used in file
* Improve: [auth] successful/failed login logs now type and whether result was taken from cache
* Improve: [auth] constant execution time for failed logins, independent of the external backend or the digest method used by htpasswd
* Drop: support for Python 3.8
* Add: option [auth] ldap_user_attribute
* Add: option [auth] ldap_groups_attribute as a more flexible replacement of removed ldap_load_groups
## 3.3.3
* Add: display mtime_ns precision of storage folder with conditional warning if too low
* Improve: disable fsync during storage verification
* Improve: suppress duplicate log lines on startup
* Contrib: logwatch config and script
* Improve: log precondition result on PUT request
## 3.3.2
* Fix: debug logging in rights/from_file
* Add: option [storage] use_cache_subfolder_for_item for storing 'item' cache outside collection-root
* Fix: ignore empty RRULESET in item
* Add: option [storage] filesystem_cache_folder for defining location of cache outside collection-root
* Add: option [storage] use_cache_subfolder_for_history for storing 'history' cache outside collection-root
* Add: option [storage] use_cache_subfolder_for_synctoken for storing 'sync-token' cache outside collection-root
* Add: option [storage] folder_umask for configuration of umask (overwrite system-default)
* Fix: also remove 'item' from cache on delete
* Improve: avoid automatic cache invalidation on upgrade if the cache structure has not changed
* Improve: log important module versions on startup
* Improve: auth.ldap config shown on startup, terminate in case no password is supplied for bind user
* Add: option [auth] uc_username for uppercase conversion (similar to existing lc_username)
* Add: option [logging] storage_cache_action_on_debug for conditional logging
* Fix: set PRODID on collection upload (instead of vobject inserting a default one)
* Add: option [storage] use_mtime_and_size_for_item_cache for changing cache lookup from SHA256 to mtime_ns + size
* Fix: buggy cache file content creation on collection upload
## 3.3.1
* Add: option [auth] type=dovecot
* Enhancement: log content in case of multiple main components error
* Fix: expand does not take timezones into account
* Fix: expand does not support overridden recurring events
* Fix: expand does not honor start and end times
* Add: option [server] protocol + ciphersuite for optional restrictions on SSL socket
* Enhancement: [storage] hook documentation, logging, error behavior (no longer throwing an exception)
## 3.3.0
* Adjustment: option [auth] htpasswd_encryption change default from "md5" to "autodetect"
* Add: option [auth] type=ldap with (group) rights management via LDAP/LDAPS
* Enhancement: permit_delete_collection can now also be controlled per collection by rights 'D' or 'd'
* Add: option [rights] permit_overwrite_collection (default=True) which can be also controlled per collection by rights 'O' or 'o'
* Fix: only expand VEVENT on REPORT request containing 'expand'
* Adjustment: switch from setup.py to pyproject.toml (but keep files for legacy packaging)
* Adjustment: 'rights' file is now read only during startup
* Cleanup: Python 3.7 leftovers
## 3.2.3
* Add: support for Python 3.13
* Fix: Using icalendar's tzinfo on created datetime to fix issue with icalendar
* Fix: typos in code
* Enhancement: Added free-busy report
* Enhancement: Added `max_freebusy_occurrences` setting to avoid potential DoS on reports
* Enhancement: remove unexpected control codes from uploaded items
* Enhancement: add 'strip_domain' setting for username handling
* Enhancement: add option to toggle debug log of rights rules which don't match
* Drop: remove unused requirement "typeguard"
* Improve: Refactored some date parsing code
## 3.2.2
* Enhancement: add support for auth.type=denyall (will be default for security reasons in upcoming releases)

File diff suppressed because it is too large

@@ -1,11 +1,11 @@
# This file is intended to be used apart from the containing source code tree.
FROM python:3-alpine as builder
FROM python:3-alpine AS builder
# Version of Radicale (e.g. v3)
ARG VERSION=master
# Optional dependencies (e.g. bcrypt)
# Optional dependencies (e.g. bcrypt or ldap)
ARG DEPENDENCIES=bcrypt
RUN apk add --no-cache --virtual gcc libffi-dev musl-dev \


@@ -1,6 +1,6 @@
FROM python:3-alpine as builder
FROM python:3-alpine AS builder
# Optional dependencies (e.g. bcrypt)
# Optional dependencies (e.g. bcrypt or ldap)
ARG DEPENDENCIES=bcrypt
COPY . /app

config

@@ -40,6 +40,15 @@
# TCP traffic between Radicale and a reverse proxy
#certificate_authority =
# SSL protocol, secure configuration: ALL -SSLv3 -TLSv1 -TLSv1.1
#protocol = (default)
# SSL ciphersuite, secure configuration: DHE:ECDHE:-NULL:-SHA (see also "man openssl-ciphers")
#ciphersuite = (default)
# script name to strip from URI if called by reverse proxy
#script_name = (default taken from HTTP_X_SCRIPT_NAME or SCRIPT_NAME)
[encoding]
@@ -53,8 +62,83 @@
[auth]
# Authentication method
# Value: none | htpasswd | remote_user | http_x_remote_user | denyall
#type = none
# Value: none | htpasswd | remote_user | http_x_remote_user | dovecot | ldap | oauth2 | pam | denyall
#type = denyall
# Cache logins until expiration time
#cache_logins = false
# Expiration time for caching successful logins in seconds
#cache_successful_logins_expiry = 15
# Expiration time for caching failed logins in seconds
#cache_failed_logins_expiry = 90
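The three cache options above can be illustrated with a minimal sketch of a login cache that keeps successful and failed results for different expiry times (class and method names are illustrative, not Radicale's internals):

```python
import time

class LoginCache:
    """Minimal sketch of a login cache with separate expiry times for
    successful and failed results (illustrative, not Radicale's code)."""

    def __init__(self, success_expiry=15, failure_expiry=90):
        self._success_expiry = success_expiry  # cache_successful_logins_expiry
        self._failure_expiry = failure_expiry  # cache_failed_logins_expiry
        self._entries = {}  # login -> (result, timestamp)

    def get(self, login, now=None):
        """Return True/False for a cached result, None on miss or expiry."""
        now = time.monotonic() if now is None else now
        entry = self._entries.get(login)
        if entry is None:
            return None
        result, stamp = entry
        expiry = self._success_expiry if result else self._failure_expiry
        if now - stamp > expiry:
            del self._entries[login]
            return None
        return result

    def put(self, login, result, now=None):
        now = time.monotonic() if now is None else now
        self._entries[login] = (result, now)
```

A failed login is cached longer (90 s) than a successful one (15 s), so repeated bad credentials keep hitting the cache instead of the backend.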
# Ignore modifyTimestamp and createTimestamp attributes. Required e.g. for Authentik LDAP server
#ldap_ignore_attribute_create_modify_timestamp = false
# URI to the LDAP server
#ldap_uri = ldap://localhost
# The base DN where the user accounts have to be searched
#ldap_base = ##BASE_DN##
# The reader DN of the LDAP server
#ldap_reader_dn = CN=ldapreader,CN=Users,##BASE_DN##
# Password of the reader DN
#ldap_secret = ldapreader-secret
# Path of the file containing password of the reader DN
#ldap_secret_file = /run/secrets/ldap_password
# the attribute to read the group memberships from in the user's LDAP entry (default: not set)
#ldap_groups_attribute = memberOf
# The filter to find the DN of the user. This filter must contain a python-style placeholder for the login
#ldap_filter = (&(objectClass=person)(uid={0}))
# the attribute holding the value to be used as username after authentication
#ldap_user_attribute = cn
# Use ssl on the ldap connection
#ldap_use_ssl = False
# The certificate verification mode. NONE, OPTIONAL, default is REQUIRED
#ldap_ssl_verify_mode = REQUIRED
# The path to the CA file in pem format which is used to certificate the server certificate
#ldap_ssl_ca_file =
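The python-style placeholder in ldap_filter above is filled with the login name; a sketch of that substitution with RFC 4515 escaping (the escape helper is hand-rolled here for illustration; the ldap3 library ships a comparable one):

```python
def escape_filter_value(value: str) -> str:
    """RFC 4515 escaping for LDAP filter values (backslash first)."""
    for char, repl in (("\\", r"\5c"), ("*", r"\2a"),
                       ("(", r"\28"), (")", r"\29"), ("\0", r"\00")):
        value = value.replace(char, repl)
    return value

def build_ldap_filter(template: str, login: str) -> str:
    """Fill the python-style {0} placeholder from ldap_filter."""
    return template.format(escape_filter_value(login))
```

Escaping matters because an unescaped login such as `a)(uid=*` would otherwise change the filter's structure.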
# Connection type for dovecot authentication (AF_UNIX|AF_INET|AF_INET6)
# Note: credentials are transmitted in cleartext
#dovecot_connection_type = AF_UNIX
# The path to the Dovecot client authentication socket (e.g. /run/dovecot/auth-client on Fedora). Radicale must have read / write access to the socket.
#dovecot_socket = /var/run/dovecot/auth-client
# Host of the network-exposed dovecot socket
#dovecot_host = localhost
# Port of the network-exposed dovecot socket
#dovecot_port = 12345
# IMAP server hostname
# Syntax: address | address:port | [address]:port | imap.server.tld
#imap_host = localhost
# Secure the IMAP connection
# Value: tls | starttls | none
#imap_security = tls
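The imap_host syntax above admits a bare address, address:port, or a bracketed IPv6 [address]:port; parsing it can be sketched as follows (helper name and default port 143 are assumptions for illustration):

```python
def parse_imap_host(value: str, default_port: int = 143):
    """Split an imap_host value into (address, port).
    Supports 'host', 'host:port' and '[ipv6]:port' forms."""
    if value.startswith("["):
        # bracketed IPv6 literal, optionally followed by ':port'
        address, _, rest = value[1:].partition("]")
        if rest.startswith(":"):
            return address, int(rest[1:])
        return address, default_port
    address, sep, port = value.rpartition(":")
    if sep and port.isdigit():
        return address, int(port)
    return value, default_port
```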
# OAuth2 token endpoint URL
#oauth2_token_endpoint = <URL>
# PAM service
#pam_service = radicale
# PAM group user should be member of
#pam_group_membership =
# Htpasswd filename
#htpasswd_filename = /etc/radicale/users
@@ -62,7 +146,10 @@
# Htpasswd encryption method
# Value: plain | bcrypt | md5 | sha256 | sha512 | autodetect
# bcrypt requires the installation of 'bcrypt' module.
#htpasswd_encryption = md5
#htpasswd_encryption = autodetect
# Enable caching of htpasswd file based on size and mtime_ns
#htpasswd_cache = False
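htpasswd_cache re-reads the file only when its size or mtime_ns changes; the check can be sketched as (helper name is illustrative, not Radicale's actual API):

```python
import os

def htpasswd_changed(path, cached_stat):
    """Return (changed, snapshot): `changed` says whether a cached
    htpasswd file must be re-read, based on size and mtime_ns."""
    st = os.stat(path)
    current = (st.st_size, st.st_mtime_ns)
    return current != cached_stat, current
```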
# Incorrect authentication delay (seconds)
#delay = 1
@@ -73,11 +160,14 @@
# Convert username to lowercase, must be true for case-insensitive auth providers
#lc_username = False
# Strip domain name from username
#strip_domain = False
[rights]
# Rights backend
# Value: none | authenticated | owner_only | owner_write | from_file
# Value: authenticated | owner_only | owner_write | from_file
#type = owner_only
# File for rights management from_file
@@ -86,6 +176,9 @@
# Permit delete of a collection (global)
#permit_delete_collection = True
# Permit overwrite of a collection (global)
#permit_overwrite_collection = True
[storage]
@@ -96,14 +189,47 @@
# Folder for storing local collections, created if not present
#filesystem_folder = /var/lib/radicale/collections
# Folder for storing cache of local collections, created if not present
# Note: only used if use_cache_subfolder_* options are active
# Note: can be used on multi-instance setup to cache files on local node (see below)
#filesystem_cache_folder = (filesystem_folder)
# Use subfolder 'collection-cache' for 'item' cache file structure instead of inside collection folder
# Note: can be used on multi-instance setup to cache 'item' on local node
#use_cache_subfolder_for_item = False
# Use subfolder 'collection-cache' for 'history' cache file structure instead of inside collection folder
# Note: use only on single-instance setup, will break consistency with client in multi-instance setup
#use_cache_subfolder_for_history = False
# Use subfolder 'collection-cache' for 'sync-token' cache file structure instead of inside collection folder
# Note: use only on single-instance setup, will break consistency with client in multi-instance setup
#use_cache_subfolder_for_synctoken = False
# Use last modification time (nanoseconds) and size (bytes) for 'item' cache instead of SHA256 (improves speed)
# Note: check used filesystem mtime precision before enabling
# Note: conversion is done on access, bulk conversion can be done offline using storage verification option: radicale --verify-storage
#use_mtime_and_size_for_item_cache = False
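The two cache-key strategies above, SHA256 over the content versus mtime_ns + size, can be sketched as (function name is illustrative, not Radicale's actual API):

```python
import hashlib
import os

def item_cache_key(path, use_mtime_and_size=False):
    """Sketch of the two item-cache-key strategies: SHA256 over the
    content (default) versus mtime_ns + size (faster, but requires
    sufficient filesystem mtime precision)."""
    if use_mtime_and_size:
        st = os.stat(path)
        return "mtime=%d;size=%d" % (st.st_mtime_ns, st.st_size)
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()
```

The stat-based key avoids reading and hashing the whole item, which is why it improves speed on large collections.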
# Use configured umask for folder creation (not applicable for OS Windows)
# Useful value: 0077 | 0027 | 0007 | 0022
#folder_umask = (system default, usually 0022)
# Delete sync tokens that are older (seconds)
#max_sync_token_age = 2592000
# Skip broken item instead of triggering an exception
#skip_broken_item = True
# Command that is run after changes to storage
# Example: ([ -d .git ] || git init) && git add -A && (git diff --cached --quiet || git commit -m "Changes by \"%(user)s\"")
# Command that is run after changes to storage, default is empty
# Supported placeholders:
# %(user)s: logged-in user
# %(cwd)s : current working directory
# %(path)s: full path of item
# Command will be executed with base directory defined in filesystem_folder
# For "git" check DOCUMENTATION.md for bootstrap instructions
# Example(test): echo \"user=%(user)s path=%(path)s cwd=%(cwd)s\"
# Example(git): git add -A && (git diff --cached --quiet || git commit -m "Changes by \"%(user)s\"")
#hook =
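Expanding the %(user)s, %(cwd)s and %(path)s placeholders, and rejecting an unsupported one, can be sketched with Python %-formatting (function name is illustrative, not Radicale's actual API):

```python
def expand_hook_command(command, user, cwd, path):
    """Sketch of expanding the supported %(user)s / %(cwd)s / %(path)s
    placeholders and rejecting unsupported ones."""
    values = {"user": user, "cwd": cwd, "path": path}
    try:
        # %-formatting raises KeyError for an unknown %(name)s placeholder
        # and ValueError for a malformed one
        return command % values
    except (KeyError, ValueError) as e:
        raise RuntimeError("unsupported placeholder in hook command: %s" % e)
```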
# Create predefined user collections
@@ -156,12 +282,18 @@
# Log response content on level=debug
#response_content_on_debug = False
# Log rights rule which doesn't match on level=debug
#rights_rule_doesnt_match_on_debug = False
# Log storage cache actions on level=debug
#storage_cache_actions_on_debug = False
[headers]
# Additional HTTP headers
#Access-Control-Allow-Origin = *
[hook]
# Hook types
@@ -170,3 +302,10 @@
#rabbitmq_endpoint =
#rabbitmq_topic =
#rabbitmq_queue_type = classic
[reporting]
# When returning a free-busy report, limit the number of returned
# occurrences per event to prevent DoS attacks.
#max_freebusy_occurrence = 10000
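Capping the expansion at max_freebusy_occurrence can be sketched as (function name is illustrative, not Radicale's actual API):

```python
import itertools

def limited_occurrences(occurrences, max_occurrences=10000):
    """Sketch: stop consuming a (possibly unbounded) recurrence
    iterator after max_occurrences items, guarding free-busy reports
    against DoS via huge expansions."""
    return list(itertools.islice(occurrences, max_occurrences))
```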


@@ -4,6 +4,7 @@
## Apache acting as reverse proxy and forward requests via ProxyPass to a running "radicale" server
# SELinux WARNING: To use this correctly, you will need to set:
# setsebool -P httpd_can_network_connect=1
# URI prefix: /radicale
#Define RADICALE_SERVER_REVERSE_PROXY
@@ -11,11 +12,12 @@
# MAY CONFLICT with other WSG servers on same system -> use then inside a VirtualHost
# SELinux WARNING: To use this correctly, you will need to set:
# setsebool -P httpd_can_read_write_radicale=1
# URI prefix: /radicale
#Define RADICALE_SERVER_WSGI
### Extra options
## Apache starting a dedicated VHOST with SSL
## Apache starting a dedicated VHOST with SSL without "/radicale" prefix in URI on port 8443
#Define RADICALE_SERVER_VHOST_SSL
@@ -27,8 +29,13 @@
#Define RADICALE_ENFORCE_SSL
### enable authentication by web server (config: [auth] type = http_x_remote_user)
#Define RADICALE_SERVER_USER_AUTHENTICATION
### Particular configuration EXAMPLES, adjust/extend/override to your needs
##########################
### default host
##########################
@@ -37,33 +44,56 @@
## RADICALE_SERVER_REVERSE_PROXY
<IfDefine RADICALE_SERVER_REVERSE_PROXY>
RewriteEngine On
RewriteRule ^/radicale$ /radicale/ [R,L]
<Location /radicale>
RewriteCond %{REQUEST_METHOD} GET
RewriteRule ^/radicale/$ /radicale/.web/ [R,L]
<LocationMatch "^/radicale/\.web.*">
# Internal WebUI does not need authentication at all
RequestHeader set X-Script-Name /radicale
RequestHeader set X-Forwarded-Port "%{SERVER_PORT}s"
RequestHeader unset X-Forwarded-Proto
<If "%{HTTPS} =~ /on/">
RequestHeader set X-Forwarded-Proto "https"
</If>
RequestHeader set X-Forwarded-Proto expr=%{REQUEST_SCHEME}
ProxyPass http://localhost:5232/ retry=0
ProxyPassReverse http://localhost:5232/
## User authentication handled by "radicale"
Require local
<IfDefine RADICALE_PERMIT_PUBLIC_ACCESS>
Require all granted
</IfDefine>
</LocationMatch>
## You may want to use apache's authentication (config: [auth] type = remote_user)
#AuthBasicProvider file
#AuthType Basic
#AuthName "Enter your credentials"
#AuthUserFile /path/to/httpdfile/
#AuthGroupFile /dev/null
#Require valid-user
<LocationMatch "^/radicale(?!/\.web)">
RequestHeader set X-Script-Name /radicale
RequestHeader set X-Forwarded-Port "%{SERVER_PORT}s"
RequestHeader set X-Forwarded-Proto expr=%{REQUEST_SCHEME}
ProxyPass http://localhost:5232/ retry=0
ProxyPassReverse http://localhost:5232/
<IfDefine !RADICALE_SERVER_USER_AUTHENTICATION>
## User authentication handled by "radicale"
Require local
<IfDefine RADICALE_PERMIT_PUBLIC_ACCESS>
Require all granted
</IfDefine>
</IfDefine>
<IfDefine RADICALE_SERVER_USER_AUTHENTICATION>
## You may want to use apache's authentication (config: [auth] type = http_x_remote_user)
## e.g. create a new file with a testuser: htpasswd -c -B /etc/httpd/conf/htpasswd-radicale testuser
AuthBasicProvider file
AuthType Basic
AuthName "Enter your credentials"
AuthUserFile /etc/httpd/conf/htpasswd-radicale
AuthGroupFile /dev/null
Require valid-user
RequestHeader set X-Remote-User expr=%{REMOTE_USER}
</IfDefine>
<IfDefine RADICALE_ENFORCE_SSL>
<IfModule !ssl_module>
@@ -71,7 +101,7 @@
</IfModule>
SSLRequireSSL
</IfDefine>
</Location>
</LocationMatch>
</IfDefine>
@@ -97,22 +127,38 @@
WSGIScriptAlias /radicale /usr/share/radicale/radicale.wsgi
<Location /radicale>
# Internal WebUI does not need authentication at all
<LocationMatch "^/radicale/\.web.*">
RequestHeader set X-Script-Name /radicale
## User authentication handled by "radicale"
Require local
<IfDefine RADICALE_PERMIT_PUBLIC_ACCESS>
Require all granted
</IfDefine>
</LocationMatch>
## You may want to use apache's authentication (config: [auth] type = remote_user)
#AuthBasicProvider file
#AuthType Basic
#AuthName "Enter your credentials"
#AuthUserFile /path/to/httpdfile/
#AuthGroupFile /dev/null
#Require valid-user
<LocationMatch "^/radicale(?!/\.web)">
RequestHeader set X-Script-Name /radicale
<IfDefine !RADICALE_SERVER_USER_AUTHENTICATION>
## User authentication handled by "radicale"
Require local
<IfDefine RADICALE_PERMIT_PUBLIC_ACCESS>
Require all granted
</IfDefine>
</IfDefine>
<IfDefine RADICALE_SERVER_USER_AUTHENTICATION>
## You may want to use apache's authentication (config: [auth] type = http_x_remote_user)
## e.g. create a new file with a testuser: htpasswd -c -B /etc/httpd/conf/htpasswd-radicale testuser
AuthBasicProvider file
AuthType Basic
AuthName "Enter your credentials"
AuthUserFile /etc/httpd/conf/htpasswd-radicale
AuthGroupFile /dev/null
Require valid-user
RequestHeader set X-Remote-User expr=%{REMOTE_USER}
</IfDefine>
<IfDefine RADICALE_ENFORCE_SSL>
<IfModule !ssl_module>
@@ -120,7 +166,7 @@
</IfModule>
SSLRequireSSL
</IfDefine>
</Location>
</LocationMatch>
</IfModule>
<IfModule !wsgi_module>
Error "RADICALE_SERVER_WSGI selected but wsgi module not loaded/enabled"
@@ -164,29 +210,51 @@ CustomLog logs/ssl_request_log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
## RADICALE_SERVER_REVERSE_PROXY
<IfDefine RADICALE_SERVER_REVERSE_PROXY>
<Location />
RequestHeader set X-Script-Name /
RewriteEngine On
RewriteCond %{REQUEST_METHOD} GET
RewriteRule ^/$ /.web/ [R,L]
<LocationMatch "^/\.web.*">
RequestHeader set X-Forwarded-Port "%{SERVER_PORT}s"
RequestHeader set X-Forwarded-Proto "https"
RequestHeader set X-Forwarded-Proto expr=%{REQUEST_SCHEME}
ProxyPass http://localhost:5232/ retry=0
ProxyPassReverse http://localhost:5232/
## User authentication handled by "radicale"
Require local
<IfDefine RADICALE_PERMIT_PUBLIC_ACCESS>
Require all granted
</IfDefine>
</LocationMatch>
## You may want to use apache's authentication (config: [auth] type = remote_user)
#AuthBasicProvider file
#AuthType Basic
#AuthName "Enter your credentials"
#AuthUserFile /path/to/httpdfile/
#AuthGroupFile /dev/null
#Require valid-user
</Location>
<LocationMatch "^(?!/\.web)">
RequestHeader set X-Forwarded-Port "%{SERVER_PORT}s"
RequestHeader set X-Forwarded-Proto expr=%{REQUEST_SCHEME}
ProxyPass http://localhost:5232/ retry=0
ProxyPassReverse http://localhost:5232/
<IfDefine !RADICALE_SERVER_USER_AUTHENTICATION>
## User authentication handled by "radicale"
Require local
<IfDefine RADICALE_PERMIT_PUBLIC_ACCESS>
Require all granted
</IfDefine>
</IfDefine>
<IfDefine RADICALE_SERVER_USER_AUTHENTICATION>
## You may want to use apache's authentication (config: [auth] type = http_x_remote_user)
## e.g. create a new file with a testuser: htpasswd -c -B /etc/httpd/conf/htpasswd-radicale testuser
AuthBasicProvider file
AuthType Basic
AuthName "Enter your credentials"
AuthUserFile /etc/httpd/conf/htpasswd-radicale
AuthGroupFile /dev/null
Require valid-user
RequestHeader set X-Remote-User expr=%{REMOTE_USER}
</IfDefine>
</LocationMatch>
</IfDefine>
@@ -212,23 +280,27 @@ CustomLog logs/ssl_request_log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
WSGIScriptAlias / /usr/share/radicale/radicale.wsgi
<Location />
RequestHeader set X-Script-Name /
## User authentication handled by "radicale"
Require local
<IfDefine RADICALE_PERMIT_PUBLIC_ACCESS>
Require all granted
<LocationMatch "^/(?!/\.web)">
<IfDefine !RADICALE_SERVER_USER_AUTHENTICATION>
## User authentication handled by "radicale"
Require local
<IfDefine RADICALE_PERMIT_PUBLIC_ACCESS>
Require all granted
</IfDefine>
</IfDefine>
## You may want to use apache's authentication (config: [auth] type = remote_user)
#AuthBasicProvider file
#AuthType Basic
#AuthName "Enter your credentials"
#AuthUserFile /path/to/httpdfile/
#AuthGroupFile /dev/null
#Require valid-user
</Location>
<IfDefine RADICALE_SERVER_USER_AUTHENTICATION>
## You may want to use apache's authentication (config: [auth] type = http_x_remote_user)
## e.g. create a new file with a testuser: htpasswd -c -B /etc/httpd/conf/htpasswd-radicale testuser
AuthBasicProvider file
AuthType Basic
AuthName "Enter your credentials"
AuthUserFile /etc/httpd/conf/htpasswd-radicale
AuthGroupFile /dev/null
Require valid-user
RequestHeader set X-Remote-User expr=%{REMOTE_USER}
</IfDefine>
</LocationMatch>
</IfModule>
<IfModule !wsgi_module>
Error "RADICALE_SERVER_WSGI selected but wsgi module not loaded/enabled"

contrib/logwatch/radicale

@@ -0,0 +1,193 @@
# This file is related to Radicale - CalDAV and CardDAV server
# for logwatch (script)
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# Detail levels
# >= 5: Logins
# >= 10: ResponseTimes
$Detail = $ENV{'LOGWATCH_DETAIL_LEVEL'} || 0;
my %ResponseTimes;
my %Responses;
my %Requests;
my %Logins;
my %Loglevel;
my %OtherEvents;
my %OtherList;
my $sum;
my $length;
sub ResponseTimesMinMaxSum($$) {
my $req = $_[0];
my $time = $_[1];
$ResponseTimes{$req}->{'cnt'}++;
if (! defined $ResponseTimes{$req}->{'min'}) {
$ResponseTimes{$req}->{'min'} = $time;
} elsif ($ResponseTimes{$req}->{'min'} > $time) {
$ResponseTimes{$req}->{'min'} = $time;
}
if (! defined $ResponseTimes{$req}->{'max'}) {
$ResponseTimes{$req}{'max'} = $time;
} elsif ($ResponseTimes{$req}->{'max'} < $time) {
$ResponseTimes{$req}{'max'} = $time;
}
$ResponseTimes{$req}->{'sum'} += $time;
}
sub Sum($) {
my $phash = $_[0];
my $sum = 0;
foreach my $entry (keys %$phash) {
$sum += $phash->{$entry};
}
return $sum;
}
sub MaxLength($) {
my $phash = $_[0];
my $length = 0;
foreach my $entry (keys %$phash) {
$length = length($entry) if (length($entry) > $length);
}
return $length;
}
while (defined($ThisLine = <STDIN>)) {
# count loglevel
if ( $ThisLine =~ /\[(DEBUG|INFO|WARNING|ERROR|CRITICAL)\] /o ) {
$Loglevel{$1}++
}
# parse log for events
if ( $ThisLine =~ /Radicale server ready/o ) {
$OtherEvents{"Radicale server started"}++;
}
elsif ( $ThisLine =~ /Stopping Radicale/o ) {
$OtherEvents{"Radicale server stopped"}++;
}
elsif ( $ThisLine =~ / (\S+) response status/o ) {
my $req = $1;
if ( $ThisLine =~ / \S+ response status for .* with depth '(\d)' in ([0-9.]+) seconds: (\d+)/o ) {
$req .= ":D=" . $1 . ":R=" . $3;
ResponseTimesMinMaxSum($req, $2) if ($Detail >= 10);
} elsif ( $ThisLine =~ / \S+ response status for .* in ([0-9.]+) seconds: (\d+)/ ) {
$req .= ":R=" . $2;
ResponseTimesMinMaxSum($req, $1) if ($Detail >= 10);
}
$Responses{$req}++;
}
elsif ( $ThisLine =~ / (\S+) request for/o ) {
my $req = $1;
if ( $ThisLine =~ / \S+ request for .* with depth '(\d)' received/o ) {
$req .= ":D=" . $1;
}
$Requests{$req}++;
}
elsif ( $ThisLine =~ / (Successful login): '([^']+)'/o ) {
$Logins{$2}++ if ($Detail >= 5);
$OtherEvents{$1}++;
}
elsif ( $ThisLine =~ / (Failed login attempt) /o ) {
$OtherEvents{$1}++;
}
elsif ( $ThisLine =~ /\[(DEBUG|INFO)\] /o ) {
# skip if DEBUG+INFO
}
else {
# Report any unmatched entries...
$ThisLine =~ s/^\[\d+(\/Thread-\d+)?\] //; # remove process/Thread ID
chomp($ThisLine);
$OtherList{$ThisLine}++;
}
}
if ($Started) {
print "\nStatistics:\n";
print " Radicale started: $Started Time(s)\n";
}
if (keys %Loglevel) {
$sum = Sum(\%Loglevel);
print "\n**Loglevel counters**\n";
printf "%-18s | %7s | %5s |\n", "Loglevel", "cnt", "ratio";
print "-" x38 . "\n";
foreach my $level (sort keys %Loglevel) {
printf "%-18s | %7d | %3d%% |\n", $level, $Loglevel{$level}, int(($Loglevel{$level} * 100) / $sum);
}
print "-" x38 . "\n";
printf "%-18s | %7d | %3d%% |\n", "", $sum, 100;
}
if (keys %Requests) {
$sum = Sum(\%Requests);
print "\n**Request counters (D=<depth>)**\n";
printf "%-18s | %7s | %5s |\n", "Request", "cnt", "ratio";
print "-" x38 . "\n";
foreach my $req (sort keys %Requests) {
printf "%-18s | %7d | %3d%% |\n", $req, $Requests{$req}, int(($Requests{$req} * 100) / $sum);
}
print "-" x38 . "\n";
printf "%-18s | %7d | %3d%% |\n", "", $sum, 100;
}
if (keys %Responses) {
$sum = Sum(\%Responses);
print "\n**Response result counters (D=<depth> R=<result>)**\n";
printf "%-18s | %7s | %5s |\n", "Response", "cnt", "ratio";
print "-" x38 . "\n";
foreach my $req (sort keys %Responses) {
printf "%-18s | %7d | %3d%% |\n", $req, $Responses{$req}, int(($Responses{$req} * 100) / $sum);
}
print "-" x38 . "\n";
printf "%-18s | %7d | %3d%% |\n", "", $sum, 100;
}
if (keys %Logins) {
$sum = Sum(\%Logins);
$length = MaxLength(\%Logins);
print "\n**Successful login counters**\n";
printf "%-" . $length . "s | %7s | %5s |\n", "Login", "cnt", "ratio";
print "-" x($length + 20) . "\n";
foreach my $login (sort keys %Logins) {
printf "%-" . $length . "s | %7d | %3d%% |\n", $login, $Logins{$login}, int(($Logins{$login} * 100) / $sum);
}
print "-" x($length + 20) . "\n";
printf "%-" . $length . "s | %7d | %3d%% |\n", "", $sum, 100;
}
if (keys %ResponseTimes) {
print "\n**Response timings (counts, seconds) (D=<depth> R=<result>)**\n";
printf "%-18s | %7s | %7s | %7s | %7s |\n", "Response", "cnt", "min", "max", "avg";
print "-" x60 . "\n";
foreach my $req (sort keys %ResponseTimes) {
printf "%-18s | %7d | %7.3f | %7.3f | %7.3f |\n", $req
, $ResponseTimes{$req}->{'cnt'}
, $ResponseTimes{$req}->{'min'}
, $ResponseTimes{$req}->{'max'}
, $ResponseTimes{$req}->{'sum'} / $ResponseTimes{$req}->{'cnt'};
}
print "-" x60 . "\n";
}
if (keys %OtherEvents) {
print "\n**Other Events**\n";
foreach $ThisOne (sort keys %OtherEvents) {
print "$ThisOne: $OtherEvents{$ThisOne} Time(s)\n";
}
}
if (keys %OtherList) {
print "\n**Unmatched Entries**\n";
foreach $ThisOne (sort keys %OtherList) {
print "$ThisOne: $OtherList{$ThisOne} Time(s)\n";
}
}
exit(0);
# vim: shiftwidth=3 tabstop=3 syntax=perl et smartindent


@@ -0,0 +1,11 @@
# This file is related to Radicale - CalDAV and CardDAV server
# for logwatch (config) - input from journald
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
Title = "Radicale"
LogFile = none
*JournalCtl = "--output=cat --unit=radicale.service"
# vi: shiftwidth=3 tabstop=3 et


@@ -0,0 +1,13 @@
# This file is related to Radicale - CalDAV and CardDAV server
# for logwatch (config) - input from syslog file
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
Title = "Radicale"
LogFile = messages
*OnlyService = radicale
*RemoveHeaders
# vi: shiftwidth=3 tabstop=3 et


@@ -0,0 +1,31 @@
### Proxy Forward to local running "radicale" server
###
### Usual configuration file location: /etc/nginx/default.d/
## "well-known" redirect at least for Apple devices
rewrite ^/.well-known/carddav /radicale/ redirect;
rewrite ^/.well-known/caldav /radicale/ redirect;
## Base URI: /radicale/
location /radicale/ {
proxy_pass http://localhost:5232/;
proxy_set_header X-Script-Name /radicale;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_pass_header Authorization;
}
## Base URI: /
#location / {
# proxy_pass http://localhost:5232/;
# proxy_set_header X-Script-Name /radicale;
# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# proxy_set_header X-Forwarded-Host $host;
# proxy_set_header X-Forwarded-Port $server_port;
# proxy_set_header X-Forwarded-Proto $scheme;
# proxy_set_header Host $http_host;
# proxy_pass_header Authorization;
#}

pyproject.toml

@@ -0,0 +1,128 @@
[project]
name = "Radicale"
# When the version is updated, a new section in the CHANGELOG.md file must be
# added too.
readme = "README.md"
version = "3.5.1.dev"
authors = [{name = "Guillaume Ayoub", email = "guillaume.ayoub@kozea.fr"}, {name = "Unrud", email = "unrud@outlook.com"}, {name = "Peter Bieringer", email = "pb@bieringer.de"}]
license = {text = "GNU GPL v3"}
description = "CalDAV and CardDAV Server"
keywords = ["calendar", "addressbook", "CalDAV", "CardDAV"]
classifiers = [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Environment :: Web Environment",
"Intended Audience :: End Users/Desktop",
"Intended Audience :: Information Technology",
"License :: OSI Approved :: GNU General Public License (GPL)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Office/Business :: Groupware",
]
urls = {Homepage = "https://radicale.org/"}
requires-python = ">=3.9.0"
dependencies = [
"defusedxml",
"passlib",
"vobject>=0.9.6",
"pika>=1.1.0",
"requests",
]
[project.optional-dependencies]
test = ["pytest>=7", "waitress", "bcrypt"]
bcrypt = ["bcrypt"]
ldap = ["ldap3"]
[project.scripts]
radicale = "radicale.__main__:run"
[build-system]
requires = ["setuptools>=61.2"]
build-backend = "setuptools.build_meta"
[tool.tox]
min_version = "4.0"
envlist = ["py", "flake8", "isort", "mypy"]
[tool.tox.env.py]
extras = ["test"]
deps = [
"pytest",
"pytest-cov"
]
commands = [["pytest", "-r", "s", "--cov", "--cov-report=term", "--cov-report=xml", "."]]
[tool.tox.env.flake8]
deps = ["flake8==7.1.0"]
commands = [["flake8", "."]]
skip_install = true
[tool.tox.env.isort]
deps = ["isort==5.13.2"]
commands = [["isort", "--check", "--diff", "."]]
skip_install = true
[tool.tox.env.mypy]
deps = ["mypy==1.11.0"]
commands = [["mypy", "--install-types", "--non-interactive", "."]]
skip_install = true
[tool.setuptools]
platforms = ["Any"]
include-package-data = false
[tool.setuptools.packages.find]
exclude = ["*.tests"] # *.tests.*; tests.*; tests
namespaces = false
[tool.setuptools.package-data]
radicale = [
"web/internal_data/css/icon.png",
"web/internal_data/css/loading.svg",
"web/internal_data/css/logo.svg",
"web/internal_data/css/main.css",
"web/internal_data/css/icons/delete.svg",
"web/internal_data/css/icons/download.svg",
"web/internal_data/css/icons/edit.svg",
"web/internal_data/css/icons/new.svg",
"web/internal_data/css/icons/upload.svg",
"web/internal_data/fn.js",
"web/internal_data/index.html",
"py.typed",
]
[tool.isort]
known_standard_library = "_dummy_thread,_thread,abc,aifc,argparse,array,ast,asynchat,asyncio,asyncore,atexit,audioop,base64,bdb,binascii,binhex,bisect,builtins,bz2,cProfile,calendar,cgi,cgitb,chunk,cmath,cmd,code,codecs,codeop,collections,colorsys,compileall,concurrent,configparser,contextlib,contextvars,copy,copyreg,crypt,csv,ctypes,curses,dataclasses,datetime,dbm,decimal,difflib,dis,distutils,doctest,dummy_threading,email,encodings,ensurepip,enum,errno,faulthandler,fcntl,filecmp,fileinput,fnmatch,formatter,fpectl,fractions,ftplib,functools,gc,getopt,getpass,gettext,glob,grp,gzip,hashlib,heapq,hmac,html,http,imaplib,imghdr,imp,importlib,inspect,io,ipaddress,itertools,json,keyword,lib2to3,linecache,locale,logging,lzma,macpath,mailbox,mailcap,marshal,math,mimetypes,mmap,modulefinder,msilib,msvcrt,multiprocessing,netrc,nis,nntplib,ntpath,numbers,operator,optparse,os,ossaudiodev,parser,pathlib,pdb,pickle,pickletools,pipes,pkgutil,platform,plistlib,poplib,posix,posixpath,pprint,profile,pstats,pty,pwd,py_compile,pyclbr,pydoc,queue,quopri,random,re,readline,reprlib,resource,rlcompleter,runpy,sched,secrets,select,selectors,shelve,shlex,shutil,signal,site,smtpd,smtplib,sndhdr,socket,socketserver,spwd,sqlite3,sre,sre_compile,sre_constants,sre_parse,ssl,stat,statistics,string,stringprep,struct,subprocess,sunau,symbol,symtable,sys,sysconfig,syslog,tabnanny,tarfile,telnetlib,tempfile,termios,test,textwrap,threading,time,timeit,tkinter,token,tokenize,trace,traceback,tracemalloc,tty,turtle,turtledemo,types,typing,unicodedata,unittest,urllib,uu,uuid,venv,warnings,wave,weakref,webbrowser,winreg,winsound,wsgiref,xdrlib,xml,xmlrpc,zipapp,zipfile,zipimport,zlib"
known_third_party = "defusedxml,passlib,pkg_resources,pytest,vobject"
[tool.mypy]
ignore_missing_imports = true
show_error_codes = true
exclude = "(^|/)build($|/)"
[tool.coverage.run]
branch = true
source = ["radicale"]
omit = ["tests/*", "*/tests/*"]
[tool.coverage.report]
# Regexes for lines to exclude from consideration
exclude_lines = [
# Have to re-enable the standard pragma
"pragma: no cover",
# Don't complain if tests don't hit defensive assertion code:
"raise AssertionError",
"raise NotImplementedError",
# Don't complain if non-runnable code isn't run:
"if __name__ == .__main__.:",
]


@@ -3,4 +3,8 @@ Radicale WSGI file (mod_wsgi and uWSGI compliant).
"""
import os
from radicale import application
# set an environment variable
os.environ.setdefault('SERVER_GATEWAY_INTERFACE', 'Web')


@@ -61,7 +61,7 @@ def _get_application_instance(config_path: str, wsgi_errors: types.ErrorStream
if not miss and source != "default config":
default_config_active = False
if default_config_active:
logger.warn("%s", "No config file found/readable - only default config is active")
logger.warning("%s", "No config file found/readable - only default config is active")
_application_instance = Application(configuration)
if _application_config_path != config_path:
raise ValueError("RADICALE_CONFIG must not change: %r != %r" %


@@ -175,7 +175,7 @@ def run() -> None:
default_config_active = False
if default_config_active:
logger.warn("%s", "No config file found/readable - only default config is active")
logger.warning("%s", "No config file found/readable - only default config is active")
if args_ns.verify_storage:
logger.info("Verifying storage")
@@ -183,7 +183,7 @@ def run() -> None:
storage_ = storage.load(configuration)
with storage_.acquire_lock("r"):
if not storage_.verify():
logger.critical("Storage verifcation failed")
logger.critical("Storage verification failed")
sys.exit(1)
except Exception as e:
logger.critical("An exception occurred during storage "

View file

@@ -3,7 +3,7 @@
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -68,8 +68,10 @@ class Application(ApplicationPartDelete, ApplicationPartHead,
_internal_server: bool
_max_content_length: int
_auth_realm: str
_script_name: str
_extra_headers: Mapping[str, str]
_permit_delete_collection: bool
_permit_overwrite_collection: bool
def __init__(self, configuration: config.Configuration) -> None:
"""Initialize Application.
@@ -86,11 +88,26 @@ class Application(ApplicationPartDelete, ApplicationPartHead,
self._response_content_on_debug = configuration.get("logging", "response_content_on_debug")
self._auth_delay = configuration.get("auth", "delay")
self._internal_server = configuration.get("server", "_internal_server")
self._script_name = configuration.get("server", "script_name")
if self._script_name:
if self._script_name[0] != "/":
logger.error("server.script_name must start with '/': %r", self._script_name)
raise RuntimeError("server.script_name option has to start with '/'")
else:
if self._script_name.endswith("/"):
logger.error("server.script_name must not end with '/': %r", self._script_name)
raise RuntimeError("server.script_name option must not end with '/'")
else:
logger.info("Provided script name to strip from URI if called by reverse proxy: %r", self._script_name)
else:
logger.info("Default script name to strip from URI if called by reverse proxy is taken from HTTP_X_SCRIPT_NAME or SCRIPT_NAME")
self._max_content_length = configuration.get(
"server", "max_content_length")
self._auth_realm = configuration.get("auth", "realm")
self._permit_delete_collection = configuration.get("rights", "permit_delete_collection")
logger.info("permit delete of collection: %s", self._permit_delete_collection)
self._permit_overwrite_collection = configuration.get("rights", "permit_overwrite_collection")
logger.info("permit overwrite of collection: %s", self._permit_overwrite_collection)
self._extra_headers = dict()
for key in self.configuration.options("headers"):
self._extra_headers[key] = configuration.get("headers", key)
@@ -133,6 +150,7 @@ class Application(ApplicationPartDelete, ApplicationPartHead,
time_begin = datetime.datetime.now()
request_method = environ["REQUEST_METHOD"].upper()
unsafe_path = environ.get("PATH_INFO", "")
https = environ.get("HTTPS", "")
"""Manage a request."""
def response(status: int, headers: types.WSGIResponseHeaders,
@@ -146,7 +164,7 @@ class Application(ApplicationPartDelete, ApplicationPartHead,
if self._response_content_on_debug:
logger.debug("Response content:\n%s", answer)
else:
logger.debug("Response content: suppressed by config/option [auth] response_content_on_debug")
logger.debug("Response content: suppressed by config/option [logging] response_content_on_debug")
headers["Content-Type"] += "; charset=%s" % self._encoding
answer = answer.encode(self._encoding)
accept_encoding = [
@@ -175,50 +193,71 @@ class Application(ApplicationPartDelete, ApplicationPartHead,
# Return response content
return status_text, list(headers.items()), answers
reverse_proxy = False
remote_host = "unknown"
if environ.get("REMOTE_HOST"):
remote_host = repr(environ["REMOTE_HOST"])
elif environ.get("REMOTE_ADDR"):
remote_host = environ["REMOTE_ADDR"]
if environ.get("HTTP_X_FORWARDED_FOR"):
reverse_proxy = True
remote_host = "%s (forwarded for %r)" % (
remote_host, environ["HTTP_X_FORWARDED_FOR"])
if environ.get("HTTP_X_FORWARDED_HOST") or environ.get("HTTP_X_FORWARDED_PROTO") or environ.get("HTTP_X_FORWARDED_SERVER"):
reverse_proxy = True
remote_useragent = ""
if environ.get("HTTP_USER_AGENT"):
remote_useragent = " using %r" % environ["HTTP_USER_AGENT"]
depthinfo = ""
if environ.get("HTTP_DEPTH"):
depthinfo = " with depth %r" % environ["HTTP_DEPTH"]
logger.info("%s request for %r%s received from %s%s",
if https:
https_info = " " + environ.get("SSL_PROTOCOL", "") + " " + environ.get("SSL_CIPHER", "")
else:
https_info = ""
logger.info("%s request for %r%s received from %s%s%s",
request_method, unsafe_path, depthinfo,
remote_host, remote_useragent)
remote_host, remote_useragent, https_info)
if self._request_header_on_debug:
logger.debug("Request header:\n%s",
pprint.pformat(self._scrub_headers(environ)))
else:
logger.debug("Request header: suppressed by config/option [auth] request_header_on_debug")
logger.debug("Request header: suppressed by config/option [logging] request_header_on_debug")
# SCRIPT_NAME is already removed from PATH_INFO, according to the
# WSGI specification.
# Reverse proxies can overwrite SCRIPT_NAME with X-SCRIPT-NAME header
base_prefix_src = ("HTTP_X_SCRIPT_NAME" if "HTTP_X_SCRIPT_NAME" in
environ else "SCRIPT_NAME")
base_prefix = environ.get(base_prefix_src, "")
if base_prefix and base_prefix[0] != "/":
logger.error("Base prefix (from %s) must start with '/': %r",
base_prefix_src, base_prefix)
if base_prefix_src == "HTTP_X_SCRIPT_NAME":
return response(*httputils.BAD_REQUEST)
return response(*httputils.INTERNAL_SERVER_ERROR)
if base_prefix.endswith("/"):
logger.warning("Base prefix (from %s) must not end with '/': %r",
base_prefix_src, base_prefix)
base_prefix = base_prefix.rstrip("/")
logger.debug("Base prefix (from %s): %r", base_prefix_src, base_prefix)
if self._script_name and (reverse_proxy is True):
base_prefix_src = "config"
base_prefix = self._script_name
else:
base_prefix_src = ("HTTP_X_SCRIPT_NAME" if "HTTP_X_SCRIPT_NAME" in
environ else "SCRIPT_NAME")
base_prefix = environ.get(base_prefix_src, "")
if base_prefix and base_prefix[0] != "/":
logger.error("Base prefix (from %s) must start with '/': %r",
base_prefix_src, base_prefix)
if base_prefix_src == "HTTP_X_SCRIPT_NAME":
return response(*httputils.BAD_REQUEST)
return response(*httputils.INTERNAL_SERVER_ERROR)
if base_prefix.endswith("/"):
logger.warning("Base prefix (from %s) must not end with '/': %r",
base_prefix_src, base_prefix)
base_prefix = base_prefix.rstrip("/")
if base_prefix:
logger.debug("Base prefix (from %s): %r", base_prefix_src, base_prefix)
# Sanitize request URI (a WSGI server indicates with an empty path,
# that the URL targets the application root without a trailing slash)
path = pathutils.sanitize_path(unsafe_path)
logger.debug("Sanitized path: %r", path)
if (reverse_proxy is True) and (len(base_prefix) > 0):
if path.startswith(base_prefix):
path_new = path.removeprefix(base_prefix)
logger.debug("Called by reverse proxy, remove base prefix %r from path: %r => %r", base_prefix, path, path_new)
path = path_new
else:
logger.warning("Called by reverse proxy, cannot remove base prefix %r from path: %r as not matching", base_prefix, path)
# Get function corresponding to method
function = getattr(self, "do_%s" % request_method, None)
@@ -232,7 +271,7 @@ class Application(ApplicationPartDelete, ApplicationPartHead,
path.rstrip("/").endswith("/.well-known/carddav")):
return response(*httputils.redirect(
base_prefix + "/", client.MOVED_PERMANENTLY))
# Return NOT FOUND for all other paths containing ".well-knwon"
# Return NOT FOUND for all other paths containing ".well-known"
if path.endswith("/.well-known") or "/.well-known/" in path:
return response(*httputils.NOT_FOUND)
@@ -249,18 +288,24 @@ class Application(ApplicationPartDelete, ApplicationPartHead,
self.configuration, environ, base64.b64decode(
authorization.encode("ascii"))).split(":", 1)
user = self._auth.login(login, password) or "" if login else ""
(user, info) = self._auth.login(login, password) or ("", "") if login else ("", "")
if self.configuration.get("auth", "type") == "ldap":
try:
logger.debug("Groups %r", ",".join(self._auth._ldap_groups))
self._rights._user_groups = self._auth._ldap_groups
except AttributeError:
pass
if user and login == user:
logger.info("Successful login: %r", user)
logger.info("Successful login: %r (%s)", user, info)
elif user:
logger.info("Successful login: %r -> %r", login, user)
logger.info("Successful login: %r -> %r (%s)", login, user, info)
elif login:
logger.warning("Failed login attempt from %s: %r",
remote_host, login)
logger.warning("Failed login attempt from %s: %r (%s)",
remote_host, login, info)
# Random delay to avoid timing oracles and bruteforce attacks
if self._auth_delay > 0:
random_delay = self._auth_delay * (0.5 + random.random())
logger.debug("Sleeping %.3f seconds", random_delay)
logger.debug("Failed login, sleeping random: %.3f sec", random_delay)
time.sleep(random_delay)
if user and not pathutils.is_safe_path_component(user):

View file
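The config-provided `script_name` above takes precedence over the `X-Script-Name` / `SCRIPT_NAME` values when a reverse proxy is detected, and the validated base prefix is then stripped from the sanitized path. A minimal standalone sketch of that stripping step (hypothetical helper, not Radicale's actual API; assumes Python ≥ 3.9 for `str.removeprefix`):

```python
def strip_base_prefix(path: str, base_prefix: str) -> str:
    """Remove a reverse-proxy base prefix from an already-sanitized path."""
    if base_prefix and base_prefix[0] != "/":
        raise ValueError("base prefix must start with '/': %r" % base_prefix)
    # a trailing '/' must go, or stripping would eat the path's leading slash
    base_prefix = base_prefix.rstrip("/")
    if base_prefix and path.startswith(base_prefix):
        # keep at least "/" so the application root stays addressable
        return path.removeprefix(base_prefix) or "/"
    # prefix not present: pass the path through (the code above logs a warning)
    return path
```

If the prefix does not match, the path is left untouched, mirroring the warning branch in the diff above.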

@@ -40,6 +40,7 @@ class ApplicationBase:
_web: web.BaseWeb
_encoding: str
_permit_delete_collection: bool
_permit_overwrite_collection: bool
_hook: hook.BaseHook
def __init__(self, configuration: config.Configuration) -> None:
@@ -51,6 +52,7 @@ class ApplicationBase:
self._encoding = configuration.get("encoding", "request")
self._log_bad_put_request_content = configuration.get("logging", "bad_put_request_content")
self._response_content_on_debug = configuration.get("logging", "response_content_on_debug")
self._request_content_on_debug = configuration.get("logging", "request_content_on_debug")
self._hook = hook.load(configuration)
def _read_xml_request_body(self, environ: types.WSGIEnviron
@@ -66,17 +68,20 @@ class ApplicationBase:
logger.debug("Request content (Invalid XML):\n%s", content)
raise RuntimeError("Failed to parse XML: %s" % e) from e
if logger.isEnabledFor(logging.DEBUG):
logger.debug("Request content:\n%s",
xmlutils.pretty_xml(xml_content))
if self._request_content_on_debug:
logger.debug("Request content (XML):\n%s",
xmlutils.pretty_xml(xml_content))
else:
logger.debug("Request content (XML): suppressed by config/option [logging] request_content_on_debug")
return xml_content
def _xml_response(self, xml_content: ET.Element) -> bytes:
if logger.isEnabledFor(logging.DEBUG):
if self._response_content_on_debug:
logger.debug("Response content:\n%s",
logger.debug("Response content (XML):\n%s",
xmlutils.pretty_xml(xml_content))
else:
logger.debug("Response content: suppressed by config/option [auth] response_content_on_debug")
logger.debug("Response content (XML): suppressed by config/option [logging] response_content_on_debug")
f = io.BytesIO()
ET.ElementTree(xml_content).write(f, encoding=self._encoding,
xml_declaration=True)
@@ -121,7 +126,7 @@ class Access:
def check(self, permission: str,
item: Optional[types.CollectionOrItem] = None) -> bool:
if permission not in "rw":
if permission not in "rwdDoO":
raise ValueError("Invalid permission argument: %r" % permission)
if not item:
permissions = permission + permission.upper()

View file

@@ -3,6 +3,7 @@
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -24,6 +25,7 @@ from typing import Optional
from radicale import httputils, storage, types, xmlutils
from radicale.app.base import Access, ApplicationBase
from radicale.hook import HookNotificationItem, HookNotificationItemTypes
from radicale.log import logger
def xml_delete(base_prefix: str, path: str, collection: storage.BaseCollection,
@@ -71,17 +73,22 @@ class ApplicationPartDelete(ApplicationBase):
hook_notification_item_list = []
if isinstance(item, storage.BaseCollection):
if self._permit_delete_collection:
for i in item.get_all():
hook_notification_item_list.append(
HookNotificationItem(
HookNotificationItemTypes.DELETE,
access.path,
i.uid
)
)
xml_answer = xml_delete(base_prefix, path, item)
if access.check("d", item):
logger.info("delete of collection is permitted by config/option [rights] permit_delete_collection but explicitly forbidden by permission 'd': %s", path)
return httputils.NOT_ALLOWED
else:
return httputils.NOT_ALLOWED
if not access.check("D", item):
logger.info("delete of collection is prevented by config/option [rights] permit_delete_collection and not explicitly allowed by permission 'D': %s", path)
return httputils.NOT_ALLOWED
for i in item.get_all():
hook_notification_item_list.append(
HookNotificationItem(
HookNotificationItemTypes.DELETE,
access.path,
i.uid
)
)
xml_answer = xml_delete(base_prefix, path, item)
else:
assert item.collection is not None
assert item.href is not None

View file

@@ -66,6 +66,8 @@ class ApplicationPartGet(ApplicationBase):
if path == "/.web" or path.startswith("/.web/"):
# Redirect to sanitized path for all subpaths of /.web
unsafe_path = environ.get("PATH_INFO", "")
if len(base_prefix) > 0:
unsafe_path = unsafe_path.removeprefix(base_prefix)
if unsafe_path != path:
location = base_prefix + path
logger.info("Redirecting to sanitized path: %r ==> %r",

View file

@@ -2,7 +2,8 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -17,7 +18,9 @@
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import errno
import posixpath
import re
import socket
from http import client
@@ -70,7 +73,20 @@ class ApplicationPartMkcalendar(ApplicationBase):
try:
self._storage.create_collection(path, props=props)
except ValueError as e:
logger.warning(
"Bad MKCALENDAR request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
# return better matching HTTP result in case errno is provided and caught
errno_match = re.search("\\[Errno ([0-9]+)\\]", str(e))
if errno_match:
logger.error(
"Failed MKCALENDAR request on %r: %s", path, e, exc_info=True)
errno_e = int(errno_match.group(1))
if errno_e == errno.ENOSPC:
return httputils.INSUFFICIENT_STORAGE
elif errno_e in [errno.EPERM, errno.EACCES]:
return httputils.FORBIDDEN
else:
return httputils.INTERNAL_SERVER_ERROR
else:
logger.warning(
"Bad MKCALENDAR request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
return client.CREATED, {}, None

View file
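The errno-to-HTTP mapping introduced here, and repeated below for MKCOL, MOVE, PROPPATCH and PUT, parses the `[Errno N]` fragment that an `OSError` leaves in the wrapping `ValueError`'s message. A standalone sketch, using `http.client` status codes in place of Radicale's `httputils` response constants:

```python
import errno
import re
from http import client


def status_for_storage_error(e: Exception) -> int:
    """Map an '[Errno N]' fragment in an exception message to an HTTP status."""
    errno_match = re.search(r"\[Errno ([0-9]+)\]", str(e))
    if not errno_match:
        # no errno available: treat it as a malformed request
        return client.BAD_REQUEST
    errno_e = int(errno_match.group(1))
    if errno_e == errno.ENOSPC:
        return client.INSUFFICIENT_STORAGE   # 507: storage backend is full
    if errno_e in (errno.EPERM, errno.EACCES):
        return client.FORBIDDEN              # 403: filesystem permission problem
    return client.INTERNAL_SERVER_ERROR      # 500: any other OS-level failure
```

Constructing test messages via `OSError(errno.ENOSPC, ...)` keeps the `[Errno N]` text portable, since errno values differ between platforms.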

@@ -2,7 +2,8 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -17,7 +18,9 @@
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import errno
import posixpath
import re
import socket
from http import client
@@ -74,8 +77,21 @@ class ApplicationPartMkcol(ApplicationBase):
try:
self._storage.create_collection(path, props=props)
except ValueError as e:
logger.warning(
"Bad MKCOL request on %r (type:%s): %s", path, collection_type, e, exc_info=True)
return httputils.BAD_REQUEST
# return better matching HTTP result in case errno is provided and caught
errno_match = re.search("\\[Errno ([0-9]+)\\]", str(e))
if errno_match:
logger.error(
"Failed MKCOL request on %r (type:%s): %s", path, collection_type, e, exc_info=True)
errno_e = int(errno_match.group(1))
if errno_e == errno.ENOSPC:
return httputils.INSUFFICIENT_STORAGE
elif errno_e in [errno.EPERM, errno.EACCES]:
return httputils.FORBIDDEN
else:
return httputils.INTERNAL_SERVER_ERROR
else:
logger.warning(
"Bad MKCOL request on %r (type:%s): %s", path, collection_type, e, exc_info=True)
return httputils.BAD_REQUEST
logger.info("MKCOL request %r (type:%s): %s", path, collection_type, "successful")
return client.CREATED, {}, None

View file

@@ -2,7 +2,8 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2017-2023 Unrud <unrud@outlook.com>
# Copyright © 2023-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -17,6 +18,7 @@
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import errno
import posixpath
import re
from http import client
@@ -109,7 +111,20 @@ class ApplicationPartMove(ApplicationBase):
try:
self._storage.move(item, to_collection, to_href)
except ValueError as e:
logger.warning(
"Bad MOVE request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
# return better matching HTTP result in case errno is provided and caught
errno_match = re.search("\\[Errno ([0-9]+)\\]", str(e))
if errno_match:
logger.error(
"Failed MOVE request on %r: %s", path, e, exc_info=True)
errno_e = int(errno_match.group(1))
if errno_e == errno.ENOSPC:
return httputils.INSUFFICIENT_STORAGE
elif errno_e in [errno.EPERM, errno.EACCES]:
return httputils.FORBIDDEN
else:
return httputils.INTERNAL_SERVER_ERROR
else:
logger.warning(
"Bad MOVE request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
return client.NO_CONTENT if to_item else client.CREATED, {}, None

View file

@@ -322,13 +322,13 @@ def xml_propfind_response(
responses[404 if is404 else 200].append(element)
for status_code, childs in responses.items():
if not childs:
for status_code, children in responses.items():
if not children:
continue
propstat = ET.Element(xmlutils.make_clark("D:propstat"))
response.append(propstat)
prop = ET.Element(xmlutils.make_clark("D:prop"))
prop.extend(childs)
prop.extend(children)
propstat.append(prop)
status = ET.Element(xmlutils.make_clark("D:status"))
status.text = xmlutils.make_response(status_code)
@@ -392,7 +392,8 @@ class ApplicationPartPropfind(ApplicationBase):
return httputils.REQUEST_TIMEOUT
with self._storage.acquire_lock("r", user):
items_iter = iter(self._storage.discover(
path, environ.get("HTTP_DEPTH", "0")))
path, environ.get("HTTP_DEPTH", "0"),
None, self._rights._user_groups))
# take root item for rights checking
item = next(items_iter, None)
if not item:

View file

@@ -2,7 +2,9 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2017-2020 Unrud <unrud@outlook.com>
# Copyright © 2020-2020 Tuna Celik <tuna@jakpark.com>
# Copyright © 2025-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -17,6 +19,8 @@
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import errno
import re
import socket
import xml.etree.ElementTree as ET
from http import client
@@ -107,7 +111,20 @@ class ApplicationPartProppatch(ApplicationBase):
)
self._hook.notify(hook_notification_item)
except ValueError as e:
logger.warning(
"Bad PROPPATCH request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
# return better matching HTTP result in case errno is provided and caught
errno_match = re.search("\\[Errno ([0-9]+)\\]", str(e))
if errno_match:
logger.error(
"Failed PROPPATCH request on %r: %s", path, e, exc_info=True)
errno_e = int(errno_match.group(1))
if errno_e == errno.ENOSPC:
return httputils.INSUFFICIENT_STORAGE
elif errno_e in [errno.EPERM, errno.EACCES]:
return httputils.FORBIDDEN
else:
return httputils.INTERNAL_SERVER_ERROR
else:
logger.warning(
"Bad PROPPATCH request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
return client.MULTI_STATUS, headers, self._xml_response(xml_answer)

View file

@@ -2,8 +2,9 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2020 Unrud <unrud@outlook.com>
# Copyright © 2020-2023 Tuna Celik <tuna@jakpark.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -18,8 +19,10 @@
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import errno
import itertools
import posixpath
import re
import socket
import sys
from http import client
@@ -29,7 +32,8 @@ from typing import Iterator, List, Mapping, MutableMapping, Optional, Tuple
import vobject
import radicale.item as radicale_item
from radicale import httputils, pathutils, rights, storage, types, xmlutils
from radicale import (httputils, pathutils, rights, storage, types, utils,
xmlutils)
from radicale.app.base import Access, ApplicationBase
from radicale.hook import HookNotificationItem, HookNotificationItemTypes
from radicale.log import logger
@@ -37,6 +41,8 @@ from radicale.log import logger
MIMETYPE_TAGS: Mapping[str, str] = {value: key for key, value in
xmlutils.MIMETYPES.items()}
PRODID = u"-//Radicale//NONSGML Version " + utils.package_version("radicale") + "//EN"
def prepare(vobject_items: List[vobject.base.Component], path: str,
content_type: str, permission: bool, parent_permission: bool,
@@ -80,6 +86,7 @@ def prepare(vobject_items: List[vobject.base.Component], path: str,
vobject_collection = vobject.iCalendar()
for component in components:
vobject_collection.add(component)
vobject_collection.add(vobject.base.ContentLine("PRODID", [], PRODID))
item = radicale_item.Item(collection_path=collection_path,
vobject_item=vobject_collection)
item.prepare()
@@ -150,7 +157,7 @@ class ApplicationPartPut(ApplicationBase):
if self._log_bad_put_request_content:
logger.warning("Bad PUT request content of %r:\n%s", path, content)
else:
logger.debug("Bad PUT request content: suppressed by config/option [auth] bad_put_request_content")
logger.debug("Bad PUT request content: suppressed by config/option [logging] bad_put_request_content")
return httputils.BAD_REQUEST
(prepared_items, prepared_tag, prepared_write_whole_collection,
prepared_props, prepared_exc_info) = prepare(
@@ -158,7 +165,7 @@ class ApplicationPartPut(ApplicationBase):
bool(rights.intersect(access.permissions, "Ww")),
bool(rights.intersect(access.parent_permissions, "w")))
with self._storage.acquire_lock("w", user):
with self._storage.acquire_lock("w", user, path=path):
item = next(iter(self._storage.discover(path)), None)
parent_item = next(iter(
self._storage.discover(access.parent_path)), None)
@@ -176,22 +183,39 @@ class ApplicationPartPut(ApplicationBase):
if write_whole_collection:
if ("w" if tag else "W") not in access.permissions:
if not parent_item.tag:
logger.warning("Not a collection (check .Radicale.props): %r", parent_item.path)
return httputils.NOT_ALLOWED
if not self._permit_overwrite_collection:
if ("O") not in access.permissions:
logger.info("overwrite of collection is prevented by config/option [rights] permit_overwrite_collection and not explicitly allowed by permission 'O': %r", path)
return httputils.NOT_ALLOWED
else:
if ("o") in access.permissions:
logger.info("overwrite of collection is allowed by config/option [rights] permit_overwrite_collection but explicitly forbidden by permission 'o': %r", path)
return httputils.NOT_ALLOWED
elif "w" not in access.parent_permissions:
return httputils.NOT_ALLOWED
etag = environ.get("HTTP_IF_MATCH", "")
if not item and etag:
# Etag asked but no item found: item has been removed
logger.warning("Precondition failed on PUT request for %r (HTTP_IF_MATCH: %s, item not existing)", path, etag)
return httputils.PRECONDITION_FAILED
if item and etag and item.etag != etag:
# Etag asked but item not matching: item has changed
logger.warning("Precondition failed on PUT request for %r (HTTP_IF_MATCH: %s, item has different etag: %s)", path, etag, item.etag)
return httputils.PRECONDITION_FAILED
if item and etag:
logger.debug("Precondition passed on PUT request for %r (HTTP_IF_MATCH: %s, item has etag: %s)", path, etag, item.etag)
match = environ.get("HTTP_IF_NONE_MATCH", "") == "*"
if item and match:
# Creation asked but item found: item can't be replaced
logger.warning("Precondition failed on PUT request for %r (HTTP_IF_NONE_MATCH: *, creation requested but item found with etag: %s)", path, item.etag)
return httputils.PRECONDITION_FAILED
if match:
logger.debug("Precondition passed on PUT request for %r (HTTP_IF_NONE_MATCH: *)", path)
if (tag != prepared_tag or
prepared_write_whole_collection != write_whole_collection):
@@ -242,9 +266,22 @@ class ApplicationPartPut(ApplicationBase):
)
self._hook.notify(hook_notification_item)
except ValueError as e:
logger.warning(
"Bad PUT request on %r (upload): %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
# return better matching HTTP result in case errno is provided and caught
errno_match = re.search("\\[Errno ([0-9]+)\\]", str(e))
if errno_match:
logger.error(
"Failed PUT request on %r (upload): %s", path, e, exc_info=True)
errno_e = int(errno_match.group(1))
if errno_e == errno.ENOSPC:
return httputils.INSUFFICIENT_STORAGE
elif errno_e in [errno.EPERM, errno.EACCES]:
return httputils.FORBIDDEN
else:
return httputils.INTERNAL_SERVER_ERROR
else:
logger.warning(
"Bad PUT request on %r (upload): %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
headers = {"ETag": etag}
return client.CREATED, headers, None

View file
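The `If-Match` / `If-None-Match` handling added to PUT above implements HTTP preconditions in the style of RFC 7232: `If-Match` fails when the item has been removed or its ETag no longer matches, and `If-None-Match: *` fails when the item already exists. A condensed sketch (hypothetical helper; simplified to the direct string comparison the diff uses, ignoring weak validators and ETag lists):

```python
from typing import Optional


def put_precondition_ok(if_match: str, if_none_match_star: bool,
                        current_etag: Optional[str]) -> bool:
    """Check PUT preconditions; current_etag is None if no item exists yet."""
    if if_match:
        if current_etag is None:
            return False   # update requested, but the item has been removed
        if if_match != current_etag:
            return False   # the item changed since the client last fetched it
    if if_none_match_star and current_etag is not None:
        return False       # creation requested, but the item already exists
    return True
```

On failure the handler returns 412 Precondition Failed, which is what keeps two clients from silently overwriting each other's changes.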

@@ -2,7 +2,11 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Pieter Hijma <pieterhijma@users.noreply.github.com>
# Copyright © 2024-2024 Ray <ray@react0r.com>
# Copyright © 2024-2024 Georgiy <metallerok@gmail.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -24,10 +28,11 @@ import posixpath
import socket
import xml.etree.ElementTree as ET
from http import client
from typing import (Any, Callable, Iterable, Iterator, List, Optional,
Sequence, Tuple, Union)
from typing import (Callable, Iterable, Iterator, List, Optional, Sequence,
Tuple, Union)
from urllib.parse import unquote, urlparse
import vobject
import vobject.base
from vobject.base import ContentLine
@@ -38,11 +43,110 @@ from radicale.item import filter as radicale_filter
from radicale.log import logger
def free_busy_report(base_prefix: str, path: str, xml_request: Optional[ET.Element],
collection: storage.BaseCollection, encoding: str,
unlock_storage_fn: Callable[[], None],
max_occurrence: int
) -> Tuple[int, Union[ET.Element, str]]:
# NOTE: this function returns both an Element and a string because
# free-busy reports are an edge-case on the return type according
# to the spec.
multistatus = ET.Element(xmlutils.make_clark("D:multistatus"))
if xml_request is None:
return client.MULTI_STATUS, multistatus
root = xml_request
if (root.tag == xmlutils.make_clark("C:free-busy-query") and
collection.tag != "VCALENDAR"):
logger.warning("Invalid REPORT method %r on %r requested",
xmlutils.make_human_tag(root.tag), path)
return client.FORBIDDEN, xmlutils.webdav_error("D:supported-report")
time_range_element = root.find(xmlutils.make_clark("C:time-range"))
assert isinstance(time_range_element, ET.Element)
# Build a single filter from the free busy query for retrieval
# TODO: filter for VFREEBUSY in addition to VEVENT but
# test_filter doesn't support that yet.
vevent_cf_element = ET.Element(xmlutils.make_clark("C:comp-filter"),
attrib={'name': 'VEVENT'})
vevent_cf_element.append(time_range_element)
vcalendar_cf_element = ET.Element(xmlutils.make_clark("C:comp-filter"),
attrib={'name': 'VCALENDAR'})
vcalendar_cf_element.append(vevent_cf_element)
filter_element = ET.Element(xmlutils.make_clark("C:filter"))
filter_element.append(vcalendar_cf_element)
filters = (filter_element,)
# First pull from storage
retrieved_items = list(collection.get_filtered(filters))
# !!! Don't access storage after this !!!
unlock_storage_fn()
cal = vobject.iCalendar()
collection_tag = collection.tag
while retrieved_items:
# Second filtering before evaluating occurrences.
# ``item.vobject_item`` might be accessed during filtering.
# Don't keep reference to ``item``, because VObject requires a lot of
# memory.
item, filter_matched = retrieved_items.pop(0)
if not filter_matched:
try:
if not test_filter(collection_tag, item, filter_element):
continue
except ValueError as e:
raise ValueError("Failed to free-busy filter item %r from %r: %s" %
(item.href, collection.path, e)) from e
except Exception as e:
raise RuntimeError("Failed to free-busy filter item %r from %r: %s" %
(item.href, collection.path, e)) from e
fbtype = None
if item.component_name == 'VEVENT':
transp = getattr(item.vobject_item.vevent, 'transp', None)
if transp and transp.value != 'OPAQUE':
continue
status = getattr(item.vobject_item.vevent, 'status', None)
if not status or status.value == 'CONFIRMED':
fbtype = 'BUSY'
elif status.value == 'CANCELLED':
fbtype = 'FREE'
elif status.value == 'TENTATIVE':
fbtype = 'BUSY-TENTATIVE'
else:
# Could do fbtype = status.value for x-name values; defaulting to BUSY is preferred here
fbtype = 'BUSY'
# TODO: coalesce overlapping periods
if max_occurrence > 0:
n_occurrences = max_occurrence+1
else:
n_occurrences = 0
occurrences = radicale_filter.time_range_fill(item.vobject_item,
time_range_element,
"VEVENT",
n=n_occurrences)
if len(occurrences) >= max_occurrence:
raise ValueError("FREEBUSY occurrences limit of {} hit"
.format(max_occurrence))
for occurrence in occurrences:
vfb = cal.add('vfreebusy')
vfb.add('dtstamp').value = item.vobject_item.vevent.dtstamp.value
vfb.add('dtstart').value, vfb.add('dtend').value = occurrence
if fbtype:
vfb.add('fbtype').value = fbtype
return (client.OK, cal.serialize())
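The STATUS-to-FBTYPE translation in the loop above follows RFC 5545: a missing STATUS or CONFIRMED becomes BUSY, CANCELLED becomes FREE, TENTATIVE becomes BUSY-TENTATIVE, and anything else (including x-name values) falls back to BUSY. As a standalone sketch (hypothetical helper name):

```python
def fbtype_for_status(status: str) -> str:
    """Map a VEVENT STATUS value to a VFREEBUSY FBTYPE (RFC 5545 terms)."""
    if not status or status == "CONFIRMED":
        return "BUSY"
    if status == "CANCELLED":
        return "FREE"
    if status == "TENTATIVE":
        return "BUSY-TENTATIVE"
    # x-name or otherwise unknown values: fall back to BUSY
    return "BUSY"
```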
def xml_report(base_prefix: str, path: str, xml_request: Optional[ET.Element],
collection: storage.BaseCollection, encoding: str,
unlock_storage_fn: Callable[[], None]
) -> Tuple[int, ET.Element]:
"""Read and answer REPORT requests.
"""Read and answer REPORT requests that return XML.
Read rfc3253-3.6 for info.
@@ -71,7 +175,11 @@ def xml_report(base_prefix: str, path: str, xml_request: Optional[ET.Element],
xmlutils.make_human_tag(root.tag), path)
return client.FORBIDDEN, xmlutils.webdav_error("D:supported-report")
props: Union[ET.Element, List] = root.find(xmlutils.make_clark("D:prop")) or []
props: Union[ET.Element, List]
if root.find(xmlutils.make_clark("D:prop")) is not None:
props = root.find(xmlutils.make_clark("D:prop")) # type: ignore[assignment]
else:
props = []
hreferences: Iterable[str]
if root.tag in (
@@ -157,7 +265,7 @@ def xml_report(base_prefix: str, path: str, xml_request: Optional[ET.Element],
element.text = item.serialize()
expand = prop.find(xmlutils.make_clark("C:expand"))
if expand is not None:
if expand is not None and item.component_name == 'VEVENT':
start = expand.get('start')
end = expand.get('end')
@@ -196,103 +304,184 @@ def _expand(
start: datetime.datetime,
end: datetime.datetime,
) -> ET.Element:
dt_format = '%Y%m%dT%H%M%SZ'
vevent_component: vobject.base.Component = copy.copy(item.vobject_item)
if type(item.vobject_item.vevent.dtstart.value) is datetime.date:
# If an event comes to us with a dt_start specified as a date
# Split the vevents included in the component into one that contains the
# recurrence information and others that contain a recurrence id to
# override instances.
vevent_recurrence, vevents_overridden = _split_overridden_vevents(vevent_component)
dt_format = '%Y%m%dT%H%M%SZ'
all_day_event = False
if type(vevent_recurrence.dtstart.value) is datetime.date:
# If an event comes to us with a dtstart specified as a date
# then in the response we return the date, not datetime
dt_format = '%Y%m%d'
all_day_event = True
# In case of dates, we need to remove timezone information since
# rruleset.between computes with datetimes without timezone information
start = start.replace(tzinfo=None)
end = end.replace(tzinfo=None)
expanded_item, rruleset = _make_vobject_expanded_item(item, dt_format)
for vevent in vevents_overridden:
_strip_single_event(vevent, dt_format)
duration = None
if hasattr(vevent_recurrence, "dtend"):
duration = vevent_recurrence.dtend.value - vevent_recurrence.dtstart.value
rruleset = None
if hasattr(vevent_recurrence, 'rrule'):
rruleset = vevent_recurrence.getrruleset()
if rruleset:
# This function uses datetimes internally without timezone info for dates
recurrences = rruleset.between(start, end, inc=True)
expanded: vobject.base.Component = copy.copy(expanded_item.vobject_item)
is_expanded_filled: bool = False
_strip_component(vevent_component)
_strip_single_event(vevent_recurrence, dt_format)
is_component_filled: bool = False
i_overridden = 0
for recurrence_dt in recurrences:
recurrence_utc = recurrence_dt.astimezone(datetime.timezone.utc)
i_overridden, vevent = _find_overridden(i_overridden, vevents_overridden, recurrence_utc, dt_format)
vevent = copy.deepcopy(expanded.vevent)
vevent.recurrence_id = ContentLine(
name='RECURRENCE-ID',
value=recurrence_utc.strftime(dt_format), params={}
)
if not vevent:
# We did not find an overridden instance, so create a new one
vevent = copy.deepcopy(vevent_recurrence)
if is_expanded_filled is False:
expanded.vevent = vevent
is_expanded_filled = True
# For all day events, the system timezone may influence the
# results, so use recurrence_dt
recurrence_id = recurrence_dt if all_day_event else recurrence_utc
vevent.recurrence_id = ContentLine(
name='RECURRENCE-ID',
value=recurrence_id, params={}
)
_convert_to_utc(vevent, 'recurrence_id', dt_format)
vevent.dtstart = ContentLine(
name='DTSTART',
value=recurrence_id.strftime(dt_format), params={}
)
if duration:
vevent.dtend = ContentLine(
name='DTEND',
value=(recurrence_id + duration).strftime(dt_format), params={}
)
if not is_component_filled:
vevent_component.vevent = vevent
is_component_filled = True
else:
expanded.add(vevent)
vevent_component.add(vevent)
element.text = expanded.serialize()
else:
element.text = expanded_item.vobject_item.serialize()
element.text = vevent_component.serialize()
return element
def _make_vobject_expanded_item(
item: radicale_item.Item,
dt_format: str,
) -> Tuple[radicale_item.Item, Optional[Any]]:
# https://www.rfc-editor.org/rfc/rfc4791#section-9.6.5
# The returned calendar components MUST NOT use recurrence
# properties (i.e., EXDATE, EXRULE, RDATE, and RRULE) and MUST NOT
# have reference to or include VTIMEZONE components. Date and local
# time with reference to time zone information MUST be converted
# into date with UTC time.
item = copy.copy(item)
vevent = item.vobject_item.vevent
if type(vevent.dtstart.value) is datetime.date:
start_utc = datetime.datetime.fromordinal(
vevent.dtstart.value.toordinal()
).replace(tzinfo=datetime.timezone.utc)
else:
start_utc = vevent.dtstart.value.astimezone(datetime.timezone.utc)
vevent.dtstart = ContentLine(name='DTSTART', value=start_utc, params=[])
dt_end = getattr(vevent, 'dtend', None)
if dt_end is not None:
if type(vevent.dtend.value) is datetime.date:
end_utc = datetime.datetime.fromordinal(
dt_end.value.toordinal()
def _convert_timezone(vevent: vobject.icalendar.RecurringComponent,
name_prop: str,
name_content_line: str):
prop = getattr(vevent, name_prop, None)
if prop:
if type(prop.value) is datetime.date:
date_time = datetime.datetime.fromordinal(
prop.value.toordinal()
).replace(tzinfo=datetime.timezone.utc)
else:
end_utc = dt_end.value.astimezone(datetime.timezone.utc)
date_time = prop.value.astimezone(datetime.timezone.utc)
vevent.dtend = ContentLine(name='DTEND', value=end_utc, params={})
setattr(vevent, name_prop, ContentLine(name=name_content_line, value=date_time, params=[]))
rruleset = None
if hasattr(item.vobject_item.vevent, 'rrule'):
rruleset = vevent.getrruleset()
# There is something strage behavour during serialization native datetime, so converting manualy
vevent.dtstart.value = vevent.dtstart.value.strftime(dt_format)
if dt_end is not None:
vevent.dtend.value = vevent.dtend.value.strftime(dt_format)
def _convert_to_utc(vevent: vobject.icalendar.RecurringComponent,
name_prop: str,
dt_format: str):
prop = getattr(vevent, name_prop, None)
if prop:
setattr(vevent, name_prop, ContentLine(name=prop.name, value=prop.value.strftime(dt_format), params=[]))
def _strip_single_event(vevent: vobject.icalendar.RecurringComponent, dt_format: str) -> None:
_convert_timezone(vevent, 'dtstart', 'DTSTART')
_convert_timezone(vevent, 'dtend', 'DTEND')
_convert_timezone(vevent, 'recurrence_id', 'RECURRENCE-ID')
# There is some strange behaviour when serializing native datetime values, so convert manually
_convert_to_utc(vevent, 'dtstart', dt_format)
_convert_to_utc(vevent, 'dtend', dt_format)
_convert_to_utc(vevent, 'recurrence_id', dt_format)
try:
delattr(vevent, 'rrule')
delattr(vevent, 'exdate')
delattr(vevent, 'exrule')
delattr(vevent, 'rdate')
except AttributeError:
pass
def _strip_component(vevent: vobject.base.Component) -> None:
timezones_to_remove = []
for component in item.vobject_item.components():
for component in vevent.components():
if component.name == 'VTIMEZONE':
timezones_to_remove.append(component)
for timezone in timezones_to_remove:
item.vobject_item.remove(timezone)
vevent.remove(timezone)
try:
delattr(item.vobject_item.vevent, 'rrule')
delattr(item.vobject_item.vevent, 'exdate')
delattr(item.vobject_item.vevent, 'exrule')
delattr(item.vobject_item.vevent, 'rdate')
except AttributeError:
pass
return item, rruleset
def _split_overridden_vevents(
component: vobject.base.Component,
) -> Tuple[
vobject.icalendar.RecurringComponent,
List[vobject.icalendar.RecurringComponent]
]:
vevent_recurrence = None
vevents_overridden = []
for vevent in component.vevent_list:
if hasattr(vevent, 'recurrence_id'):
vevents_overridden += [vevent]
elif vevent_recurrence:
raise ValueError(
f"component with UID {vevent.uid} "
f"has more than one vevent with recurrence information"
)
else:
vevent_recurrence = vevent
if vevent_recurrence:
return (
vevent_recurrence, sorted(
vevents_overridden,
key=lambda vevent: vevent.recurrence_id.value
)
)
else:
raise ValueError(
f"component with UID {vevent.uid} "
f"does not have a vevent without a recurrence_id"
)
def _find_overridden(
start: int,
vevents: List[vobject.icalendar.RecurringComponent],
dt: datetime.datetime,
dt_format: str
) -> Tuple[int, Optional[vobject.icalendar.RecurringComponent]]:
for i in range(start, len(vevents)):
dt_event = datetime.datetime.strptime(
vevents[i].recurrence_id.value,
dt_format
).replace(tzinfo=datetime.timezone.utc)
if dt_event == dt:
return (i + 1, vevents[i])
return (start, None)
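Since `_find_overridden` walks a list sorted by RECURRENCE-ID, each call resumes at the index returned by the previous one, giving a single pass over both sequences. A minimal self-contained sketch of the same merge, operating on RECURRENCE-ID strings directly (names are illustrative):

```python
import datetime

DT_FORMAT = '%Y%m%dT%H%M%SZ'

def find_overridden(start, recurrence_ids, dt):
    """Advance through the sorted RECURRENCE-ID strings and return
    (next_index, matched_id or None) for the recurrence datetime dt."""
    for i in range(start, len(recurrence_ids)):
        dt_event = datetime.datetime.strptime(
            recurrence_ids[i], DT_FORMAT
        ).replace(tzinfo=datetime.timezone.utc)
        if dt_event == dt:
            return (i + 1, recurrence_ids[i])
    return (start, None)
```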
def xml_item_response(base_prefix: str, href: str,
@@ -426,13 +615,28 @@ class ApplicationPartReport(ApplicationBase):
else:
assert item.collection is not None
collection = item.collection
try:
status, xml_answer = xml_report(
base_prefix, path, xml_content, collection, self._encoding,
lock_stack.close)
except ValueError as e:
logger.warning(
"Bad REPORT request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
headers = {"Content-Type": "text/xml; charset=%s" % self._encoding}
return status, headers, self._xml_response(xml_answer)
if xml_content is not None and \
xml_content.tag == xmlutils.make_clark("C:free-busy-query"):
max_occurrence = self.configuration.get("reporting", "max_freebusy_occurrence")
try:
status, body = free_busy_report(
base_prefix, path, xml_content, collection, self._encoding,
lock_stack.close, max_occurrence)
except ValueError as e:
logger.warning(
"Bad REPORT request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
headers = {"Content-Type": "text/calendar; charset=%s" % self._encoding}
return status, headers, str(body)
else:
try:
status, xml_answer = xml_report(
base_prefix, path, xml_content, collection, self._encoding,
lock_stack.close)
except ValueError as e:
logger.warning(
"Bad REPORT request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
headers = {"Content-Type": "text/xml; charset=%s" % self._encoding}
return status, headers, self._xml_response(xml_answer)


@@ -3,7 +3,7 @@
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2022 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -29,29 +29,83 @@ Take a look at the class ``BaseAuth`` if you want to implement your own.
"""
from typing import Sequence, Tuple, Union
import hashlib
import os
import threading
import time
from typing import List, Sequence, Set, Tuple, Union, final
from radicale import config, types, utils
from radicale.log import logger
INTERNAL_TYPES: Sequence[str] = ("none", "remote_user", "http_x_remote_user",
"denyall",
"htpasswd")
"htpasswd",
"ldap",
"imap",
"oauth2",
"pam",
"dovecot")
CACHE_LOGIN_TYPES: Sequence[str] = (
"dovecot",
"ldap",
"htpasswd",
"imap",
"oauth2",
"pam",
)
INSECURE_IF_NO_LOOPBACK_TYPES: Sequence[str] = (
"remote_user",
"http_x_remote_user",
)
AUTH_SOCKET_FAMILY: Sequence[str] = ("AF_UNIX", "AF_INET", "AF_INET6")
def load(configuration: "config.Configuration") -> "BaseAuth":
"""Load the authentication module chosen in configuration."""
if configuration.get("auth", "type") == "none":
logger.warning("No user authentication is selected: '[auth] type=none' (insecure)")
if configuration.get("auth", "type") == "denyall":
logger.warning("All access is blocked by: '[auth] type=denyall'")
_type = configuration.get("auth", "type")
if _type == "none":
logger.warning("No user authentication is selected: '[auth] type=none' (INSECURE)")
elif _type == "denyall":
logger.warning("All user authentication is blocked by: '[auth] type=denyall'")
elif _type in INSECURE_IF_NO_LOOPBACK_TYPES:
sgi = os.environ.get('SERVER_GATEWAY_INTERFACE') or None
if not sgi:
hosts: List[Tuple[str, int]] = configuration.get("server", "hosts")
localhost_only = True
address_lo = []
address = []
for address_port in hosts:
if address_port[0] in ["localhost", "localhost6", "127.0.0.1", "::1"]:
address_lo.append(utils.format_address(address_port))
else:
address.append(utils.format_address(address_port))
localhost_only = False
if localhost_only is False:
logger.warning("User authentication '[auth] type=%s' is selected but server is not listening only on loopback addresses (potentially INSECURE): %s", _type, " ".join(address))
return utils.load_plugin(INTERNAL_TYPES, "auth", "Auth", BaseAuth,
configuration)
class BaseAuth:
_ldap_groups: Set[str] = set([])
_lc_username: bool
_uc_username: bool
_strip_domain: bool
_auth_delay: float
_failed_auth_delay: float
_type: str
_cache_logins: bool
_cache_successful: dict # login -> (digest, time_ns)
_cache_successful_logins_expiry: int
_cache_failed: dict # digest_failed -> (time_ns, login)
_cache_failed_logins_expiry: int
_cache_failed_logins_salt_ns: int # persistent over runtime
_lock: threading.Lock
def __init__(self, configuration: "config.Configuration") -> None:
"""Initialize BaseAuth.
@@ -63,6 +117,45 @@ class BaseAuth:
"""
self.configuration = configuration
self._lc_username = configuration.get("auth", "lc_username")
self._uc_username = configuration.get("auth", "uc_username")
self._strip_domain = configuration.get("auth", "strip_domain")
logger.info("auth.strip_domain: %s", self._strip_domain)
logger.info("auth.lc_username: %s", self._lc_username)
logger.info("auth.uc_username: %s", self._uc_username)
if self._lc_username is True and self._uc_username is True:
raise RuntimeError("auth.lc_username and auth.uc_username cannot be enabled together")
self._auth_delay = configuration.get("auth", "delay")
logger.info("auth.delay: %f", self._auth_delay)
self._failed_auth_delay = 0
self._lock = threading.Lock()
# cache_successful_logins
self._cache_logins = configuration.get("auth", "cache_logins")
self._type = configuration.get("auth", "type")
if (self._type in CACHE_LOGIN_TYPES) or (self._cache_logins is False):
logger.info("auth.cache_logins: %s", self._cache_logins)
else:
logger.info("auth.cache_logins: %s (not required for type '%s', therefore disabled)", self._cache_logins, self._type)
self._cache_logins = False
if self._cache_logins is True:
self._cache_successful_logins_expiry = configuration.get("auth", "cache_successful_logins_expiry")
if self._cache_successful_logins_expiry < 0:
raise RuntimeError("self._cache_successful_logins_expiry cannot be < 0")
self._cache_failed_logins_expiry = configuration.get("auth", "cache_failed_logins_expiry")
if self._cache_failed_logins_expiry < 0:
raise RuntimeError("self._cache_failed_logins_expiry cannot be < 0")
logger.info("auth.cache_successful_logins_expiry: %s seconds", self._cache_successful_logins_expiry)
logger.info("auth.cache_failed_logins_expiry: %s seconds", self._cache_failed_logins_expiry)
# cache init
self._cache_successful = dict()
self._cache_failed = dict()
self._cache_failed_logins_salt_ns = time.time_ns()
def _cache_digest(self, login: str, password: str, salt: str) -> str:
h = hashlib.sha3_512()
h.update(salt.encode())
h.update(login.encode())
h.update(password.encode())
return str(h.digest())
def get_external_login(self, environ: types.WSGIEnviron) -> Union[
Tuple[()], Tuple[str, str]]:
@@ -90,5 +183,132 @@
raise NotImplementedError
def login(self, login: str, password: str) -> str:
return self._login(login, password).lower() if self._lc_username else self._login(login, password)
def _sleep_for_constant_exec_time(self, time_ns_begin: int):
"""Sleep some time to reach a constant execution time for failed logins,
independent of the time required by the external backend or the digest
methods used.
Increase the final execution time in case the initial limit was exceeded.
"""
time_delta = (time.time_ns() - time_ns_begin) / 1000 / 1000 / 1000
with self._lock:
# avoid that another thread is changing global value at the same time
failed_auth_delay = self._failed_auth_delay
failed_auth_delay_old = failed_auth_delay
if time_delta > failed_auth_delay:
# set new
failed_auth_delay = time_delta
# store globally
self._failed_auth_delay = failed_auth_delay
if (failed_auth_delay_old != failed_auth_delay):
logger.debug("Failed login constant execution time need increase of failed_auth_delay: %.9f -> %.9f sec", failed_auth_delay_old, failed_auth_delay)
# sleep == 0
else:
sleep = failed_auth_delay - time_delta
logger.debug("Failed login constant execution time alignment, sleeping: %.9f sec", sleep)
time.sleep(sleep)
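The constant-execution-time idea can be sketched independently of the class; `pad_to_constant_time` is a hypothetical helper, with the floor value playing the role of `_failed_auth_delay`:

```python
import time

def pad_to_constant_time(time_ns_begin: int, floor_s: float) -> float:
    """Sleep so that the elapsed time since time_ns_begin is at least
    floor_s seconds; return the (possibly raised) floor. Failed logins
    thus always take the same wall-clock time, hiding backend timing."""
    elapsed = (time.time_ns() - time_ns_begin) / 1e9
    if elapsed > floor_s:
        # backend was slower than the current floor: raise the floor
        return elapsed
    time.sleep(floor_s - elapsed)
    return floor_s
```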
@final
def login(self, login: str, password: str) -> Tuple[str, str]:
time_ns_begin = time.time_ns()
result_from_cache = False
if self._lc_username:
login = login.lower()
if self._uc_username:
login = login.upper()
if self._strip_domain:
login = login.split('@')[0]
if self._cache_logins is True:
# time_ns is also used as salt
result = ""
digest = ""
time_ns = time.time_ns()
# cleanup failed login cache to avoid out-of-memory
cache_failed_entries = len(self._cache_failed)
if cache_failed_entries > 0:
logger.debug("Login failed cache investigation start (entries: %d)", cache_failed_entries)
self._lock.acquire()
cache_failed_cleanup = dict()
for digest in self._cache_failed:
(time_ns_cache, login_cache) = self._cache_failed[digest]
age_failed = int((time_ns - time_ns_cache) / 1000 / 1000 / 1000)
if age_failed > self._cache_failed_logins_expiry:
cache_failed_cleanup[digest] = (login_cache, age_failed)
cache_failed_cleanup_entries = len(cache_failed_cleanup)
logger.debug("Login failed cache cleanup start (entries: %d)", cache_failed_cleanup_entries)
if cache_failed_cleanup_entries > 0:
for digest in cache_failed_cleanup:
(login_cache, age_failed) = cache_failed_cleanup[digest]
logger.debug("Login failed cache entry for user+password expired: '%s' (age: %d > %d sec)", login_cache, age_failed, self._cache_failed_logins_expiry)
del self._cache_failed[digest]
self._lock.release()
logger.debug("Login failed cache investigation finished")
# check for cache failed login
digest_failed = login + ":" + self._cache_digest(login, password, str(self._cache_failed_logins_salt_ns))
if self._cache_failed.get(digest_failed):
# login+password found in cache "failed" -> shortcut return
(time_ns_cache, login_cache) = self._cache_failed[digest_failed]
age_failed = int((time_ns - time_ns_cache) / 1000 / 1000 / 1000)
logger.debug("Login failed cache entry for user+password found: '%s' (age: %d sec)", login_cache, age_failed)
self._sleep_for_constant_exec_time(time_ns_begin)
return ("", self._type + " / cached")
if self._cache_successful.get(login):
# login found in cache "successful"
(digest_cache, time_ns_cache) = self._cache_successful[login]
digest = self._cache_digest(login, password, str(time_ns_cache))
if digest == digest_cache:
age_success = int((time_ns - time_ns_cache) / 1000 / 1000 / 1000)
if age_success > self._cache_successful_logins_expiry:
logger.debug("Login successful cache entry for user+password found but expired: '%s' (age: %d > %d sec)", login, age_success, self._cache_successful_logins_expiry)
# delete expired success from cache
del self._cache_successful[login]
digest = ""
else:
logger.debug("Login successful cache entry for user+password found: '%s' (age: %d sec)", login, age_success)
result = login
result_from_cache = True
else:
logger.debug("Login successful cache entry for user+password not matching: '%s'", login)
else:
# login not found in cache, calculate anyway to avoid timing attacks
digest = self._cache_digest(login, password, str(time_ns))
if result == "":
# verify login+password via configured backend
logger.debug("Login verification for user+password via backend: '%s'", login)
result = self._login(login, password)
if result != "":
logger.debug("Login successful for user+password via backend: '%s'", login)
if digest == "":
# successful login, but expired, digest must be recalculated
digest = self._cache_digest(login, password, str(time_ns))
# store successful login in cache
self._lock.acquire()
self._cache_successful[login] = (digest, time_ns)
self._lock.release()
logger.debug("Login successful cache for user set: '%s'", login)
if self._cache_failed.get(digest_failed):
logger.debug("Login failed cache for user cleared: '%s'", login)
del self._cache_failed[digest_failed]
else:
logger.debug("Login failed for user+password via backend: '%s'", login)
self._lock.acquire()
self._cache_failed[digest_failed] = (time_ns, login)
self._lock.release()
logger.debug("Login failed cache for user set: '%s'", login)
if result_from_cache is True:
if result == "":
self._sleep_for_constant_exec_time(time_ns_begin)
return (result, self._type + " / cached")
else:
if result == "":
self._sleep_for_constant_exec_time(time_ns_begin)
return (result, self._type)
else:
# self._cache_logins is False
result = self._login(login, password)
if result == "":
self._sleep_for_constant_exec_time(time_ns_begin)
return (result, self._type)
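The cache keys used above are salted SHA3-512 digests; a standalone sketch of the same construction (the function mirrors `_cache_digest`, but is an illustration, not the method itself):

```python
import hashlib

def cache_digest(login: str, password: str, salt: str) -> str:
    # Salt first, then login and password; the salt (a timestamp in ns,
    # fixed for the process lifetime) makes precomputed tables useless.
    h = hashlib.sha3_512()
    h.update(salt.encode())
    h.update(login.encode())
    h.update(password.encode())
    return str(h.digest())
```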

192
radicale/auth/dovecot.py Normal file

@@ -0,0 +1,192 @@
# This file is part of Radicale Server - Calendar Server
# Copyright © 2014 Giel van Schijndel
# Copyright © 2019 (GalaxyMaster)
# Copyright © 2025-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import base64
import itertools
import os
import socket
from contextlib import closing
from radicale import auth
from radicale.log import logger
class Auth(auth.BaseAuth):
def __init__(self, configuration):
super().__init__(configuration)
self.timeout = 5
self.request_id_gen = itertools.count(1)
config_family = configuration.get("auth", "dovecot_connection_type")
if config_family == "AF_UNIX":
self.family = socket.AF_UNIX
self.address = configuration.get("auth", "dovecot_socket")
logger.info("auth dovecot socket: %r", self.address)
return
self.address = configuration.get("auth", "dovecot_host"), configuration.get("auth", "dovecot_port")
logger.warning("auth dovecot address: %r (INSECURE, credentials are transmitted in clear text)", self.address)
if config_family == "AF_INET":
self.family = socket.AF_INET
else:
self.family = socket.AF_INET6
def _login(self, login, password):
"""Validate credentials.
Check if the ``login``/``password`` pair is valid according to Dovecot.
This implementation communicates with a Dovecot server through the
Dovecot Authentication Protocol v1.1.
https://dovecot.org/doc/auth-protocol.txt
"""
logger.info("Authentication request (dovecot): '{}'".format(login))
if not login or not password:
return ""
with closing(socket.socket(
self.family,
socket.SOCK_STREAM)
) as sock:
try:
sock.settimeout(self.timeout)
sock.connect(self.address)
buf = bytes()
supported_mechs = []
done = False
seen_part = [0, 0, 0]
# Upon the initial connection we only care about the
# handshake, which is usually just around 100 bytes long,
# e.g.
#
# VERSION 1 2
# MECH PLAIN plaintext
# SPID 22901
# CUID 1
# COOKIE 2dbe4116a30fb4b8a8719f4448420af7
# DONE
#
# Hence, we try to read just once with a buffer big
# enough to hold all of it.
buf = sock.recv(1024)
while b'\n' in buf and not done:
line, buf = buf.split(b'\n', 1)
parts = line.split(b'\t')
first, parts = parts[0], parts[1:]
if first == b'VERSION':
if seen_part[0]:
logger.warning(
"Server presented multiple VERSION "
"tokens, ignoring"
)
continue
version = parts
logger.debug("Dovecot server version: '{}'".format(
(b'.'.join(version)).decode()
))
if int(version[0]) != 1:
logger.fatal(
"Only Dovecot 1.x versions are supported!"
)
return ""
seen_part[0] += 1
elif first == b'MECH':
supported_mechs.append(parts[0])
seen_part[1] += 1
elif first == b'DONE':
seen_part[2] += 1
if not (seen_part[0] and seen_part[1]):
logger.fatal(
"An unexpected end of the server "
"handshake received!"
)
return ""
done = True
if not done:
logger.fatal("Encountered a broken server handshake!")
return ""
logger.debug(
"Supported auth methods: '{}'"
.format((b"', '".join(supported_mechs)).decode())
)
if b'PLAIN' not in supported_mechs:
logger.info(
"Authentication method 'PLAIN' is not supported, "
"but is required!"
)
return ""
# Handshake
logger.debug("Sending auth handshake")
sock.send(b'VERSION\t1\t1\n')
sock.send(b'CPID\t%u\n' % os.getpid())
request_id = next(self.request_id_gen)
logger.debug(
"Authenticating with request id: '{}'"
.format(request_id)
)
sock.send(
b'AUTH\t%u\tPLAIN\tservice=radicale\tresp=%b\n' %
(
request_id, base64.b64encode(
b'\0%b\0%b' %
(login.encode(), password.encode())
)
)
)
logger.debug("Processing auth response")
buf = sock.recv(1024)
line = buf.split(b'\n', 1)[0]
parts = line.split(b'\t')[:2]
resp, reply_id, params = (
parts[0], int(parts[1]),
dict(part.split('=', 1) for part in parts[2:])
)
logger.debug(
"Auth response: result='{}', id='{}', parameters={}"
.format(resp.decode(), reply_id, params)
)
if request_id != reply_id:
logger.fatal(
"Unexpected reply ID {} received (expected {})"
.format(
reply_id, request_id
)
)
return ""
if resp == b'OK':
return login
except socket.error as e:
logger.fatal(
"Failed to communicate with Dovecot: %s" %
(e)
)
return ""
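The handshake loop above can be reduced to a few lines; a sketch under the assumption that the whole greeting arrived in one buffer (the helper name is illustrative, not part of the module):

```python
def parse_handshake(buf: bytes):
    """Return (version, mechanisms, done) parsed from a Dovecot
    auth-protocol greeting (VERSION/MECH/.../DONE lines, tab-separated)."""
    version, mechs, done = None, [], False
    while b'\n' in buf and not done:
        line, buf = buf.split(b'\n', 1)
        first, *parts = line.split(b'\t')
        if first == b'VERSION':
            version = tuple(int(p) for p in parts)
        elif first == b'MECH':
            mechs.append(parts[0])
        elif first == b'DONE':
            done = True
    return version, mechs, done
```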


@@ -3,7 +3,7 @@
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
# Copyright © 2024 Peter Bieringer <pb@bieringer.de>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -36,7 +36,7 @@ pointed to by the ``htpasswd_filename`` configuration value while assuming
the password encryption method specified via the ``htpasswd_encryption``
configuration value.
The following htpasswd password encrpytion methods are supported by Radicale
The following htpasswd password encryption methods are supported by Radicale
out-of-the-box:
- plain-text (created by htpasswd -p ...) -- INSECURE
- MD5-APR1 (htpasswd -m ...) -- htpasswd's default method, INSECURE
@@ -50,7 +50,11 @@ When bcrypt is installed:
import functools
import hmac
from typing import Any
import os
import re
import threading
import time
from typing import Any, Tuple
from passlib.hash import apr_md5_crypt, sha256_crypt, sha512_crypt
@@ -61,72 +65,202 @@ class Auth(auth.BaseAuth):
_filename: str
_encoding: str
_htpasswd: dict # login -> digest
_htpasswd_mtime_ns: int
_htpasswd_size: int
_htpasswd_ok: bool
_htpasswd_not_ok_time: float
_htpasswd_not_ok_reminder_seconds: int
_htpasswd_bcrypt_use: int
_htpasswd_cache: bool
_has_bcrypt: bool
_encryption: str
_lock: threading.Lock
def __init__(self, configuration: config.Configuration) -> None:
super().__init__(configuration)
self._filename = configuration.get("auth", "htpasswd_filename")
logger.info("auth htpasswd file: %r", self._filename)
self._encoding = configuration.get("encoding", "stock")
encryption: str = configuration.get("auth", "htpasswd_encryption")
logger.info("auth htpasswd file encoding: %r", self._encoding)
self._htpasswd_cache = configuration.get("auth", "htpasswd_cache")
logger.info("auth htpasswd cache: %s", self._htpasswd_cache)
self._encryption: str = configuration.get("auth", "htpasswd_encryption")
logger.info("auth htpasswd encryption is 'radicale.auth.htpasswd_encryption.%s'", self._encryption)
logger.info("auth htpasswd encryption is 'radicale.auth.htpasswd_encryption.%s'", encryption)
self._has_bcrypt = False
self._htpasswd_ok = False
self._htpasswd_not_ok_reminder_seconds = 60 # currently hardcoded
(self._htpasswd_ok, self._htpasswd_bcrypt_use, self._htpasswd, self._htpasswd_size, self._htpasswd_mtime_ns) = self._read_htpasswd(True, False)
self._lock = threading.Lock()
if encryption == "plain":
if self._encryption == "plain":
self._verify = self._plain
elif encryption == "md5":
elif self._encryption == "md5":
self._verify = self._md5apr1
elif encryption == "sha256":
elif self._encryption == "sha256":
self._verify = self._sha256
elif encryption == "sha512":
elif self._encryption == "sha512":
self._verify = self._sha512
elif encryption == "bcrypt" or encryption == "autodetect":
elif self._encryption == "bcrypt" or self._encryption == "autodetect":
try:
import bcrypt
except ImportError as e:
raise RuntimeError(
"The htpasswd encryption method 'bcrypt' or 'autodetect' requires "
"the bcrypt module.") from e
if encryption == "bcrypt":
if (self._encryption == "autodetect") and (self._htpasswd_bcrypt_use == 0):
logger.warning("auth htpasswd encryption is 'radicale.auth.htpasswd_encryption.%s' which may require the bcrypt module, but currently no bcrypt entries found", self._encryption)
else:
raise RuntimeError(
"The htpasswd encryption method 'bcrypt' or 'autodetect' requires "
"the bcrypt module (entries found: %d)." % self._htpasswd_bcrypt_use) from e
else:
self._has_bcrypt = True
if self._encryption == "autodetect":
if self._htpasswd_bcrypt_use == 0:
logger.info("auth htpasswd encryption is 'radicale.auth.htpasswd_encryption.%s' and bcrypt module found, but currently not required", self._encryption)
else:
logger.info("auth htpasswd encryption is 'radicale.auth.htpasswd_encryption.%s' and bcrypt module found (bcrypt entries found: %d)", self._encryption, self._htpasswd_bcrypt_use)
if self._encryption == "bcrypt":
self._verify = functools.partial(self._bcrypt, bcrypt)
else:
self._verify = self._autodetect
self._verify_bcrypt = functools.partial(self._bcrypt, bcrypt)
if self._htpasswd_bcrypt_use:
self._verify_bcrypt = functools.partial(self._bcrypt, bcrypt)
else:
raise RuntimeError("The htpasswd encryption method %r is not "
"supported." % encryption)
"supported." % self._encryption)
def _plain(self, hash_value: str, password: str) -> bool:
def _plain(self, hash_value: str, password: str) -> tuple[str, bool]:
"""Check if ``hash_value`` and ``password`` match, plain method."""
return hmac.compare_digest(hash_value.encode(), password.encode())
return ("PLAIN", hmac.compare_digest(hash_value.encode(), password.encode()))
def _bcrypt(self, bcrypt: Any, hash_value: str, password: str) -> bool:
return bcrypt.checkpw(password=password.encode('utf-8'), hashed_password=hash_value.encode())
def _plain_fallback(self, method_orig, hash_value: str, password: str) -> tuple[str, bool]:
"""Check if ``hash_value`` and ``password`` match; plain-method fallback in case the hash length does not match during autodetection."""
info = "PLAIN/fallback as hash length not matching for " + method_orig + ": " + str(len(hash_value))
return (info, hmac.compare_digest(hash_value.encode(), password.encode()))
def _md5apr1(self, hash_value: str, password: str) -> bool:
return apr_md5_crypt.verify(password, hash_value.strip())
def _bcrypt(self, bcrypt: Any, hash_value: str, password: str) -> tuple[str, bool]:
if self._encryption == "autodetect" and len(hash_value) != 60:
return self._plain_fallback("BCRYPT", hash_value, password)
else:
return ("BCRYPT", bcrypt.checkpw(password=password.encode('utf-8'), hashed_password=hash_value.encode()))
def _sha256(self, hash_value: str, password: str) -> bool:
return sha256_crypt.verify(password, hash_value.strip())
def _md5apr1(self, hash_value: str, password: str) -> tuple[str, bool]:
if self._encryption == "autodetect" and len(hash_value) != 37:
return self._plain_fallback("MD5-APR1", hash_value, password)
else:
return ("MD5-APR1", apr_md5_crypt.verify(password, hash_value.strip()))
def _sha512(self, hash_value: str, password: str) -> bool:
return sha512_crypt.verify(password, hash_value.strip())
def _sha256(self, hash_value: str, password: str) -> tuple[str, bool]:
if self._encryption == "autodetect" and len(hash_value) != 63:
return self._plain_fallback("SHA-256", hash_value, password)
else:
return ("SHA-256", sha256_crypt.verify(password, hash_value.strip()))
def _autodetect(self, hash_value: str, password: str) -> bool:
if hash_value.startswith("$apr1$", 0, 6) and len(hash_value) == 37:
def _sha512(self, hash_value: str, password: str) -> tuple[str, bool]:
if self._encryption == "autodetect" and len(hash_value) != 106:
return self._plain_fallback("SHA-512", hash_value, password)
else:
return ("SHA-512", sha512_crypt.verify(password, hash_value.strip()))
def _autodetect(self, hash_value: str, password: str) -> tuple[str, bool]:
if hash_value.startswith("$apr1$", 0, 6):
# MD5-APR1
return self._md5apr1(hash_value, password)
elif hash_value.startswith("$2y$", 0, 4) and len(hash_value) == 60:
elif re.match(r"^\$2(a|b|x|y)?\$", hash_value):
# BCRYPT
return self._verify_bcrypt(hash_value, password)
elif hash_value.startswith("$5$", 0, 3):
# SHA-256
return self._sha256(hash_value, password)
elif hash_value.startswith("$6$", 0, 3):
# SHA-512
return self._sha512(hash_value, password)
else:
# assumed plaintext
return self._plain(hash_value, password)
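The prefix checks above decide which verifier to call. A minimal standalone sketch (the helper name `detect_method` is ours, not Radicale's) makes the dispatch visible:

```python
import re

# Hypothetical helper condensing the prefix checks used by
# _autodetect above to pick the verification method.
def detect_method(digest: str) -> str:
    if digest.startswith("$apr1$"):
        return "MD5-APR1"
    if re.match(r"^\$2(a|b|x|y)?\$", digest):
        return "BCRYPT"
    if digest.startswith("$5$"):
        return "SHA-256"
    if digest.startswith("$6$"):
        return "SHA-512"
    return "PLAIN"  # assumed plaintext, as in the fallback branch

assert detect_method("$apr1$abcdefgh$...") == "MD5-APR1"
assert detect_method("$2b$12$abcdefghijklmnopqrstuv") == "BCRYPT"
assert detect_method("$5$salt$digest") == "SHA-256"
assert detect_method("$6$salt$digest") == "SHA-512"
assert detect_method("plaintext-password") == "PLAIN"
```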
def _read_htpasswd(self, init: bool, suppress: bool) -> Tuple[bool, int, dict, int, int]:
"""Read htpasswd file
init == True: stop on error
init == False: warn/skip on error and set mark to log reminder every interval
suppress == True: suppress warnings, change info to debug (used in non-caching mode)
suppress == False: do not suppress warnings (used in caching mode)
"""
htpasswd_ok = True
bcrypt_use = 0
if (init is True) or (suppress is True):
info = "Read"
else:
info = "Re-read"
if suppress is False:
logger.info("%s content of htpasswd file start: %r", info, self._filename)
else:
logger.debug("%s content of htpasswd file start: %r", info, self._filename)
htpasswd: dict[str, str] = dict()
entries = 0
duplicates = 0
errors = 0
try:
with open(self._filename, encoding=self._encoding) as f:
line_num = 0
for line in f:
line_num += 1
line = line.rstrip("\n")
if line.lstrip() and not line.lstrip().startswith("#"):
try:
login, digest = line.split(":", maxsplit=1)
skip = False
if login == "" or digest == "":
if init is True:
raise ValueError("htpasswd file contains problematic line not matching <login>:<digest> in line: %d" % line_num)
else:
errors += 1
logger.warning("htpasswd file contains problematic line not matching <login>:<digest> in line: %d (ignored)", line_num)
htpasswd_ok = False
skip = True
else:
if htpasswd.get(login):
duplicates += 1
if init is True:
raise ValueError("htpasswd file contains duplicate login: '%s' (line: %d)" % (login, line_num))
else:
logger.warning("htpasswd file contains duplicate login: '%s' (line: %d / ignored)", login, line_num)
htpasswd_ok = False
skip = True
else:
if re.match(r"^\$2(a|b|x|y)?\$", digest) and len(digest) == 60:
if init is True:
bcrypt_use += 1
else:
if self._has_bcrypt is False:
logger.warning("htpasswd file contains bcrypt digest login: '%s' (line: %d / ignored because module is not loaded)", login, line_num)
skip = True
htpasswd_ok = False
if skip is False:
htpasswd[login] = digest
entries += 1
except ValueError as e:
if init is True:
raise RuntimeError("Invalid htpasswd file %r: %s" % (self._filename, e)) from e
except OSError as e:
if init is True:
raise RuntimeError("Failed to load htpasswd file %r: %s" % (self._filename, e)) from e
else:
logger.warning("Failed to load htpasswd file on re-read: %r" % self._filename)
htpasswd_ok = False
htpasswd_size = os.stat(self._filename).st_size
htpasswd_mtime_ns = os.stat(self._filename).st_mtime_ns
if suppress is False:
logger.info("%s content of htpasswd file done: %r (entries: %d, duplicates: %d, errors: %d)", info, self._filename, entries, duplicates, errors)
else:
logger.debug("%s content of htpasswd file done: %r (entries: %d, duplicates: %d, errors: %d)", info, self._filename, entries, duplicates, errors)
if htpasswd_ok is True:
self._htpasswd_not_ok_time = 0
else:
self._htpasswd_not_ok_time = time.time()
return (htpasswd_ok, bcrypt_use, htpasswd, htpasswd_size, htpasswd_mtime_ns)
def _login(self, login: str, password: str) -> str:
"""Validate credentials.
@ -134,30 +268,52 @@ class Auth(auth.BaseAuth):
hash (encrypted password) and check hash against password,
using the method specified in the Radicale config.
Optionally, the content of the file is cached and live updates are detected by
comparing mtime_ns and size
"""
login_ok = False
digest: str
if self._htpasswd_cache is True:
# check and re-read file if required
with self._lock:
htpasswd_size = os.stat(self._filename).st_size
htpasswd_mtime_ns = os.stat(self._filename).st_mtime_ns
if (htpasswd_size != self._htpasswd_size) or (htpasswd_mtime_ns != self._htpasswd_mtime_ns):
(self._htpasswd_ok, self._htpasswd_bcrypt_use, self._htpasswd, self._htpasswd_size, self._htpasswd_mtime_ns) = self._read_htpasswd(False, False)
self._htpasswd_not_ok_time = 0
# log reminder of problematic file every interval
current_time = time.time()
if (self._htpasswd_ok is False):
if (self._htpasswd_not_ok_time > 0):
if (current_time - self._htpasswd_not_ok_time) > self._htpasswd_not_ok_reminder_seconds:
logger.warning("htpasswd file still contains issues (REMINDER, check warnings in the past): %r" % self._filename)
self._htpasswd_not_ok_time = current_time
else:
self._htpasswd_not_ok_time = current_time
if self._htpasswd.get(login):
digest = self._htpasswd[login]
login_ok = True
else:
# read file on every request
(htpasswd_ok, htpasswd_bcrypt_use, htpasswd, htpasswd_size, htpasswd_mtime_ns) = self._read_htpasswd(False, True)
if htpasswd.get(login):
digest = htpasswd[login]
login_ok = True
if login_ok is True:
try:
(method, password_ok) = self._verify(digest, password)
except ValueError as e:
logger.error("Login verification failed for user: '%s' (htpasswd/%s) with error '%s'", login, self._encryption, e)
return ""
if password_ok:
logger.debug("Login verification successful for user: '%s' (htpasswd/%s/%s)", login, self._encryption, method)
return login
else:
logger.warning("Login verification failed for user: '%s' (htpasswd/%s/%s)", login, self._encryption, method)
else:
logger.warning("Login verification user not found (htpasswd): '%s'", login)
return ""
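The caching branch above re-reads the htpasswd file only when its size or mtime_ns no longer matches the cached values. A minimal sketch of that check (the helper name `file_changed` is ours):

```python
import os
import tempfile

# Hypothetical helper mirroring the cache-invalidation test in _login:
# re-read only when size or nanosecond mtime differs from the cache.
def file_changed(path: str, cached_size: int, cached_mtime_ns: int) -> bool:
    st = os.stat(path)
    return (st.st_size != cached_size) or (st.st_mtime_ns != cached_mtime_ns)

# Demo with a throwaway file standing in for the htpasswd file.
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("alice:$6$salt$hash\n")
    path = f.name
st = os.stat(path)
assert not file_changed(path, st.st_size, st.st_mtime_ns)
with open(path, "a") as f:
    f.write("bob:$6$salt$hash\n")   # any append changes the size
assert file_changed(path, st.st_size, st.st_mtime_ns)
os.unlink(path)
```

Comparing both size and mtime_ns catches edits that keep the size constant as well as tools that rewrite the file within the same coarse timestamp.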


@@ -2,7 +2,7 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by

radicale/auth/imap.py Normal file

@@ -0,0 +1,73 @@
# RadicaleIMAP IMAP authentication plugin for Radicale.
# Copyright © 2017, 2020 Unrud <unrud@outlook.com>
# Copyright © 2025-2025 Peter Bieringer <pb@bieringer.de>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import imaplib
import ssl
from radicale import auth
from radicale.log import logger
class Auth(auth.BaseAuth):
"""Authenticate user with IMAP."""
def __init__(self, configuration) -> None:
super().__init__(configuration)
self._host, self._port = self.configuration.get("auth", "imap_host")
logger.info("auth imap host: %r", self._host)
self._security = self.configuration.get("auth", "imap_security")
if self._security == "none":
logger.warning("auth imap security: %s (INSECURE, credentials are transmitted in clear text)", self._security)
else:
logger.info("auth imap security: %s", self._security)
if self._security == "tls":
if self._port is None:
self._port = 993
logger.info("auth imap port (autoselected): %d", self._port)
else:
logger.info("auth imap port: %d", self._port)
else:
if self._port is None:
self._port = 143
logger.info("auth imap port (autoselected): %d", self._port)
else:
logger.info("auth imap port: %d", self._port)
def _login(self, login, password) -> str:
try:
connection: imaplib.IMAP4 | imaplib.IMAP4_SSL
if self._security == "tls":
connection = imaplib.IMAP4_SSL(
host=self._host, port=self._port,
ssl_context=ssl.create_default_context())
else:
connection = imaplib.IMAP4(host=self._host, port=self._port)
if self._security == "starttls":
connection.starttls(ssl.create_default_context())
try:
connection.authenticate(
"PLAIN",
lambda _: "{0}\x00{0}\x00{1}".format(login, password).encode(),
)
except imaplib.IMAP4.error as e:
logger.warning("IMAP authentication failed for user %r: %s", login, e, exc_info=False)
return ""
connection.logout()
return login
except (OSError, imaplib.IMAP4.error) as e:
logger.error("Failed to communicate with IMAP server %r: %s" % ("[%s]:%d" % (self._host, self._port), e))
return ""
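The `connection.authenticate("PLAIN", ...)` call above builds a SASL PLAIN initial response: authorization-id, authentication-id, and password joined by NUL bytes, with the login used for both identities. A tiny sketch (the helper name `sasl_plain` is ours):

```python
# Hypothetical helper mirroring the lambda passed to
# connection.authenticate("PLAIN", ...) in _login above:
# authzid \0 authcid \0 password, login reused for both identities.
def sasl_plain(login: str, password: str) -> bytes:
    return "{0}\x00{0}\x00{1}".format(login, password).encode()

assert sasl_plain("alice", "s3cret") == b"alice\x00alice\x00s3cret"
```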

radicale/auth/ldap.py Normal file

@@ -0,0 +1,269 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2022-2024 Peter Varkoly
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Authentication backend that checks credentials with a LDAP server.
The following parameters are needed in the configuration:
ldap_uri The LDAP URL to the server like ldap://localhost
ldap_base The baseDN of the LDAP server
ldap_reader_dn The DN of a LDAP user with read access to get the user accounts
ldap_secret The password of the ldap_reader_dn
ldap_secret_file The path of the file containing the password of the ldap_reader_dn
ldap_filter The search filter to find the user to authenticate by the username
ldap_user_attribute The attribute to be used as username after authentication
ldap_groups_attribute The attribute containing group memberships in the LDAP user entry
The following parameters control SSL connections:
ldap_use_ssl Whether to use SSL for the connection
ldap_ssl_verify_mode The certificate verification mode: NONE, OPTIONAL, default is REQUIRED
ldap_ssl_ca_file The path to the CA file (PEM format) used to verify the server certificate
"""
import ssl
from radicale import auth, config
from radicale.log import logger
class Auth(auth.BaseAuth):
_ldap_uri: str
_ldap_base: str
_ldap_reader_dn: str
_ldap_secret: str
_ldap_filter: str
_ldap_attributes: list[str] = []
_ldap_user_attr: str
_ldap_groups_attr: str
_ldap_module_version: int = 3
_ldap_use_ssl: bool = False
_ldap_ssl_verify_mode: int = ssl.CERT_REQUIRED
_ldap_ssl_ca_file: str = ""
def __init__(self, configuration: config.Configuration) -> None:
super().__init__(configuration)
try:
import ldap3
self.ldap3 = ldap3
except ImportError:
try:
import ldap
self._ldap_module_version = 2
self.ldap = ldap
except ImportError as e:
raise RuntimeError("LDAP authentication requires the ldap3 module") from e
self._ldap_ignore_attribute_create_modify_timestamp = configuration.get("auth", "ldap_ignore_attribute_create_modify_timestamp")
if self._ldap_ignore_attribute_create_modify_timestamp:
self.ldap3.utils.config._ATTRIBUTES_EXCLUDED_FROM_CHECK.extend(['createTimestamp', 'modifyTimestamp'])
logger.info("auth.ldap_ignore_attribute_create_modify_timestamp applied")
self._ldap_uri = configuration.get("auth", "ldap_uri")
self._ldap_base = configuration.get("auth", "ldap_base")
self._ldap_reader_dn = configuration.get("auth", "ldap_reader_dn")
self._ldap_secret = configuration.get("auth", "ldap_secret")
self._ldap_filter = configuration.get("auth", "ldap_filter")
self._ldap_user_attr = configuration.get("auth", "ldap_user_attribute")
self._ldap_groups_attr = configuration.get("auth", "ldap_groups_attribute")
ldap_secret_file_path = configuration.get("auth", "ldap_secret_file")
if ldap_secret_file_path:
with open(ldap_secret_file_path, 'r') as file:
self._ldap_secret = file.read().rstrip('\n')
if self._ldap_module_version == 3:
self._ldap_use_ssl = configuration.get("auth", "ldap_use_ssl")
if self._ldap_use_ssl:
self._ldap_ssl_ca_file = configuration.get("auth", "ldap_ssl_ca_file")
tmp = configuration.get("auth", "ldap_ssl_verify_mode")
if tmp == "NONE":
self._ldap_ssl_verify_mode = ssl.CERT_NONE
elif tmp == "OPTIONAL":
self._ldap_ssl_verify_mode = ssl.CERT_OPTIONAL
logger.info("auth.ldap_uri : %r" % self._ldap_uri)
logger.info("auth.ldap_base : %r" % self._ldap_base)
logger.info("auth.ldap_reader_dn : %r" % self._ldap_reader_dn)
logger.info("auth.ldap_filter : %r" % self._ldap_filter)
if self._ldap_user_attr:
logger.info("auth.ldap_user_attribute : %r" % self._ldap_user_attr)
else:
logger.info("auth.ldap_user_attribute : (not provided)")
if self._ldap_groups_attr:
logger.info("auth.ldap_groups_attribute: %r" % self._ldap_groups_attr)
else:
logger.info("auth.ldap_groups_attribute: (not provided)")
if ldap_secret_file_path:
logger.info("auth.ldap_secret_file_path: %r" % ldap_secret_file_path)
if self._ldap_secret:
logger.info("auth.ldap_secret : (from file)")
else:
logger.info("auth.ldap_secret_file_path: (not provided)")
if self._ldap_secret:
logger.info("auth.ldap_secret : (from config)")
if self._ldap_reader_dn and not self._ldap_secret:
logger.error("auth.ldap_secret : (not provided)")
raise RuntimeError("LDAP authentication requires ldap_secret for ldap_reader_dn")
logger.info("auth.ldap_use_ssl : %s" % self._ldap_use_ssl)
if self._ldap_use_ssl is True:
logger.info("auth.ldap_ssl_verify_mode : %s" % self._ldap_ssl_verify_mode)
if self._ldap_ssl_ca_file:
logger.info("auth.ldap_ssl_ca_file : %r" % self._ldap_ssl_ca_file)
else:
logger.info("auth.ldap_ssl_ca_file : (not provided)")
"""Extend attributes to be returned in the user query"""
if self._ldap_groups_attr:
self._ldap_attributes.append(self._ldap_groups_attr)
if self._ldap_user_attr:
self._ldap_attributes.append(self._ldap_user_attr)
logger.info("ldap_attributes : %r" % self._ldap_attributes)
def _login2(self, login: str, password: str) -> str:
try:
"""Bind as reader dn"""
logger.debug(f"_login2 {self._ldap_uri}, {self._ldap_reader_dn}")
conn = self.ldap.initialize(self._ldap_uri)
conn.protocol_version = 3
conn.set_option(self.ldap.OPT_REFERRALS, 0)
conn.simple_bind_s(self._ldap_reader_dn, self._ldap_secret)
"""Search for the dn of user to authenticate"""
escaped_login = self.ldap.filter.escape_filter_chars(login)
logger.debug(f"_login2 login escaped for LDAP filters: {escaped_login}")
res = conn.search_s(
self._ldap_base,
self.ldap.SCOPE_SUBTREE,
filterstr=self._ldap_filter.format(escaped_login),
attrlist=self._ldap_attributes
)
if len(res) != 1:
"""User could not be found unambiguously"""
logger.debug(f"_login2 no unique DN found for '{login}'")
return ""
user_entry = res[0]
user_dn = user_entry[0]
logger.debug(f"_login2 found LDAP user DN {user_dn}")
"""Close LDAP connection"""
conn.unbind()
except Exception as e:
raise RuntimeError(f"Invalid LDAP configuration: {e}")
try:
"""Bind as user to authenticate"""
conn = self.ldap.initialize(self._ldap_uri)
conn.protocol_version = 3
conn.set_option(self.ldap.OPT_REFERRALS, 0)
conn.simple_bind_s(user_dn, password)
tmp: list[str] = []
if self._ldap_groups_attr:
tmp = []
for g in user_entry[1][self._ldap_groups_attr]:
"""Get group g's RDN's attribute value"""
try:
rdns = self.ldap.dn.explode_dn(g, notypes=True)
tmp.append(rdns[0])
except Exception:
tmp.append(g.decode('utf8'))
self._ldap_groups = set(tmp)
logger.debug("_login2 LDAP groups of user: %s", ",".join(self._ldap_groups))
if self._ldap_user_attr:
if user_entry[1][self._ldap_user_attr]:
tmplogin = user_entry[1][self._ldap_user_attr][0]
login = tmplogin.decode('utf-8')
logger.debug(f"_login2 user set to: '{login}'")
conn.unbind()
logger.debug(f"_login2 {login} successfully authenticated")
return login
except self.ldap.INVALID_CREDENTIALS:
return ""
def _login3(self, login: str, password: str) -> str:
"""Connect the server"""
try:
logger.debug(f"_login3 {self._ldap_uri}, {self._ldap_reader_dn}")
if self._ldap_use_ssl:
tls = self.ldap3.Tls(validate=self._ldap_ssl_verify_mode)
if self._ldap_ssl_ca_file != "":
tls = self.ldap3.Tls(
validate=self._ldap_ssl_verify_mode,
ca_certs_file=self._ldap_ssl_ca_file
)
server = self.ldap3.Server(self._ldap_uri, use_ssl=True, tls=tls)
else:
server = self.ldap3.Server(self._ldap_uri)
conn = self.ldap3.Connection(server, self._ldap_reader_dn, password=self._ldap_secret)
except self.ldap3.core.exceptions.LDAPSocketOpenError:
raise RuntimeError("Unable to reach LDAP server")
except Exception as e:
logger.debug(f"_login3 error 1 {e}")
raise RuntimeError(f"Unable to connect to LDAP server: {e}")
if not conn.bind():
logger.debug("_login3 cannot bind")
raise RuntimeError("Unable to read from LDAP server")
logger.debug(f"_login3 bind as {self._ldap_reader_dn}")
"""Search the user dn"""
escaped_login = self.ldap3.utils.conv.escape_filter_chars(login)
logger.debug(f"_login3 login escaped for LDAP filters: {escaped_login}")
conn.search(
search_base=self._ldap_base,
search_filter=self._ldap_filter.format(escaped_login),
search_scope=self.ldap3.SUBTREE,
attributes=self._ldap_attributes
)
if len(conn.entries) != 1:
"""User could not be found unambiguously"""
logger.debug(f"_login3 no unique DN found for '{login}'")
return ""
user_entry = conn.response[0]
conn.unbind()
user_dn = user_entry['dn']
logger.debug(f"_login3 found LDAP user DN {user_dn}")
try:
"""Try to bind as the user itself"""
conn = self.ldap3.Connection(server, user_dn, password=password)
if not conn.bind():
logger.debug(f"_login3 user '{login}' cannot be found")
return ""
tmp: list[str] = []
if self._ldap_groups_attr:
tmp = []
for g in user_entry['attributes'][self._ldap_groups_attr]:
"""Get group g's RDN's attribute value"""
try:
rdns = self.ldap3.utils.dn.parse_dn(g)
tmp.append(rdns[0][1])
except Exception:
tmp.append(g)
self._ldap_groups = set(tmp)
logger.debug("_login3 LDAP groups of user: %s", ",".join(self._ldap_groups))
if self._ldap_user_attr:
if user_entry['attributes'][self._ldap_user_attr]:
login = user_entry['attributes'][self._ldap_user_attr]
logger.debug(f"_login3 user set to: '{login}'")
conn.unbind()
logger.debug(f"_login3 {login} successfully authenticated")
return login
except Exception as e:
logger.debug(f"_login3 error 2 {e}")
pass
return ""
def _login(self, login: str, password: str) -> str:
"""Validate credentials.
First, a connection to the LDAP server is made with the ldap_reader_dn credentials.
Next, the DN of the user to authenticate is searched.
Finally, the user itself is authenticated.
"""
if self._ldap_module_version == 2:
return self._login2(login, password)
return self._login3(login, password)
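Both login paths escape the username with the LDAP library's filter escaping before substituting it into `ldap_filter`, so user input cannot change the filter structure. A simplified pure-Python sketch of what `ldap3.utils.conv.escape_filter_chars` does (this is our approximation, not the library's implementation):

```python
# Simplified sketch of RFC 4515 filter escaping: the characters that
# carry meaning in a search filter are replaced by \XX hex escapes.
def escape_filter_chars(value: str) -> str:
    out = []
    for ch in value:
        if ch in '\\*()\x00':
            out.append('\\%02x' % ord(ch))
        else:
            out.append(ch)
    return ''.join(out)

# A wildcard or parenthesis injected by the user is neutralized:
assert escape_filter_chars("ad*min)") == "ad\\2amin\\29"
# An ordinary login passes through unchanged into the filter template:
assert "(cn={0})".format(escape_filter_chars("alice")) == "(cn=alice)"
```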

radicale/auth/oauth2.py Normal file

@@ -0,0 +1,66 @@
# This file is part of Radicale Server - Calendar Server
#
# Original from https://gitlab.mim-libre.fr/alphabet/radicale_oauth/
# Copyright © 2021-2022 Bruno Boiget
# Copyright © 2022-2022 Daniel Dehennin
#
# Since migration into upstream
# Copyright © 2025-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Authentication backend that checks credentials against an oauth2 server auth endpoint
"""
import requests
from radicale import auth
from radicale.log import logger
class Auth(auth.BaseAuth):
def __init__(self, configuration):
super().__init__(configuration)
self._endpoint = configuration.get("auth", "oauth2_token_endpoint")
if not self._endpoint:
logger.error("auth.oauth2_token_endpoint URL missing")
raise RuntimeError("OAuth2 token endpoint URL is required")
logger.info("auth OAuth2 token endpoint: %s" % (self._endpoint))
def _login(self, login, password):
"""Validate credentials.
Sends login credentials to oauth token endpoint and checks that a token is returned
"""
try:
# authenticate to authentication endpoint and return login if ok, else ""
req_params = {
"username": login,
"password": password,
"grant_type": "password",
"client_id": "radicale",
}
req_headers = {"Content-Type": "application/x-www-form-urlencoded"}
response = requests.post(
self._endpoint, data=req_params, headers=req_headers
)
if (
response.status_code == requests.codes.ok
and "access_token" in response.json()
):
return login
except OSError as e:
logger.critical("Failed to authenticate against OAuth2 server %s: %s" % (self._endpoint, e))
logger.warning("User failed to authenticate using OAuth2: %r" % login)
return ""
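The backend POSTs a resource-owner password grant as a form-encoded body. A stdlib sketch of that body (values are assumed placeholders; the field names match the `req_params` dict above):

```python
from urllib.parse import urlencode

# Form body for the OAuth2 password grant, as sent to the
# oauth2_token_endpoint; "alice"/"s3cret" are illustrative values.
body = urlencode({
    "username": "alice",
    "password": "s3cret",
    "grant_type": "password",
    "client_id": "radicale",
})
assert "grant_type=password" in body
assert "client_id=radicale" in body
```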

radicale/auth/pam.py Normal file

@@ -0,0 +1,105 @@
# -*- coding: utf-8 -*-
#
# This file is part of Radicale Server - Calendar Server
# Copyright © 2011 Henry-Nicolas Tourneur
# Copyright © 2021-2021 Unrud <unrud@outlook.com>
# Copyright © 2025-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
PAM authentication.
Authentication using the ``pam-python`` module.
Important: the radicale user needs read access to /etc/shadow, e.g.:
chgrp radicale /etc/shadow
chmod g+r /etc/shadow
"""
import grp
import pwd
from radicale import auth
from radicale.log import logger
class Auth(auth.BaseAuth):
def __init__(self, configuration) -> None:
super().__init__(configuration)
try:
import pam
self.pam = pam
except ImportError as e:
raise RuntimeError("PAM authentication requires the Python pam module") from e
self._service = configuration.get("auth", "pam_service")
logger.info("auth.pam_service: %s" % self._service)
self._group_membership = configuration.get("auth", "pam_group_membership")
if (self._group_membership):
logger.info("auth.pam_group_membership: %s" % self._group_membership)
else:
logger.warning("auth.pam_group_membership: (empty, nothing to check / INSECURE)")
def pam_authenticate(self, *args, **kwargs):
return self.pam.authenticate(*args, **kwargs)
def _login(self, login: str, password: str) -> str:
"""Check if ``user``/``password`` couple is valid."""
if login is None or password is None:
return ""
# Check whether the user exists in the PAM system
try:
pwd.getpwnam(login).pw_uid
except KeyError:
logger.debug("PAM user not found: %r" % login)
return ""
else:
logger.debug("PAM user found: %r" % login)
# Check whether the user has a primary group (mandatory)
try:
# Get user primary group
primary_group = grp.getgrgid(pwd.getpwnam(login).pw_gid).gr_name
logger.debug("PAM user %r has primary group: %r" % (login, primary_group))
except KeyError:
logger.debug("PAM user has no primary group: %r" % login)
return ""
# Obtain supplementary groups
members = []
if (self._group_membership):
try:
members = grp.getgrnam(self._group_membership).gr_mem
except KeyError:
logger.debug(
"PAM membership required group doesn't exist: %r" %
self._group_membership)
return ""
# Check whether the user belongs to the required group
# (primary or supplementary)
if (self._group_membership):
if (primary_group != self._group_membership) and (login not in members):
logger.warning("PAM user %r does not belong to the required group: %r" % (login, self._group_membership))
return ""
else:
logger.debug("PAM user %r belongs to the required group: %r" % (login, self._group_membership))
# Check the password
if self.pam_authenticate(login, password, service=self._service):
return login
else:
logger.debug("PAM authentication not successful for user: %r (service %r)" % (login, self._service))
return ""
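The group check above accepts a user when the required group is the primary group or the user is listed among its supplementary members; an empty `pam_group_membership` skips the check entirely. A pure-logic sketch (the helper name `group_ok` is ours):

```python
# Hypothetical helper mirroring the membership check in _login:
# required empty -> nothing to check (the INSECURE case warned about);
# otherwise primary group match or supplementary membership suffices.
def group_ok(required: str, primary_group: str,
             member_logins: list, login: str) -> bool:
    if not required:
        return True
    return primary_group == required or login in member_logins

assert group_ok("", "users", [], "alice")              # no check configured
assert group_ok("radicale", "radicale", [], "alice")   # primary group
assert group_ok("radicale", "users", ["alice"], "alice")  # supplementary
assert not group_ok("radicale", "users", [], "alice")  # rejected
```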


@@ -2,7 +2,7 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by


@@ -3,7 +3,7 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2017-2020 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -104,6 +104,29 @@ def _convert_to_bool(value: Any) -> bool:
return RawConfigParser.BOOLEAN_STATES[value.lower()]
def imap_address(value):
if "]" in value:
pre_address, pre_address_port = value.rsplit("]", 1)
else:
pre_address, pre_address_port = "", value
if ":" in pre_address_port:
pre_address2, port = pre_address_port.rsplit(":", 1)
address = pre_address + pre_address2
else:
address, port = pre_address + pre_address_port, None
try:
return (address.strip(string.whitespace + "[]"),
None if port is None else int(port))
except ValueError:
raise ValueError("malformed IMAP address: %r" % value)
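The parser above accepts a bare host, `host:port`, and a bracketed IPv6 form. This sketch duplicates it (with the `string` import it relies on) so the three accepted forms can be exercised standalone:

```python
import string

# Same parsing as the imap_address config type above:
# "host", "host:port" and "[v6-address]:port" are accepted.
def imap_address(value):
    if "]" in value:
        pre_address, pre_address_port = value.rsplit("]", 1)
    else:
        pre_address, pre_address_port = "", value
    if ":" in pre_address_port:
        pre_address2, port = pre_address_port.rsplit(":", 1)
        address = pre_address + pre_address2
    else:
        address, port = pre_address + pre_address_port, None
    try:
        return (address.strip(string.whitespace + "[]"),
                None if port is None else int(port))
    except ValueError:
        raise ValueError("malformed IMAP address: %r" % value)

assert imap_address("localhost") == ("localhost", None)
assert imap_address("imap.example.com:143") == ("imap.example.com", 143)
assert imap_address("[::1]:993") == ("::1", 993)
```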
def imap_security(value):
if value not in ("tls", "starttls", "none"):
raise ValueError("unsupported IMAP security: %r" % value)
return value
def json_str(value: Any) -> dict:
if not value:
return {}
@@ -141,6 +164,14 @@ DEFAULT_CONFIG_SCHEMA: types.CONFIG_SCHEMA = OrderedDict([
"aliases": ("-s", "--ssl",),
"opposite_aliases": ("-S", "--no-ssl",),
"type": bool}),
("protocol", {
"value": "",
"help": "SSL/TLS protocol (Apache SSLProtocol format)",
"type": str}),
("ciphersuite", {
"value": "",
"help": "SSL/TLS Cipher Suite (OpenSSL cipher list format)",
"type": str}),
("certificate", {
"value": "/etc/ssl/radicale.cert.pem",
"help": "set certificate file",
@@ -156,6 +187,10 @@ DEFAULT_CONFIG_SCHEMA: types.CONFIG_SCHEMA = OrderedDict([
"help": "set CA certificate for validating clients",
"aliases": ("--certificate-authority",),
"type": filepath}),
("script_name", {
"value": "",
"help": "script name to strip from URI if called by reverse proxy (default taken from HTTP_X_SCRIPT_NAME or SCRIPT_NAME)",
"type": str}),
("_internal_server", {
"value": "False",
"help": "the internal server is used",
@@ -171,18 +206,51 @@ DEFAULT_CONFIG_SCHEMA: types.CONFIG_SCHEMA = OrderedDict([
"type": str})])),
("auth", OrderedDict([
("type", {
"value": "denyall",
"help": "authentication method (" + "|".join(auth.INTERNAL_TYPES) + ")",
"type": str_or_callable,
"internal": auth.INTERNAL_TYPES}),
("cache_logins", {
"value": "false",
"help": "cache successful/failed logins until expiration time",
"type": bool}),
("cache_successful_logins_expiry", {
"value": "15",
"help": "expiration time for caching successful logins in seconds",
"type": int}),
("cache_failed_logins_expiry", {
"value": "90",
"help": "expiration time for caching failed logins in seconds",
"type": int}),
("htpasswd_filename", {
"value": "/etc/radicale/users",
"help": "htpasswd filename",
"type": filepath}),
("htpasswd_encryption", {
"value": "autodetect",
"help": "htpasswd encryption method",
"type": str}),
("htpasswd_cache", {
"value": "False",
"help": "enable caching of htpasswd file",
"type": bool}),
("dovecot_connection_type", {
"value": "AF_UNIX",
"help": "Connection type for dovecot authentication",
"type": str_or_callable,
"internal": auth.AUTH_SOCKET_FAMILY}),
("dovecot_socket", {
"value": "/var/run/dovecot/auth-client",
"help": "dovecot auth AF_UNIX socket",
"type": str}),
("dovecot_host", {
"value": "localhost",
"help": "dovecot auth AF_INET or AF_INET6 host",
"type": str}),
("dovecot_port", {
"value": "12345",
"help": "dovecot auth port",
"type": int}),
("realm", {
"value": "Radicale - Password Required",
"help": "message displayed when a password is needed",
@@ -191,6 +259,82 @@ DEFAULT_CONFIG_SCHEMA: types.CONFIG_SCHEMA = OrderedDict([
"value": "1",
"help": "incorrect authentication delay",
"type": positive_float}),
("ldap_ignore_attribute_create_modify_timestamp", {
"value": "false",
"help": "Ignore modifyTimestamp and createTimestamp attributes. Needed if an Authentik LDAP server is used.",
"type": bool}),
("ldap_uri", {
"value": "ldap://localhost",
"help": "URI to the ldap server",
"type": str}),
("ldap_base", {
"value": "",
"help": "LDAP base DN of the ldap server",
"type": str}),
("ldap_reader_dn", {
"value": "",
"help": "the DN of a ldap user with read access to get the user accounts",
"type": str}),
("ldap_secret", {
"value": "",
"help": "the password of the ldap_reader_dn",
"type": str}),
("ldap_secret_file", {
"value": "",
"help": "path of the file containing the password of the ldap_reader_dn",
"type": str}),
("ldap_filter", {
"value": "(cn={0})",
"help": "the search filter to find the user DN to authenticate by the username",
"type": str}),
("ldap_user_attribute", {
"value": "",
"help": "the attribute to be used as username after authentication",
"type": str}),
("ldap_groups_attribute", {
"value": "",
"help": "attribute to read the group memberships from",
"type": str}),
("ldap_use_ssl", {
"value": "False",
"help": "Use ssl on the ldap connection",
"type": bool}),
("ldap_ssl_verify_mode", {
"value": "REQUIRED",
"help": "The certificate verification mode. NONE, OPTIONAL, default is REQUIRED",
"type": str}),
("ldap_ssl_ca_file", {
"value": "",
"help": "The path to the CA file in pem format which is used to certificate the server certificate",
"type": str}),
("imap_host", {
"value": "localhost",
"help": "IMAP server hostname: address|address:port|[address]:port|*localhost*",
"type": imap_address}),
("imap_security", {
"value": "tls",
"help": "Secure the IMAP connection: *tls*|starttls|none",
"type": imap_security}),
("oauth2_token_endpoint", {
"value": "",
"help": "OAuth2 token endpoint URL",
"type": str}),
("pam_group_membership", {
"value": "",
"help": "PAM group user should be member of",
"type": str}),
("pam_service", {
"value": "radicale",
"help": "PAM service",
"type": str}),
("strip_domain", {
"value": "False",
"help": "strip domain from username",
"type": bool}),
("uc_username", {
"value": "False",
"help": "convert username to uppercase, must be true for case-insensitive auth providers",
"type": bool}),
("lc_username", {
"value": "False",
"help": "convert username to lowercase, must be true for case-insensitive auth providers",
@@ -205,6 +349,10 @@ DEFAULT_CONFIG_SCHEMA: types.CONFIG_SCHEMA = OrderedDict([
"value": "True",
"help": "permit delete of a collection",
"type": bool}),
("permit_overwrite_collection", {
"value": "True",
"help": "permit overwrite of a collection",
"type": bool}),
("file", {
"value": "/etc/radicale/rights",
"help": "file for rights management from_file",
@ -219,6 +367,30 @@ DEFAULT_CONFIG_SCHEMA: types.CONFIG_SCHEMA = OrderedDict([
"value": "/var/lib/radicale/collections",
"help": "path where collections are stored",
"type": filepath}),
("filesystem_cache_folder", {
"value": "",
"help": "path where cache of collections is stored in case of use_cache_subfolder_* options are active",
"type": filepath}),
("use_cache_subfolder_for_item", {
"value": "False",
"help": "use subfolder 'collection-cache' for 'item' cache file structure instead of inside collection folder",
"type": bool}),
("use_cache_subfolder_for_history", {
"value": "False",
"help": "use subfolder 'collection-cache' for 'history' cache file structure instead of inside collection folder",
"type": bool}),
("use_cache_subfolder_for_synctoken", {
"value": "False",
"help": "use subfolder 'collection-cache' for 'sync-token' cache file structure instead of inside collection folder",
"type": bool}),
("use_mtime_and_size_for_item_cache", {
"value": "False",
"help": "use mtime and file size instead of SHA256 for 'item' cache (improves speed)",
"type": bool}),
("folder_umask", {
"value": "",
"help": "umask for folder creation (empty: system default)",
"type": str}),
("max_sync_token_age", {
"value": "2592000", # 30 days
"help": "delete sync token that are older",
@ -288,12 +460,26 @@ DEFAULT_CONFIG_SCHEMA: types.CONFIG_SCHEMA = OrderedDict([
"value": "False",
"help": "log response content on level=debug",
"type": bool}),
("rights_rule_doesnt_match_on_debug", {
"value": "False",
"help": "log rights rules which doesn't match on level=debug",
"type": bool}),
("storage_cache_actions_on_debug", {
"value": "False",
"help": "log storage cache action on level=debug",
"type": bool}),
("mask_passwords", {
"value": "True",
"help": "mask passwords in logs",
"type": bool})])),
("headers", OrderedDict([
("_allow_extra", str)]))])
("_allow_extra", str)])),
("reporting", OrderedDict([
("max_freebusy_occurrence", {
"value": "10000",
"help": "number of occurrences per event when reporting",
"type": positive_int})]))
])
def parse_compound_paths(*compound_paths: Optional[str]
@ -345,7 +531,7 @@ def load(paths: Optional[Iterable[Tuple[str, bool]]] = None
config_source = "config file %r" % path
config: types.CONFIG
try:
with open(path, "r") as f:
with open(path) as f:
parser.read_file(f)
config = {s: {o: parser[s][o] for o in parser.options(s)}
for s in parser.sections()}

View file

@ -3,14 +3,23 @@ from enum import Enum
from typing import Sequence
from radicale import pathutils, utils
from radicale.log import logger
INTERNAL_TYPES: Sequence[str] = ("none", "rabbitmq")
def load(configuration):
"""Load the storage module chosen in configuration."""
return utils.load_plugin(
INTERNAL_TYPES, "hook", "Hook", BaseHook, configuration)
try:
return utils.load_plugin(
INTERNAL_TYPES, "hook", "Hook", BaseHook, configuration)
except Exception as e:
logger.warning(e)
logger.warning("Hook \"%s\" failed to load, falling back to \"none\"." % configuration.get("hook", "type"))
configuration = configuration.copy()
configuration.update({"hook": {"type": "none"}}, "hook", privileged=True)
return utils.load_plugin(
INTERNAL_TYPES, "hook", "Hook", BaseHook, configuration)
class BaseHook:

View file

@ -3,7 +3,7 @@
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2022 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -42,7 +42,10 @@ else:
import importlib.abc
from importlib import resources
_TRAVERSABLE_LIKE_TYPE = Union[importlib.abc.Traversable, pathlib.Path]
if sys.version_info < (3, 13):
_TRAVERSABLE_LIKE_TYPE = Union[importlib.abc.Traversable, pathlib.Path]
else:
_TRAVERSABLE_LIKE_TYPE = Union[importlib.resources.abc.Traversable, pathlib.Path]
NOT_ALLOWED: types.WSGIResponse = (
client.FORBIDDEN, (("Content-Type", "text/plain"),),
@ -76,6 +79,9 @@ REMOTE_DESTINATION: types.WSGIResponse = (
DIRECTORY_LISTING: types.WSGIResponse = (
client.FORBIDDEN, (("Content-Type", "text/plain"),),
"Directory listings are not supported.")
INSUFFICIENT_STORAGE: types.WSGIResponse = (
client.INSUFFICIENT_STORAGE, (("Content-Type", "text/plain"),),
"Insufficient Storage. Please contact the administrator.")
INTERNAL_SERVER_ERROR: types.WSGIResponse = (
client.INTERNAL_SERVER_ERROR, (("Content-Type", "text/plain"),),
"A server error occurred. Please contact the administrator.")
@ -146,7 +152,7 @@ def read_request_body(configuration: "config.Configuration",
if configuration.get("logging", "request_content_on_debug"):
logger.debug("Request content:\n%s", content)
else:
logger.debug("Request content: suppressed by config/option [auth] request_content_on_debug")
logger.debug("Request content: suppressed by config/option [logging] request_content_on_debug")
return content
@ -190,6 +196,24 @@ def _serve_traversable(
"%a, %d %b %Y %H:%M:%S GMT",
time.gmtime(traversable.stat().st_mtime))
answer = traversable.read_bytes()
if path == "/.web/index.html" or path == "/.web/":
# enable link on the fly in index.html if InfCloud index.html is existing
# class="infcloudlink-hidden" -> class="infcloudlink"
path_posix = str(traversable)
path_posix_infcloud = path_posix.replace("/internal_data/index.html", "/internal_data/infcloud/index.html")
if os.path.isfile(path_posix_infcloud):
# logger.debug("Enable InfCloud link in served page: %r", path)
answer = answer.replace(b"infcloudlink-hidden", b"infcloud")
elif path == "/.web/infcloud/config.js":
# adjust on the fly default config.js of InfCloud installation
# logger.debug("Adjust on-the-fly default InfCloud config.js in served page: %r", path)
answer = answer.replace(b"location.pathname.replace(RegExp('/+[^/]+/*(index\\.html)?$'),'')+", b"location.pathname.replace(RegExp('/\\.web\\.infcloud/(index\\.html)?$'),'')+")
answer = answer.replace(b"'/caldav.php/',", b"'/',")
answer = answer.replace(b"settingsAccount: true,", b"settingsAccount: false,")
elif path == "/.web/infcloud/main.js":
# adjust on the fly default main.js of InfCloud installation
logger.debug("Adjust on-the-fly default InfCloud main.js in served page: %r", path)
answer = answer.replace(b"'InfCloud - the open source CalDAV/CardDAV web client'", b"'InfCloud - the open source CalDAV/CardDAV web client - served through Radicale CalDAV/CardDAV server'")
return client.OK, headers, answer

View file

@ -49,6 +49,12 @@ def read_components(s: str) -> List[vobject.base.Component]:
s = re.sub(r"^(PHOTO(?:;[^:\r\n]*)?;ENCODING=b(?:;[^:\r\n]*)?:)"
r"data:[^;,\r\n]*;base64,", r"\1", s,
flags=re.MULTILINE | re.IGNORECASE)
# Workaround for bug with malformed ICS files containing control codes
# Filter out all control codes except those we expect to find:
# * 0x09 Horizontal Tab
# * 0x0A Line Feed
# * 0x0D Carriage Return
s = re.sub(r'[\x00-\x08\x0B\x0C\x0E-\x1F]', '', s)
return list(vobject.readComponents(s, allowQP=True))
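The control-code filter added above can be exercised on its own. A minimal sketch (the function name is hypothetical; the regex is the one from the workaround):

```python
import re

def strip_control_codes(s: str) -> str:
    # Remove all C0 control codes except Horizontal Tab (0x09),
    # Line Feed (0x0A) and Carriage Return (0x0D), matching the
    # workaround for malformed ICS input above.
    return re.sub(r'[\x00-\x08\x0B\x0C\x0E-\x1F]', '', s)

# A stray NUL is dropped while TAB and CRLF survive untouched.
cleaned = strip_control_codes("BEGIN:VEVENT\x00\r\n\tSUMMARY:Test")
```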
@ -298,7 +304,7 @@ def find_time_range(vobject_item: vobject.base.Component, tag: str
Returns a tuple (``start``, ``end``) where ``start`` and ``end`` are
POSIX timestamps.
This is intened to be used for matching against simplified prefilters.
This is intended to be used for matching against simplified prefilters.
"""
if not tag:

View file

@ -2,7 +2,8 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2015 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -48,10 +49,34 @@ def date_to_datetime(d: date) -> datetime:
if not isinstance(d, datetime):
d = datetime.combine(d, datetime.min.time())
if not d.tzinfo:
d = d.replace(tzinfo=timezone.utc)
# NOTE: using vobject's UTC as it wasn't playing well with datetime's.
d = d.replace(tzinfo=vobject.icalendar.utc)
return d
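The promotion of a bare date to a timezone-aware datetime can be sketched with the stdlib alone (the real code now attaches vobject's UTC tzinfo instead of `timezone.utc`):

```python
from datetime import date, datetime, timezone

def date_to_datetime(d):
    # Promote a bare date to a timezone-aware datetime at midnight UTC,
    # mirroring the function above (which uses vobject.icalendar.utc).
    if not isinstance(d, datetime):
        d = datetime.combine(d, datetime.min.time())
    if not d.tzinfo:
        d = d.replace(tzinfo=timezone.utc)
    return d

dt = date_to_datetime(date(2025, 3, 29))  # 2025-03-29 00:00:00+00:00
```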
def parse_time_range(time_filter: ET.Element) -> Tuple[datetime, datetime]:
start_text = time_filter.get("start")
end_text = time_filter.get("end")
if start_text:
start = datetime.strptime(
start_text, "%Y%m%dT%H%M%SZ").replace(
tzinfo=timezone.utc)
else:
start = DATETIME_MIN
if end_text:
end = datetime.strptime(
end_text, "%Y%m%dT%H%M%SZ").replace(
tzinfo=timezone.utc)
else:
end = DATETIME_MAX
return start, end
def time_range_timestamps(time_filter: ET.Element) -> Tuple[int, int]:
start, end = parse_time_range(time_filter)
return (math.floor(start.timestamp()), math.ceil(end.timestamp()))
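The refactored helpers above parse CalDAV `time-range` attributes once instead of in three places. A self-contained sketch of the parsing (taking the attribute strings directly rather than an `ET.Element`):

```python
import math
from datetime import datetime, timezone

DATETIME_MIN = datetime.min.replace(tzinfo=timezone.utc)
DATETIME_MAX = datetime.max.replace(tzinfo=timezone.utc)

def parse_time_range(start_text, end_text):
    # Parse "20250329T000000Z"-style bounds, substituting the
    # representable extremes for missing attributes.
    start = (datetime.strptime(start_text, "%Y%m%dT%H%M%SZ")
             .replace(tzinfo=timezone.utc) if start_text else DATETIME_MIN)
    end = (datetime.strptime(end_text, "%Y%m%dT%H%M%SZ")
           .replace(tzinfo=timezone.utc) if end_text else DATETIME_MAX)
    return start, end

start, end = parse_time_range("20250329T000000Z", None)
start_ts = math.floor(start.timestamp())  # POSIX timestamp for prefilters
```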
def comp_match(item: "item.Item", filter_: ET.Element, level: int = 0) -> bool:
"""Check whether the ``item`` matches the comp ``filter_``.
@ -147,21 +172,10 @@ def time_range_match(vobject_item: vobject.base.Component,
"""Check whether the component/property ``child_name`` of
``vobject_item`` matches the time-range ``filter_``."""
start_text = filter_.get("start")
end_text = filter_.get("end")
if not start_text and not end_text:
if not filter_.get("start") and not filter_.get("end"):
return False
if start_text:
start = datetime.strptime(start_text, "%Y%m%dT%H%M%SZ")
else:
start = datetime.min
if end_text:
end = datetime.strptime(end_text, "%Y%m%dT%H%M%SZ")
else:
end = datetime.max
start = start.replace(tzinfo=timezone.utc)
end = end.replace(tzinfo=timezone.utc)
start, end = parse_time_range(filter_)
matched = False
def range_fn(range_start: datetime, range_end: datetime,
@ -181,6 +195,35 @@ def time_range_match(vobject_item: vobject.base.Component,
return matched
def time_range_fill(vobject_item: vobject.base.Component,
filter_: ET.Element, child_name: str, n: int = 1
) -> List[Tuple[datetime, datetime]]:
"""Create a list of ``n`` occurances from the component/property ``child_name``
of ``vobject_item``."""
if not filter_.get("start") and not filter_.get("end"):
return []
start, end = parse_time_range(filter_)
ranges: List[Tuple[datetime, datetime]] = []
def range_fn(range_start: datetime, range_end: datetime,
is_recurrence: bool) -> bool:
nonlocal ranges
if start < range_end and range_start < end:
ranges.append((range_start, range_end))
if n > 0 and len(ranges) >= n:
return True
if end < range_start and not is_recurrence:
return True
return False
def infinity_fn(range_start: datetime) -> bool:
return False
visit_time_ranges(vobject_item, child_name, range_fn, infinity_fn)
return ranges
def visit_time_ranges(vobject_item: vobject.base.Component, child_name: str,
range_fn: Callable[[datetime, datetime, bool], bool],
infinity_fn: Callable[[datetime], bool]) -> None:
@ -199,7 +242,7 @@ def visit_time_ranges(vobject_item: vobject.base.Component, child_name: str,
"""
# HACK: According to rfc5545-3.8.4.4 an recurrance that is resheduled
# HACK: According to rfc5545-3.8.4.4 a recurrence that is rescheduled
# with Recurrence ID affects the recurrence itself and all following
recurrences too. This is not respected and clients don't seem to bother
# either.
@ -231,13 +274,16 @@ def visit_time_ranges(vobject_item: vobject.base.Component, child_name: str,
if hasattr(comp, "recurrence_id") and comp.recurrence_id.value:
recurrences.append(comp.recurrence_id.value)
if comp.rruleset:
# Prevent possible infinite loop
raise ValueError("Overwritten recurrence with RRULESET")
if comp.rruleset._len is None:
logger.warning("Ignore empty RRULESET in item at RECURRENCE-ID with value '%s' and UID '%s'", comp.recurrence_id.value, comp.uid.value)
else:
# Prevent possible infinite loop
raise ValueError("Overwritten recurrence with RRULESET")
rec_main = comp
yield comp, True, []
else:
if main is not None:
raise ValueError("Multiple main components")
raise ValueError("Multiple main components. Got comp: {}".format(comp))
main = comp
if main is None and len(recurrences) == 1:
main = rec_main
@ -543,20 +589,7 @@ def simplify_prefilters(filters: Iterable[ET.Element], collection_tag: str
if time_filter.tag != xmlutils.make_clark("C:time-range"):
simple = False
continue
start_text = time_filter.get("start")
end_text = time_filter.get("end")
if start_text:
start = math.floor(datetime.strptime(
start_text, "%Y%m%dT%H%M%SZ").replace(
tzinfo=timezone.utc).timestamp())
else:
start = TIMESTAMP_MIN
if end_text:
end = math.ceil(datetime.strptime(
end_text, "%Y%m%dT%H%M%SZ").replace(
tzinfo=timezone.utc).timestamp())
else:
end = TIMESTAMP_MAX
start, end = time_range_timestamps(time_filter)
return tag, start, end, simple
return tag, TIMESTAMP_MIN, TIMESTAMP_MAX, simple
return None, TIMESTAMP_MIN, TIMESTAMP_MAX, simple

View file

@ -221,18 +221,31 @@ def setup() -> None:
logger.error("Invalid RADICALE_LOG_FORMAT: %r", format_name)
logger_display_backtrace_disabled: bool = False
logger_display_backtrace_enabled: bool = False
def set_level(level: Union[int, str], backtrace_on_debug: bool) -> None:
"""Set logging level for global logger."""
global logger_display_backtrace_disabled
global logger_display_backtrace_enabled
if isinstance(level, str):
level = getattr(logging, level.upper())
assert isinstance(level, int)
logger.setLevel(level)
if level > logging.DEBUG:
logger.info("Logging of backtrace is disabled in this loglevel")
if logger_display_backtrace_disabled is False:
logger.info("Logging of backtrace is disabled in this loglevel")
logger_display_backtrace_disabled = True
logger.addFilter(REMOVE_TRACEBACK_FILTER)
else:
if not backtrace_on_debug:
logger.debug("Logging of backtrace is disabled by option in this loglevel")
if logger_display_backtrace_disabled is False:
logger.debug("Logging of backtrace is disabled by option in this loglevel")
logger_display_backtrace_disabled = True
logger.addFilter(REMOVE_TRACEBACK_FILTER)
else:
if logger_display_backtrace_enabled is False:
logger.debug("Logging of backtrace is enabled by option in this loglevel")
logger_display_backtrace_enabled = True
logger.removeFilter(REMOVE_TRACEBACK_FILTER)

View file

@ -32,7 +32,7 @@ Take a look at the class ``BaseRights`` if you want to implement your own.
"""
from typing import Sequence
from typing import Sequence, Set
from radicale import config, utils
@ -57,6 +57,8 @@ def intersect(a: str, b: str) -> str:
class BaseRights:
_user_groups: Set[str] = set([])
def __init__(self, configuration: "config.Configuration") -> None:
"""Initialize BaseRights.

View file

@ -1,6 +1,7 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -22,7 +23,7 @@ config (section "rights", key "file").
The login is matched against the "user" key, and the collection path
is matched against the "collection" key. In the "collection" regex you can use
`{user}` and get groups from the "user" regex with `{0}`, `{1}`, etc.
In consequence of the parameter subsitution you have to write `{{` and `}}`
In consequence of the parameter substitution you have to write `{{` and `}}`
if you want to use regular curly braces in the "user" and "collection" regexes.
For example, for the "user" key, ".+" means "authenticated user" and ".*"
@ -48,40 +49,61 @@ class Rights(rights.BaseRights):
def __init__(self, configuration: config.Configuration) -> None:
super().__init__(configuration)
self._filename = configuration.get("rights", "file")
self._log_rights_rule_doesnt_match_on_debug = configuration.get("logging", "rights_rule_doesnt_match_on_debug")
self._rights_config = configparser.ConfigParser()
try:
with open(self._filename, "r") as f:
self._rights_config.read_file(f)
logger.debug("Read rights file")
except Exception as e:
raise RuntimeError("Failed to load rights file %r: %s" %
(self._filename, e)) from e
def authorization(self, user: str, path: str) -> str:
user = user or ""
sane_path = pathutils.strip_path(path)
# Prevent "regex injection"
escaped_user = re.escape(user)
rights_config = configparser.ConfigParser()
try:
with open(self._filename, "r") as f:
rights_config.read_file(f)
except Exception as e:
raise RuntimeError("Failed to load rights file %r: %s" %
(self._filename, e)) from e
for section in rights_config.sections():
if not self._log_rights_rule_doesnt_match_on_debug:
logger.debug("logging of rules which doesn't match suppressed by config/option [logging] rights_rule_doesnt_match_on_debug")
for section in self._rights_config.sections():
group_match = None
user_match = None
try:
user_pattern = rights_config.get(section, "user")
collection_pattern = rights_config.get(section, "collection")
user_pattern = self._rights_config.get(section, "user", fallback="")
collection_pattern = self._rights_config.get(section, "collection")
allowed_groups = self._rights_config.get(section, "groups", fallback="").split(",")
try:
group_match = len(self._user_groups.intersection(allowed_groups)) > 0
except Exception:
pass
# Use empty format() for harmonized handling of curly braces
user_match = re.fullmatch(user_pattern.format(), user)
collection_match = user_match and re.fullmatch(
if user_pattern != "":
user_match = re.fullmatch(user_pattern.format(), user)
user_collection_match = user_match and re.fullmatch(
collection_pattern.format(
*(re.escape(s) for s in user_match.groups()),
user=escaped_user), sane_path)
group_collection_match = group_match and re.fullmatch(
collection_pattern.format(user=escaped_user), sane_path)
except Exception as e:
raise RuntimeError("Error in section %r of rights file %r: "
"%s" % (section, self._filename, e)) from e
if user_match and collection_match:
permission = rights_config.get(section, "permissions")
if user_match and user_collection_match:
permission = self._rights_config.get(section, "permissions")
logger.debug("Rule %r:%r matches %r:%r from section %r permission %r",
user, sane_path, user_pattern,
collection_pattern, section, permission)
return permission
logger.debug("Rule %r:%r doesn't match %r:%r from section %r",
user, sane_path, user_pattern, collection_pattern,
section)
logger.info("Rights: %r:%r doesn't match any section", user, sane_path)
if group_match and group_collection_match:
permission = self._rights_config.get(section, "permissions")
logger.debug("Rule %r:%r matches %r:%r from section %r permission %r by group membership",
user, sane_path, user_pattern,
collection_pattern, section, permission)
return permission
if self._log_rights_rule_doesnt_match_on_debug:
logger.debug("Rule %r:%r doesn't match %r:%r from section %r",
user, sane_path, user_pattern, collection_pattern,
section)
logger.debug("Rights: %r:%r doesn't match any section", user, sane_path)
return ""

View file

@ -2,8 +2,8 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2023 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -34,7 +34,7 @@ from typing import (Any, Callable, Dict, List, MutableMapping, Optional, Set,
Tuple, Union)
from urllib.parse import unquote
from radicale import Application, config
from radicale import Application, config, utils
from radicale.log import logger
COMPAT_EAI_ADDRFAMILY: int
@ -58,19 +58,7 @@ elif sys.platform == "win32":
# IPv4 (host, port) and IPv6 (host, port, flowinfo, scopeid)
ADDRESS_TYPE = Union[Tuple[Union[str, bytes, bytearray], int],
Tuple[str, int, int, int]]
def format_address(address: ADDRESS_TYPE) -> str:
host, port, *_ = address
if not isinstance(host, str):
raise NotImplementedError("Unsupported address format: %r" %
(address,))
if host.find(":") == -1:
return "%s:%d" % (host, port)
else:
return "[%s]:%d" % (host, port)
ADDRESS_TYPE = utils.ADDRESS_TYPE
class ParallelHTTPServer(socketserver.ThreadingMixIn,
@ -167,6 +155,8 @@ class ParallelHTTPSServer(ParallelHTTPServer):
certfile: str = self.configuration.get("server", "certificate")
keyfile: str = self.configuration.get("server", "key")
cafile: str = self.configuration.get("server", "certificate_authority")
protocol: str = self.configuration.get("server", "protocol")
ciphersuite: str = self.configuration.get("server", "ciphersuite")
# Test if the files can be read
for name, filename in [("certificate", certfile), ("key", keyfile),
("certificate_authority", cafile)]:
@ -176,15 +166,40 @@ class ParallelHTTPSServer(ParallelHTTPServer):
if name == "certificate_authority" and not filename:
continue
try:
open(filename, "r").close()
open(filename).close()
except OSError as e:
raise RuntimeError(
"Invalid %s value for option %r in section %r in %s: %r "
"(%s)" % (type_name, name, "server", source, filename,
e)) from e
context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
logger.info("SSL load files certificate='%s' key='%s'", certfile, keyfile)
context.load_cert_chain(certfile=certfile, keyfile=keyfile)
if protocol:
logger.info("SSL set explicit protocols (maybe not all supported by underlying OpenSSL): '%s'", protocol)
context.options = utils.ssl_context_options_by_protocol(protocol, context.options)
context.minimum_version = utils.ssl_context_minimum_version_by_options(context.options)
if (context.minimum_version == 0):
raise RuntimeError("No SSL minimum protocol active")
context.maximum_version = utils.ssl_context_maximum_version_by_options(context.options)
if (context.maximum_version == 0):
raise RuntimeError("No SSL maximum protocol active")
else:
logger.info("SSL active protocols: (system-default)")
logger.debug("SSL minimum acceptable protocol: %s", context.minimum_version)
logger.debug("SSL maximum acceptable protocol: %s", context.maximum_version)
logger.info("SSL accepted protocols: %s", ' '.join(utils.ssl_get_protocols(context)))
if ciphersuite:
logger.info("SSL set explicit ciphersuite (maybe not all supported by underlying OpenSSL): '%s'", ciphersuite)
context.set_ciphers(ciphersuite)
else:
logger.info("SSL active ciphersuite: (system-default)")
cipherlist = []
for entry in context.get_ciphers():
cipherlist.append(entry["name"])
logger.info("SSL accepted ciphers: %s", ' '.join(cipherlist))
if cafile:
logger.info("SSL enable mandatory client certificate verification using CA file='%s'", cafile)
context.load_verify_locations(cafile=cafile)
context.verify_mode = ssl.CERT_REQUIRED
self.socket = context.wrap_socket(
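The new startup logging of accepted protocols and ciphers can be reproduced with the stdlib `ssl` module alone. This sketch omits certificate loading and the protocol/option helpers from `radicale.utils`, and only shows the stdlib side:

```python
import ssl

# Inspect what a default server-side context would accept, mirroring
# the "SSL accepted protocols/ciphers" logging above.
context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
accepted_ciphers = [entry["name"] for entry in context.get_ciphers()]
minimum = context.minimum_version  # e.g. ssl.TLSVersion.TLSv1_2
# A configured ciphersuite string would be applied with, for example:
# context.set_ciphers("ECDHE+AESGCM")
```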
@ -199,7 +214,7 @@ class ParallelHTTPSServer(ParallelHTTPServer):
except socket.timeout:
raise
except Exception as e:
raise RuntimeError("SSL handshake failed: %s" % e) from e
raise RuntimeError("SSL handshake failed: %s client %s" % (e, str(client_address[0]))) from e
except Exception:
try:
self.handle_error(request, client_address)
@ -235,6 +250,9 @@ class RequestHandler(wsgiref.simple_server.WSGIRequestHandler):
def get_environ(self) -> Dict[str, Any]:
env = super().get_environ()
if isinstance(self.connection, ssl.SSLSocket):
env["HTTPS"] = "on"
env["SSL_CIPHER"] = self.request.cipher()[0]
env["SSL_PROTOCOL"] = self.request.version()
# The certificate can be evaluated by the auth module
env["REMOTE_CERTIFICATE"] = self.connection.getpeercert()
# Parent class only tries latin1 encoding
@ -274,7 +292,7 @@ def serve(configuration: config.Configuration,
"""
logger.info("Starting Radicale")
logger.info("Starting Radicale (%s)", utils.packages_version())
# Copy configuration before modifying
configuration = configuration.copy()
configuration.update({"server": {"_internal_server": "True"}}, "server",
@ -291,20 +309,20 @@ def serve(configuration: config.Configuration,
try:
getaddrinfo = socket.getaddrinfo(address_port[0], address_port[1], 0, socket.SOCK_STREAM, socket.IPPROTO_TCP)
except OSError as e:
logger.warn("cannot retrieve IPv4 or IPv6 address of '%s': %s" % (format_address(address_port), e))
logger.warning("cannot retrieve IPv4 or IPv6 address of '%s': %s" % (utils.format_address(address_port), e))
continue
logger.debug("getaddrinfo of '%s': %s" % (format_address(address_port), getaddrinfo))
logger.debug("getaddrinfo of '%s': %s" % (utils.format_address(address_port), getaddrinfo))
for (address_family, socket_kind, socket_proto, socket_flags, socket_address) in getaddrinfo:
logger.debug("try to create server socket on '%s'" % (format_address(socket_address)))
logger.debug("try to create server socket on '%s'" % (utils.format_address(socket_address)))
try:
server = server_class(configuration, address_family, (socket_address[0], socket_address[1]), RequestHandler)
except OSError as e:
logger.warn("cannot create server socket on '%s': %s" % (format_address(socket_address), e))
logger.warning("cannot create server socket on '%s': %s" % (utils.format_address(socket_address), e))
continue
servers[server.socket] = server
server.set_app(application)
logger.info("Listening on %r%s",
format_address(server.server_address),
utils.format_address(server.server_address),
" with SSL" if use_ssl else "")
if not servers:
raise RuntimeError("No servers started")

View file

@ -1,7 +1,8 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2017-2022 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -26,8 +27,8 @@ Take a look at the class ``BaseCollection`` if you want to implement your own.
import json
import xml.etree.ElementTree as ET
from hashlib import sha256
from typing import (Iterable, Iterator, Mapping, Optional, Sequence, Set,
Tuple, Union, overload)
from typing import (Callable, ContextManager, Iterable, Iterator, Mapping,
Optional, Sequence, Set, Tuple, Union, overload)
import vobject
@ -35,17 +36,19 @@ from radicale import config
from radicale import item as radicale_item
from radicale import types, utils
from radicale.item import filter as radicale_filter
from radicale.log import logger
INTERNAL_TYPES: Sequence[str] = ("multifilesystem", "multifilesystem_nolock",)
CACHE_DEPS: Sequence[str] = ("radicale", "vobject", "python-dateutil",)
CACHE_VERSION: bytes = "".join(
"%s=%s;" % (pkg, utils.package_version(pkg))
for pkg in CACHE_DEPS).encode()
# NOTE: change only if cache structure is modified to avoid cache invalidation on update
CACHE_VERSION_RADICALE = "3.3.1"
CACHE_VERSION: bytes = ("%s=%s;%s=%s;" % ("radicale", CACHE_VERSION_RADICALE, "vobject", utils.package_version("vobject"))).encode()
def load(configuration: "config.Configuration") -> "BaseStorage":
"""Load the storage module chosen in configuration."""
logger.debug("storage cache version: %r", str(CACHE_VERSION))
return utils.load_plugin(INTERNAL_TYPES, "storage", "Storage", BaseStorage,
configuration)
@ -282,8 +285,11 @@ class BaseStorage:
"""
self.configuration = configuration
def discover(self, path: str, depth: str = "0") -> Iterable[
"types.CollectionOrItem"]:
def discover(
self, path: str, depth: str = "0",
child_context_manager: Optional[
Callable[[str, Optional[str]], ContextManager[None]]] = None,
user_groups: Set[str] = set([])) -> Iterable["types.CollectionOrItem"]:
"""Discover a list of collections under the given ``path``.
``path`` is sanitized.

View file

@ -1,7 +1,8 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -24,10 +25,12 @@ Uses one folder per collection and one file per collection entry.
"""
import os
import sys
import time
from typing import ClassVar, Iterator, Optional, Type
from radicale import config
from radicale.log import logger
from radicale.storage.multifilesystem.base import CollectionBase, StorageBase
from radicale.storage.multifilesystem.cache import CollectionPartCache
from radicale.storage.multifilesystem.create_collection import \
@ -44,6 +47,9 @@ from radicale.storage.multifilesystem.sync import CollectionPartSync
from radicale.storage.multifilesystem.upload import CollectionPartUpload
from radicale.storage.multifilesystem.verify import StoragePartVerify
# 999 seconds, 999 ms, 999 us, 999 ns
MTIME_NS_TEST: int = 999999999999
class Collection(
CollectionPartDelete, CollectionPartMeta, CollectionPartSync,
@ -86,6 +92,104 @@ class Storage(
_collection_class: ClassVar[Type[Collection]] = Collection
def _analyse_mtime(self):
# calculate and display mtime resolution
path = os.path.join(self._get_collection_root_folder(), ".Radicale.mtime_test")
logger.debug("Storage item mtime resolution test with file: %r", path)
try:
with open(path, "w") as f:
f.write("mtime_test")
except Exception as e:
logger.warning("Storage item mtime resolution test not possible, cannot write file: %r (%s)", path, e)
raise
# set mtime_ns for tests
try:
os.utime(path, times=None, ns=(MTIME_NS_TEST, MTIME_NS_TEST))
except Exception as e:
logger.warning("Storage item mtime resolution test not possible, cannot set utime on file: %r (%s)", path, e)
os.remove(path)
raise
logger.debug("Storage item mtime resoultion test set: %d" % MTIME_NS_TEST)
mtime_ns = os.stat(path).st_mtime_ns
logger.debug("Storage item mtime resoultion test get: %d" % mtime_ns)
# start analysis
precision = 1
mtime_ns_test = MTIME_NS_TEST
while mtime_ns > 0:
if mtime_ns == mtime_ns_test:
break
factor = 2
if int(mtime_ns / factor) == int(mtime_ns_test / factor):
precision = precision * factor
break
factor = 5
if int(mtime_ns / factor) == int(mtime_ns_test / factor):
precision = precision * factor
break
precision = precision * 10
mtime_ns = int(mtime_ns / 10)
mtime_ns_test = int(mtime_ns_test / 10)
unit = "ns"
precision_unit = precision
if precision >= 1000000000:
precision_unit = int(precision / 1000000000)
unit = "s"
elif precision >= 1000000:
precision_unit = int(precision / 1000000)
unit = "ms"
elif precision >= 1000:
precision_unit = int(precision / 1000)
unit = "us"
os.remove(path)
return (precision, precision_unit, unit)
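The mtime-resolution probe above can be condensed to a stdlib-only sketch. The function name is hypothetical, and unlike the full implementation this version only walks decimal digits (it skips the factor-2/factor-5 checks):

```python
import os
import tempfile

MTIME_NS_TEST = 999_999_999_999  # 999 s, 999 ms, 999 us, 999 ns

def mtime_precision_ns(path):
    # Write a probe file, set a known nanosecond mtime, then compare
    # what the filesystem actually stored to estimate its timestamp
    # resolution, as the startup check above does.
    with open(path, "w") as f:
        f.write("mtime_test")
    os.utime(path, ns=(MTIME_NS_TEST, MTIME_NS_TEST))
    stored = os.stat(path).st_mtime_ns
    precision = 1
    probe = MTIME_NS_TEST
    while stored > 0 and stored != probe:
        precision *= 10
        stored //= 10
        probe //= 10
    os.remove(path)
    return precision
```

On a filesystem that truncates timestamps to whole seconds this returns 10**9; on one storing full nanoseconds it returns 1.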
def __init__(self, configuration: config.Configuration) -> None:
super().__init__(configuration)
self._makedirs_synced(self._filesystem_folder)
logger.info("Storage location: %r", self._filesystem_folder)
if not os.path.exists(self._filesystem_folder):
logger.warning("Storage location: %r not existing, create now", self._filesystem_folder)
self._makedirs_synced(self._filesystem_folder)
logger.info("Storage location subfolder: %r", self._get_collection_root_folder())
if not os.path.exists(self._get_collection_root_folder()):
logger.warning("Storage location subfolder: %r not existing, create now", self._get_collection_root_folder())
self._makedirs_synced(self._get_collection_root_folder())
logger.info("Storage cache subfolder usage for 'item': %s", self._use_cache_subfolder_for_item)
logger.info("Storage cache subfolder usage for 'history': %s", self._use_cache_subfolder_for_history)
logger.info("Storage cache subfolder usage for 'sync-token': %s", self._use_cache_subfolder_for_synctoken)
logger.info("Storage cache use mtime and size for 'item': %s", self._use_mtime_and_size_for_item_cache)
try:
(precision, precision_unit, unit) = self._analyse_mtime()
if precision >= 100000000:
# >= 100 ms
logger.warning("Storage item mtime resolution test result: %d %s (VERY RISKY ON PRODUCTION SYSTEMS)" % (precision_unit, unit))
elif precision >= 10000000:
# >= 10 ms
logger.warning("Storage item mtime resolution test result: %d %s (RISKY ON PRODUCTION SYSTEMS)" % (precision_unit, unit))
else:
logger.info("Storage item mtime resolution test result: %d %s" % (precision_unit, unit))
if self._use_mtime_and_size_for_item_cache is False:
logger.info("Storage cache using mtime and size for 'item' may be an option in case of performance issues")
except Exception:
logger.warning("Storage item mtime resolution test result not successful")
logger.debug("Storage cache action logging: %s", self._debug_cache_actions)
if self._use_cache_subfolder_for_item is True or self._use_cache_subfolder_for_history is True or self._use_cache_subfolder_for_synctoken is True:
logger.info("Storage cache subfolder: %r", self._get_collection_cache_folder())
if not os.path.exists(self._get_collection_cache_folder()):
logger.warning("Storage cache subfolder: %r not existing, create now", self._get_collection_cache_folder())
self._makedirs_synced(self._get_collection_cache_folder())
if sys.platform != "win32":
if not self._folder_umask:
# retrieve current umask by setting a dummy umask
current_umask = os.umask(0o0022)
logger.info("Storage folder umask (from system): '%04o'", current_umask)
# reset to original
os.umask(current_umask)
else:
try:
config_umask = int(self._folder_umask, 8)
except Exception:
logger.critical("storage folder umask defined but invalid: '%s'", self._folder_umask)
raise
logger.info("storage folder umask defined: '%04o'", config_umask)
self._config_umask = config_umask
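The precision-probe loop in `_analyse_mtime()` above can be sketched as a standalone helper (hypothetical name, not part of Radicale): set a known nanosecond timestamp, read it back, then strip trailing digits with factors 2, 5 and 10 until the stored and expected values agree.

```python
import os
import tempfile

def detect_mtime_precision(test_ns: int = 1234567890123456789) -> int:
    """Return the mtime precision (in ns) the filesystem preserves.

    Minimal sketch of the loop above, under the assumption that the
    filesystem truncates (rather than rounds) sub-precision digits.
    """
    with tempfile.NamedTemporaryFile() as f:
        os.utime(f.name, ns=(test_ns, test_ns))
        mtime_ns = os.stat(f.name).st_mtime_ns
    precision = 1
    expected = test_ns
    while mtime_ns > 0:
        if mtime_ns == expected:
            break
        # truncated by a factor of 2 or 5 relative to the current power of 10?
        for factor in (2, 5):
            if mtime_ns // factor == expected // factor:
                return precision * factor
        precision *= 10
        mtime_ns //= 10
        expected //= 10
    return precision
```

E.g. a filesystem that stores microseconds yields 1000 (ns), which the code above then reports as "1 us".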


@ -69,7 +69,15 @@ class StorageBase(storage.BaseStorage):
_collection_class: ClassVar[Type["multifilesystem.Collection"]]
_filesystem_folder: str
_filesystem_cache_folder: str
_filesystem_fsync: bool
_use_cache_subfolder_for_item: bool
_use_cache_subfolder_for_history: bool
_use_cache_subfolder_for_synctoken: bool
_use_mtime_and_size_for_item_cache: bool
_debug_cache_actions: bool
_folder_umask: str
_config_umask: int
def __init__(self, configuration: config.Configuration) -> None:
super().__init__(configuration)
@ -77,10 +85,39 @@ class StorageBase(storage.BaseStorage):
"storage", "filesystem_folder")
self._filesystem_fsync = configuration.get(
"storage", "_filesystem_fsync")
self._filesystem_cache_folder = configuration.get(
"storage", "filesystem_cache_folder")
self._use_cache_subfolder_for_item = configuration.get(
"storage", "use_cache_subfolder_for_item")
self._use_cache_subfolder_for_history = configuration.get(
"storage", "use_cache_subfolder_for_history")
self._use_cache_subfolder_for_synctoken = configuration.get(
"storage", "use_cache_subfolder_for_synctoken")
self._use_mtime_and_size_for_item_cache = configuration.get(
"storage", "use_mtime_and_size_for_item_cache")
self._folder_umask = configuration.get(
"storage", "folder_umask")
self._debug_cache_actions = configuration.get(
"logging", "storage_cache_actions_on_debug")
def _get_collection_root_folder(self) -> str:
return os.path.join(self._filesystem_folder, "collection-root")
def _get_collection_cache_folder(self) -> str:
if self._filesystem_cache_folder:
return os.path.join(self._filesystem_cache_folder, "collection-cache")
else:
return os.path.join(self._filesystem_folder, "collection-cache")
def _get_collection_cache_subfolder(self, path, folder, subfolder) -> str:
if (self._use_cache_subfolder_for_item is True) and (subfolder == "item"):
path = path.replace(self._get_collection_root_folder(), self._get_collection_cache_folder())
elif (self._use_cache_subfolder_for_history is True) and (subfolder == "history"):
path = path.replace(self._get_collection_root_folder(), self._get_collection_cache_folder())
elif (self._use_cache_subfolder_for_synctoken is True) and (subfolder == "sync-token"):
path = path.replace(self._get_collection_root_folder(), self._get_collection_cache_folder())
return os.path.join(path, folder, subfolder)
def _fsync(self, f: IO[AnyStr]) -> None:
if self._filesystem_fsync:
try:
@ -117,6 +154,8 @@ class StorageBase(storage.BaseStorage):
if os.path.isdir(filesystem_path):
return
parent_filesystem_path = os.path.dirname(filesystem_path)
if sys.platform != "win32" and self._folder_umask:
oldmask = os.umask(self._config_umask)
# Prevent infinite loop
if filesystem_path != parent_filesystem_path:
# Create parent dirs recursively
@ -124,3 +163,5 @@ class StorageBase(storage.BaseStorage):
# Possible race!
os.makedirs(filesystem_path, exist_ok=True)
self._sync_directory(parent_filesystem_path)
if sys.platform != "win32" and self._folder_umask:
os.umask(oldmask)
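The new `_get_collection_cache_subfolder()` helper above amounts to a prefix swap: when a cache type is configured to live in the cache location, the collection-root prefix is replaced by the collection-cache prefix. A minimal sketch, with hypothetical paths:

```python
import os

ROOT = "/var/lib/radicale/collections/collection-root"   # assumed location
CACHE = "/var/cache/radicale/collection-cache"           # assumed location

def cache_subfolder(path: str, subfolder: str, use_cache: bool) -> str:
    # Sketch of _get_collection_cache_subfolder: for "item", "history" or
    # "sync-token", optionally relocate the .Radicale.cache tree under the
    # separate cache folder by swapping the root prefix.
    if use_cache:
        path = path.replace(ROOT, CACHE)
    return os.path.join(path, ".Radicale.cache", subfolder)
```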


@ -1,7 +1,8 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -72,6 +73,10 @@ class CollectionPartCache(CollectionBase):
_hash.update(raw_text)
return _hash.hexdigest()
@staticmethod
def _item_cache_mtime_and_size(size: int, raw_text: int) -> str:
return str(storage.CACHE_VERSION.decode()) + "size=" + str(size) + ";mtime=" + str(raw_text)
def _item_cache_content(self, item: radicale_item.Item) -> CacheContent:
return CacheContent(item.uid, item.etag, item.serialize(), item.name,
item.component_name, *item.time_range)
@ -79,10 +84,12 @@ class CollectionPartCache(CollectionBase):
def _store_item_cache(self, href: str, item: radicale_item.Item,
cache_hash: str = "") -> CacheContent:
if not cache_hash:
cache_hash = self._item_cache_hash(
item.serialize().encode(self._encoding))
cache_folder = os.path.join(self._filesystem_path, ".Radicale.cache",
"item")
if self._storage._use_mtime_and_size_for_item_cache is True:
raise RuntimeError("_store_item_cache called without cache_hash is not supported if [storage] use_mtime_and_size_for_item_cache is True")
else:
cache_hash = self._item_cache_hash(
item.serialize().encode(self._encoding))
cache_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", "item")
content = self._item_cache_content(item)
self._storage._makedirs_synced(cache_folder)
# Race: Other processes might have created and locked the file.
@ -95,14 +102,21 @@ class CollectionPartCache(CollectionBase):
def _load_item_cache(self, href: str, cache_hash: str
) -> Optional[CacheContent]:
cache_folder = os.path.join(self._filesystem_path, ".Radicale.cache",
"item")
cache_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", "item")
path = os.path.join(cache_folder, href)
try:
with open(os.path.join(cache_folder, href), "rb") as f:
with open(path, "rb") as f:
hash_, *remainder = pickle.load(f)
if hash_ and hash_ == cache_hash:
if self._storage._debug_cache_actions is True:
logger.debug("Item cache match : %r with hash %r", path, cache_hash)
return CacheContent(*remainder)
else:
if self._storage._debug_cache_actions is True:
logger.debug("Item cache no match : %r with hash %r", path, cache_hash)
except FileNotFoundError:
if self._storage._debug_cache_actions is True:
logger.debug("Item cache not found : %r with hash %r", path, cache_hash)
pass
except (pickle.UnpicklingError, ValueError) as e:
logger.warning("Failed to load item cache entry %r in %r: %s",
@ -110,8 +124,7 @@ class CollectionPartCache(CollectionBase):
return None
def _clean_item_cache(self) -> None:
cache_folder = os.path.join(self._filesystem_path, ".Radicale.cache",
"item")
cache_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", "item")
self._clean_cache(cache_folder, (
e.name for e in os.scandir(cache_folder) if not
os.path.isfile(os.path.join(self._filesystem_path, e.name))))
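The `_item_cache_mtime_and_size` key introduced above trades content hashing for a single `stat()` call. A sketch of the idea (the `CACHE_VERSION` value here is a placeholder; Radicale uses its own version tag):

```python
import os

CACHE_VERSION = b"v1"  # hypothetical; stands in for storage.CACHE_VERSION

def item_cache_key_mtime_size(path: str) -> str:
    # Combine file size and st_mtime_ns into the cache key, so validating
    # a cache entry needs only a stat() call instead of read + hash.
    st = os.stat(path)
    return "%ssize=%d;mtime=%d" % (CACHE_VERSION.decode(), st.st_size, st.st_mtime_ns)
```

This is why the option is risky on filesystems with coarse mtime resolution: two writes within one resolution step and with equal size produce the same key.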


@ -1,7 +1,8 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -22,6 +23,7 @@ from typing import Iterable, Optional, cast
import radicale.item as radicale_item
from radicale import pathutils
from radicale.log import logger
from radicale.storage import multifilesystem
from radicale.storage.multifilesystem.base import StorageBase
@ -36,6 +38,7 @@ class StoragePartCreateCollection(StorageBase):
# Path should already be sanitized
sane_path = pathutils.strip_path(href)
filesystem_path = pathutils.path_to_filesystem(folder, sane_path)
logger.debug("Create collection: %r" % filesystem_path)
if not props:
self._makedirs_synced(filesystem_path)
@ -47,27 +50,31 @@ class StoragePartCreateCollection(StorageBase):
self._makedirs_synced(parent_dir)
# Create a temporary directory with an unsafe name
with TemporaryDirectory(prefix=".Radicale.tmp-", dir=parent_dir
) as tmp_dir:
# The temporary directory itself can't be renamed
tmp_filesystem_path = os.path.join(tmp_dir, "collection")
os.makedirs(tmp_filesystem_path)
col = self._collection_class(
cast(multifilesystem.Storage, self),
pathutils.unstrip_path(sane_path, True),
filesystem_path=tmp_filesystem_path)
col.set_meta(props)
if items is not None:
if props.get("tag") == "VCALENDAR":
col._upload_all_nonatomic(items, suffix=".ics")
elif props.get("tag") == "VADDRESSBOOK":
col._upload_all_nonatomic(items, suffix=".vcf")
try:
with TemporaryDirectory(prefix=".Radicale.tmp-", dir=parent_dir
) as tmp_dir:
# The temporary directory itself can't be renamed
tmp_filesystem_path = os.path.join(tmp_dir, "collection")
os.makedirs(tmp_filesystem_path)
col = self._collection_class(
cast(multifilesystem.Storage, self),
pathutils.unstrip_path(sane_path, True),
filesystem_path=tmp_filesystem_path)
col.set_meta(props)
if items is not None:
if props.get("tag") == "VCALENDAR":
col._upload_all_nonatomic(items, suffix=".ics")
elif props.get("tag") == "VADDRESSBOOK":
col._upload_all_nonatomic(items, suffix=".vcf")
if os.path.lexists(filesystem_path):
pathutils.rename_exchange(tmp_filesystem_path, filesystem_path)
else:
os.rename(tmp_filesystem_path, filesystem_path)
self._sync_directory(parent_dir)
if os.path.lexists(filesystem_path):
pathutils.rename_exchange(tmp_filesystem_path, filesystem_path)
else:
os.rename(tmp_filesystem_path, filesystem_path)
self._sync_directory(parent_dir)
except Exception as e:
raise ValueError("Failed to create collection %r as %r %s" %
(href, filesystem_path, e)) from e
return self._collection_class(
cast(multifilesystem.Storage, self),


@ -2,6 +2,7 @@
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -53,3 +54,9 @@ class CollectionPartDelete(CollectionPartHistory, CollectionBase):
# Track the change
self._update_history_etag(href, None)
self._clean_history()
# Remove item from cache
cache_folder = self._storage._get_collection_cache_subfolder(os.path.dirname(path), ".Radicale.cache", "item")
cache_file = os.path.join(cache_folder, os.path.basename(path))
if os.path.isfile(cache_file):
os.remove(cache_file)
self._storage._sync_directory(cache_folder)
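The delete-path cleanup above can be sketched in isolation (hypothetical helper):

```python
import os

def drop_cache_entry(cache_folder: str, name: str) -> None:
    # After an item is deleted, remove its per-item cache file as well,
    # if present; a missing cache file is simply a no-op.
    cache_file = os.path.join(cache_folder, name)
    if os.path.isfile(cache_file):
        os.remove(cache_file)
```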


@ -16,9 +16,10 @@
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import base64
import os
import posixpath
from typing import Callable, ContextManager, Iterator, Optional, cast
from typing import Callable, ContextManager, Iterator, Optional, Set, cast
from radicale import pathutils, types
from radicale.log import logger
@ -35,8 +36,10 @@ def _null_child_context_manager(path: str,
class StoragePartDiscover(StorageBase):
def discover(
self, path: str, depth: str = "0", child_context_manager: Optional[
Callable[[str, Optional[str]], ContextManager[None]]] = None
self, path: str, depth: str = "0",
child_context_manager: Optional[
Callable[[str, Optional[str]], ContextManager[None]]] = None,
user_groups: Set[str] = set([])
) -> Iterator[types.CollectionOrItem]:
# assert isinstance(self, multifilesystem.Storage)
if child_context_manager is None:
@ -102,3 +105,13 @@ class StoragePartDiscover(StorageBase):
with child_context_manager(sane_child_path, None):
yield self._collection_class(
cast(multifilesystem.Storage, self), child_path)
for group in user_groups:
href = base64.b64encode(group.encode('utf-8')).decode('ascii')
logger.debug(f"searching for group calendar {group} {href}")
sane_child_path = f"GROUPS/{href}"
if not os.path.isdir(pathutils.path_to_filesystem(folder, sane_child_path)):
continue
child_path = f"/GROUPS/{href}/"
with child_context_manager(sane_child_path, None):
yield self._collection_class(
cast(multifilesystem.Storage, self), child_path)
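The group-calendar lookup added to `discover()` derives the collection href by base64-encoding the group name, as sketched here:

```python
import base64

def group_href(group: str) -> str:
    # Sketch of the GROUPS/<href> mapping above: the group name is
    # base64-encoded to obtain a stable href for the group's collection.
    return base64.b64encode(group.encode("utf-8")).decode("ascii")
```

The encoding is reversible, so the original group name can always be recovered from the href.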


@ -80,11 +80,20 @@ class CollectionPartGet(CollectionPartCache, CollectionPartLock,
raise
# The hash of the component in the file system. This is used to check,
# if the entry in the cache is still valid.
cache_hash = self._item_cache_hash(raw_text)
if self._storage._use_mtime_and_size_for_item_cache is True:
cache_hash = self._item_cache_mtime_and_size(os.stat(path).st_size, os.stat(path).st_mtime_ns)
if self._storage._debug_cache_actions is True:
logger.debug("Item cache check for: %r with mtime and size %r", path, cache_hash)
else:
cache_hash = self._item_cache_hash(raw_text)
if self._storage._debug_cache_actions is True:
logger.debug("Item cache check for: %r with hash %r", path, cache_hash)
cache_content = self._load_item_cache(href, cache_hash)
if cache_content is None:
if self._storage._debug_cache_actions is True:
logger.debug("Item cache miss for: %r", path)
with self._acquire_cache_lock("item"):
# Lock the item cache to prevent multpile processes from
# Lock the item cache to prevent multiple processes from
# generating the same data in parallel.
# This improves the performance for multiple requests.
if self._storage._lock.locked == "r":
@ -99,6 +108,8 @@ class CollectionPartGet(CollectionPartCache, CollectionPartLock,
vobject_item, = vobject_items
temp_item = radicale_item.Item(
collection=self, vobject_item=vobject_item)
if self._storage._debug_cache_actions is True:
logger.debug("Item cache store for: %r", path)
cache_content = self._store_item_cache(
href, temp_item, cache_hash)
except Exception as e:
@ -113,6 +124,9 @@ class CollectionPartGet(CollectionPartCache, CollectionPartLock,
if not self._item_cache_cleaned:
self._item_cache_cleaned = True
self._clean_item_cache()
else:
if self._storage._debug_cache_actions is True:
logger.debug("Item cache hit for: %r", path)
last_modified = time.strftime(
"%a, %d %b %Y %H:%M:%S GMT",
time.gmtime(os.path.getmtime(path)))
@ -127,7 +141,7 @@ class CollectionPartGet(CollectionPartCache, CollectionPartLock,
def get_multi(self, hrefs: Iterable[str]
) -> Iterator[Tuple[str, Optional[radicale_item.Item]]]:
# It's faster to check for file name collissions here, because
# It's faster to check for file name collisions here, because
# we only need to call os.listdir once.
files = None
for href in hrefs:
@ -146,7 +160,7 @@ class CollectionPartGet(CollectionPartCache, CollectionPartLock,
def get_all(self) -> Iterator[radicale_item.Item]:
for href in self._list():
# We don't need to check for collissions, because the file names
# We don't need to check for collisions, because the file names
# are from os.listdir.
item = self._get(href, verify_href=False)
if item is not None:


@ -47,8 +47,7 @@ class CollectionPartHistory(CollectionBase):
string for deleted items) and a history etag, which is a hash over
the previous history etag and the etag separated by "/".
"""
history_folder = os.path.join(self._filesystem_path,
".Radicale.cache", "history")
history_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", "history")
try:
with open(os.path.join(history_folder, href), "rb") as f:
cache_etag, history_etag = pickle.load(f)
@ -76,8 +75,7 @@ class CollectionPartHistory(CollectionBase):
def _get_deleted_history_hrefs(self):
"""Returns the hrefs of all deleted items that are still in the
history cache."""
history_folder = os.path.join(self._filesystem_path,
".Radicale.cache", "history")
history_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", "history")
with contextlib.suppress(FileNotFoundError):
for entry in os.scandir(history_folder):
href = entry.name
@ -89,7 +87,6 @@ class CollectionPartHistory(CollectionBase):
def _clean_history(self):
# Delete all expired history entries of deleted items.
history_folder = os.path.join(self._filesystem_path,
".Radicale.cache", "history")
history_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", "history")
self._clean_cache(history_folder, self._get_deleted_history_hrefs(),
max_age=self._max_sync_token_age)


@ -1,7 +1,8 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
# Copyright © 2017-2022 Unrud <unrud@outlook.com>
# Copyright © 2023-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -37,10 +38,11 @@ class CollectionPartLock(CollectionBase):
if self._storage._lock.locked == "w":
yield
return
cache_folder = os.path.join(self._filesystem_path, ".Radicale.cache")
cache_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", ns)
self._storage._makedirs_synced(cache_folder)
lock_path = os.path.join(cache_folder,
".Radicale.lock" + (".%s" % ns if ns else ""))
logger.debug("Lock file (CollectionPartLock): %r" % lock_path)
lock = pathutils.RwLock(lock_path)
with lock.acquire("w"):
yield
@ -54,11 +56,12 @@ class StoragePartLock(StorageBase):
def __init__(self, configuration: config.Configuration) -> None:
super().__init__(configuration)
lock_path = os.path.join(self._filesystem_folder, ".Radicale.lock")
logger.debug("Lock file (StoragePartLock): %r" % lock_path)
self._lock = pathutils.RwLock(lock_path)
self._hook = configuration.get("storage", "hook")
@types.contextmanager
def acquire_lock(self, mode: str, user: str = "") -> Iterator[None]:
def acquire_lock(self, mode: str, user: str = "", *args, **kwargs) -> Iterator[None]:
with self._lock.acquire(mode):
yield
# execute hook
@ -73,29 +76,46 @@ class StoragePartLock(StorageBase):
else:
# Process group is also used to identify child processes
preexec_fn = os.setpgrp
command = self._hook % {
"user": shlex.quote(user or "Anonymous")}
logger.debug("Running storage hook")
p = subprocess.Popen(
command, stdin=subprocess.DEVNULL,
stdout=subprocess.PIPE if debug else subprocess.DEVNULL,
stderr=subprocess.PIPE if debug else subprocess.DEVNULL,
shell=True, universal_newlines=True, preexec_fn=preexec_fn,
cwd=self._filesystem_folder, creationflags=creationflags)
# optional argument
path = kwargs.get('path', "")
try:
command = self._hook % {
"path": shlex.quote(self._get_collection_root_folder() + path),
"cwd": shlex.quote(self._filesystem_folder),
"user": shlex.quote(user or "Anonymous")}
except KeyError as e:
logger.error("Storage hook contains unsupported placeholder %s (skip execution of: %r)" % (e, self._hook))
return
logger.debug("Executing storage hook: '%s'" % command)
try:
p = subprocess.Popen(
command, stdin=subprocess.DEVNULL,
stdout=subprocess.PIPE if debug else subprocess.DEVNULL,
stderr=subprocess.PIPE if debug else subprocess.DEVNULL,
shell=True, universal_newlines=True, preexec_fn=preexec_fn,
cwd=self._filesystem_folder, creationflags=creationflags)
except Exception as e:
logger.error("Execution of storage hook not successful on 'Popen': %s" % e)
return
logger.debug("Executing storage hook started 'Popen'")
try:
stdout_data, stderr_data = p.communicate()
except BaseException: # e.g. KeyboardInterrupt or SystemExit
except BaseException as e: # e.g. KeyboardInterrupt or SystemExit
logger.error("Execution of storage hook not successful on 'communicate': %s" % e)
p.kill()
p.wait()
raise
return
finally:
if sys.platform != "win32":
# Kill remaining children identified by process group
with contextlib.suppress(OSError):
os.killpg(p.pid, signal.SIGKILL)
logger.debug("Executing storage hook finished")
if stdout_data:
logger.debug("Captured stdout from hook:\n%s", stdout_data)
logger.debug("Captured stdout from storage hook:\n%s", stdout_data)
if stderr_data:
logger.debug("Captured stderr from hook:\n%s", stderr_data)
logger.debug("Captured stderr from storage hook:\n%s", stderr_data)
if p.returncode != 0:
raise subprocess.CalledProcessError(p.returncode, p.args)
logger.error("Execution of storage hook not successful: %s" % subprocess.CalledProcessError(p.returncode, p.args))
return
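The placeholder handling added to `acquire_lock()` above substitutes `%(user)s`, `%(path)s` and `%(cwd)s` with shell-quoted values and skips the hook on an unsupported placeholder instead of crashing. A minimal sketch (hypothetical function name; the source logs and returns rather than returning `None`):

```python
import shlex
from typing import Optional

def expand_hook(template: str, user: str, path: str, cwd: str) -> Optional[str]:
    # Substitute the supported placeholders, shell-quoting each value.
    # An unknown %(name)s raises KeyError, a stray % raises ValueError;
    # both abort hook execution.
    try:
        return template % {
            "user": shlex.quote(user or "Anonymous"),
            "path": shlex.quote(path),
            "cwd": shlex.quote(cwd),
        }
    except (KeyError, ValueError):
        return None
```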


@ -1,7 +1,8 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -62,6 +63,9 @@ class CollectionPartMeta(CollectionBase):
def set_meta(self, props: Mapping[str, str]) -> None:
# TODO: better fix for "mypy"
with self._atomic_write(self._props_path, "w") as fo: # type: ignore
f = cast(TextIO, fo)
json.dump(props, f, sort_keys=True)
try:
with self._atomic_write(self._props_path, "w") as fo: # type: ignore
f = cast(TextIO, fo)
json.dump(props, f, sort_keys=True)
except OSError as e:
raise ValueError("Failed to write meta data %r %s" % (self._props_path, e)) from e


@ -1,7 +1,8 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -20,6 +21,7 @@ import os
from radicale import item as radicale_item
from radicale import pathutils, storage
from radicale.log import logger
from radicale.storage import multifilesystem
from radicale.storage.multifilesystem.base import StorageBase
@ -33,24 +35,28 @@ class StoragePartMove(StorageBase):
assert isinstance(to_collection, multifilesystem.Collection)
assert isinstance(item.collection, multifilesystem.Collection)
assert item.href
os.replace(pathutils.path_to_filesystem(
item.collection._filesystem_path, item.href),
pathutils.path_to_filesystem(
to_collection._filesystem_path, to_href))
move_from = pathutils.path_to_filesystem(item.collection._filesystem_path, item.href)
move_to = pathutils.path_to_filesystem(to_collection._filesystem_path, to_href)
try:
os.replace(move_from, move_to)
except OSError as e:
raise ValueError("Failed to move file %r => %r %s" % (move_from, move_to, e)) from e
self._sync_directory(to_collection._filesystem_path)
if item.collection._filesystem_path != to_collection._filesystem_path:
self._sync_directory(item.collection._filesystem_path)
# Move the item cache entry
cache_folder = os.path.join(item.collection._filesystem_path,
".Radicale.cache", "item")
to_cache_folder = os.path.join(to_collection._filesystem_path,
".Radicale.cache", "item")
cache_folder = self._get_collection_cache_subfolder(item.collection._filesystem_path, ".Radicale.cache", "item")
to_cache_folder = self._get_collection_cache_subfolder(to_collection._filesystem_path, ".Radicale.cache", "item")
self._makedirs_synced(to_cache_folder)
move_from = os.path.join(cache_folder, item.href)
move_to = os.path.join(to_cache_folder, to_href)
try:
os.replace(os.path.join(cache_folder, item.href),
os.path.join(to_cache_folder, to_href))
os.replace(move_from, move_to)
except FileNotFoundError:
pass
except OSError as e:
logger.error("Failed to move cache file %r => %r %s" % (move_from, move_to, e))
pass
else:
self._makedirs_synced(to_cache_folder)
if cache_folder != to_cache_folder:
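The cache-entry move above tolerates a missing source file, since a missing entry is simply regenerated on the next access. Sketched standalone:

```python
import os

def move_cache_entry(src: str, dst: str) -> None:
    # Atomically move the per-item cache entry alongside the item itself;
    # a missing source cache file is not an error.
    try:
        os.replace(src, dst)
    except FileNotFoundError:
        pass
```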


@ -67,8 +67,7 @@ class CollectionPartSync(CollectionPartCache, CollectionPartHistory,
if token_name == old_token_name:
# Nothing changed
return token, ()
token_folder = os.path.join(self._filesystem_path,
".Radicale.cache", "sync-token")
token_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", "sync-token")
token_path = os.path.join(token_folder, token_name)
old_state = {}
if old_token_name:


@ -1,7 +1,8 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2017-2022 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -24,6 +25,7 @@ from typing import Iterable, Iterator, TextIO, cast
import radicale.item as radicale_item
from radicale import pathutils
from radicale.log import logger
from radicale.storage.multifilesystem.base import CollectionBase
from radicale.storage.multifilesystem.cache import CollectionPartCache
from radicale.storage.multifilesystem.get import CollectionPartGet
@ -37,19 +39,28 @@ class CollectionPartUpload(CollectionPartGet, CollectionPartCache,
) -> radicale_item.Item:
if not pathutils.is_safe_filesystem_path_component(href):
raise pathutils.UnsafePathError(href)
path = pathutils.path_to_filesystem(self._filesystem_path, href)
try:
self._store_item_cache(href, item)
with self._atomic_write(path, newline="") as fo: # type: ignore
f = cast(TextIO, fo)
f.write(item.serialize())
except Exception as e:
raise ValueError("Failed to store item %r in collection %r: %s" %
(href, self.path, e)) from e
path = pathutils.path_to_filesystem(self._filesystem_path, href)
# TODO: better fix for "mypy"
with self._atomic_write(path, newline="") as fo: # type: ignore
f = cast(TextIO, fo)
f.write(item.serialize())
# Clean the cache after the actual item is stored, or the cache entry
# will be removed again.
self._clean_item_cache()
# store cache file
if self._storage._use_mtime_and_size_for_item_cache is True:
cache_hash = self._item_cache_mtime_and_size(os.stat(path).st_size, os.stat(path).st_mtime_ns)
if self._storage._debug_cache_actions is True:
logger.debug("Item cache store for: %r with mtime and size %r", path, cache_hash)
else:
cache_hash = self._item_cache_hash(item.serialize().encode(self._encoding))
if self._storage._debug_cache_actions is True:
logger.debug("Item cache store for: %r with hash %r", path, cache_hash)
try:
self._store_item_cache(href, item, cache_hash)
except Exception as e:
raise ValueError("Failed to store item cache of %r in collection %r: %s" %
(href, self.path, e)) from e
# Track the change
self._update_history_etag(href, item)
self._clean_history()
@ -75,20 +86,16 @@ class CollectionPartUpload(CollectionPartGet, CollectionPartCache,
yield radicale_item.find_available_uid(
lambda href: not is_safe_free_href(href), suffix)
cache_folder = os.path.join(self._filesystem_path,
".Radicale.cache", "item")
cache_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", "item")
self._storage._makedirs_synced(cache_folder)
for item in items:
uid = item.uid
try:
cache_content = self._item_cache_content(item)
except Exception as e:
raise ValueError(
"Failed to store item %r in temporary collection %r: %s" %
(uid, self.path, e)) from e
logger.debug("Store item from list with uid: '%s'" % uid)
cache_content = self._item_cache_content(item)
for href in get_safe_free_hrefs(uid):
path = os.path.join(self._filesystem_path, href)
try:
f = open(os.path.join(self._filesystem_path, href),
f = open(path,
"w", newline="", encoding=self._encoding)
except OSError as e:
if (sys.platform != "win32" and e.errno == errno.EINVAL or
@ -100,12 +107,31 @@ class CollectionPartUpload(CollectionPartGet, CollectionPartCache,
else:
raise RuntimeError("No href found for item %r in temporary "
"collection %r" % (uid, self.path))
with f:
f.write(item.serialize())
f.flush()
self._storage._fsync(f)
try:
with f:
f.write(item.serialize())
f.flush()
self._storage._fsync(f)
except Exception as e:
raise ValueError(
"Failed to store item %r in temporary collection %r: %s" %
(uid, self.path, e)) from e
# store cache file
if self._storage._use_mtime_and_size_for_item_cache is True:
cache_hash = self._item_cache_mtime_and_size(os.stat(path).st_size, os.stat(path).st_mtime_ns)
if self._storage._debug_cache_actions is True:
logger.debug("Item cache store for: %r with mtime and size %r", path, cache_hash)
else:
cache_hash = self._item_cache_hash(item.serialize().encode(self._encoding))
if self._storage._debug_cache_actions is True:
logger.debug("Item cache store for: %r with hash %r", path, cache_hash)
path_cache = os.path.join(cache_folder, href)
if self._storage._debug_cache_actions is True:
logger.debug("Item cache store into: %r", path_cache)
with open(os.path.join(cache_folder, href), "wb") as fb:
pickle.dump(cache_content, fb)
pickle.dump((cache_hash, *cache_content), fb)
fb.flush()
self._storage._fsync(fb)
self._storage._sync_directory(cache_folder)
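The on-disk item-cache format used above is a pickled tuple of the cache hash followed by the `CacheContent` fields; on load, the stored hash is compared against a freshly computed one. A round-trip sketch (hypothetical helper names):

```python
import io
import pickle

def store_entry(buf, cache_hash, fields):
    # Write (cache_hash, *CacheContent fields) as one pickled tuple,
    # mirroring pickle.dump((cache_hash, *cache_content), fb) above.
    pickle.dump((cache_hash, *fields), buf)

def load_entry(buf, expected_hash):
    # Mirror of _load_item_cache: unpack the hash, compare, and return
    # the remaining fields only on a match.
    hash_, *remainder = pickle.load(buf)
    return tuple(remainder) if hash_ == expected_hash else None
```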


@ -29,6 +29,8 @@ class StoragePartVerify(StoragePartDiscover, StorageBase):
def verify(self) -> bool:
item_errors = collection_errors = 0
logger.info("Disable fsync during storage verification")
self._filesystem_fsync = False
@types.contextmanager
def exception_cm(sane_path: str, href: Optional[str]


@ -29,13 +29,15 @@ import wsgiref.util
import xml.etree.ElementTree as ET
from io import BytesIO
from typing import Any, Dict, List, Optional, Tuple, Union
from urllib.parse import quote
import defusedxml.ElementTree as DefusedET
import vobject
import radicale
from radicale import app, config, types, xmlutils
RESPONSES = Dict[str, Union[int, Dict[str, Tuple[int, ET.Element]]]]
RESPONSES = Dict[str, Union[int, Dict[str, Tuple[int, ET.Element]], vobject.base.Component]]
# Enable debug output
radicale.log.logger.setLevel(logging.DEBUG)
@ -107,12 +109,11 @@ class BaseTest:
def parse_responses(text: str) -> RESPONSES:
xml = DefusedET.fromstring(text)
assert xml.tag == xmlutils.make_clark("D:multistatus")
path_responses: Dict[str, Union[
int, Dict[str, Tuple[int, ET.Element]]]] = {}
path_responses: RESPONSES = {}
for response in xml.findall(xmlutils.make_clark("D:response")):
href = response.find(xmlutils.make_clark("D:href"))
assert href.text not in path_responses
prop_respones: Dict[str, Tuple[int, ET.Element]] = {}
prop_responses: Dict[str, Tuple[int, ET.Element]] = {}
for propstat in response.findall(
xmlutils.make_clark("D:propstat")):
status = propstat.find(xmlutils.make_clark("D:status"))
@ -121,16 +122,22 @@ class BaseTest:
for element in propstat.findall(
"./%s/*" % xmlutils.make_clark("D:prop")):
human_tag = xmlutils.make_human_tag(element.tag)
assert human_tag not in prop_respones
prop_respones[human_tag] = (status_code, element)
assert human_tag not in prop_responses
prop_responses[human_tag] = (status_code, element)
status = response.find(xmlutils.make_clark("D:status"))
if status is not None:
assert not prop_respones
assert not prop_responses
assert status.text.startswith("HTTP/1.1 ")
status_code = int(status.text.split(" ")[1])
path_responses[href.text] = status_code
else:
path_responses[href.text] = prop_respones
path_responses[href.text] = prop_responses
return path_responses
@staticmethod
def parse_free_busy(text: str) -> RESPONSES:
path_responses: RESPONSES = {}
path_responses[""] = vobject.readOne(text)
return path_responses
def get(self, path: str, check: Optional[int] = 200, **kwargs
@@ -161,7 +168,7 @@ class BaseTest:
assert answer is not None
responses = self.parse_responses(answer)
if kwargs.get("HTTP_DEPTH", "0") == "0":
assert len(responses) == 1 and path in responses
assert len(responses) == 1 and quote(path) in responses
return status, responses
def proppatch(self, path: str, data: Optional[str] = None,
@@ -177,13 +184,18 @@ class BaseTest:
return status, responses
def report(self, path: str, data: str, check: Optional[int] = 207,
is_xml: Optional[bool] = True,
**kwargs) -> Tuple[int, RESPONSES]:
status, _, answer = self.request("REPORT", path, data, check=check,
**kwargs)
if status < 200 or 300 <= status:
return status, {}
assert answer is not None
return status, self.parse_responses(answer)
if is_xml:
parsed = self.parse_responses(answer)
else:
parsed = self.parse_free_busy(answer)
return status, parsed
def delete(self, path: str, check: Optional[int] = 200, **kwargs
) -> Tuple[int, RESPONSES]:


@@ -29,7 +29,7 @@ from radicale import auth
class Auth(auth.BaseAuth):
def login(self, login: str, password: str) -> str:
def _login(self, login: str, password: str) -> str:
if login == "tmp":
return login
return ""


@@ -0,0 +1,36 @@
BEGIN:VCALENDAR
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20130902T150157Z
LAST-MODIFIED:20130902T150158Z
DTSTAMP:20130902T150158Z
UID:event10
SUMMARY:Event
CATEGORIES:some_category1,another_category2
ORGANIZER:mailto:unclesam@example.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=TENTATIVE;CN=Jane Doe:MAILTO:janedoe@example.com
ATTENDEE;ROLE=REQ-PARTICIPANT;DELEGATED-FROM="MAILTO:bob@host.com";PARTSTAT=ACCEPTED;CN=John Doe:MAILTO:johndoe@example.com
DTSTART;TZID=Europe/Paris:20130901T180000
DTEND;TZID=Europe/Paris:20130901T190000
STATUS:CANCELLED
END:VEVENT
END:VCALENDAR


@@ -0,0 +1,35 @@
BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VTIMEZONE
LAST-MODIFIED:20040110T032845Z
TZID:US/Eastern
BEGIN:DAYLIGHT
DTSTART:20000404T020000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=4
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20001026T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=US/Eastern:20060102T120000
DURATION:PT1H
RRULE:FREQ=DAILY;COUNT=5
SUMMARY:Event #2
UID:event_daily_rrule_overridden
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=US/Eastern:20060104T140000
DURATION:PT1H
RECURRENCE-ID;TZID=US/Eastern:20060104T120000
SUMMARY:Event #2 bis
UID:event_daily_rrule_overridden
END:VEVENT
END:VCALENDAR


@@ -0,0 +1,55 @@
BEGIN:VCALENDAR
VERSION:2.0
PRODID:DAVx5/4.4.3.2-ose ical4j/3.2.19
BEGIN:VEVENT
DTSTAMP:20241125T195941Z
UID:9fb6578a-07a6-4c61-8406-69229713d40e
SEQUENCE:3
SUMMARY:Escalade
DTSTART;TZID=Europe/Paris:20240606T193000
DTEND;TZID=Europe/Paris:20240606T203000
RRULE:FREQ=WEEKLY;WKST=MO;BYDAY=TH
EXDATE;TZID=Europe/Paris:20240704T193000
CLASS:PUBLIC
STATUS:CONFIRMED
BEGIN:VALARM
TRIGGER:-P1D
ACTION:DISPLAY
DESCRIPTION:Escalade
END:VALARM
END:VEVENT
BEGIN:VEVENT
DTSTAMP:20241125T195941Z
UID:9fb6578a-07a6-4c61-8406-69229713d40e
RECURRENCE-ID;TZID=Europe/Paris:20241128T193000
SEQUENCE:1
SUMMARY:Escalade avec Romain
DTSTART;TZID=Europe/Paris:20241128T193000
DTEND;TZID=Europe/Paris:20241128T203000
EXDATE;TZID=Europe/Paris:20240704T193000
CLASS:PUBLIC
STATUS:CONFIRMED
BEGIN:VALARM
TRIGGER:-P1D
ACTION:DISPLAY
DESCRIPTION:Escalade avec Romain
END:VALARM
END:VEVENT
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:STANDARD
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
DTSTART:19961027T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
BEGIN:DAYLIGHT
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
DTSTART:19810329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
END:VTIMEZONE
END:VCALENDAR


@@ -5,14 +5,14 @@ BEGIN:VTIMEZONE
LAST-MODIFIED:20040110T032845Z
TZID:US/Eastern
BEGIN:DAYLIGHT
DTSTART:20000404
DTSTART:20000404T020000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=4
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20001026
DTSTART:20001026T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:EST
TZOFFSETFROM:-0400


@@ -0,0 +1,28 @@
BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VTIMEZONE
LAST-MODIFIED:20040110T032845Z
TZID:US/Eastern
BEGIN:DAYLIGHT
DTSTART:20000404T020000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=4
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20001026T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=US/Eastern:20060321T150000
DURATION:PT1H
RRULE:FREQ=WEEKLY;COUNT=5
SUMMARY:Recurring event
UID:event_weekly_rrule
END:VEVENT
END:VCALENDAR


@@ -2,7 +2,7 @@
# Copyright © 2012-2016 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2022 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -22,6 +22,8 @@ Radicale tests with simple requests and authentication.
"""
import base64
import logging
import os
import sys
from typing import Iterable, Tuple, Union
@@ -39,6 +41,14 @@ class TestBaseAuthRequests(BaseTest):
"""
# test for available bcrypt module
try:
import bcrypt
except ImportError:
has_bcrypt = 0
else:
has_bcrypt = 1
def _test_htpasswd(self, htpasswd_encryption: str, htpasswd_content: str,
test_matrix: Union[str, Iterable[Tuple[str, str, bool]]]
= "ascii") -> None:
@@ -69,6 +79,9 @@ class TestBaseAuthRequests(BaseTest):
def test_htpasswd_plain(self) -> None:
self._test_htpasswd("plain", "tmp:bepo")
def test_htpasswd_plain_autodetect(self) -> None:
self._test_htpasswd("autodetect", "tmp:bepo")
def test_htpasswd_plain_password_split(self) -> None:
self._test_htpasswd("plain", "tmp:be:po", (
("tmp", "be:po", True), ("tmp", "bepo", False)))
@@ -79,6 +92,9 @@ class TestBaseAuthRequests(BaseTest):
def test_htpasswd_md5(self) -> None:
self._test_htpasswd("md5", "tmp:$apr1$BI7VKCZh$GKW4vq2hqDINMr8uv7lDY/")
def test_htpasswd_md5_autodetect(self) -> None:
self._test_htpasswd("autodetect", "tmp:$apr1$BI7VKCZh$GKW4vq2hqDINMr8uv7lDY/")
def test_htpasswd_md5_unicode(self):
self._test_htpasswd(
"md5", "😀:$apr1$w4ev89r1$29xO8EvJmS2HEAadQ5qy11", "unicode")
@@ -86,20 +102,99 @@ class TestBaseAuthRequests(BaseTest):
def test_htpasswd_sha256(self) -> None:
self._test_htpasswd("sha256", "tmp:$5$i4Ni4TQq6L5FKss5$ilpTjkmnxkwZeV35GB9cYSsDXTALBn6KtWRJAzNlCL/")
def test_htpasswd_sha256_autodetect(self) -> None:
self._test_htpasswd("autodetect", "tmp:$5$i4Ni4TQq6L5FKss5$ilpTjkmnxkwZeV35GB9cYSsDXTALBn6KtWRJAzNlCL/")
def test_htpasswd_sha512(self) -> None:
self._test_htpasswd("sha512", "tmp:$6$3Qhl8r6FLagYdHYa$UCH9yXCed4A.J9FQsFPYAOXImzZUMfvLa0lwcWOxWYLOF5sE/lF99auQ4jKvHY2vijxmefl7G6kMqZ8JPdhIJ/")
def test_htpasswd_bcrypt(self) -> None:
self._test_htpasswd("bcrypt", "tmp:$2y$05$oD7hbiQFQlvCM7zoalo/T.MssV3V"
"NTRI3w5KDnj8NTUKJNWfVpvRq")
def test_htpasswd_sha512_autodetect(self) -> None:
self._test_htpasswd("autodetect", "tmp:$6$3Qhl8r6FLagYdHYa$UCH9yXCed4A.J9FQsFPYAOXImzZUMfvLa0lwcWOxWYLOF5sE/lF99auQ4jKvHY2vijxmefl7G6kMqZ8JPdhIJ/")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_2a(self) -> None:
self._test_htpasswd("bcrypt", "tmp:$2a$10$Mj4A9vMecAp/K7.0fMKoVOk1SjgR.RBhl06a52nvzXhxlT3HB7Reu")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_2a_autodetect(self) -> None:
self._test_htpasswd("autodetect", "tmp:$2a$10$Mj4A9vMecAp/K7.0fMKoVOk1SjgR.RBhl06a52nvzXhxlT3HB7Reu")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_2b(self) -> None:
self._test_htpasswd("bcrypt", "tmp:$2b$12$7a4z/fdmXlBIfkz0smvzW.1Nds8wpgC/bo2DVOb4OSQKWCDL1A1wu")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_2b_autodetect(self) -> None:
self._test_htpasswd("autodetect", "tmp:$2b$12$7a4z/fdmXlBIfkz0smvzW.1Nds8wpgC/bo2DVOb4OSQKWCDL1A1wu")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_2y(self) -> None:
self._test_htpasswd("bcrypt", "tmp:$2y$05$oD7hbiQFQlvCM7zoalo/T.MssV3VNTRI3w5KDnj8NTUKJNWfVpvRq")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_2y_autodetect(self) -> None:
self._test_htpasswd("autodetect", "tmp:$2y$05$oD7hbiQFQlvCM7zoalo/T.MssV3VNTRI3w5KDnj8NTUKJNWfVpvRq")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_C10(self) -> None:
self._test_htpasswd("bcrypt", "tmp:$2y$10$bZsWq06ECzxqi7RmulQvC.T1YHUnLW2E3jn.MU2pvVTGn1dfORt2a")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_C10_autodetect(self) -> None:
self._test_htpasswd("bcrypt", "tmp:$2y$10$bZsWq06ECzxqi7RmulQvC.T1YHUnLW2E3jn.MU2pvVTGn1dfORt2a")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_unicode(self) -> None:
self._test_htpasswd("bcrypt", "😀:$2y$10$Oyz5aHV4MD9eQJbk6GPemOs4T6edK"
"6U9Sqlzr.W1mMVCS8wJUftnW", "unicode")
self._test_htpasswd("bcrypt", "😀:$2y$10$Oyz5aHV4MD9eQJbk6GPemOs4T6edK6U9Sqlzr.W1mMVCS8wJUftnW", "unicode")
def test_htpasswd_multi(self) -> None:
self._test_htpasswd("plain", "ign:ign\ntmp:bepo")
# login cache successful
def test_htpasswd_login_cache_successful_plain(self, caplog) -> None:
caplog.set_level(logging.INFO)
self.configure({"auth": {"cache_logins": "True"}})
self._test_htpasswd("plain", "tmp:bepo", (("tmp", "bepo", True), ("tmp", "bepo", True)))
htpasswd_found = False
htpasswd_cached_found = False
for line in caplog.messages:
if line == "Successful login: 'tmp' (htpasswd)":
htpasswd_found = True
elif line == "Successful login: 'tmp' (htpasswd / cached)":
htpasswd_cached_found = True
if (htpasswd_found is False) or (htpasswd_cached_found is False):
raise ValueError("Logging misses expected log lines")
# login cache failed
def test_htpasswd_login_cache_failed_plain(self, caplog) -> None:
caplog.set_level(logging.INFO)
self.configure({"auth": {"cache_logins": "True"}})
self._test_htpasswd("plain", "tmp:bepo", (("tmp", "bepo1", False), ("tmp", "bepo1", False)))
htpasswd_found = False
htpasswd_cached_found = False
for line in caplog.messages:
if line == "Failed login attempt from unknown: 'tmp' (htpasswd)":
htpasswd_found = True
elif line == "Failed login attempt from unknown: 'tmp' (htpasswd / cached)":
htpasswd_cached_found = True
if (htpasswd_found is False) or (htpasswd_cached_found is False):
raise ValueError("Logging misses expected log lines")
# htpasswd file cache
def test_htpasswd_file_cache(self, caplog) -> None:
self.configure({"auth": {"htpasswd_cache": "True"}})
self._test_htpasswd("plain", "tmp:bepo")
# detection of broken htpasswd file entries
def test_htpasswd_broken(self) -> None:
for userpass in ["tmp:", ":tmp"]:
try:
self._test_htpasswd("plain", userpass)
except RuntimeError:
pass
else:
raise
@pytest.mark.skipif(sys.platform == "win32", reason="leading and trailing "
"whitespaces not allowed in file names")
def test_htpasswd_whitespace_user(self) -> None:
@@ -115,6 +210,21 @@ class TestBaseAuthRequests(BaseTest):
def test_htpasswd_comment(self) -> None:
self._test_htpasswd("plain", "#comment\n #comment\n \ntmp:bepo\n\n")
def test_htpasswd_lc_username(self) -> None:
self.configure({"auth": {"lc_username": "True"}})
self._test_htpasswd("plain", "tmp:bepo", (
("tmp", "bepo", True), ("TMP", "bepo", True), ("tmp1", "bepo", False)))
def test_htpasswd_uc_username(self) -> None:
self.configure({"auth": {"uc_username": "True"}})
self._test_htpasswd("plain", "TMP:bepo", (
("tmp", "bepo", True), ("TMP", "bepo", True), ("TMP1", "bepo", False)))
def test_htpasswd_strip_domain(self) -> None:
self.configure({"auth": {"strip_domain": "True"}})
self._test_htpasswd("plain", "tmp:bepo", (
("tmp", "bepo", True), ("tmp@domain.example", "bepo", True), ("tmp1", "bepo", False)))
def test_remote_user(self) -> None:
self.configure({"auth": {"type": "remote_user"}})
_, responses = self.propfind("/", """\
@@ -149,6 +259,118 @@ class TestBaseAuthRequests(BaseTest):
href_element = prop.find(xmlutils.make_clark("D:href"))
assert href_element is not None and href_element.text == "/test/"
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def _test_dovecot(
self, user, password, expected_status,
response=b'FAIL\n1\n', mech=[b'PLAIN'], broken=None):
import socket
from unittest.mock import DEFAULT, patch
self.configure({"auth": {"type": "dovecot",
"dovecot_socket": "./dovecot.sock"}})
if broken is None:
broken = []
handshake = b''
if "version" not in broken:
handshake += b'VERSION\t'
if "incompatible" in broken:
handshake += b'2'
else:
handshake += b'1'
handshake += b'\t2\n'
if "mech" not in broken:
handshake += b'MECH\t%b\n' % b' '.join(mech)
if "duplicate" in broken:
handshake += b'VERSION\t1\t2\n'
if "done" not in broken:
handshake += b'DONE\n'
with patch.multiple(
'socket.socket',
connect=DEFAULT,
send=DEFAULT,
recv=DEFAULT
) as mock_socket:
if "socket" in broken:
mock_socket["connect"].side_effect = socket.error(
"Testing error with the socket"
)
mock_socket["recv"].side_effect = [handshake, response]
status, _, answer = self.request(
"PROPFIND", "/",
HTTP_AUTHORIZATION="Basic %s" % base64.b64encode(
("%s:%s" % (user, password)).encode()).decode())
assert status == expected_status
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_no_user(self):
self._test_dovecot("", "", 401)
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_no_password(self):
self._test_dovecot("user", "", 401)
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_broken_handshake_no_version(self):
self._test_dovecot("user", "password", 401, broken=["version"])
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_broken_handshake_incompatible(self):
self._test_dovecot("user", "password", 401, broken=["incompatible"])
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_broken_handshake_duplicate(self):
self._test_dovecot(
"user", "password", 207, response=b'OK\t1',
broken=["duplicate"]
)
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_broken_handshake_no_mech(self):
self._test_dovecot("user", "password", 401, broken=["mech"])
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_broken_handshake_unsupported_mech(self):
self._test_dovecot("user", "password", 401, mech=[b'ONE', b'TWO'])
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_broken_handshake_no_done(self):
self._test_dovecot("user", "password", 401, broken=["done"])
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_broken_socket(self):
self._test_dovecot("user", "password", 401, broken=["socket"])
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_auth_good1(self):
self._test_dovecot("user", "password", 207, response=b'OK\t1')
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_auth_good2(self):
self._test_dovecot(
"user", "password", 207, response=b'OK\t1',
mech=[b'PLAIN\nEXTRA\tTERM']
)
self._test_dovecot("user", "password", 207, response=b'OK\t1')
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_auth_bad1(self):
self._test_dovecot("user", "password", 401, response=b'FAIL\t1')
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_auth_bad2(self):
self._test_dovecot("user", "password", 401, response=b'CONT\t1')
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_auth_id_mismatch(self):
self._test_dovecot("user", "password", 401, response=b'OK\t2')
def test_custom(self) -> None:
"""Custom authentication."""
self.configure({"auth": {"type": "radicale.tests.custom.auth"}})


@@ -1,6 +1,7 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
# Copyright © 2017-2022 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -25,6 +26,7 @@ import posixpath
from typing import Any, Callable, ClassVar, Iterable, List, Optional, Tuple
import defusedxml.ElementTree as DefusedET
import vobject
from radicale import storage, xmlutils
from radicale.tests import RESPONSES, BaseTest
@@ -42,6 +44,26 @@ class TestBaseRequests(BaseTest):
rights_file_path = os.path.join(self.colpath, "rights")
with open(rights_file_path, "w") as f:
f.write("""\
[permit delete collection]
user: .*
collection: test-permit-delete
permissions: RrWwD
[forbid delete collection]
user: .*
collection: test-forbid-delete
permissions: RrWwd
[permit overwrite collection]
user: .*
collection: test-permit-overwrite
permissions: RrWwO
[forbid overwrite collection]
user: .*
collection: test-forbid-overwrite
permissions: RrWwo
[allow all]
user: .*
collection: .*
@@ -145,6 +167,12 @@ permissions: RrWw""")
event = get_file_content("event_mixed_datetime_and_date.ics")
self.put("/calendar.ics/event.ics", event)
def test_add_event_with_exdate_without_rrule(self) -> None:
"""Test event with EXDATE but not having RRULE."""
self.mkcalendar("/calendar.ics/")
event = get_file_content("event_exdate_without_rrule.ics")
self.put("/calendar.ics/event.ics", event)
def test_add_todo(self) -> None:
"""Add a todo."""
self.mkcalendar("/calendar.ics/")
@@ -359,7 +387,7 @@ permissions: RrWw""")
self.get(path1, check=404)
self.get(path2)
def test_move_between_colections(self) -> None:
def test_move_between_collections(self) -> None:
"""Move a item."""
self.mkcalendar("/calendar1.ics/")
self.mkcalendar("/calendar2.ics/")
@@ -372,7 +400,7 @@ permissions: RrWw""")
self.get(path1, check=404)
self.get(path2)
def test_move_between_colections_duplicate_uid(self) -> None:
def test_move_between_collections_duplicate_uid(self) -> None:
"""Move a item to a collection which already contains the UID."""
self.mkcalendar("/calendar1.ics/")
self.mkcalendar("/calendar2.ics/")
@@ -388,7 +416,7 @@ permissions: RrWw""")
assert xml.tag == xmlutils.make_clark("D:error")
assert xml.find(xmlutils.make_clark("C:no-uid-conflict")) is not None
def test_move_between_colections_overwrite(self) -> None:
def test_move_between_collections_overwrite(self) -> None:
"""Move a item to a collection which already contains the item."""
self.mkcalendar("/calendar1.ics/")
self.mkcalendar("/calendar2.ics/")
@@ -402,8 +430,8 @@ permissions: RrWw""")
self.request("MOVE", path1, check=204, HTTP_OVERWRITE="T",
HTTP_DESTINATION="http://127.0.0.1/"+path2)
def test_move_between_colections_overwrite_uid_conflict(self) -> None:
"""Move a item to a collection which already contains the item with
def test_move_between_collections_overwrite_uid_conflict(self) -> None:
"""Move an item to a collection which already contains the item with
a different UID."""
self.mkcalendar("/calendar1.ics/")
self.mkcalendar("/calendar2.ics/")
@@ -438,6 +466,33 @@ permissions: RrWw""")
assert responses["/calendar.ics/"] == 200
self.get("/calendar.ics/", check=404)
def test_delete_collection_global_forbid(self) -> None:
"""Delete a collection (expect forbidden)."""
self.configure({"rights": {"permit_delete_collection": False}})
self.mkcalendar("/calendar.ics/")
event = get_file_content("event1.ics")
self.put("/calendar.ics/event1.ics", event)
_, responses = self.delete("/calendar.ics/", check=401)
self.get("/calendar.ics/", check=200)
def test_delete_collection_global_forbid_explicit_permit(self) -> None:
"""Delete a collection with permitted path (expect permit)."""
self.configure({"rights": {"permit_delete_collection": False}})
self.mkcalendar("/test-permit-delete/")
event = get_file_content("event1.ics")
self.put("/test-permit-delete/event1.ics", event)
_, responses = self.delete("/test-permit-delete/", check=200)
self.get("/test-permit-delete/", check=404)
def test_delete_collection_global_permit_explicit_forbid(self) -> None:
"""Delete a collection with permitted path (expect forbid)."""
self.configure({"rights": {"permit_delete_collection": True}})
self.mkcalendar("/test-forbid-delete/")
event = get_file_content("event1.ics")
self.put("/test-forbid-delete/event1.ics", event)
_, responses = self.delete("/test-forbid-delete/", check=401)
self.get("/test-forbid-delete/", check=200)
def test_delete_root_collection(self) -> None:
"""Delete the root collection."""
self.mkcalendar("/calendar.ics/")
@@ -449,6 +504,30 @@ permissions: RrWw""")
self.get("/calendar.ics/", check=404)
self.get("/event1.ics", 404)
def test_overwrite_collection_global_forbid(self) -> None:
"""Overwrite a collection (expect forbid)."""
self.configure({"rights": {"permit_overwrite_collection": False}})
event = get_file_content("event1.ics")
self.put("/calender.ics/", event, check=401)
def test_overwrite_collection_global_forbid_explict_permit(self) -> None:
"""Overwrite a collection with permitted path (expect permit)."""
self.configure({"rights": {"permit_overwrite_collection": False}})
event = get_file_content("event1.ics")
self.put("/test-permit-overwrite/", event, check=201)
def test_overwrite_collection_global_permit(self) -> None:
"""Overwrite a collection (expect permit)."""
self.configure({"rights": {"permit_overwrite_collection": True}})
event = get_file_content("event1.ics")
self.put("/calender.ics/", event, check=201)
def test_overwrite_collection_global_permit_explict_forbid(self) -> None:
"""Overwrite a collection with forbidden path (expect forbid)."""
self.configure({"rights": {"permit_overwrite_collection": True}})
event = get_file_content("event1.ics")
self.put("/test-forbid-overwrite/", event, check=401)
def test_propfind(self) -> None:
calendar_path = "/calendar.ics/"
self.mkcalendar("/calendar.ics/")
@@ -1360,10 +1439,45 @@ permissions: RrWw""")
</C:calendar-query>""")
assert len(responses) == 1
response = responses[event_path]
assert not isinstance(response, int)
assert isinstance(response, dict)
status, prop = response["D:getetag"]
assert status == 200 and prop.text
def test_report_free_busy(self) -> None:
"""Test free busy report on a few items"""
calendar_path = "/calendar.ics/"
self.mkcalendar(calendar_path)
for i in (1, 2, 10):
filename = "event{}.ics".format(i)
event = get_file_content(filename)
self.put(posixpath.join(calendar_path, filename), event)
code, responses = self.report(calendar_path, """\
<?xml version="1.0" encoding="utf-8" ?>
<C:free-busy-query xmlns:C="urn:ietf:params:xml:ns:caldav">
<C:time-range start="20130901T140000Z" end="20130908T220000Z"/>
</C:free-busy-query>""", 200, is_xml=False)
for response in responses.values():
assert isinstance(response, vobject.base.Component)
assert len(responses) == 1
vcalendar = list(responses.values())[0]
assert isinstance(vcalendar, vobject.base.Component)
assert len(vcalendar.vfreebusy_list) == 3
types = {}
for vfb in vcalendar.vfreebusy_list:
fbtype_val = vfb.fbtype.value
if fbtype_val not in types:
types[fbtype_val] = 0
types[fbtype_val] += 1
assert types == {'BUSY': 2, 'FREE': 1}
# Test max_freebusy_occurrence limit
self.configure({"reporting": {"max_freebusy_occurrence": 1}})
code, responses = self.report(calendar_path, """\
<?xml version="1.0" encoding="utf-8" ?>
<C:free-busy-query xmlns:C="urn:ietf:params:xml:ns:caldav">
<C:time-range start="20130901T140000Z" end="20130908T220000Z"/>
</C:free-busy-query>""", 400, is_xml=False)
def _report_sync_token(
self, calendar_path: str, sync_token: Optional[str] = None
) -> Tuple[str, RESPONSES]:
@@ -1525,184 +1639,6 @@ permissions: RrWw""")
calendar_path, "http://radicale.org/ns/sync/INVALID")
assert not sync_token
def test_report_with_expand_property(self) -> None:
"""Test report with expand property"""
self.put("/calendar.ics/", get_file_content("event_daily_rrule.ics"))
req_body_without_expand = \
"""<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<C:calendar-data>
</C:calendar-data>
</D:prop>
<C:filter>
<C:comp-filter name="VCALENDAR">
<C:comp-filter name="VEVENT">
<C:time-range start="20060103T000000Z" end="20060105T000000Z"/>
</C:comp-filter>
</C:comp-filter>
</C:filter>
</C:calendar-query>
"""
_, responses = self.report("/calendar.ics/", req_body_without_expand)
assert len(responses) == 1
response_without_expand = responses['/calendar.ics/event_daily_rrule.ics']
assert not isinstance(response_without_expand, int)
status, element = response_without_expand["C:calendar-data"]
assert status == 200 and element.text
assert "RRULE" in element.text
assert "BEGIN:VTIMEZONE" in element.text
assert "RECURRENCE-ID" not in element.text
uids: List[str] = []
for line in element.text.split("\n"):
if line.startswith("UID:"):
uid = line[len("UID:"):]
assert uid == "event_daily_rrule"
uids.append(uid)
assert len(uids) == 1
req_body_with_expand = \
"""<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<C:calendar-data>
<C:expand start="20060103T000000Z" end="20060105T000000Z"/>
</C:calendar-data>
</D:prop>
<C:filter>
<C:comp-filter name="VCALENDAR">
<C:comp-filter name="VEVENT">
<C:time-range start="20060103T000000Z" end="20060105T000000Z"/>
</C:comp-filter>
</C:comp-filter>
</C:filter>
</C:calendar-query>
"""
_, responses = self.report("/calendar.ics/", req_body_with_expand)
assert len(responses) == 1
response_with_expand = responses['/calendar.ics/event_daily_rrule.ics']
assert not isinstance(response_with_expand, int)
status, element = response_with_expand["C:calendar-data"]
assert status == 200 and element.text
assert "RRULE" not in element.text
assert "BEGIN:VTIMEZONE" not in element.text
uids = []
recurrence_ids = []
for line in element.text.split("\n"):
if line.startswith("UID:"):
assert line == "UID:event_daily_rrule"
uids.append(line)
if line.startswith("RECURRENCE-ID:"):
assert line in ["RECURRENCE-ID:20060103T170000Z", "RECURRENCE-ID:20060104T170000Z"]
recurrence_ids.append(line)
if line.startswith("DTSTART:"):
assert line == "DTSTART:20060102T170000Z"
assert len(uids) == 2
assert len(set(recurrence_ids)) == 2
def test_report_with_expand_property_all_day_event(self) -> None:
"""Test report with expand property"""
self.put("/calendar.ics/", get_file_content("event_full_day_rrule.ics"))
req_body_without_expand = \
"""<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<C:calendar-data>
</C:calendar-data>
</D:prop>
<C:filter>
<C:comp-filter name="VCALENDAR">
<C:comp-filter name="VEVENT">
<C:time-range start="20060103T000000Z" end="20060105T000000Z"/>
</C:comp-filter>
</C:comp-filter>
</C:filter>
</C:calendar-query>
"""
_, responses = self.report("/calendar.ics/", req_body_without_expand)
assert len(responses) == 1
response_without_expand = responses['/calendar.ics/event_full_day_rrule.ics']
assert not isinstance(response_without_expand, int)
status, element = response_without_expand["C:calendar-data"]
assert status == 200 and element.text
assert "RRULE" in element.text
assert "RECURRENCE-ID" not in element.text
uids: List[str] = []
for line in element.text.split("\n"):
if line.startswith("UID:"):
uid = line[len("UID:"):]
assert uid == "event_full_day_rrule"
uids.append(uid)
assert len(uids) == 1
req_body_with_expand = \
"""<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<C:calendar-data>
<C:expand start="20060103T000000Z" end="20060105T000000Z"/>
</C:calendar-data>
</D:prop>
<C:filter>
<C:comp-filter name="VCALENDAR">
<C:comp-filter name="VEVENT">
<C:time-range start="20060103T000000Z" end="20060105T000000Z"/>
</C:comp-filter>
</C:comp-filter>
</C:filter>
</C:calendar-query>
"""
_, responses = self.report("/calendar.ics/", req_body_with_expand)
assert len(responses) == 1
response_with_expand = responses['/calendar.ics/event_full_day_rrule.ics']
assert not isinstance(response_with_expand, int)
status, element = response_with_expand["C:calendar-data"]
assert status == 200 and element.text
assert "RRULE" not in element.text
assert "BEGIN:VTIMEZONE" not in element.text
uids = []
recurrence_ids = []
for line in element.text.split("\n"):
if line.startswith("UID:"):
assert line == "UID:event_full_day_rrule"
uids.append(line)
if line.startswith("RECURRENCE-ID:"):
assert line in ["RECURRENCE-ID:20060103", "RECURRENCE-ID:20060104", "RECURRENCE-ID:20060105"]
recurrence_ids.append(line)
if line.startswith("DTSTART:"):
assert line == "DTSTART:20060102"
if line.startswith("DTEND:"):
assert line == "DTEND:20060103"
assert len(uids) == 3
assert len(set(recurrence_ids)) == 3
def test_propfind_sync_token(self) -> None:
"""Retrieve the sync-token with a propfind request"""
calendar_path = "/calendar.ics/"
@@ -1778,6 +1714,7 @@ permissions: RrWw""")
assert status == 200 and prop.text == "text/vcard;charset=utf-8"
def test_authorization(self) -> None:
self.configure({"auth": {"type": "none"}})
_, responses = self.propfind("/", """\
<?xml version="1.0" encoding="utf-8"?>
<propfind xmlns="DAV:">
@@ -1804,6 +1741,7 @@ permissions: RrWw""")
def test_principal_collection_creation(self) -> None:
"""Verify existence of the principal collection."""
self.configure({"auth": {"type": "none"}})
self.propfind("/user/", login="user:")
def test_authentication_current_user_principal_hack(self) -> None:


@@ -0,0 +1,230 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Radicale tests with expand requests.
"""
import os
from typing import ClassVar, List
from radicale.tests import BaseTest
from radicale.tests.helpers import get_file_content
ONLY_DATES = True
CONTAINS_TIMES = False
class TestExpandRequests(BaseTest):
"""Tests with expand requests."""
# Allow skipping sync-token tests when not fully supported by the backend
full_sync_token_support: ClassVar[bool] = True
def setup_method(self) -> None:
BaseTest.setup_method(self)
rights_file_path = os.path.join(self.colpath, "rights")
with open(rights_file_path, "w") as f:
f.write("""\
[permit delete collection]
user: .*
collection: test-permit-delete
permissions: RrWwD
[forbid delete collection]
user: .*
collection: test-forbid-delete
permissions: RrWwd
[permit overwrite collection]
user: .*
collection: test-permit-overwrite
permissions: RrWwO
[forbid overwrite collection]
user: .*
collection: test-forbid-overwrite
permissions: RrWwo
[allow all]
user: .*
collection: .*
permissions: RrWw""")
self.configure({"rights": {"file": rights_file_path,
"type": "from_file"}})
def _test_expand(self,
expected_uid: str,
start: str,
end: str,
expected_recurrence_ids: List[str],
expected_start_times: List[str],
expected_end_times: List[str],
only_dates: bool,
nr_uids: int) -> None:
self.put("/calendar.ics/", get_file_content(f"{expected_uid}.ics"))
req_body_without_expand = \
f"""<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<C:calendar-data>
</C:calendar-data>
</D:prop>
<C:filter>
<C:comp-filter name="VCALENDAR">
<C:comp-filter name="VEVENT">
<C:time-range start="{start}" end="{end}"/>
</C:comp-filter>
</C:comp-filter>
</C:filter>
</C:calendar-query>
"""
_, responses = self.report("/calendar.ics/", req_body_without_expand)
assert len(responses) == 1
response_without_expand = responses[f'/calendar.ics/{expected_uid}.ics']
assert not isinstance(response_without_expand, int)
status, element = response_without_expand["C:calendar-data"]
assert status == 200 and element.text
assert "RRULE" in element.text
if not only_dates:
assert "BEGIN:VTIMEZONE" in element.text
if nr_uids == 1:
assert "RECURRENCE-ID" not in element.text
uids: List[str] = []
for line in element.text.split("\n"):
if line.startswith("UID:"):
uid = line[len("UID:"):]
assert uid == expected_uid
uids.append(uid)
assert len(uids) == nr_uids
req_body_with_expand = \
f"""<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<C:calendar-data>
<C:expand start="{start}" end="{end}"/>
</C:calendar-data>
</D:prop>
<C:filter>
<C:comp-filter name="VCALENDAR">
<C:comp-filter name="VEVENT">
<C:time-range start="{start}" end="{end}"/>
</C:comp-filter>
</C:comp-filter>
</C:filter>
</C:calendar-query>
"""
_, responses = self.report("/calendar.ics/", req_body_with_expand)
assert len(responses) == 1
response_with_expand = responses[f'/calendar.ics/{expected_uid}.ics']
assert not isinstance(response_with_expand, int)
status, element = response_with_expand["C:calendar-data"]
assert status == 200 and element.text
assert "RRULE" not in element.text
assert "BEGIN:VTIMEZONE" not in element.text
uids = []
recurrence_ids = []
for line in element.text.split("\n"):
if line.startswith("UID:"):
assert line == f"UID:{expected_uid}"
uids.append(line)
if line.startswith("RECURRENCE-ID:"):
assert line in expected_recurrence_ids
recurrence_ids.append(line)
if line.startswith("DTSTART:"):
assert line in expected_start_times
if line.startswith("DTEND:"):
assert line in expected_end_times
assert len(uids) == len(expected_recurrence_ids)
assert len(set(recurrence_ids)) == len(expected_recurrence_ids)
def test_report_with_expand_property(self) -> None:
"""Test report with expand property"""
self._test_expand(
"event_daily_rrule",
"20060103T000000Z",
"20060105T000000Z",
["RECURRENCE-ID:20060103T170000Z", "RECURRENCE-ID:20060104T170000Z"],
["DTSTART:20060103T170000Z", "DTSTART:20060104T170000Z"],
[],
CONTAINS_TIMES,
1
)
def test_report_with_expand_property_all_day_event(self) -> None:
"""Test report with expand property for all-day events"""
self._test_expand(
"event_full_day_rrule",
"20060103T000000Z",
"20060105T000000Z",
["RECURRENCE-ID:20060103", "RECURRENCE-ID:20060104", "RECURRENCE-ID:20060105"],
["DTSTART:20060103", "DTSTART:20060104", "DTSTART:20060105"],
["DTEND:20060104", "DTEND:20060105", "DTEND:20060106"],
ONLY_DATES,
1
)
def test_report_with_expand_property_overridden(self) -> None:
"""Test report with expand property with overridden events"""
self._test_expand(
"event_daily_rrule_overridden",
"20060103T000000Z",
"20060105T000000Z",
["RECURRENCE-ID:20060103T170000Z", "RECURRENCE-ID:20060104T170000Z"],
["DTSTART:20060103T170000Z", "DTSTART:20060104T190000Z"],
[],
CONTAINS_TIMES,
2
)
def test_report_with_expand_property_timezone(self):
self._test_expand(
"event_weekly_rrule",
"20060320T000000Z",
"20060414T000000Z",
[
"RECURRENCE-ID:20060321T200000Z",
"RECURRENCE-ID:20060328T200000Z",
"RECURRENCE-ID:20060404T190000Z",
"RECURRENCE-ID:20060411T190000Z",
],
[
"DTSTART:20060321T200000Z",
"DTSTART:20060328T200000Z",
"DTSTART:20060404T190000Z",
"DTSTART:20060411T190000Z",
],
[],
CONTAINS_TIMES,
1
)
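The expand tests above check that the server replaces an RRULE with one VEVENT per occurrence inside the requested time range. A minimal stdlib sketch of that expansion for the daily-RRULE case (dates taken from test_report_with_expand_property; the loop is an illustration, not Radicale's actual implementation):

```python
from datetime import datetime, timedelta

# Hand-rolled expansion of FREQ=DAILY for the query window used in
# test_report_with_expand_property (illustration only; Radicale's
# storage layer performs the real expansion).
dtstart = datetime(2006, 1, 2, 17, 0)
window_start = datetime(2006, 1, 3)
window_end = datetime(2006, 1, 5)

occurrence = dtstart
recurrence_ids = []
while occurrence < window_end:
    if occurrence >= window_start:
        recurrence_ids.append(occurrence.strftime("RECURRENCE-ID:%Y%m%dT%H%M%SZ"))
    occurrence += timedelta(days=1)

print(recurrence_ids)
# → ['RECURRENCE-ID:20060103T170000Z', 'RECURRENCE-ID:20060104T170000Z']
```

The two generated RECURRENCE-ID lines match the expected list the test asserts against.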


@ -30,10 +30,10 @@ class TestBaseRightsRequests(BaseTest):
def _test_rights(self, rights_type: str, user: str, path: str, mode: str,
expected_status: int, with_auth: bool = True) -> None:
assert mode in ("r", "w")
assert user in ("", "tmp")
assert user in ("", "tmp", "user@domain.test")
htpasswd_file_path = os.path.join(self.colpath, ".htpasswd")
with open(htpasswd_file_path, "w") as f:
f.write("tmp:bepo\nother:bepo")
f.write("tmp:bepo\nother:bepo\nuser@domain.test:bepo")
self.configure({
"rights": {"type": rights_type},
"auth": {"type": "htpasswd" if with_auth else "none",
@ -42,8 +42,9 @@ class TestBaseRightsRequests(BaseTest):
for u in ("tmp", "other"):
# Indirect creation of principal collection
self.propfind("/%s/" % u, login="%s:bepo" % u)
os.makedirs(os.path.join(self.colpath, "collection-root", "domain.test"), exist_ok=True)
(self.propfind if mode == "r" else self.proppatch)(
path, check=expected_status, login="tmp:bepo" if user else None)
path, check=expected_status, login="%s:bepo" % user if user else None)
def test_owner_only(self) -> None:
self._test_rights("owner_only", "", "/", "r", 401)
@ -110,14 +111,23 @@ permissions: RrWw
[custom]
user: .*
collection: custom(/.*)?
permissions: Rr""")
permissions: Rr
[read-domain-principal]
user: .+@([^@]+)
collection: {0}
permissions: R""")
self.configure({"rights": {"file": rights_file_path}})
self._test_rights("from_file", "", "/other/", "r", 401)
self._test_rights("from_file", "tmp", "/tmp/", "r", 207)
self._test_rights("from_file", "tmp", "/other/", "r", 403)
self._test_rights("from_file", "", "/custom/sub", "r", 404)
self._test_rights("from_file", "tmp", "/custom/sub", "r", 404)
self._test_rights("from_file", "", "/custom/sub", "w", 401)
self._test_rights("from_file", "tmp", "/custom/sub", "w", 403)
self._test_rights("from_file", "tmp", "/custom/sub", "w", 403)
self._test_rights("from_file", "user@domain.test", "/domain.test/", "r", 207)
self._test_rights("from_file", "user@domain.test", "/tmp/", "r", 403)
self._test_rights("from_file", "user@domain.test", "/other/", "r", 403)
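The `[read-domain-principal]` rule exercised by these three assertions relies on regex groups captured by the `user` pattern being substituted into the `collection` pattern via `{0}`. A small sketch of that matching logic (assumed semantics, using plain `re`; the real rights plugin also escapes the substituted values):

```python
import re

# Assumed semantics: groups captured by the user pattern fill the
# {0}, {1}, ... placeholders in the collection pattern, which is then
# matched against the requested collection path (escaping omitted).
user_pattern = r".+@([^@]+)"
collection_pattern = "{0}"

match = re.fullmatch(user_pattern, "user@domain.test")
assert match is not None
expanded = collection_pattern.format(*match.groups())
print(expanded)                                     # → domain.test
print(bool(re.fullmatch(expanded, "domain.test")))  # → True
print(bool(re.fullmatch(expanded, "tmp")))          # → False
```

This mirrors the test expectations: `user@domain.test` can read `/domain.test/` but not `/tmp/` or `/other/`.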
def test_from_file_limited_get(self):
rights_file_path = os.path.join(self.colpath, "rights")
@ -133,6 +143,7 @@ collection: public/[^/]*
permissions: i""")
self.configure({"rights": {"type": "from_file",
"file": rights_file_path}})
self.configure({"auth": {"type": "none"}})
self.mkcalendar("/tmp/calendar", login="tmp:bepo")
self.mkcol("/public", login="tmp:bepo")
self.mkcalendar("/public/calendar", login="tmp:bepo")
@ -155,6 +166,7 @@ permissions: i""")
Items are allowed at "/.../.../...".
"""
self.configure({"auth": {"type": "none"}})
self.mkcalendar("/", check=401)
self.mkcalendar("/user/", check=401)
self.mkcol("/user/")
@ -165,6 +177,7 @@ permissions: i""")
def test_put_collections_and_items(self) -> None:
"""Test rights for creation of calendars and items with PUT."""
self.configure({"auth": {"type": "none"}})
self.put("/user/", "BEGIN:VCALENDAR\r\nEND:VCALENDAR", check=401)
self.mkcol("/user/")
self.put("/user/calendar/", "BEGIN:VCALENDAR\r\nEND:VCALENDAR")


@ -141,13 +141,19 @@ class TestBaseServerRequests(BaseTest):
def test_bind_fail(self) -> None:
for address_family, address in [(socket.AF_INET, "::1"),
(socket.AF_INET6, "127.0.0.1")]:
with socket.socket(address_family, socket.SOCK_STREAM) as sock:
if address_family == socket.AF_INET6:
# Only allow IPv6 connections to the IPv6 socket
sock.setsockopt(server.COMPAT_IPPROTO_IPV6,
socket.IPV6_V6ONLY, 1)
with pytest.raises(OSError) as exc_info:
sock.bind((address, 0))
try:
with socket.socket(address_family, socket.SOCK_STREAM) as sock:
if address_family == socket.AF_INET6:
# Only allow IPv6 connections to the IPv6 socket
sock.setsockopt(server.COMPAT_IPPROTO_IPV6,
socket.IPV6_V6ONLY, 1)
with pytest.raises(OSError) as exc_info:
sock.bind((address, 0))
except OSError as e:
if e.errno in (errno.EADDRNOTAVAIL, errno.EAFNOSUPPORT,
errno.EPROTONOSUPPORT):
continue
raise
# See ``radicale.server.serve``
assert (isinstance(exc_info.value, socket.gaierror) and
exc_info.value.errno in (


@ -1,6 +1,7 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
# Copyright © 2017-2022 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -76,13 +77,14 @@ class TestMultiFileSystem(BaseTest):
"""Verify that the hook runs when a new user is created."""
self.configure({"storage": {"hook": "mkdir %s" % os.path.join(
"collection-root", "created_by_hook")}})
self.configure({"auth": {"type": "none"}})
self.propfind("/", login="user:")
self.propfind("/created_by_hook/")
def test_hook_fail(self) -> None:
"""Verify that a request fails if the hook fails."""
"""Verify that a request succeeds even if the hook fails (no rollback is implemented)."""
self.configure({"storage": {"hook": "exit 1"}})
self.mkcalendar("/calendar.ics/", check=500)
self.mkcalendar("/calendar.ics/", check=201)
def test_item_cache_rebuild(self) -> None:
"""Delete the item cache and verify that it is rebuilt."""
@ -99,6 +101,38 @@ class TestMultiFileSystem(BaseTest):
assert answer1 == answer2
assert os.path.exists(os.path.join(cache_folder, "event1.ics"))
def test_item_cache_rebuild_subfolder(self) -> None:
"""Delete the item cache and verify that it is rebuilt."""
self.configure({"storage": {"use_cache_subfolder_for_item": "True"}})
self.mkcalendar("/calendar.ics/")
event = get_file_content("event1.ics")
path = "/calendar.ics/event1.ics"
self.put(path, event)
_, answer1 = self.get(path)
cache_folder = os.path.join(self.colpath, "collection-cache",
"calendar.ics", ".Radicale.cache", "item")
assert os.path.exists(os.path.join(cache_folder, "event1.ics"))
shutil.rmtree(cache_folder)
_, answer2 = self.get(path)
assert answer1 == answer2
assert os.path.exists(os.path.join(cache_folder, "event1.ics"))
def test_item_cache_rebuild_mtime_and_size(self) -> None:
"""Delete the item cache and verify that it is rebuilt."""
self.configure({"storage": {"use_mtime_and_size_for_item_cache": "True"}})
self.mkcalendar("/calendar.ics/")
event = get_file_content("event1.ics")
path = "/calendar.ics/event1.ics"
self.put(path, event)
_, answer1 = self.get(path)
cache_folder = os.path.join(self.colpath, "collection-root",
"calendar.ics", ".Radicale.cache", "item")
assert os.path.exists(os.path.join(cache_folder, "event1.ics"))
shutil.rmtree(cache_folder)
_, answer2 = self.get(path)
assert answer1 == answer2
assert os.path.exists(os.path.join(cache_folder, "event1.ics"))
def test_put_whole_calendar_uids_used_as_file_names(self) -> None:
"""Test if UIDs are used as file names."""
_TestBaseRequests.test_put_whole_calendar(


@ -15,9 +15,9 @@
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import contextlib
import sys
from typing import (Any, Callable, ContextManager, Iterator, List, Mapping,
MutableMapping, Sequence, Tuple, TypeVar, Union)
MutableMapping, Protocol, Sequence, Tuple, TypeVar, Union,
runtime_checkable)
WSGIResponseHeaders = Union[Mapping[str, str], Sequence[Tuple[str, str]]]
WSGIResponse = Tuple[int, WSGIResponseHeaders, Union[None, str, bytes]]
@ -41,20 +41,17 @@ def contextmanager(func: Callable[..., Iterator[_T]]
return result
if sys.version_info >= (3, 8):
from typing import Protocol, runtime_checkable
@runtime_checkable
class InputStream(Protocol):
def read(self, size: int = ...) -> bytes: ...
@runtime_checkable
class InputStream(Protocol):
def read(self, size: int = ...) -> bytes: ...
@runtime_checkable
class ErrorStream(Protocol):
def flush(self) -> object: ...
def write(self, s: str) -> object: ...
else:
ErrorStream = Any
InputStream = Any
@runtime_checkable
class ErrorStream(Protocol):
def flush(self) -> object: ...
def write(self, s: str) -> object: ...
from radicale import item, storage # noqa:E402 isort:skip


@ -2,6 +2,7 @@
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -16,20 +17,29 @@
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import ssl
import sys
from importlib import import_module
from typing import Callable, Sequence, Type, TypeVar, Union
from importlib import import_module, metadata
from typing import Callable, Sequence, Tuple, Type, TypeVar, Union
from radicale import config
from radicale.log import logger
if sys.version_info < (3, 8):
import pkg_resources
else:
from importlib import metadata
_T_co = TypeVar("_T_co", covariant=True)
RADICALE_MODULES: Sequence[str] = ("radicale", "vobject", "passlib", "defusedxml",
"dateutil",
"bcrypt",
"pika",
"ldap",
"ldap3",
"pam")
# IPv4 (host, port) and IPv6 (host, port, flowinfo, scopeid)
ADDRESS_TYPE = Union[Tuple[Union[str, bytes, bytearray], int],
Tuple[str, int, int, int]]
def load_plugin(internal_types: Sequence[str], module_name: str,
class_name: str, base_class: Type[_T_co],
@ -52,6 +62,155 @@ def load_plugin(internal_types: Sequence[str], module_name: str,
def package_version(name):
if sys.version_info < (3, 8):
return pkg_resources.get_distribution(name).version
return metadata.version(name)
def packages_version():
versions = []
versions.append("python=%s.%s.%s" % (sys.version_info[0], sys.version_info[1], sys.version_info[2]))
for pkg in RADICALE_MODULES:
try:
versions.append("%s=%s" % (pkg, package_version(pkg)))
except Exception:
try:
versions.append("%s=%s" % (pkg, package_version("python-" + pkg)))
except Exception:
versions.append("%s=%s" % (pkg, "n/a"))
return " ".join(versions)
def format_address(address: ADDRESS_TYPE) -> str:
host, port, *_ = address
if not isinstance(host, str):
raise NotImplementedError("Unsupported address format: %r" %
(address,))
if host.find(":") == -1:
return "%s:%d" % (host, port)
else:
return "[%s]:%d" % (host, port)
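`format_address` accepts both tuple shapes noted in the `ADDRESS_TYPE` comment. A self-contained copy of the same logic, to show the two accepted forms side by side:

```python
from typing import Tuple, Union

# Same logic as format_address above: IPv4 is (host, port),
# IPv6 is (host, port, flowinfo, scopeid); IPv6 literals get brackets.
ADDRESS_TYPE = Union[Tuple[Union[str, bytes, bytearray], int],
                     Tuple[str, int, int, int]]

def format_address(address: ADDRESS_TYPE) -> str:
    host, port, *_ = address
    if not isinstance(host, str):
        raise NotImplementedError("Unsupported address format: %r" % (address,))
    if ":" not in host:
        return "%s:%d" % (host, port)
    return "[%s]:%d" % (host, port)

print(format_address(("127.0.0.1", 5232)))  # → 127.0.0.1:5232
print(format_address(("::1", 5232, 0, 0)))  # → [::1]:5232
```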
def ssl_context_options_by_protocol(protocol: str, ssl_context_options):
logger.debug("SSL protocol string: '%s' and current SSL context options: '0x%x'", protocol, ssl_context_options)
# disable any protocol by default
logger.debug("SSL context options, disable ALL by default")
ssl_context_options |= ssl.OP_NO_SSLv2
ssl_context_options |= ssl.OP_NO_SSLv3
ssl_context_options |= ssl.OP_NO_TLSv1
ssl_context_options |= ssl.OP_NO_TLSv1_1
ssl_context_options |= ssl.OP_NO_TLSv1_2
ssl_context_options |= ssl.OP_NO_TLSv1_3
logger.debug("SSL cleared SSL context options: '0x%x'", ssl_context_options)
for entry in protocol.split():
entry = entry.strip('+') # remove trailing '+'
if entry == "ALL":
logger.debug("SSL context options, enable ALL (some maybe not supported by underlying OpenSSL, SSLv2 not enabled at all)")
ssl_context_options &= ~ssl.OP_NO_SSLv3
ssl_context_options &= ~ssl.OP_NO_TLSv1
ssl_context_options &= ~ssl.OP_NO_TLSv1_1
ssl_context_options &= ~ssl.OP_NO_TLSv1_2
ssl_context_options &= ~ssl.OP_NO_TLSv1_3
elif entry == "SSLv2":
logger.warning("SSL context options, ignore SSLv2 (totally insecure)")
elif entry == "SSLv3":
ssl_context_options &= ~ssl.OP_NO_SSLv3
logger.debug("SSL context options, enable SSLv3 (maybe not supported by underlying OpenSSL)")
elif entry == "TLSv1":
ssl_context_options &= ~ssl.OP_NO_TLSv1
logger.debug("SSL context options, enable TLSv1 (maybe not supported by underlying OpenSSL)")
elif entry == "TLSv1.1":
logger.debug("SSL context options, enable TLSv1.1 (maybe not supported by underlying OpenSSL)")
ssl_context_options &= ~ssl.OP_NO_TLSv1_1
elif entry == "TLSv1.2":
logger.debug("SSL context options, enable TLSv1.2")
ssl_context_options &= ~ssl.OP_NO_TLSv1_2
elif entry == "TLSv1.3":
logger.debug("SSL context options, enable TLSv1.3")
ssl_context_options &= ~ssl.OP_NO_TLSv1_3
elif entry == "-ALL":
logger.debug("SSL context options, disable ALL")
ssl_context_options |= ssl.OP_NO_SSLv2
ssl_context_options |= ssl.OP_NO_SSLv3
ssl_context_options |= ssl.OP_NO_TLSv1
ssl_context_options |= ssl.OP_NO_TLSv1_1
ssl_context_options |= ssl.OP_NO_TLSv1_2
ssl_context_options |= ssl.OP_NO_TLSv1_3
elif entry == "-SSLv2":
ssl_context_options |= ssl.OP_NO_SSLv2
logger.debug("SSL context options, disable SSLv2")
elif entry == "-SSLv3":
ssl_context_options |= ssl.OP_NO_SSLv3
logger.debug("SSL context options, disable SSLv3")
elif entry == "-TLSv1":
logger.debug("SSL context options, disable TLSv1")
ssl_context_options |= ssl.OP_NO_TLSv1
elif entry == "-TLSv1.1":
logger.debug("SSL context options, disable TLSv1.1")
ssl_context_options |= ssl.OP_NO_TLSv1_1
elif entry == "-TLSv1.2":
logger.debug("SSL context options, disable TLSv1.2")
ssl_context_options |= ssl.OP_NO_TLSv1_2
elif entry == "-TLSv1.3":
logger.debug("SSL context options, disable TLSv1.3")
ssl_context_options |= ssl.OP_NO_TLSv1_3
else:
raise RuntimeError("SSL protocol config contains unsupported entry '%s'" % (entry))
logger.debug("SSL resulting context options: '0x%x'", ssl_context_options)
return ssl_context_options
def ssl_context_minimum_version_by_options(ssl_context_options):
logger.debug("SSL calculate minimum version by context options: '0x%x'", ssl_context_options)
ssl_context_minimum_version = ssl.TLSVersion.SSLv3 # default
if ((ssl_context_options & ssl.OP_NO_SSLv3) and (ssl_context_minimum_version == ssl.TLSVersion.SSLv3)):
ssl_context_minimum_version = ssl.TLSVersion.TLSv1
if ((ssl_context_options & ssl.OP_NO_TLSv1) and (ssl_context_minimum_version == ssl.TLSVersion.TLSv1)):
ssl_context_minimum_version = ssl.TLSVersion.TLSv1_1
if ((ssl_context_options & ssl.OP_NO_TLSv1_1) and (ssl_context_minimum_version == ssl.TLSVersion.TLSv1_1)):
ssl_context_minimum_version = ssl.TLSVersion.TLSv1_2
if ((ssl_context_options & ssl.OP_NO_TLSv1_2) and (ssl_context_minimum_version == ssl.TLSVersion.TLSv1_2)):
ssl_context_minimum_version = ssl.TLSVersion.TLSv1_3
if ((ssl_context_options & ssl.OP_NO_TLSv1_3) and (ssl_context_minimum_version == ssl.TLSVersion.TLSv1_3)):
ssl_context_minimum_version = 0 # all disabled
logger.debug("SSL context options: '0x%x' results in minimum version: %s", ssl_context_options, ssl_context_minimum_version)
return ssl_context_minimum_version
def ssl_context_maximum_version_by_options(ssl_context_options):
logger.debug("SSL calculate maximum version by context options: '0x%x'", ssl_context_options)
ssl_context_maximum_version = ssl.TLSVersion.TLSv1_3 # default
if ((ssl_context_options & ssl.OP_NO_TLSv1_3) and (ssl_context_maximum_version == ssl.TLSVersion.TLSv1_3)):
ssl_context_maximum_version = ssl.TLSVersion.TLSv1_2
if ((ssl_context_options & ssl.OP_NO_TLSv1_2) and (ssl_context_maximum_version == ssl.TLSVersion.TLSv1_2)):
ssl_context_maximum_version = ssl.TLSVersion.TLSv1_1
if ((ssl_context_options & ssl.OP_NO_TLSv1_1) and (ssl_context_maximum_version == ssl.TLSVersion.TLSv1_1)):
ssl_context_maximum_version = ssl.TLSVersion.TLSv1
if ((ssl_context_options & ssl.OP_NO_TLSv1) and (ssl_context_maximum_version == ssl.TLSVersion.TLSv1)):
ssl_context_maximum_version = ssl.TLSVersion.SSLv3
if ((ssl_context_options & ssl.OP_NO_SSLv3) and (ssl_context_maximum_version == ssl.TLSVersion.SSLv3)):
ssl_context_maximum_version = 0
logger.debug("SSL context options: '0x%x' results in maximum version: %s", ssl_context_options, ssl_context_maximum_version)
return ssl_context_maximum_version
def ssl_get_protocols(context):
protocols = []
if not (context.options & ssl.OP_NO_SSLv3):
if (context.minimum_version < ssl.TLSVersion.TLSv1):
protocols.append("SSLv3")
if not (context.options & ssl.OP_NO_TLSv1):
if (context.minimum_version < ssl.TLSVersion.TLSv1_1) and (context.maximum_version >= ssl.TLSVersion.TLSv1):
protocols.append("TLSv1")
if not (context.options & ssl.OP_NO_TLSv1_1):
if (context.minimum_version < ssl.TLSVersion.TLSv1_2) and (context.maximum_version >= ssl.TLSVersion.TLSv1_1):
protocols.append("TLSv1.1")
if not (context.options & ssl.OP_NO_TLSv1_2):
if (context.minimum_version <= ssl.TLSVersion.TLSv1_2) and (context.maximum_version >= ssl.TLSVersion.TLSv1_2):
protocols.append("TLSv1.2")
if not (context.options & ssl.OP_NO_TLSv1_3):
if (context.minimum_version <= ssl.TLSVersion.TLSv1_3) and (context.maximum_version >= ssl.TLSVersion.TLSv1_3):
protocols.append("TLSv1.3")
return protocols
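The protocol-string convention implemented above ("ALL" plus additive entries and "-"-prefixed subtractive entries, processed left to right) can be summarized with a simplified parser. This is a sketch of the convention only: the SSLv2 special case and the real `OP_NO_*` bit manipulation are omitted.

```python
# Simplified model of ssl_context_options_by_protocol's parsing
# (sketch; SSLv2 handling and OP_NO_* bit twiddling omitted).
def parse_protocols(protocol: str) -> set:
    all_protocols = {"SSLv3", "TLSv1", "TLSv1.1", "TLSv1.2", "TLSv1.3"}
    enabled: set = set()
    for entry in protocol.split():
        entry = entry.strip("+")  # allow a leading '+' for additive entries
        if entry == "ALL":
            enabled |= all_protocols
        elif entry == "-ALL":
            enabled.clear()
        elif entry.startswith("-") and entry[1:] in all_protocols:
            enabled.discard(entry[1:])
        elif entry in all_protocols:
            enabled.add(entry)
        else:
            raise RuntimeError(
                "SSL protocol config contains unsupported entry '%s'" % entry)
    return enabled

print(sorted(parse_protocols("ALL -SSLv3 -TLSv1 -TLSv1.1")))
# → ['TLSv1.2', 'TLSv1.3']
```

The minimum/maximum-version helpers then translate the surviving set into `ssl.TLSVersion` bounds for the context.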


@ -39,6 +39,17 @@ main{
color: #484848;
}
#loginscene .infcloudlink{
margin: 0;
width: 100%;
text-align: center;
color: #484848;
}
#loginscene .infcloudlink-hidden{
visibility: hidden;
}
#loginscene input{
}


@ -1,6 +1,8 @@
/**
* This file is part of Radicale Server - Calendar Server
* Copyright © 2017-2024 Unrud <unrud@outlook.com>
* Copyright © 2023-2024 Matthew Hana <matthew.hana@gmail.com>
* Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@ -394,7 +396,7 @@ function create_edit_collection(user, password, collection, create, callback) {
calendar_color = escape_xml(collection.color + (collection.color ? "ff" : ""));
calendar_description = escape_xml(collection.description);
resourcetype = '<CS:subscribed />';
calendar_source = collection.source;
calendar_source = escape_xml(collection.source);
} else {
calendar_color = escape_xml(collection.color + (collection.color ? "ff" : ""));
calendar_description = escape_xml(collection.description);
@ -873,8 +875,7 @@ function UploadCollectionScene(user, password, collection) {
upload_btn.onclick = upload_start;
uploadfile_form.onchange = onfileschange;
let href = random_uuid();
href_form.value = href;
href_form.value = "";
/** @type {?number} */ let scene_index = null;
/** @type {?XMLHttpRequest} */ let upload_req = null;
@ -926,7 +927,7 @@ function UploadCollectionScene(user, password, collection) {
if(files.length > 1 || href.length == 0){
href = random_uuid();
}
let upload_href = collection.href + "/" + href + "/";
let upload_href = collection.href + href + "/";
upload_req = upload_collection(user, password, upload_href, file, function(result) {
upload_req = null;
results.push(result);
@ -992,10 +993,12 @@ function UploadCollectionScene(user, password, collection) {
hreflimitmsg_html.classList.remove("hidden");
href_form.classList.add("hidden");
href_label.classList.add("hidden");
href_form.value = random_uuid(); // dummy, will be replaced on upload
}else{
hreflimitmsg_html.classList.add("hidden");
href_form.classList.remove("hidden");
href_label.classList.remove("hidden");
href_form.value = files[0].name.replace(/\.(ics|vcf)$/, '');
}
return false;
}
@ -1004,6 +1007,12 @@ function UploadCollectionScene(user, password, collection) {
scene_index = scene_stack.length - 1;
html_scene.classList.remove("hidden");
close_btn.onclick = onclose;
if(error){
error_form.textContent = "Error: " + error;
error_form.classList.remove("hidden");
}else{
error_form.classList.add("hidden");
}
};
this.hide = function() {
@ -1212,7 +1221,7 @@ function CreateEditCollectionScene(user, password, collection) {
alert("You must enter a valid HREF");
return false;
}
href = collection.href + "/" + newhreftxtvalue + "/";
href = collection.href + newhreftxtvalue + "/";
}
displayname = displayname_form.value;
description = description_form.value;
@ -1316,6 +1325,12 @@ function CreateEditCollectionScene(user, password, collection) {
fill_form();
submit_btn.onclick = onsubmit;
cancel_btn.onclick = oncancel;
if(error){
error_form.textContent = "Error: " + error;
error_form.classList.remove("hidden");
}else{
error_form.classList.add("hidden");
}
};
this.hide = function() {
read_form();
@ -1347,8 +1362,10 @@ function cleanHREFinput(a) {
href_form = a.target;
}
let currentTxtVal = href_form.value.trim().toLowerCase();
//Clean the HREF to remove non lowercase letters and dashes
currentTxtVal = currentTxtVal.replace(/(?![0-9a-z\-\_])./g, '');
//Clean the HREF to remove not permitted chars
currentTxtVal = currentTxtVal.replace(/(?![0-9a-z\-\_\.])./g, '');
//Clean the HREF to remove leading . (would result in hidden directory)
currentTxtVal = currentTxtVal.replace(/^\./, '');
href_form.value = currentTxtVal;
}
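The HREF cleaning above keeps only `[0-9a-z-_.]` and then strips a single leading dot so the entered HREF cannot name a hidden directory. The same two substitutions translate directly to Python (a sketch of the JavaScript logic, not server-side code):

```python
import re

# Python equivalent of cleanHREFinput's two substitutions (sketch):
# keep only [0-9a-z-_.], then drop one leading "." to avoid a
# hidden-directory name.
def clean_href(value: str) -> str:
    value = value.strip().lower()
    value = re.sub(r"(?![0-9a-z\-_.]).", "", value)
    return re.sub(r"^\.", "", value)

print(clean_href("My Calendar.ics"))  # → mycalendar.ics
print(clean_href(".hidden"))          # → hidden
```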


@ -1,4 +1,10 @@
<!DOCTYPE html>
<!--
* Copyright © 2018-2020 Unrud <unrud@outlook.com>
* Copyright © 2023-2023 Henning <github@henning-ullrich.de>
* Copyright © 2023-2024 Matthew Hana <matthew.hana@gmail.com>
* Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
-->
<html lang="en">
<head>
<meta charset="utf-8">
@ -27,8 +33,15 @@
</section>
<section id="loginscene" class="container hidden">
<div class="infcloudlink-hidden">
<form action="infcloud/" method="get" target="_blank">
<button class="blue" type="submit">Collection content<br>(InfCloud web client)</button>
</form>
</div>
<div class="logocontainer">
<img src="css/logo.svg" alt="Radicale">
<br>
Collection management
</div>
<h1>Sign in</h1>
<br>
@ -116,6 +129,8 @@
<button type="submit" class="green" data-name="submit">Save</button>
<button type="button" class="red" data-name="cancel">Cancel</button>
</form>
<span class="error hidden" data-name="error"></span>
<br>
</section>
<section id="createcollectionscene" class="container hidden">
@ -149,6 +164,8 @@
<button type="submit" class="green" data-name="submit">Create</button>
<button type="button" class="red" data-name="cancel">Cancel</button>
</form>
<span class="error hidden" data-name="error"></span>
<br>
</section>
<section id="uploadcollectionscene" class="container hidden">
@ -172,6 +189,8 @@
<button type="submit" class="green" data-name="submit">Upload</button>
<button type="button" class="red" data-name="close">Close</button>
</form>
<span class="error hidden" data-name="error"></span>
<br>
</section>
<section id="deletecollectionscene" class="container hidden">

rights — 24 changed lines

@ -1,5 +1,29 @@
# -*- mode: conf -*-
# vim:ft=cfg
# Allow all rights for the Administrator
#[root]
#user: Administrator
#collection: .*
#permissions: RW
# Allow reading principal collection (same as username)
#[principal]
#user: .+
#collection: {user}
#permissions: R
# Allow reading and writing private collection (same as username)
#[private]
#user: .+
#collection: {user}/private/
#permissions: RW
# Allow reading calendars and address books that are direct
# children of the principal collection for other users
#[calendarsReader]
#user: .+
#collection: {user}/[^/]+
#permissions: r
# Rights management file for Radicale - A simple calendar server
#


@ -1,57 +1,6 @@
[tool:pytest]
addopts = --typeguard-packages=radicale
[tox:tox]
[testenv]
extras = test
deps =
flake8
isort
# mypy installation fails with pypy<3.9
mypy; implementation_name!='pypy' or python_version>='3.9'
types-setuptools
pytest-cov
commands =
flake8 .
isort --check --diff .
# Run mypy if it's installed
python -c 'import importlib.util, subprocess, sys; \
importlib.util.find_spec("mypy") \
and sys.exit(subprocess.run(["mypy", "."]).returncode) \
or print("Skipped: mypy is not installed")'
pytest -r s --cov --cov-report=term --cov-report=xml .
[tool:isort]
known_standard_library = _dummy_thread,_thread,abc,aifc,argparse,array,ast,asynchat,asyncio,asyncore,atexit,audioop,base64,bdb,binascii,binhex,bisect,builtins,bz2,cProfile,calendar,cgi,cgitb,chunk,cmath,cmd,code,codecs,codeop,collections,colorsys,compileall,concurrent,configparser,contextlib,contextvars,copy,copyreg,crypt,csv,ctypes,curses,dataclasses,datetime,dbm,decimal,difflib,dis,distutils,doctest,dummy_threading,email,encodings,ensurepip,enum,errno,faulthandler,fcntl,filecmp,fileinput,fnmatch,formatter,fpectl,fractions,ftplib,functools,gc,getopt,getpass,gettext,glob,grp,gzip,hashlib,heapq,hmac,html,http,imaplib,imghdr,imp,importlib,inspect,io,ipaddress,itertools,json,keyword,lib2to3,linecache,locale,logging,lzma,macpath,mailbox,mailcap,marshal,math,mimetypes,mmap,modulefinder,msilib,msvcrt,multiprocessing,netrc,nis,nntplib,ntpath,numbers,operator,optparse,os,ossaudiodev,parser,pathlib,pdb,pickle,pickletools,pipes,pkgutil,platform,plistlib,poplib,posix,posixpath,pprint,profile,pstats,pty,pwd,py_compile,pyclbr,pydoc,queue,quopri,random,re,readline,reprlib,resource,rlcompleter,runpy,sched,secrets,select,selectors,shelve,shlex,shutil,signal,site,smtpd,smtplib,sndhdr,socket,socketserver,spwd,sqlite3,sre,sre_compile,sre_constants,sre_parse,ssl,stat,statistics,string,stringprep,struct,subprocess,sunau,symbol,symtable,sys,sysconfig,syslog,tabnanny,tarfile,telnetlib,tempfile,termios,test,textwrap,threading,time,timeit,tkinter,token,tokenize,trace,traceback,tracemalloc,tty,turtle,turtledemo,types,typing,unicodedata,unittest,urllib,uu,uuid,venv,warnings,wave,weakref,webbrowser,winreg,winsound,wsgiref,xdrlib,xml,xmlrpc,zipapp,zipfile,zipimport,zlib
known_third_party = defusedxml,passlib,pkg_resources,pytest,vobject
[flake8]
# Only enable default tests (https://github.com/PyCQA/flake8/issues/790#issuecomment-812823398)
# DNE: DOES-NOT-EXIST
select = E,F,W,C90,DNE000
ignore = E121,E123,E126,E226,E24,E704,W503,W504,DNE000,E501
ignore = E121,E123,E126,E226,E24,E704,W503,W504,DNE000,E501,E261
extend-exclude = build
[mypy]
ignore_missing_imports = True
show_error_codes = True
exclude = (^|/)build($|/)
[coverage:run]
branch = True
source = radicale
omit = tests/*,*/tests/*
[coverage:report]
# Regexes for lines to exclude from consideration
exclude_lines =
# Have to re-enable the standard pragma
pragma: no cover
# Don't complain if tests don't hit defensive assertion code:
raise AssertionError
raise NotImplementedError
# Don't complain if non-runnable code isn't run:
if __name__ == .__main__.:

setup.cfg.legacy (new file) — 62 lines

@ -0,0 +1,62 @@
[tool:pytest]
[tox:tox]
min_version = 4.0
envlist = py, flake8, isort, mypy
[testenv]
extras =
test
deps =
pytest
pytest-cov
commands = pytest -r s --cov --cov-report=term --cov-report=xml .
[testenv:flake8]
deps = flake8==7.1.0
commands = flake8 .
skip_install = True
[testenv:isort]
deps = isort==5.13.2
commands = isort --check --diff .
skip_install = True
[testenv:mypy]
deps = mypy==1.11.0
commands = mypy --install-types --non-interactive .
skip_install = True
[tool:isort]
known_standard_library = _dummy_thread,_thread,abc,aifc,argparse,array,ast,asynchat,asyncio,asyncore,atexit,audioop,base64,bdb,binascii,binhex,bisect,builtins,bz2,cProfile,calendar,cgi,cgitb,chunk,cmath,cmd,code,codecs,codeop,collections,colorsys,compileall,concurrent,configparser,contextlib,contextvars,copy,copyreg,crypt,csv,ctypes,curses,dataclasses,datetime,dbm,decimal,difflib,dis,distutils,doctest,dummy_threading,email,encodings,ensurepip,enum,errno,faulthandler,fcntl,filecmp,fileinput,fnmatch,formatter,fpectl,fractions,ftplib,functools,gc,getopt,getpass,gettext,glob,grp,gzip,hashlib,heapq,hmac,html,http,imaplib,imghdr,imp,importlib,inspect,io,ipaddress,itertools,json,keyword,lib2to3,linecache,locale,logging,lzma,macpath,mailbox,mailcap,marshal,math,mimetypes,mmap,modulefinder,msilib,msvcrt,multiprocessing,netrc,nis,nntplib,ntpath,numbers,operator,optparse,os,ossaudiodev,parser,pathlib,pdb,pickle,pickletools,pipes,pkgutil,platform,plistlib,poplib,posix,posixpath,pprint,profile,pstats,pty,pwd,py_compile,pyclbr,pydoc,queue,quopri,random,re,readline,reprlib,resource,rlcompleter,runpy,sched,secrets,select,selectors,shelve,shlex,shutil,signal,site,smtpd,smtplib,sndhdr,socket,socketserver,spwd,sqlite3,sre,sre_compile,sre_constants,sre_parse,ssl,stat,statistics,string,stringprep,struct,subprocess,sunau,symbol,symtable,sys,sysconfig,syslog,tabnanny,tarfile,telnetlib,tempfile,termios,test,textwrap,threading,time,timeit,tkinter,token,tokenize,trace,traceback,tracemalloc,tty,turtle,turtledemo,types,typing,unicodedata,unittest,urllib,uu,uuid,venv,warnings,wave,weakref,webbrowser,winreg,winsound,wsgiref,xdrlib,xml,xmlrpc,zipapp,zipfile,zipimport,zlib
known_third_party = defusedxml,passlib,pkg_resources,pytest,vobject
[flake8]
# Only enable default tests (https://github.com/PyCQA/flake8/issues/790#issuecomment-812823398)
# DNE: DOES-NOT-EXIST
select = E,F,W,C90,DNE000
ignore = E121,E123,E126,E226,E24,E704,W503,W504,DNE000,E501
extend-exclude = build
[mypy]
ignore_missing_imports = True
show_error_codes = True
exclude = (^|/)build($|/)
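The `exclude` value above is a regular expression matched against file paths, so it skips any `build` path component without also skipping names that merely contain the word. A quick sketch (not part of the repo) of which paths it filters:

```python
import re

# The [mypy] exclude pattern above: matches a "build" path component,
# either at the start of the path or after a "/", ending the path or
# followed by another "/".
pattern = re.compile(r"(^|/)build($|/)")

assert pattern.search("build/lib/radicale.py")   # top-level build dir: excluded
assert pattern.search("src/build/generated.py")  # nested build dir: excluded
assert not pattern.search("rebuild/notes.py")    # kept: "rebuild" is not "build"
assert not pattern.search("builder/util.py")     # kept: "builder" is not "build"
```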
[coverage:run]
branch = True
source = radicale
omit = tests/*,*/tests/*
[coverage:report]
# Regexes for lines to exclude from consideration
exclude_lines =
# Have to re-enable the standard pragma
pragma: no cover
# Don't complain if tests don't hit defensive assertion code:
raise AssertionError
raise NotImplementedError
# Don't complain if non-runnable code isn't run:
if __name__ == .__main__.:
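The `exclude_lines` regexes above are matched against source lines when the coverage report is generated. A hypothetical module (not Radicale code) illustrating which lines they would drop from the report:

```python
# Hypothetical example (not from Radicale) of lines the [coverage:report]
# exclude_lines patterns above would exclude from coverage.

class BaseStorage:
    def discover(self, path):
        raise NotImplementedError          # excluded: "raise NotImplementedError"


def decode(data):
    if not isinstance(data, bytes):        # defensive check
        raise AssertionError("bytes expected")  # excluded: "raise AssertionError"
    return data.decode("utf-8")


if __name__ == "__main__":                 # excluded: "if __name__ == .__main__.:"
    print(decode(b"hello"))
```

Lines matching these patterns are not counted as missed even when tests never execute them, which keeps defensive assertions from dragging down the coverage percentage.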


@@ -1,6 +1,7 @@
 # This file is part of Radicale - CalDAV and CardDAV server
 # Copyright © 2009-2017 Guillaume Ayoub
 # Copyright © 2017-2018 Unrud <unrud@outlook.com>
+# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
 #
 # This library is free software: you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
@@ -19,7 +20,7 @@ from setuptools import find_packages, setup
 
 # When the version is updated, a new section in the CHANGELOG.md file must be
 # added too.
-VERSION = "3.dev"
+VERSION = "3.5.1.dev"
 
 with open("README.md", encoding="utf-8") as f:
     long_description = f.read()
@@ -36,11 +37,12 @@ web_files = ["web/internal_data/css/icon.png",
              "web/internal_data/index.html"]
 
 install_requires = ["defusedxml", "passlib", "vobject>=0.9.6",
                     "python-dateutil>=2.7.3",
                     "pika>=1.1.0",
-                    "setuptools; python_version<'3.9'"]
+                    "requests",
+                    ]
 bcrypt_requires = ["bcrypt"]
-test_requires = ["pytest>=7", "typeguard<4.3", "waitress", *bcrypt_requires]
+ldap_requires = ["ldap3"]
+test_requires = ["pytest>=7", "waitress", *bcrypt_requires]
 
 setup(
     name="Radicale",
@@ -58,9 +60,9 @@ setup(
     package_data={"radicale": [*web_files, "py.typed"]},
     entry_points={"console_scripts": ["radicale = radicale.__main__:run"]},
     install_requires=install_requires,
-    extras_require={"test": test_requires, "bcrypt": bcrypt_requires},
+    extras_require={"test": test_requires, "bcrypt": bcrypt_requires, "ldap": ldap_requires},
     keywords=["calendar", "addressbook", "CalDAV", "CardDAV"],
-    python_requires=">=3.8.0",
+    python_requires=">=3.9.0",
     classifiers=[
         "Development Status :: 5 - Production/Stable",
         "Environment :: Console",
@@ -70,11 +72,11 @@ setup(
         "License :: OSI Approved :: GNU General Public License (GPL)",
         "Operating System :: OS Independent",
         "Programming Language :: Python :: 3",
-        "Programming Language :: Python :: 3.8",
         "Programming Language :: Python :: 3.9",
         "Programming Language :: Python :: 3.10",
         "Programming Language :: Python :: 3.11",
         "Programming Language :: Python :: 3.12",
+        "Programming Language :: Python :: 3.13",
         "Programming Language :: Python :: Implementation :: CPython",
         "Programming Language :: Python :: Implementation :: PyPy",
         "Topic :: Office/Business :: Groupware"])