Compare commits


No commits in common. "master" and "v3.2.2" have entirely different histories.

82 changed files with 889 additions and 4983 deletions

.github/FUNDING.yml

@@ -1 +0,0 @@
custom: https://github.com/Kozea/Radicale/wiki/Donations


@@ -6,8 +6,10 @@ jobs:
strategy:
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: ['3.9', '3.10', '3.11', '3.12.3', '3.13.0', pypy-3.9]
python-version: ['3.8', '3.9', '3.10', '3.11', '3.12', pypy-3.8, pypy-3.9]
exclude:
- os: windows-latest
python-version: pypy-3.8
- os: windows-latest
python-version: pypy-3.9
runs-on: ${{ matrix.os }}
@@ -19,7 +21,7 @@ jobs:
- name: Install Test dependencies
run: pip install tox
- name: Test
run: tox -e py
run: tox
- name: Install Coveralls
if: github.event_name == 'push'
run: pip install coveralls
@@ -44,15 +46,3 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: coveralls --service=github --finish
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Install tox
run: pip install tox
- name: Lint
run: tox -e flake8,mypy,isort
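
For reference, these CI jobs can be reproduced locally with the same tooling (assuming a repository checkout and a supported Python): pip install tox, then tox -e py (master) or plain tox (v3.2.2) for the test suite, and tox -e flake8,mypy,isort for master's lint environments.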


@@ -1,106 +1,6 @@
# Changelog
## 3.5.1.dev
* Fix: auth/htpasswd related to detection and use of bcrypt
* Add: option [auth] ldap_ignore_attribute_create_modify_timestamp for support of Authentik LDAP server
* Extend: [storage] hook supports now placeholder for "cwd" and "path" (and catches unsupported placeholders)
* Fix: location of lock file for in case of dedicated cache folder is activated
* Extend: log and create base folders if not existing during startup
## 3.5.0
* Add: option [auth] type oauth2 by code migration from https://gitlab.mim-libre.fr/alphabet/radicale_oauth/-/blob/dev/oauth2/
* Fix: catch OS errors on PUT MKCOL MKCALENDAR MOVE PROPPATCH (insufficient storage, access denied, internal server error)
* Test: skip bcrypt related tests if module is missing
* Improve: relax mtime check on storage filesystem, change test file location to "collection-root" directory
* Add: option [auth] type pam by code migration from v1, add new option pam_serivce
* Cosmetics: extend list of used modules with their version on startup
* Improve: WebUI
* Add: option [server] script_name for reverse proxy base_prefix handling
* Fix: proper base_prefix stripping if running behind reverse proxy
* Review: Apache reverse proxy config example
* Add: on-the-fly link activation and default content adjustment in case of bundled InfCloud (tested with 0.13.1)
* Adjust: [auth] imap: use AUTHENTICATE PLAIN instead of LOGIN towards remote IMAP server
* Improve: log client IP on SSL error and SSL protocol+cipher if successful
* Improve: catch htpasswd hash verification errors
* Improve: add support for more bcrypt algos on autodetection, extend logging for autodetection fallback to PLAIN in case of hash length is not matching
* Add: warning in case of started standalone and not listen on loopback interface but trusting external authentication
* Adjust: Change default [auth] type from "none" to "denyall" for secure-by-default
## 3.4.1
* Add: option [auth] dovecot_connection_type / dovecot_host / dovecot_port
* Add: option [auth] type imap by code migration from https://github.com/Unrud/RadicaleIMAP/
## 3.4.0
* Add: option [auth] cache_logins/cache_successful_logins_expiry/cache_failed_logins for caching logins
* Improve: [auth] log used hash method and result on debug for htpasswd authentication
* Improve: [auth] htpasswd file now read and verified on start
* Add: option [auth] htpasswd_cache to automatically re-read the file on change (mtime or size) instead of reading it on each request
* Improve: [auth] htpasswd: module 'bcrypt' is no longer mandatory in case digest method not used in file
* Improve: [auth] successful/failed login logs now type and whether result was taken from cache
* Improve: [auth] constant execution time for failed logins independent of external backend or by htpasswd used digest method
* Drop: support for Python 3.8
* Add: option [auth] ldap_user_attribute
* Add: option [auth] ldap_groups_attribute as a more flexible replacement of removed ldap_load_groups
## 3.3.3
* Add: display mtime_ns precision of storage folder with a conditional warning if it is too low
* Improve: disable fsync during storage verification
* Improve: suppress duplicate log lines on startup
* Contrib: logwatch config and script
* Improve: log precondition result on PUT request
## 3.3.2
* Fix: debug logging in rights/from_file
* Add: option [storage] use_cache_subfolder_for_item for storing 'item' cache outside collection-root
* Fix: ignore empty RRULESET in item
* Add: option [storage] filesystem_cache_folder for defining location of cache outside collection-root
* Add: option [storage] use_cache_subfolder_for_history for storing 'history' cache outside collection-root
* Add: option [storage] use_cache_subfolder_for_synctoken for storing 'sync-token' cache outside collection-root
* Add: option [storage] folder_umask for configuration of umask (overwrite system-default)
* Fix: also remove 'item' from cache on delete
* Improve: avoid automatic cache invalidation on upgrade when the cache structure is unchanged
* Improve: log important module versions on startup
* Improve: auth.ldap config shown on startup, terminate in case no password is supplied for bind user
* Add: option [auth] uc_username for uppercase conversion (similar to existing lc_username)
* Add: option [logging] storage_cache_action_on_debug for conditional logging
* Fix: set PRODID on collection upload (instead of vobject is inserting default one)
* Add: option [storage] use_mtime_and_size_for_item_cache for changing cache lookup from SHA256 to mtime_ns + size
* Fix: buggy cache file content creation on collection upload
## 3.3.1
* Add: option [auth] type=dovecot
* Enhancement: log content in case of multiple main components error
* Fix: expand does not take timezones into account
* Fix: expand does not support overridden recurring events
* Fix: expand does not honor start and end times
* Add: option [server] protocol + ciphersuite for optional restrictions on SSL socket
* Enhancement: [storage] hook documentation, logging, error behavior (no longer throwing an exception)
## 3.3.0
* Adjustment: option [auth] htpasswd_encryption change default from "md5" to "autodetect"
* Add: option [auth] type=ldap with (group) rights management via LDAP/LDAPS
* Enhancement: permit_delete_collection can now also be controlled per collection by rights 'D' or 'd'
* Add: option [rights] permit_overwrite_collection (default=True) which can also be controlled per collection by rights 'O' or 'o'
* Fix: only expand VEVENT on REPORT request containing 'expand'
* Adjustment: switch from setup.py to pyproject.toml (but keep files for legacy packaging)
* Adjustment: 'rights' file is now read only during startup
* Cleanup: Python 3.7 leftovers
## 3.2.3
* Add: support for Python 3.13
* Fix: Using icalendar's tzinfo on created datetime to fix issue with icalendar
* Fix: typos in code
* Enhancement: Added free-busy report
* Enhancement: Added 'max_freebusy_occurrences' setting to avoid potential DOS on reports
* Enhancement: remove unexpected control codes from uploaded items
* Enhancement: add 'strip_domain' setting for username handling
* Enhancement: add option to toggle debug log of rights rules which don't match
* Drop: remove unused requirement "typeguard"
* Improve: Refactored some date parsing code
## 3.dev
## 3.2.2
* Enhancement: add support for auth.type=denyall (will be default for security reasons in upcoming releases)

File diff suppressed because it is too large


@@ -1,11 +1,11 @@
# This file is intended to be used apart from the containing source code tree.
FROM python:3-alpine AS builder
FROM python:3-alpine as builder
# Version of Radicale (e.g. v3)
ARG VERSION=master
# Optional dependencies (e.g. bcrypt or ldap)
# Optional dependencies (e.g. bcrypt)
ARG DEPENDENCIES=bcrypt
RUN apk add --no-cache --virtual gcc libffi-dev musl-dev \


@@ -1,6 +1,6 @@
FROM python:3-alpine AS builder
FROM python:3-alpine as builder
# Optional dependencies (e.g. bcrypt or ldap)
# Optional dependencies (e.g. bcrypt)
ARG DEPENDENCIES=bcrypt
COPY . /app
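
Both Dockerfiles expose the optional extras through the DEPENDENCIES build argument; an illustrative build (how the argument is consumed lies outside the shown hunks): docker build --build-arg DEPENDENCIES=bcrypt -t radicale .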

config

@@ -40,15 +40,6 @@
# TCP traffic between Radicale and a reverse proxy
#certificate_authority =
# SSL protocol, secure configuration: ALL -SSLv3 -TLSv1 -TLSv1.1
#protocol = (default)
# SSL ciphersuite, secure configuration: DHE:ECDHE:-NULL:-SHA (see also "man openssl-ciphers")
#ciphersuite = (default)
# script name to strip from URI if called by reverse proxy
#script_name = (default taken from HTTP_X_SCRIPT_NAME or SCRIPT_NAME)
[encoding]
@@ -62,83 +53,8 @@
[auth]
# Authentication method
# Value: none | htpasswd | remote_user | http_x_remote_user | dovecot | ldap | oauth2 | pam | denyall
#type = denyall
# Cache logins until expiration time
#cache_logins = false
# Expiration time for caching successful logins in seconds
#cache_successful_logins_expiry = 15
# Expiration time for caching failed logins in seconds
#cache_failed_logins_expiry = 90
# Ignore modifyTimestamp and createTimestamp attributes. Required e.g. for Authentik LDAP server
#ldap_ignore_attribute_create_modify_timestamp = false
# URI to the LDAP server
#ldap_uri = ldap://localhost
# The base DN where the user accounts have to be searched
#ldap_base = ##BASE_DN##
# The reader DN of the LDAP server
#ldap_reader_dn = CN=ldapreader,CN=Users,##BASE_DN##
# Password of the reader DN
#ldap_secret = ldapreader-secret
# Path of the file containing password of the reader DN
#ldap_secret_file = /run/secrets/ldap_password
# the attribute to read the group memberships from in the user's LDAP entry (default: not set)
#ldap_groups_attribute = memberOf
# The filter to find the DN of the user. This filter must contain a python-style placeholder for the login
#ldap_filter = (&(objectClass=person)(uid={0}))
# the attribute holding the value to be used as username after authentication
#ldap_user_attribute = cn
# Use ssl on the ldap connection
#ldap_use_ssl = False
# The certificate verification mode. NONE, OPTIONAL, default is REQUIRED
#ldap_ssl_verify_mode = REQUIRED
# The path to the CA file in pem format which is used to certificate the server certificate
#ldap_ssl_ca_file =
# Connection type for dovecot authentication (AF_UNIX|AF_INET|AF_INET6)
# Note: credentials are transmitted in cleartext
#dovecot_connection_type = AF_UNIX
# The path to the Dovecot client authentication socket (eg. /run/dovecot/auth-client on Fedora). Radicale must have read / write access to the socket.
#dovecot_socket = /var/run/dovecot/auth-client
# Host of network-exposed Dovecot socket
#dovecot_host = localhost
# Port of network-exposed Dovecot socket
#dovecot_port = 12345
# IMAP server hostname
# Syntax: address | address:port | [address]:port | imap.server.tld
#imap_host = localhost
# Secure the IMAP connection
# Value: tls | starttls | none
#imap_security = tls
# OAuth2 token endpoint URL
#oauth2_token_endpoint = <URL>
# PAM service
#pam_serivce = radicale
# PAM group user should be member of
#pam_group_membership =
# Value: none | htpasswd | remote_user | http_x_remote_user | denyall
#type = none
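
As a hedged illustration of the LDAP options removed above (all values are placeholders, not repository defaults), a minimal [auth] LDAP setup on master could look like:

[auth]
type = ldap
ldap_uri = ldap://localhost
ldap_base = dc=example,dc=com
ldap_reader_dn = CN=ldapreader,CN=Users,dc=example,dc=com
ldap_secret_file = /run/secrets/ldap_password
ldap_filter = (&(objectClass=person)(uid={0}))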
# Htpasswd filename
#htpasswd_filename = /etc/radicale/users
@@ -146,10 +62,7 @@
# Htpasswd encryption method
# Value: plain | bcrypt | md5 | sha256 | sha512 | autodetect
# bcrypt requires the installation of 'bcrypt' module.
#htpasswd_encryption = autodetect
# Enable caching of htpasswd file based on size and mtime_ns
#htpasswd_cache = False
#htpasswd_encryption = md5
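
As an illustration (the same command appears in the Apache examples later in this diff), a bcrypt-hashed user file can be created with Apache's htpasswd tool: htpasswd -B -c /etc/radicale/users testuser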
# Incorrect authentication delay (seconds)
#delay = 1
@@ -160,14 +73,11 @@
# Convert username to lowercase, must be true for case-insensitive auth providers
#lc_username = False
# Strip domain name from username
#strip_domain = False
[rights]
# Rights backend
# Value: authenticated | owner_only | owner_write | from_file
# Value: none | authenticated | owner_only | owner_write | from_file
#type = owner_only
# File for rights management from_file
@@ -176,9 +86,6 @@
# Permit delete of a collection (global)
#permit_delete_collection = True
# Permit overwrite of a collection (global)
#permit_overwrite_collection = True
[storage]
@@ -189,47 +96,14 @@
# Folder for storing local collections, created if not present
#filesystem_folder = /var/lib/radicale/collections
# Folder for storing cache of local collections, created if not present
# Note: only used when use_cache_subfolder_* options are active
# Note: can be used on multi-instance setup to cache files on local node (see below)
#filesystem_cache_folder = (filesystem_folder)
# Use subfolder 'collection-cache' for 'item' cache file structure instead of inside collection folder
# Note: can be used on multi-instance setup to cache 'item' on local node
#use_cache_subfolder_for_item = False
# Use subfolder 'collection-cache' for 'history' cache file structure instead of inside collection folder
# Note: use only on single-instance setup, will break consistency with client in multi-instance setup
#use_cache_subfolder_for_history = False
# Use subfolder 'collection-cache' for 'sync-token' cache file structure instead of inside collection folder
# Note: use only on single-instance setup, will break consistency with client in multi-instance setup
#use_cache_subfolder_for_synctoken = False
# Use last modification time (nanoseconds) and size (bytes) for 'item' cache instead of SHA256 (improves speed)
# Note: check used filesystem mtime precision before enabling
# Note: conversion is done on access, bulk conversion can be done offline using storage verification option: radicale --verify-storage
#use_mtime_and_size_for_item_cache = False
# Use configured umask for folder creation (not applicable for OS Windows)
# Useful value: 0077 | 0027 | 0007 | 0022
#folder_umask = (system default, usual 0022)
# Delete sync tokens that are older (seconds)
#max_sync_token_age = 2592000
# Skip broken item instead of triggering an exception
#skip_broken_item = True
# Command that is run after changes to storage, default is empty
# Supported placeholders:
# %(user)s: logged-in user
# %(cwd)s : current working directory
# %(path)s: full path of item
# Command will be executed with base directory defined in filesystem_folder
# For "git" check DOCUMENTATION.md for bootstrap instructions
# Example(test): echo \"user=%(user)s path=%(path)s cwd=%(cwd)s\"
# Example(git): git add -A && (git diff --cached --quiet || git commit -m "Changes by \"%(user)s\"")
# Command that is run after changes to storage
# Example: ([ -d .git ] || git init) && git add -A && (git diff --cached --quiet || git commit -m "Changes by \"%(user)s\"")
#hook =
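
To make the placeholders concrete (an illustrative expansion, not captured output): for user "alice", the test hook above might run something like echo "user=alice path=/var/lib/radicale/collections/collection-root/alice/calendar/event.ics cwd=/var/lib/radicale/collections", executed from the base directory defined by filesystem_folder, while the git example stages and commits every change under that directory with the acting user's name in the commit message.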
# Create predefined user collections
@@ -282,18 +156,12 @@
# Log response content on level=debug
#response_content_on_debug = False
# Log rights rule which doesn't match on level=debug
#rights_rule_doesnt_match_on_debug = False
# Log storage cache actions on level=debug
#storage_cache_actions_on_debug = False
[headers]
# Additional HTTP headers
#Access-Control-Allow-Origin = *
[hook]
# Hook types
@@ -302,10 +170,3 @@
#rabbitmq_endpoint =
#rabbitmq_topic =
#rabbitmq_queue_type = classic
[reporting]
# When returning a free-busy report, limit the number of returned
# occurrences per event to prevent DOS attacks.
#max_freebusy_occurrence = 10000


@@ -4,7 +4,6 @@
## Apache acting as reverse proxy and forward requests via ProxyPass to a running "radicale" server
# SELinux WARNING: To use this correctly, you will need to set:
# setsebool -P httpd_can_network_connect=1
# URI prefix: /radicale
#Define RADICALE_SERVER_REVERSE_PROXY
@@ -12,12 +11,11 @@
# MAY CONFLICT with other WSGI servers on the same system -> if so, use it inside a VirtualHost
# SELinux WARNING: To use this correctly, you will need to set:
# setsebool -P httpd_can_read_write_radicale=1
# URI prefix: /radicale
#Define RADICALE_SERVER_WSGI
### Extra options
## Apache starting a dedicated VHOST with SSL without "/radicale" prefix in URI on port 8443
## Apache starting a dedicated VHOST with SSL
#Define RADICALE_SERVER_VHOST_SSL
@@ -29,13 +27,8 @@
#Define RADICALE_ENFORCE_SSL
### enable authentication by web server (config: [auth] type = http_x_remote_user)
#Define RADICALE_SERVER_USER_AUTHENTICATION
### Particular configuration EXAMPLES, adjust/extend/override to your needs
##########################
### default host
##########################
@@ -44,56 +37,33 @@
## RADICALE_SERVER_REVERSE_PROXY
<IfDefine RADICALE_SERVER_REVERSE_PROXY>
RewriteEngine On
RewriteRule ^/radicale$ /radicale/ [R,L]
RewriteCond %{REQUEST_METHOD} GET
RewriteRule ^/radicale/$ /radicale/.web/ [R,L]
<LocationMatch "^/radicale/\.web.*>
# Internal WebUI does not need authentication at all
<Location /radicale>
RequestHeader set X-Script-Name /radicale
RequestHeader set X-Forwarded-Port "%{SERVER_PORT}s"
RequestHeader set X-Forwarded-Proto expr=%{REQUEST_SCHEME}
RequestHeader unset X-Forwarded-Proto
<If "%{HTTPS} =~ /on/">
RequestHeader set X-Forwarded-Proto "https"
</If>
ProxyPass http://localhost:5232/ retry=0
ProxyPassReverse http://localhost:5232/
## User authentication handled by "radicale"
Require local
<IfDefine RADICALE_PERMIT_PUBLIC_ACCESS>
Require all granted
</IfDefine>
</LocationMatch>
<LocationMatch "^/radicale(?!/\.web)">
RequestHeader set X-Script-Name /radicale
RequestHeader set X-Forwarded-Port "%{SERVER_PORT}s"
RequestHeader set X-Forwarded-Proto expr=%{REQUEST_SCHEME}
ProxyPass http://localhost:5232/ retry=0
ProxyPassReverse http://localhost:5232/
<IfDefine !RADICALE_SERVER_USER_AUTHENTICATION>
## User authentication handled by "radicale"
Require local
<IfDefine RADICALE_PERMIT_PUBLIC_ACCESS>
Require all granted
</IfDefine>
</IfDefine>
<IfDefine RADICALE_SERVER_USER_AUTHENTICATION>
## You may want to use apache's authentication (config: [auth] type = http_x_remote_user)
## e.g. create a new file with a testuser: htpasswd -c -B /etc/httpd/conf/htpasswd-radicale testuser
AuthBasicProvider file
AuthType Basic
AuthName "Enter your credentials"
AuthUserFile /etc/httpd/conf/htpasswd-radicale
AuthGroupFile /dev/null
Require valid-user
RequestHeader set X-Remote-User expr=%{REMOTE_USER}
</IfDefine>
## You may want to use apache's authentication (config: [auth] type = remote_user)
#AuthBasicProvider file
#AuthType Basic
#AuthName "Enter your credentials"
#AuthUserFile /path/to/httpdfile/
#AuthGroupFile /dev/null
#Require valid-user
<IfDefine RADICALE_ENFORCE_SSL>
<IfModule !ssl_module>
@@ -101,7 +71,7 @@
</IfModule>
SSLRequireSSL
</IfDefine>
</LocationMatch>
</Location>
</IfDefine>
@@ -127,38 +97,22 @@
WSGIScriptAlias /radicale /usr/share/radicale/radicale.wsgi
# Internal WebUI does not need authentication at all
<LocationMatch "^/radicale/\.web.*>
<Location /radicale>
RequestHeader set X-Script-Name /radicale
## User authentication handled by "radicale"
Require local
<IfDefine RADICALE_PERMIT_PUBLIC_ACCESS>
Require all granted
</IfDefine>
</LocationMatch>
<LocationMatch "^/radicale(?!/\.web)">
RequestHeader set X-Script-Name /radicale
<IfDefine !RADICALE_SERVER_USER_AUTHENTICATION>
## User authentication handled by "radicale"
Require local
<IfDefine RADICALE_PERMIT_PUBLIC_ACCESS>
Require all granted
</IfDefine>
</IfDefine>
<IfDefine RADICALE_SERVER_USER_AUTHENTICATION>
## You may want to use apache's authentication (config: [auth] type = http_x_remote_user)
## e.g. create a new file with a testuser: htpasswd -c -B /etc/httpd/conf/htpasswd-radicale testuser
AuthBasicProvider file
AuthType Basic
AuthName "Enter your credentials"
AuthUserFile /etc/httpd/conf/htpasswd-radicale
AuthGroupFile /dev/null
Require valid-user
RequestHeader set X-Remote-User expr=%{REMOTE_USER}
</IfDefine>
## You may want to use apache's authentication (config: [auth] type = remote_user)
#AuthBasicProvider file
#AuthType Basic
#AuthName "Enter your credentials"
#AuthUserFile /path/to/httpdfile/
#AuthGroupFile /dev/null
#Require valid-user
<IfDefine RADICALE_ENFORCE_SSL>
<IfModule !ssl_module>
@@ -166,7 +120,7 @@
</IfModule>
SSLRequireSSL
</IfDefine>
</LocationMatch>
</Location>
</IfModule>
<IfModule !wsgi_module>
Error "RADICALE_SERVER_WSGI selected but wsgi module not loaded/enabled"
@@ -210,51 +164,29 @@ CustomLog logs/ssl_request_log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
## RADICALE_SERVER_REVERSE_PROXY
<IfDefine RADICALE_SERVER_REVERSE_PROXY>
RewriteEngine On
<Location />
RequestHeader set X-Script-Name /
RewriteCond %{REQUEST_METHOD} GET
RewriteRule ^/$ /.web/ [R,L]
<LocationMatch "^/\.web.*>
RequestHeader set X-Forwarded-Port "%{SERVER_PORT}s"
RequestHeader set X-Forwarded-Proto expr=%{REQUEST_SCHEME}
RequestHeader set X-Forwarded-Proto "https"
ProxyPass http://localhost:5232/ retry=0
ProxyPassReverse http://localhost:5232/
## User authentication handled by "radicale"
Require local
<IfDefine RADICALE_PERMIT_PUBLIC_ACCESS>
Require all granted
</IfDefine>
</LocationMatch>
<LocationMatch "^(?!/\.web)">
RequestHeader set X-Forwarded-Port "%{SERVER_PORT}s"
RequestHeader set X-Forwarded-Proto expr=%{REQUEST_SCHEME}
ProxyPass http://localhost:5232/ retry=0
ProxyPassReverse http://localhost:5232/
<IfDefine !RADICALE_SERVER_USER_AUTHENTICATION>
## User authentication handled by "radicale"
Require local
<IfDefine RADICALE_PERMIT_PUBLIC_ACCESS>
Require all granted
</IfDefine>
</IfDefine>
<IfDefine RADICALE_SERVER_USER_AUTHENTICATION>
## You may want to use apache's authentication (config: [auth] type = http_x_remote_user)
## e.g. create a new file with a testuser: htpasswd -c -B /etc/httpd/conf/htpasswd-radicale testuser
AuthBasicProvider file
AuthType Basic
AuthName "Enter your credentials"
AuthUserFile /etc/httpd/conf/htpasswd-radicale
AuthGroupFile /dev/null
Require valid-user
RequestHeader set X-Remote-User expr=%{REMOTE_USER}
</IfDefine>
</LocationMatch>
## You may want to use apache's authentication (config: [auth] type = remote_user)
#AuthBasicProvider file
#AuthType Basic
#AuthName "Enter your credentials"
#AuthUserFile /path/to/httpdfile/
#AuthGroupFile /dev/null
#Require valid-user
</Location>
</IfDefine>
@@ -280,27 +212,23 @@ CustomLog logs/ssl_request_log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
WSGIScriptAlias / /usr/share/radicale/radicale.wsgi
<LocationMatch "^/(?!/\.web)">
<IfDefine !RADICALE_SERVER_USER_AUTHENTICATION>
## User authentication handled by "radicale"
Require local
<IfDefine RADICALE_PERMIT_PUBLIC_ACCESS>
Require all granted
</IfDefine>
<Location />
RequestHeader set X-Script-Name /
## User authentication handled by "radicale"
Require local
<IfDefine RADICALE_PERMIT_PUBLIC_ACCESS>
Require all granted
</IfDefine>
<IfDefine RADICALE_SERVER_USER_AUTHENTICATION>
## You may want to use apache's authentication (config: [auth] type = http_x_remote_user)
## e.g. create a new file with a testuser: htpasswd -c -B /etc/httpd/conf/htpasswd-radicale testuser
AuthBasicProvider file
AuthType Basic
AuthName "Enter your credentials"
AuthUserFile /etc/httpd/conf/htpasswd-radicale
AuthGroupFile /dev/null
Require valid-user
RequestHeader set X-Remote-User expr=%{REMOTE_USER}
</IfDefine>
</LocationMatch>
## You may want to use apache's authentication (config: [auth] type = remote_user)
#AuthBasicProvider file
#AuthType Basic
#AuthName "Enter your credentials"
#AuthUserFile /path/to/httpdfile/
#AuthGroupFile /dev/null
#Require valid-user
</Location>
</IfModule>
<IfModule !wsgi_module>
Error "RADICALE_SERVER_WSGI selected but wsgi module not loaded/enabled"


@@ -1,193 +0,0 @@
# This file is related to Radicale - CalDAV and CardDAV server
# for logwatch (script)
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# Detail levels
# >= 5: Logins
# >= 10: ResponseTimes
$Detail = $ENV{'LOGWATCH_DETAIL_LEVEL'} || 0;
my %ResponseTimes;
my %Responses;
my %Requests;
my %Logins;
my %Loglevel;
my %OtherEvents;
my $sum;
my $length;
sub ResponseTimesMinMaxSum($$) {
my $req = $_[0];
my $time = $_[1];
$ResponseTimes{$req}->{'cnt'}++;
if (! defined $ResponseTimes{$req}->{'min'}) {
$ResponseTimes{$req}->{'min'} = $time;
} elsif ($ResponseTimes{$req}->{'min'} > $time) {
$ResponseTimes{$req}->{'min'} = $time;
}
if (! defined $ResponseTimes{$req}->{'max'}) {
$ResponseTimes{$req}{'max'} = $time;
} elsif ($ResponseTimes{$req}->{'max'} < $time) {
$ResponseTimes{$req}{'max'} = $time;
}
$ResponseTimes{$req}->{'sum'} += $time;
}
sub Sum($) {
my $phash = $_[0];
my $sum = 0;
foreach my $entry (keys %$phash) {
$sum += $phash->{$entry};
}
return $sum;
}
sub MaxLength($) {
my $phash = $_[0];
my $length = 0;
foreach my $entry (keys %$phash) {
$length = length($entry) if (length($entry) > $length);
}
return $length;
}
while (defined($ThisLine = <STDIN>)) {
# count loglevel
if ( $ThisLine =~ /\[(DEBUG|INFO|WARNING|ERROR|CRITICAL)\] /o ) {
$Loglevel{$1}++
}
# parse log for events
if ( $ThisLine =~ /Radicale server ready/o ) {
$OtherEvents{"Radicale server started"}++;
}
elsif ( $ThisLine =~ /Stopping Radicale/o ) {
$OtherEvents{"Radicale server stopped"}++;
}
elsif ( $ThisLine =~ / (\S+) response status/o ) {
my $req = $1;
if ( $ThisLine =~ / \S+ response status for .* with depth '(\d)' in ([0-9.]+) seconds: (\d+)/o ) {
$req .= ":D=" . $1 . ":R=" . $3;
ResponseTimesMinMaxSum($req, $2) if ($Detail >= 10);
} elsif ( $ThisLine =~ / \S+ response status for .* in ([0-9.]+) seconds: (\d+)/ ) {
$req .= ":R=" . $2;
ResponseTimesMinMaxSum($req, $1) if ($Detail >= 10);
}
$Responses{$req}++;
}
elsif ( $ThisLine =~ / (\S+) request for/o ) {
my $req = $1;
if ( $ThisLine =~ / \S+ request for .* with depth '(\d)' received/o ) {
$req .= ":D=" . $1;
}
$Requests{$req}++;
}
elsif ( $ThisLine =~ / (Successful login): '([^']+)'/o ) {
$Logins{$2}++ if ($Detail >= 5);
$OtherEvents{$1}++;
}
elsif ( $ThisLine =~ / (Failed login attempt) /o ) {
$OtherEvents{$1}++;
}
elsif ( $ThisLine =~ /\[(DEBUG|INFO)\] /o ) {
# skip if DEBUG+INFO
}
else {
# Report any unmatched entries...
$ThisLine =~ s/^\[\d+(\/Thread-\d+)?\] //; # remove process/Thread ID
chomp($ThisLine);
$OtherList{$ThisLine}++;
}
}
if ($Started) {
print "\nStatistics:\n";
print " Radicale started: $Started Time(s)\n";
}
if (keys %Loglevel) {
$sum = Sum(\%Loglevel);
print "\n**Loglevel counters**\n";
printf "%-18s | %7s | %5s |\n", "Loglevel", "cnt", "ratio";
print "-" x38 . "\n";
foreach my $level (sort keys %Loglevel) {
printf "%-18s | %7d | %3d%% |\n", $level, $Loglevel{$level}, int(($Loglevel{$level} * 100) / $sum);
}
print "-" x38 . "\n";
printf "%-18s | %7d | %3d%% |\n", "", $sum, 100;
}
if (keys %Requests) {
$sum = Sum(\%Requests);
print "\n**Request counters (D=<depth>)**\n";
printf "%-18s | %7s | %5s |\n", "Request", "cnt", "ratio";
print "-" x38 . "\n";
foreach my $req (sort keys %Requests) {
printf "%-18s | %7d | %3d%% |\n", $req, $Requests{$req}, int(($Requests{$req} * 100) / $sum);
}
print "-" x38 . "\n";
printf "%-18s | %7d | %3d%% |\n", "", $sum, 100;
}
if (keys %Responses) {
$sum = Sum(\%Responses);
print "\n**Response result counters ((D=<depth> R=<result>)**\n";
printf "%-18s | %7s | %5s |\n", "Response", "cnt", "ratio";
print "-" x38 . "\n";
foreach my $req (sort keys %Responses) {
printf "%-18s | %7d | %3d%% |\n", $req, $Responses{$req}, int(($Responses{$req} * 100) / $sum);
}
print "-" x38 . "\n";
printf "%-18s | %7d | %3d%% |\n", "", $sum, 100;
}
if (keys %Logins) {
$sum = Sum(\%Logins);
$length = MaxLength(\%Logins);
print "\n**Successful login counters**\n";
printf "%-" . $length . "s | %7s | %5s |\n", "Login", "cnt", "ratio";
print "-" x($length + 20) . "\n";
foreach my $login (sort keys %Logins) {
printf "%-" . $length . "s | %7d | %3d%% |\n", $login, $Logins{$login}, int(($Logins{$login} * 100) / $sum);
}
print "-" x($length + 20) . "\n";
printf "%-" . $length . "s | %7d | %3d%% |\n", "", $sum, 100;
}
if (keys %ResponseTimes) {
print "\n**Response timings (counts, seconds) (D=<depth> R=<result>)**\n";
printf "%-18s | %7s | %7s | %7s | %7s |\n", "Response", "cnt", "min", "max", "avg";
print "-" x60 . "\n";
foreach my $req (sort keys %ResponseTimes) {
printf "%-18s | %7d | %7.3f | %7.3f | %7.3f |\n", $req
, $ResponseTimes{$req}->{'cnt'}
, $ResponseTimes{$req}->{'min'}
, $ResponseTimes{$req}->{'max'}
, $ResponseTimes{$req}->{'sum'} / $ResponseTimes{$req}->{'cnt'};
}
print "-" x60 . "\n";
}
if (keys %OtherEvents) {
print "\n**Other Events**\n";
foreach $ThisOne (sort keys %OtherEvents) {
print "$ThisOne: $OtherEvents{$ThisOne} Time(s)\n";
}
}
if (keys %OtherList) {
print "\n**Unmatched Entries**\n";
foreach $ThisOne (sort keys %OtherList) {
print "$ThisOne: $OtherList{$ThisOne} Time(s)\n";
}
}
exit(0);
# vim: shiftwidth=3 tabstop=3 syntax=perl et smartindent


@@ -1,11 +0,0 @@
# This file is related to Radicale - CalDAV and CardDAV server
# for logwatch (config) - input from journald
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
Title = "Radicale"
LogFile = none
*JournalCtl = "--output=cat --unit=radicale.service"
# vi: shiftwidth=3 tabstop=3 et


@@ -1,13 +0,0 @@
# This file is related to Radicale - CalDAV and CardDAV server
# for logwatch (config) - input from syslog file
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
Title = "Radicale"
LogFile = messages
*OnlyService = radicale
*RemoveHeaders
# vi: shiftwidth=3 tabstop=3 et


@@ -1,31 +0,0 @@
### Proxy Forward to local running "radicale" server
###
### Usual configuration file location: /etc/nginx/default.d/
## "well-known" redirect at least for Apple devices
rewrite ^/.well-known/carddav /radicale/ redirect;
rewrite ^/.well-known/caldav /radicale/ redirect;
## Base URI: /radicale/
location /radicale/ {
proxy_pass http://localhost:5232/;
proxy_set_header X-Script-Name /radicale;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $http_host;
proxy_pass_header Authorization;
}
## Base URI: /
#location / {
# proxy_pass http://localhost:5232/;
# proxy_set_header X-Script-Name /radicale;
# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# proxy_set_header X-Forwarded-Host $host;
# proxy_set_header X-Forwarded-Port $server_port;
# proxy_set_header X-Forwarded-Proto $scheme;
# proxy_set_header Host $http_host;
# proxy_pass_header Authorization;
#}
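
A hedged smoke test for either proxy block (hypothetical host and account): curl --user user:password https://example.org/radicale/user/ should be answered by Radicale with the X-Script-Name prefix applied.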


@@ -1,128 +0,0 @@
[project]
name = "Radicale"
# When the version is updated, a new section in the CHANGELOG.md file must be
# added too.
readme = "README.md"
version = "3.5.1.dev"
authors = [{name = "Guillaume Ayoub", email = "guillaume.ayoub@kozea.fr"}, {name = "Unrud", email = "unrud@outlook.com"}, {name = "Peter Bieringer", email = "pb@bieringer.de"}]
license = {text = "GNU GPL v3"}
description = "CalDAV and CardDAV Server"
keywords = ["calendar", "addressbook", "CalDAV", "CardDAV"]
classifiers = [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Environment :: Web Environment",
"Intended Audience :: End Users/Desktop",
"Intended Audience :: Information Technology",
"License :: OSI Approved :: GNU General Public License (GPL)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Office/Business :: Groupware",
]
urls = {Homepage = "https://radicale.org/"}
requires-python = ">=3.9.0"
dependencies = [
"defusedxml",
"passlib",
"vobject>=0.9.6",
"pika>=1.1.0",
"requests",
]
[project.optional-dependencies]
test = ["pytest>=7", "waitress", "bcrypt"]
bcrypt = ["bcrypt"]
ldap = ["ldap3"]
[project.scripts]
radicale = "radicale.__main__:run"
[build-system]
requires = ["setuptools>=61.2"]
build-backend = "setuptools.build_meta"
[tool.tox]
min_version = "4.0"
envlist = ["py", "flake8", "isort", "mypy"]
[tool.tox.env.py]
extras = ["test"]
deps = [
"pytest",
"pytest-cov"
]
commands = [["pytest", "-r", "s", "--cov", "--cov-report=term", "--cov-report=xml", "."]]
[tool.tox.env.flake8]
deps = ["flake8==7.1.0"]
commands = [["flake8", "."]]
skip_install = true
[tool.tox.env.isort]
deps = ["isort==5.13.2"]
commands = [["isort", "--check", "--diff", "."]]
skip_install = true
[tool.tox.env.mypy]
deps = ["mypy==1.11.0"]
commands = [["mypy", "--install-types", "--non-interactive", "."]]
skip_install = true
[tool.setuptools]
platforms = ["Any"]
include-package-data = false
[tool.setuptools.packages.find]
exclude = ["*.tests"] # *.tests.*; tests.*; tests
namespaces = false
[tool.setuptools.package-data]
radicale = [
"web/internal_data/css/icon.png",
"web/internal_data/css/loading.svg",
"web/internal_data/css/logo.svg",
"web/internal_data/css/main.css",
"web/internal_data/css/icons/delete.svg",
"web/internal_data/css/icons/download.svg",
"web/internal_data/css/icons/edit.svg",
"web/internal_data/css/icons/new.svg",
"web/internal_data/css/icons/upload.svg",
"web/internal_data/fn.js",
"web/internal_data/index.html",
"py.typed",
]
[tool.isort]
known_standard_library = "_dummy_thread,_thread,abc,aifc,argparse,array,ast,asynchat,asyncio,asyncore,atexit,audioop,base64,bdb,binascii,binhex,bisect,builtins,bz2,cProfile,calendar,cgi,cgitb,chunk,cmath,cmd,code,codecs,codeop,collections,colorsys,compileall,concurrent,configparser,contextlib,contextvars,copy,copyreg,crypt,csv,ctypes,curses,dataclasses,datetime,dbm,decimal,difflib,dis,distutils,doctest,dummy_threading,email,encodings,ensurepip,enum,errno,faulthandler,fcntl,filecmp,fileinput,fnmatch,formatter,fpectl,fractions,ftplib,functools,gc,getopt,getpass,gettext,glob,grp,gzip,hashlib,heapq,hmac,html,http,imaplib,imghdr,imp,importlib,inspect,io,ipaddress,itertools,json,keyword,lib2to3,linecache,locale,logging,lzma,macpath,mailbox,mailcap,marshal,math,mimetypes,mmap,modulefinder,msilib,msvcrt,multiprocessing,netrc,nis,nntplib,ntpath,numbers,operator,optparse,os,ossaudiodev,parser,pathlib,pdb,pickle,pickletools,pipes,pkgutil,platform,plistlib,poplib,posix,posixpath,pprint,profile,pstats,pty,pwd,py_compile,pyclbr,pydoc,queue,quopri,random,re,readline,reprlib,resource,rlcompleter,runpy,sched,secrets,select,selectors,shelve,shlex,shutil,signal,site,smtpd,smtplib,sndhdr,socket,socketserver,spwd,sqlite3,sre,sre_compile,sre_constants,sre_parse,ssl,stat,statistics,string,stringprep,struct,subprocess,sunau,symbol,symtable,sys,sysconfig,syslog,tabnanny,tarfile,telnetlib,tempfile,termios,test,textwrap,threading,time,timeit,tkinter,token,tokenize,trace,traceback,tracemalloc,tty,turtle,turtledemo,types,typing,unicodedata,unittest,urllib,uu,uuid,venv,warnings,wave,weakref,webbrowser,winreg,winsound,wsgiref,xdrlib,xml,xmlrpc,zipapp,zipfile,zipimport,zlib"
known_third_party = "defusedxml,passlib,pkg_resources,pytest,vobject"
[tool.mypy]
ignore_missing_imports = true
show_error_codes = true
exclude = "(^|/)build($|/)"
[tool.coverage.run]
branch = true
source = ["radicale"]
omit = ["tests/*", "*/tests/*"]
[tool.coverage.report]
# Regexes for lines to exclude from consideration
exclude_lines = [
# Have to re-enable the standard pragma
"pragma: no cover",
# Don't complain if tests don't hit defensive assertion code:
"raise AssertionError",
"raise NotImplementedError",
# Don't complain if non-runnable code isn't run:
"if __name__ == .__main__.:",
]


@@ -3,8 +3,4 @@ Radicale WSGI file (mod_wsgi and uWSGI compliant).
"""
import os
from radicale import application
# set an environment variable
os.environ.setdefault('SERVER_GATEWAY_INTERFACE', 'Web')


@@ -61,7 +61,7 @@ def _get_application_instance(config_path: str, wsgi_errors: types.ErrorStream
if not miss and source != "default config":
default_config_active = False
if default_config_active:
logger.warning("%s", "No config file found/readable - only default config is active")
logger.warn("%s", "No config file found/readable - only default config is active")
_application_instance = Application(configuration)
if _application_config_path != config_path:
raise ValueError("RADICALE_CONFIG must not change: %r != %r" %


@@ -175,7 +175,7 @@ def run() -> None:
default_config_active = False
if default_config_active:
logger.warning("%s", "No config file found/readable - only default config is active")
logger.warn("%s", "No config file found/readable - only default config is active")
if args_ns.verify_storage:
logger.info("Verifying storage")
@@ -183,7 +183,7 @@ def run() -> None:
storage_ = storage.load(configuration)
with storage_.acquire_lock("r"):
if not storage_.verify():
logger.critical("Storage verification failed")
logger.critical("Storage verifcation failed")
sys.exit(1)
except Exception as e:
logger.critical("An exception occurred during storage "


@@ -3,7 +3,7 @@
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -68,10 +68,8 @@ class Application(ApplicationPartDelete, ApplicationPartHead,
_internal_server: bool
_max_content_length: int
_auth_realm: str
_script_name: str
_extra_headers: Mapping[str, str]
_permit_delete_collection: bool
_permit_overwrite_collection: bool
def __init__(self, configuration: config.Configuration) -> None:
"""Initialize Application.
@@ -88,26 +86,11 @@ class Application(ApplicationPartDelete, ApplicationPartHead,
self._response_content_on_debug = configuration.get("logging", "response_content_on_debug")
self._auth_delay = configuration.get("auth", "delay")
self._internal_server = configuration.get("server", "_internal_server")
self._script_name = configuration.get("server", "script_name")
if self._script_name:
if self._script_name[0] != "/":
logger.error("server.script_name must start with '/': %r", self._script_name)
raise RuntimeError("server.script_name option has to start with '/'")
else:
if self._script_name.endswith("/"):
logger.error("server.script_name must not end with '/': %r", self._script_name)
raise RuntimeError("server.script_name option must not end with '/'")
else:
logger.info("Provided script name to strip from URI if called by reverse proxy: %r", self._script_name)
else:
logger.info("Default script name to strip from URI if called by reverse proxy is taken from HTTP_X_SCRIPT_NAME or SCRIPT_NAME")
self._max_content_length = configuration.get(
"server", "max_content_length")
self._auth_realm = configuration.get("auth", "realm")
self._permit_delete_collection = configuration.get("rights", "permit_delete_collection")
logger.info("permit delete of collection: %s", self._permit_delete_collection)
self._permit_overwrite_collection = configuration.get("rights", "permit_overwrite_collection")
logger.info("permit overwrite of collection: %s", self._permit_overwrite_collection)
self._extra_headers = dict()
for key in self.configuration.options("headers"):
self._extra_headers[key] = configuration.get("headers", key)
@@ -150,7 +133,6 @@ class Application(ApplicationPartDelete, ApplicationPartHead,
time_begin = datetime.datetime.now()
request_method = environ["REQUEST_METHOD"].upper()
unsafe_path = environ.get("PATH_INFO", "")
https = environ.get("HTTPS", "")
"""Manage a request."""
def response(status: int, headers: types.WSGIResponseHeaders,
@@ -164,7 +146,7 @@ class Application(ApplicationPartDelete, ApplicationPartHead,
if self._response_content_on_debug:
logger.debug("Response content:\n%s", answer)
else:
logger.debug("Response content: suppressed by config/option [logging] response_content_on_debug")
logger.debug("Response content: suppressed by config/option [auth] response_content_on_debug")
headers["Content-Type"] += "; charset=%s" % self._encoding
answer = answer.encode(self._encoding)
accept_encoding = [
@@ -193,71 +175,50 @@ class Application(ApplicationPartDelete, ApplicationPartHead,
# Return response content
return status_text, list(headers.items()), answers
reverse_proxy = False
remote_host = "unknown"
if environ.get("REMOTE_HOST"):
remote_host = repr(environ["REMOTE_HOST"])
elif environ.get("REMOTE_ADDR"):
remote_host = environ["REMOTE_ADDR"]
if environ.get("HTTP_X_FORWARDED_FOR"):
reverse_proxy = True
remote_host = "%s (forwarded for %r)" % (
remote_host, environ["HTTP_X_FORWARDED_FOR"])
if environ.get("HTTP_X_FORWARDED_HOST") or environ.get("HTTP_X_FORWARDED_PROTO") or environ.get("HTTP_X_FORWARDED_SERVER"):
reverse_proxy = True
remote_useragent = ""
if environ.get("HTTP_USER_AGENT"):
remote_useragent = " using %r" % environ["HTTP_USER_AGENT"]
depthinfo = ""
if environ.get("HTTP_DEPTH"):
depthinfo = " with depth %r" % environ["HTTP_DEPTH"]
if https:
https_info = " " + environ.get("SSL_PROTOCOL", "") + " " + environ.get("SSL_CIPHER", "")
else:
https_info = ""
logger.info("%s request for %r%s received from %s%s%s",
logger.info("%s request for %r%s received from %s%s",
request_method, unsafe_path, depthinfo,
remote_host, remote_useragent, https_info)
remote_host, remote_useragent)
if self._request_header_on_debug:
logger.debug("Request header:\n%s",
pprint.pformat(self._scrub_headers(environ)))
else:
logger.debug("Request header: suppressed by config/option [logging] request_header_on_debug")
logger.debug("Request header: suppressed by config/option [auth] request_header_on_debug")
# SCRIPT_NAME is already removed from PATH_INFO, according to the
# WSGI specification.
# Reverse proxies can overwrite SCRIPT_NAME with X-SCRIPT-NAME header
if self._script_name and (reverse_proxy is True):
base_prefix_src = "config"
base_prefix = self._script_name
else:
base_prefix_src = ("HTTP_X_SCRIPT_NAME" if "HTTP_X_SCRIPT_NAME" in
environ else "SCRIPT_NAME")
base_prefix = environ.get(base_prefix_src, "")
if base_prefix and base_prefix[0] != "/":
logger.error("Base prefix (from %s) must start with '/': %r",
base_prefix_src, base_prefix)
if base_prefix_src == "HTTP_X_SCRIPT_NAME":
return response(*httputils.BAD_REQUEST)
return response(*httputils.INTERNAL_SERVER_ERROR)
if base_prefix.endswith("/"):
logger.warning("Base prefix (from %s) must not end with '/': %r",
base_prefix_src, base_prefix)
base_prefix = base_prefix.rstrip("/")
if base_prefix:
logger.debug("Base prefix (from %s): %r", base_prefix_src, base_prefix)
base_prefix_src = ("HTTP_X_SCRIPT_NAME" if "HTTP_X_SCRIPT_NAME" in
environ else "SCRIPT_NAME")
base_prefix = environ.get(base_prefix_src, "")
if base_prefix and base_prefix[0] != "/":
logger.error("Base prefix (from %s) must start with '/': %r",
base_prefix_src, base_prefix)
if base_prefix_src == "HTTP_X_SCRIPT_NAME":
return response(*httputils.BAD_REQUEST)
return response(*httputils.INTERNAL_SERVER_ERROR)
if base_prefix.endswith("/"):
logger.warning("Base prefix (from %s) must not end with '/': %r",
base_prefix_src, base_prefix)
base_prefix = base_prefix.rstrip("/")
logger.debug("Base prefix (from %s): %r", base_prefix_src, base_prefix)
# Sanitize request URI (a WSGI server indicates with an empty path,
# that the URL targets the application root without a trailing slash)
path = pathutils.sanitize_path(unsafe_path)
logger.debug("Sanitized path: %r", path)
if (reverse_proxy is True) and (len(base_prefix) > 0):
if path.startswith(base_prefix):
path_new = path.removeprefix(base_prefix)
logger.debug("Called by reverse proxy, remove base prefix %r from path: %r => %r", base_prefix, path, path_new)
path = path_new
else:
logger.warning("Called by reverse proxy, cannot removed base prefix %r from path: %r as not matching", base_prefix, path)
# Get function corresponding to method
function = getattr(self, "do_%s" % request_method, None)
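
The prefix handling shown in this hunk condenses to a small sketch (illustrative only, not verbatim Radicale code; str.removeprefix requires Python >= 3.9, which master's pyproject.toml demands):

def strip_base_prefix(path: str, base_prefix: str) -> str:
    # Validate the prefix announced via config or X-Script-Name, then
    # strip it from the sanitized path of a reverse-proxied request.
    if not base_prefix:
        return path
    if base_prefix[0] != "/":
        raise ValueError("base prefix must start with '/'")
    base_prefix = base_prefix.rstrip("/")  # trailing '/' is tolerated, then dropped
    if path.startswith(base_prefix):
        return path.removeprefix(base_prefix)
    return path  # no match: master only logs a warning

# e.g. strip_base_prefix("/radicale/alice/calendar/", "/radicale") == "/alice/calendar/"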
@@ -271,7 +232,7 @@ class Application(ApplicationPartDelete, ApplicationPartHead,
path.rstrip("/").endswith("/.well-known/carddav")):
return response(*httputils.redirect(
base_prefix + "/", client.MOVED_PERMANENTLY))
# Return NOT FOUND for all other paths containing ".well-known"
# Return NOT FOUND for all other paths containing ".well-knwon"
if path.endswith("/.well-known") or "/.well-known/" in path:
return response(*httputils.NOT_FOUND)
@@ -288,24 +249,18 @@ class Application(ApplicationPartDelete, ApplicationPartHead,
self.configuration, environ, base64.b64decode(
authorization.encode("ascii"))).split(":", 1)
(user, info) = self._auth.login(login, password) or ("", "") if login else ("", "")
if self.configuration.get("auth", "type") == "ldap":
try:
logger.debug("Groups %r", ",".join(self._auth._ldap_groups))
self._rights._user_groups = self._auth._ldap_groups
except AttributeError:
pass
user = self._auth.login(login, password) or "" if login else ""
if user and login == user:
logger.info("Successful login: %r (%s)", user, info)
logger.info("Successful login: %r", user)
elif user:
logger.info("Successful login: %r -> %r (%s)", login, user, info)
logger.info("Successful login: %r -> %r", login, user)
elif login:
logger.warning("Failed login attempt from %s: %r (%s)",
remote_host, login, info)
logger.warning("Failed login attempt from %s: %r",
remote_host, login)
# Random delay to avoid timing oracles and bruteforce attacks
if self._auth_delay > 0:
random_delay = self._auth_delay * (0.5 + random.random())
logger.debug("Failed login, sleeping random: %.3f sec", random_delay)
logger.debug("Sleeping %.3f seconds", random_delay)
time.sleep(random_delay)
if user and not pathutils.is_safe_path_component(user):


@@ -40,7 +40,6 @@ class ApplicationBase:
_web: web.BaseWeb
_encoding: str
_permit_delete_collection: bool
_permit_overwrite_collection: bool
_hook: hook.BaseHook
def __init__(self, configuration: config.Configuration) -> None:
@@ -52,7 +51,6 @@ class ApplicationBase:
self._encoding = configuration.get("encoding", "request")
self._log_bad_put_request_content = configuration.get("logging", "bad_put_request_content")
self._response_content_on_debug = configuration.get("logging", "response_content_on_debug")
self._request_content_on_debug = configuration.get("logging", "request_content_on_debug")
self._hook = hook.load(configuration)
def _read_xml_request_body(self, environ: types.WSGIEnviron
@@ -68,20 +66,17 @@ class ApplicationBase:
logger.debug("Request content (Invalid XML):\n%s", content)
raise RuntimeError("Failed to parse XML: %s" % e) from e
if logger.isEnabledFor(logging.DEBUG):
if self._request_content_on_debug:
logger.debug("Request content (XML):\n%s",
xmlutils.pretty_xml(xml_content))
else:
logger.debug("Request content (XML): suppressed by config/option [logging] request_content_on_debug")
logger.debug("Request content:\n%s",
xmlutils.pretty_xml(xml_content))
return xml_content
def _xml_response(self, xml_content: ET.Element) -> bytes:
if logger.isEnabledFor(logging.DEBUG):
if self._response_content_on_debug:
logger.debug("Response content (XML):\n%s",
logger.debug("Response content:\n%s",
xmlutils.pretty_xml(xml_content))
else:
logger.debug("Response content (XML): suppressed by config/option [logging] response_content_on_debug")
logger.debug("Response content: suppressed by config/option [auth] response_content_on_debug")
f = io.BytesIO()
ET.ElementTree(xml_content).write(f, encoding=self._encoding,
xml_declaration=True)
@@ -126,7 +121,7 @@ class Access:
def check(self, permission: str,
item: Optional[types.CollectionOrItem] = None) -> bool:
if permission not in "rwdDoO":
if permission not in "rw":
raise ValueError("Invalid permission argument: %r" % permission)
if not item:
permissions = permission + permission.upper()


@@ -3,7 +3,6 @@
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -25,7 +24,6 @@ from typing import Optional
from radicale import httputils, storage, types, xmlutils
from radicale.app.base import Access, ApplicationBase
from radicale.hook import HookNotificationItem, HookNotificationItemTypes
from radicale.log import logger
def xml_delete(base_prefix: str, path: str, collection: storage.BaseCollection,
@@ -73,22 +71,17 @@ class ApplicationPartDelete(ApplicationBase):
hook_notification_item_list = []
if isinstance(item, storage.BaseCollection):
if self._permit_delete_collection:
if access.check("d", item):
logger.info("delete of collection is permitted by config/option [rights] permit_delete_collection but explicit forbidden by permission 'd': %s", path)
return httputils.NOT_ALLOWED
else:
if not access.check("D", item):
logger.info("delete of collection is prevented by config/option [rights] permit_delete_collection and not explicit allowed by permission 'D': %s", path)
return httputils.NOT_ALLOWED
for i in item.get_all():
hook_notification_item_list.append(
HookNotificationItem(
HookNotificationItemTypes.DELETE,
access.path,
i.uid
for i in item.get_all():
hook_notification_item_list.append(
HookNotificationItem(
HookNotificationItemTypes.DELETE,
access.path,
i.uid
)
)
)
xml_answer = xml_delete(base_prefix, path, item)
xml_answer = xml_delete(base_prefix, path, item)
else:
return httputils.NOT_ALLOWED
else:
assert item.collection is not None
assert item.href is not None


@@ -66,8 +66,6 @@ class ApplicationPartGet(ApplicationBase):
if path == "/.web" or path.startswith("/.web/"):
# Redirect to sanitized path for all subpaths of /.web
unsafe_path = environ.get("PATH_INFO", "")
if len(base_prefix) > 0:
unsafe_path = unsafe_path.removeprefix(base_prefix)
if unsafe_path != path:
location = base_prefix + path
logger.info("Redirecting to sanitized path: %r ==> %r",


@@ -2,8 +2,7 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -18,9 +17,7 @@
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import errno
import posixpath
import re
import socket
from http import client
@@ -73,20 +70,7 @@ class ApplicationPartMkcalendar(ApplicationBase):
try:
self._storage.create_collection(path, props=props)
except ValueError as e:
# return better matching HTTP result in case errno is provided and caught
errno_match = re.search("\\[Errno ([0-9]+)\\]", str(e))
if errno_match:
logger.error(
"Failed MKCALENDAR request on %r: %s", path, e, exc_info=True)
errno_e = int(errno_match.group(1))
if errno_e == errno.ENOSPC:
return httputils.INSUFFICIENT_STORAGE
elif errno_e in [errno.EPERM, errno.EACCES]:
return httputils.FORBIDDEN
else:
return httputils.INTERNAL_SERVER_ERROR
else:
logger.warning(
"Bad MKCALENDAR request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
logger.warning(
"Bad MKCALENDAR request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
return client.CREATED, {}, None
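
For readers skimming the removed branch: master maps an errno embedded in the ValueError text onto an HTTP status, and the same pattern recurs in the MKCOL, MOVE, and PROPPATCH hunks below. A self-contained sketch of that mapping (numeric statuses substituted for the httputils constants):

import errno
import re

def http_status_for(e: ValueError) -> int:
    # Messages wrapping an OSError look like "[Errno 28] No space left on device".
    m = re.search(r"\[Errno ([0-9]+)\]", str(e))
    if not m:
        return 400  # Bad Request
    code = int(m.group(1))
    if code == errno.ENOSPC:
        return 507  # Insufficient Storage
    if code in (errno.EPERM, errno.EACCES):
        return 403  # Forbidden
    return 500  # Internal Server Error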


@@ -2,8 +2,7 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -18,9 +17,7 @@
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import errno
import posixpath
import re
import socket
from http import client
@@ -77,21 +74,8 @@ class ApplicationPartMkcol(ApplicationBase):
try:
self._storage.create_collection(path, props=props)
except ValueError as e:
# return better matching HTTP result in case errno is provided and caught
errno_match = re.search("\\[Errno ([0-9]+)\\]", str(e))
if errno_match:
logger.error(
"Failed MKCOL request on %r (type:%s): %s", path, collection_type, e, exc_info=True)
errno_e = int(errno_match.group(1))
if errno_e == errno.ENOSPC:
return httputils.INSUFFICIENT_STORAGE
elif errno_e in [errno.EPERM, errno.EACCES]:
return httputils.FORBIDDEN
else:
return httputils.INTERNAL_SERVER_ERROR
else:
logger.warning(
"Bad MKCOL request on %r (type:%s): %s", path, collection_type, e, exc_info=True)
return httputils.BAD_REQUEST
logger.warning(
"Bad MKCOL request on %r (type:%s): %s", path, collection_type, e, exc_info=True)
return httputils.BAD_REQUEST
logger.info("MKCOL request %r (type:%s): %s", path, collection_type, "successful")
return client.CREATED, {}, None


@@ -2,8 +2,7 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2023 Unrud <unrud@outlook.com>
# Copyright © 2023-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -18,7 +17,6 @@
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import errno
import posixpath
import re
from http import client
@@ -111,20 +109,7 @@ class ApplicationPartMove(ApplicationBase):
try:
self._storage.move(item, to_collection, to_href)
except ValueError as e:
# return better matching HTTP result in case errno is provided and caught
errno_match = re.search("\\[Errno ([0-9]+)\\]", str(e))
if errno_match:
logger.error(
"Failed MOVE request on %r: %s", path, e, exc_info=True)
errno_e = int(errno_match.group(1))
if errno_e == errno.ENOSPC:
return httputils.INSUFFICIENT_STORAGE
elif errno_e in [errno.EPERM, errno.EACCES]:
return httputils.FORBIDDEN
else:
return httputils.INTERNAL_SERVER_ERROR
else:
logger.warning(
"Bad MOVE request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
logger.warning(
"Bad MOVE request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
return client.NO_CONTENT if to_item else client.CREATED, {}, None


@@ -322,13 +322,13 @@ def xml_propfind_response(
responses[404 if is404 else 200].append(element)
for status_code, children in responses.items():
if not children:
for status_code, childs in responses.items():
if not childs:
continue
propstat = ET.Element(xmlutils.make_clark("D:propstat"))
response.append(propstat)
prop = ET.Element(xmlutils.make_clark("D:prop"))
prop.extend(children)
prop.extend(childs)
propstat.append(prop)
status = ET.Element(xmlutils.make_clark("D:status"))
status.text = xmlutils.make_response(status_code)
@@ -392,8 +392,7 @@ class ApplicationPartPropfind(ApplicationBase):
return httputils.REQUEST_TIMEOUT
with self._storage.acquire_lock("r", user):
items_iter = iter(self._storage.discover(
path, environ.get("HTTP_DEPTH", "0"),
None, self._rights._user_groups))
path, environ.get("HTTP_DEPTH", "0")))
# take root item for rights checking
item = next(items_iter, None)
if not item:


@@ -2,9 +2,7 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2020 Unrud <unrud@outlook.com>
# Copyright © 2020-2020 Tuna Celik <tuna@jakpark.com>
# Copyright © 2025-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -19,8 +17,6 @@
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import errno
import re
import socket
import xml.etree.ElementTree as ET
from http import client
@@ -111,20 +107,7 @@ class ApplicationPartProppatch(ApplicationBase):
)
self._hook.notify(hook_notification_item)
except ValueError as e:
# return better matching HTTP result in case errno is provided and caught
errno_match = re.search("\\[Errno ([0-9]+)\\]", str(e))
if errno_match:
logger.error(
"Failed PROPPATCH request on %r: %s", path, e, exc_info=True)
errno_e = int(errno_match.group(1))
if errno_e == errno.ENOSPC:
return httputils.INSUFFICIENT_STORAGE
elif errno_e in [errno.EPERM, errno.EACCES]:
return httputils.FORBIDDEN
else:
return httputils.INTERNAL_SERVER_ERROR
else:
logger.warning(
"Bad PROPPATCH request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
logger.warning(
"Bad PROPPATCH request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
return client.MULTI_STATUS, headers, self._xml_response(xml_answer)
View file
@ -2,9 +2,8 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2020 Unrud <unrud@outlook.com>
# Copyright © 2020-2023 Tuna Celik <tuna@jakpark.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -19,10 +18,8 @@
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import errno
import itertools
import posixpath
import re
import socket
import sys
from http import client
@ -32,8 +29,7 @@ from typing import Iterator, List, Mapping, MutableMapping, Optional, Tuple
import vobject
import radicale.item as radicale_item
from radicale import (httputils, pathutils, rights, storage, types, utils,
xmlutils)
from radicale import httputils, pathutils, rights, storage, types, xmlutils
from radicale.app.base import Access, ApplicationBase
from radicale.hook import HookNotificationItem, HookNotificationItemTypes
from radicale.log import logger
@ -41,8 +37,6 @@ from radicale.log import logger
MIMETYPE_TAGS: Mapping[str, str] = {value: key for key, value in
xmlutils.MIMETYPES.items()}
PRODID = u"-//Radicale//NONSGML Version " + utils.package_version("radicale") + "//EN"
def prepare(vobject_items: List[vobject.base.Component], path: str,
content_type: str, permission: bool, parent_permission: bool,
@ -86,7 +80,6 @@ def prepare(vobject_items: List[vobject.base.Component], path: str,
vobject_collection = vobject.iCalendar()
for component in components:
vobject_collection.add(component)
vobject_collection.add(vobject.base.ContentLine("PRODID", [], PRODID))
item = radicale_item.Item(collection_path=collection_path,
vobject_item=vobject_collection)
item.prepare()
@ -157,7 +150,7 @@ class ApplicationPartPut(ApplicationBase):
if self._log_bad_put_request_content:
logger.warning("Bad PUT request content of %r:\n%s", path, content)
else:
logger.debug("Bad PUT request content: suppressed by config/option [logging] bad_put_request_content")
logger.debug("Bad PUT request content: suppressed by config/option [auth] bad_put_request_content")
return httputils.BAD_REQUEST
(prepared_items, prepared_tag, prepared_write_whole_collection,
prepared_props, prepared_exc_info) = prepare(
@ -165,7 +158,7 @@ class ApplicationPartPut(ApplicationBase):
bool(rights.intersect(access.permissions, "Ww")),
bool(rights.intersect(access.parent_permissions, "w")))
with self._storage.acquire_lock("w", user, path=path):
with self._storage.acquire_lock("w", user):
item = next(iter(self._storage.discover(path)), None)
parent_item = next(iter(
self._storage.discover(access.parent_path)), None)
@ -183,39 +176,22 @@ class ApplicationPartPut(ApplicationBase):
if write_whole_collection:
if ("w" if tag else "W") not in access.permissions:
if not parent_item.tag:
logger.warning("Not a collection (check .Radicale.props): %r", parent_item.path)
return httputils.NOT_ALLOWED
if not self._permit_overwrite_collection:
if ("O") not in access.permissions:
logger.info("overwrite of collection is prevented by config/option [rights] permit_overwrite_collection and not explicit allowed by permssion 'O': %r", path)
return httputils.NOT_ALLOWED
else:
if ("o") in access.permissions:
logger.info("overwrite of collection is allowed by config/option [rights] permit_overwrite_collection but explicit forbidden by permission 'o': %r", path)
return httputils.NOT_ALLOWED
elif "w" not in access.parent_permissions:
return httputils.NOT_ALLOWED
etag = environ.get("HTTP_IF_MATCH", "")
if not item and etag:
# Etag asked but no item found: item has been removed
logger.warning("Precondition failed on PUT request for %r (HTTP_IF_MATCH: %s, item not existing)", path, etag)
return httputils.PRECONDITION_FAILED
if item and etag and item.etag != etag:
# Etag asked but item not matching: item has changed
logger.warning("Precondition failed on PUT request for %r (HTTP_IF_MATCH: %s, item has different etag: %s)", path, etag, item.etag)
return httputils.PRECONDITION_FAILED
if item and etag:
logger.debug("Precondition passed on PUT request for %r (HTTP_IF_MATCH: %s, item has etag: %s)", path, etag, item.etag)
match = environ.get("HTTP_IF_NONE_MATCH", "") == "*"
if item and match:
# Creation asked but item found: item can't be replaced
logger.warning("Precondition failed on PUT request for %r (HTTP_IF_NONE_MATCH: *, creation requested but item found with etag: %s)", path, item.etag)
return httputils.PRECONDITION_FAILED
if match:
logger.debug("Precondition passed on PUT request for %r (HTTP_IF_NONE_MATCH: *)", path)
if (tag != prepared_tag or
prepared_write_whole_collection != write_whole_collection):
@ -266,22 +242,9 @@ class ApplicationPartPut(ApplicationBase):
)
self._hook.notify(hook_notification_item)
except ValueError as e:
# return better matching HTTP result in case errno is provided and caught
errno_match = re.search("\\[Errno ([0-9]+)\\]", str(e))
if errno_match:
logger.error(
"Failed PUT request on %r (upload): %s", path, e, exc_info=True)
errno_e = int(errno_match.group(1))
if errno_e == errno.ENOSPC:
return httputils.INSUFFICIENT_STORAGE
elif errno_e in [errno.EPERM, errno.EACCES]:
return httputils.FORBIDDEN
else:
return httputils.INTERNAL_SERVER_ERROR
else:
logger.warning(
"Bad PUT request on %r (upload): %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
logger.warning(
"Bad PUT request on %r (upload): %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
headers = {"ETag": etag}
return client.CREATED, headers, None
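As an aside, the If-Match/If-None-Match rules enforced above reduce to a small predicate; this is a simplified sketch, not the handler itself:

from typing import Optional

def put_preconditions_pass(item_etag: Optional[str],
                           if_match: str, if_none_match: str) -> bool:
    # item_etag is None if the item does not exist on the server.
    if if_match and item_etag is None:
        return False   # If-Match given, but the item has been removed
    if if_match and item_etag != if_match:
        return False   # the item changed since the client last saw it
    if if_none_match == "*" and item_etag is not None:
        return False   # creation requested, but the item already exists
    return True

assert put_preconditions_pass(None, "", "*")             # fresh create
assert not put_preconditions_pass('"abc"', "", "*")      # create conflict
assert not put_preconditions_pass('"abc"', '"old"', "")  # stale update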
View file
@ -2,11 +2,7 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Pieter Hijma <pieterhijma@users.noreply.github.com>
# Copyright © 2024-2024 Ray <ray@react0r.com>
# Copyright © 2024-2024 Georgiy <metallerok@gmail.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -28,11 +24,10 @@ import posixpath
import socket
import xml.etree.ElementTree as ET
from http import client
from typing import (Callable, Iterable, Iterator, List, Optional, Sequence,
Tuple, Union)
from typing import (Any, Callable, Iterable, Iterator, List, Optional,
Sequence, Tuple, Union)
from urllib.parse import unquote, urlparse
import vobject
import vobject.base
from vobject.base import ContentLine
@ -43,110 +38,11 @@ from radicale.item import filter as radicale_filter
from radicale.log import logger
def free_busy_report(base_prefix: str, path: str, xml_request: Optional[ET.Element],
collection: storage.BaseCollection, encoding: str,
unlock_storage_fn: Callable[[], None],
max_occurrence: int
) -> Tuple[int, Union[ET.Element, str]]:
# NOTE: this function returns both an Element and a string because
# free-busy reports are an edge-case on the return type according
# to the spec.
multistatus = ET.Element(xmlutils.make_clark("D:multistatus"))
if xml_request is None:
return client.MULTI_STATUS, multistatus
root = xml_request
if (root.tag == xmlutils.make_clark("C:free-busy-query") and
collection.tag != "VCALENDAR"):
logger.warning("Invalid REPORT method %r on %r requested",
xmlutils.make_human_tag(root.tag), path)
return client.FORBIDDEN, xmlutils.webdav_error("D:supported-report")
time_range_element = root.find(xmlutils.make_clark("C:time-range"))
assert isinstance(time_range_element, ET.Element)
# Build a single filter from the free busy query for retrieval
# TODO: filter for VFREEBUSY in addition to VEVENT but
# test_filter doesn't support that yet.
vevent_cf_element = ET.Element(xmlutils.make_clark("C:comp-filter"),
attrib={'name': 'VEVENT'})
vevent_cf_element.append(time_range_element)
vcalendar_cf_element = ET.Element(xmlutils.make_clark("C:comp-filter"),
attrib={'name': 'VCALENDAR'})
vcalendar_cf_element.append(vevent_cf_element)
filter_element = ET.Element(xmlutils.make_clark("C:filter"))
filter_element.append(vcalendar_cf_element)
filters = (filter_element,)
# First pull from storage
retrieved_items = list(collection.get_filtered(filters))
# !!! Don't access storage after this !!!
unlock_storage_fn()
cal = vobject.iCalendar()
collection_tag = collection.tag
while retrieved_items:
# Second filtering before evaluating occurrences.
# ``item.vobject_item`` might be accessed during filtering.
# Don't keep reference to ``item``, because VObject requires a lot of
# memory.
item, filter_matched = retrieved_items.pop(0)
if not filter_matched:
try:
if not test_filter(collection_tag, item, filter_element):
continue
except ValueError as e:
raise ValueError("Failed to free-busy filter item %r from %r: %s" %
(item.href, collection.path, e)) from e
except Exception as e:
raise RuntimeError("Failed to free-busy filter item %r from %r: %s" %
(item.href, collection.path, e)) from e
fbtype = None
if item.component_name == 'VEVENT':
transp = getattr(item.vobject_item.vevent, 'transp', None)
if transp and transp.value != 'OPAQUE':
continue
status = getattr(item.vobject_item.vevent, 'status', None)
if not status or status.value == 'CONFIRMED':
fbtype = 'BUSY'
elif status.value == 'CANCELLED':
fbtype = 'FREE'
elif status.value == 'TENTATIVE':
fbtype = 'BUSY-TENTATIVE'
else:
# Could do fbtype = status.value for x-name, I prefer this
fbtype = 'BUSY'
# TODO: coalesce overlapping periods
if max_occurrence > 0:
n_occurrences = max_occurrence+1
else:
n_occurrences = 0
occurrences = radicale_filter.time_range_fill(item.vobject_item,
time_range_element,
"VEVENT",
n=n_occurrences)
if max_occurrence > 0 and len(occurrences) >= n_occurrences:
raise ValueError("FREEBUSY occurrences limit of {} hit"
.format(max_occurrence))
for occurrence in occurrences:
vfb = cal.add('vfreebusy')
vfb.add('dtstamp').value = item.vobject_item.vevent.dtstamp.value
vfb.add('dtstart').value, vfb.add('dtend').value = occurrence
if fbtype:
vfb.add('fbtype').value = fbtype
return (client.OK, cal.serialize())
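For illustration, the filter synthesized above corresponds to XML of roughly the following shape; this sketch rebuilds it with plain ElementTree and made-up time-range values:

import xml.etree.ElementTree as ET

CALDAV = "urn:ietf:params:xml:ns:caldav"

time_range = ET.Element(f"{{{CALDAV}}}time-range",
                        attrib={"start": "20240101T000000Z",
                                "end": "20240201T000000Z"})
vevent_cf = ET.Element(f"{{{CALDAV}}}comp-filter", attrib={"name": "VEVENT"})
vevent_cf.append(time_range)
vcalendar_cf = ET.Element(f"{{{CALDAV}}}comp-filter",
                          attrib={"name": "VCALENDAR"})
vcalendar_cf.append(vevent_cf)
filter_element = ET.Element(f"{{{CALDAV}}}filter")
filter_element.append(vcalendar_cf)
print(ET.tostring(filter_element).decode())
# <ns0:filter ...><ns0:comp-filter name="VCALENDAR">
#   <ns0:comp-filter name="VEVENT"><ns0:time-range start="..." end="..."/>
# </ns0:comp-filter></ns0:filter>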
def xml_report(base_prefix: str, path: str, xml_request: Optional[ET.Element],
collection: storage.BaseCollection, encoding: str,
unlock_storage_fn: Callable[[], None]
) -> Tuple[int, ET.Element]:
"""Read and answer REPORT requests that return XML.
"""Read and answer REPORT requests.
Read rfc3253-3.6 for info.
@ -175,11 +71,7 @@ def xml_report(base_prefix: str, path: str, xml_request: Optional[ET.Element],
xmlutils.make_human_tag(root.tag), path)
return client.FORBIDDEN, xmlutils.webdav_error("D:supported-report")
props: Union[ET.Element, List]
if root.find(xmlutils.make_clark("D:prop")) is not None:
props = root.find(xmlutils.make_clark("D:prop")) # type: ignore[assignment]
else:
props = []
props: Union[ET.Element, List] = root.find(xmlutils.make_clark("D:prop")) or []
hreferences: Iterable[str]
if root.tag in (
@ -265,7 +157,7 @@ def xml_report(base_prefix: str, path: str, xml_request: Optional[ET.Element],
element.text = item.serialize()
expand = prop.find(xmlutils.make_clark("C:expand"))
if expand is not None and item.component_name == 'VEVENT':
if expand is not None:
start = expand.get('start')
end = expand.get('end')
@ -304,184 +196,103 @@ def _expand(
start: datetime.datetime,
end: datetime.datetime,
) -> ET.Element:
vevent_component: vobject.base.Component = copy.copy(item.vobject_item)
# Split the vevents included in the component into one that contains the
# recurrence information and others that contain a recurrence id to
# override instances.
vevent_recurrence, vevents_overridden = _split_overridden_vevents(vevent_component)
dt_format = '%Y%m%dT%H%M%SZ'
all_day_event = False
if type(vevent_recurrence.dtstart.value) is datetime.date:
# If an event comes to us with a dtstart specified as a date
if type(item.vobject_item.vevent.dtstart.value) is datetime.date:
# If an event comes to us with a dt_start specified as a date
# then in the response we return the date, not datetime
dt_format = '%Y%m%d'
all_day_event = True
# In case of dates, we need to remove timezone information since
# rruleset.between computes with datetimes without timezone information
start = start.replace(tzinfo=None)
end = end.replace(tzinfo=None)
for vevent in vevents_overridden:
_strip_single_event(vevent, dt_format)
duration = None
if hasattr(vevent_recurrence, "dtend"):
duration = vevent_recurrence.dtend.value - vevent_recurrence.dtstart.value
rruleset = None
if hasattr(vevent_recurrence, 'rrule'):
rruleset = vevent_recurrence.getrruleset()
expanded_item, rruleset = _make_vobject_expanded_item(item, dt_format)
if rruleset:
# This function uses datetimes internally without timezone info for dates
recurrences = rruleset.between(start, end, inc=True)
_strip_component(vevent_component)
_strip_single_event(vevent_recurrence, dt_format)
is_component_filled: bool = False
i_overridden = 0
expanded: vobject.base.Component = copy.copy(expanded_item.vobject_item)
is_expanded_filled: bool = False
for recurrence_dt in recurrences:
recurrence_utc = recurrence_dt.astimezone(datetime.timezone.utc)
i_overridden, vevent = _find_overridden(i_overridden, vevents_overridden, recurrence_utc, dt_format)
if not vevent:
# We did not find an overridden instance, so create a new one
vevent = copy.deepcopy(vevent_recurrence)
vevent = copy.deepcopy(expanded.vevent)
vevent.recurrence_id = ContentLine(
name='RECURRENCE-ID',
value=recurrence_utc.strftime(dt_format), params={}
)
# For all day events, the system timezone may influence the
# results, so use recurrence_dt
recurrence_id = recurrence_dt if all_day_event else recurrence_utc
vevent.recurrence_id = ContentLine(
name='RECURRENCE-ID',
value=recurrence_id, params={}
)
_convert_to_utc(vevent, 'recurrence_id', dt_format)
vevent.dtstart = ContentLine(
name='DTSTART',
value=recurrence_id.strftime(dt_format), params={}
)
if duration:
vevent.dtend = ContentLine(
name='DTEND',
value=(recurrence_id + duration).strftime(dt_format), params={}
)
if not is_component_filled:
vevent_component.vevent = vevent
is_component_filled = True
if is_expanded_filled is False:
expanded.vevent = vevent
is_expanded_filled = True
else:
vevent_component.add(vevent)
expanded.add(vevent)
element.text = vevent_component.serialize()
element.text = expanded.serialize()
else:
element.text = expanded_item.vobject_item.serialize()
return element
def _convert_timezone(vevent: vobject.icalendar.RecurringComponent,
name_prop: str,
name_content_line: str):
prop = getattr(vevent, name_prop, None)
if prop:
if type(prop.value) is datetime.date:
date_time = datetime.datetime.fromordinal(
prop.value.toordinal()
def _make_vobject_expanded_item(
item: radicale_item.Item,
dt_format: str,
) -> Tuple[radicale_item.Item, Optional[Any]]:
# https://www.rfc-editor.org/rfc/rfc4791#section-9.6.5
# The returned calendar components MUST NOT use recurrence
# properties (i.e., EXDATE, EXRULE, RDATE, and RRULE) and MUST NOT
# have reference to or include VTIMEZONE components. Date and local
# time with reference to time zone information MUST be converted
# into date with UTC time.
item = copy.copy(item)
vevent = item.vobject_item.vevent
if type(vevent.dtstart.value) is datetime.date:
start_utc = datetime.datetime.fromordinal(
vevent.dtstart.value.toordinal()
).replace(tzinfo=datetime.timezone.utc)
else:
start_utc = vevent.dtstart.value.astimezone(datetime.timezone.utc)
vevent.dtstart = ContentLine(name='DTSTART', value=start_utc, params=[])
dt_end = getattr(vevent, 'dtend', None)
if dt_end is not None:
if type(vevent.dtend.value) is datetime.date:
end_utc = datetime.datetime.fromordinal(
dt_end.value.toordinal()
).replace(tzinfo=datetime.timezone.utc)
else:
date_time = prop.value.astimezone(datetime.timezone.utc)
end_utc = dt_end.value.astimezone(datetime.timezone.utc)
setattr(vevent, name_prop, ContentLine(name=name_content_line, value=date_time, params=[]))
vevent.dtend = ContentLine(name='DTEND', value=end_utc, params={})
rruleset = None
if hasattr(item.vobject_item.vevent, 'rrule'):
rruleset = vevent.getrruleset()
def _convert_to_utc(vevent: vobject.icalendar.RecurringComponent,
name_prop: str,
dt_format: str):
prop = getattr(vevent, name_prop, None)
if prop:
setattr(vevent, name_prop, ContentLine(name=prop.name, value=prop.value.strftime(dt_format), params=[]))
# There is some strange behaviour when serializing native datetimes, so convert manually
vevent.dtstart.value = vevent.dtstart.value.strftime(dt_format)
if dt_end is not None:
vevent.dtend.value = vevent.dtend.value.strftime(dt_format)
def _strip_single_event(vevent: vobject.icalendar.RecurringComponent, dt_format: str) -> None:
_convert_timezone(vevent, 'dtstart', 'DTSTART')
_convert_timezone(vevent, 'dtend', 'DTEND')
_convert_timezone(vevent, 'recurrence_id', 'RECURRENCE-ID')
# There is some strange behaviour when serializing native datetimes, so convert manually
_convert_to_utc(vevent, 'dtstart', dt_format)
_convert_to_utc(vevent, 'dtend', dt_format)
_convert_to_utc(vevent, 'recurrence_id', dt_format)
try:
delattr(vevent, 'rrule')
delattr(vevent, 'exdate')
delattr(vevent, 'exrule')
delattr(vevent, 'rdate')
except AttributeError:
pass
def _strip_component(vevent: vobject.base.Component) -> None:
timezones_to_remove = []
for component in vevent.components():
for component in item.vobject_item.components():
if component.name == 'VTIMEZONE':
timezones_to_remove.append(component)
for timezone in timezones_to_remove:
vevent.remove(timezone)
item.vobject_item.remove(timezone)
try:
delattr(item.vobject_item.vevent, 'rrule')
delattr(item.vobject_item.vevent, 'exdate')
delattr(item.vobject_item.vevent, 'exrule')
delattr(item.vobject_item.vevent, 'rdate')
except AttributeError:
pass
def _split_overridden_vevents(
component: vobject.base.Component,
) -> Tuple[
vobject.icalendar.RecurringComponent,
List[vobject.icalendar.RecurringComponent]
]:
vevent_recurrence = None
vevents_overridden = []
for vevent in component.vevent_list:
if hasattr(vevent, 'recurrence_id'):
vevents_overridden += [vevent]
elif vevent_recurrence:
raise ValueError(
f"component with UID {vevent.uid} "
f"has more than one vevent with recurrence information"
)
else:
vevent_recurrence = vevent
if vevent_recurrence:
return (
vevent_recurrence, sorted(
vevents_overridden,
key=lambda vevent: vevent.recurrence_id.value
)
)
else:
raise ValueError(
f"component with UID {vevent.uid} "
f"does not have a vevent without a recurrence_id"
)
def _find_overridden(
start: int,
vevents: List[vobject.icalendar.RecurringComponent],
dt: datetime.datetime,
dt_format: str
) -> Tuple[int, Optional[vobject.icalendar.RecurringComponent]]:
for i in range(start, len(vevents)):
dt_event = datetime.datetime.strptime(
vevents[i].recurrence_id.value,
dt_format
).replace(tzinfo=datetime.timezone.utc)
if dt_event == dt:
return (i + 1, vevents[i])
return (start, None)
return item, rruleset
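To make the expansion step concrete, here is a hedged sketch of how an rruleset yields the occurrence datetimes that the code above turns into individual VEVENTs with their own RECURRENCE-ID; vobject's getrruleset() builds on dateutil, and the rule and bounds below are purely illustrative:

import datetime
from dateutil.rrule import rrulestr

rule = rrulestr("FREQ=WEEKLY;BYDAY=MO",
                dtstart=datetime.datetime(2024, 1, 1, 9, 0))
start = datetime.datetime(2024, 1, 1)
end = datetime.datetime(2024, 1, 31)
for occurrence in rule.between(start, end, inc=True):
    # Each occurrence becomes one expanded VEVENT with its own RECURRENCE-ID.
    print(occurrence.strftime("%Y%m%dT%H%M%SZ"))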
def xml_item_response(base_prefix: str, href: str,
@ -615,28 +426,13 @@ class ApplicationPartReport(ApplicationBase):
else:
assert item.collection is not None
collection = item.collection
if xml_content is not None and \
xml_content.tag == xmlutils.make_clark("C:free-busy-query"):
max_occurrence = self.configuration.get("reporting", "max_freebusy_occurrence")
try:
status, body = free_busy_report(
base_prefix, path, xml_content, collection, self._encoding,
lock_stack.close, max_occurrence)
except ValueError as e:
logger.warning(
"Bad REPORT request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
headers = {"Content-Type": "text/calendar; charset=%s" % self._encoding}
return status, headers, str(body)
else:
try:
status, xml_answer = xml_report(
base_prefix, path, xml_content, collection, self._encoding,
lock_stack.close)
except ValueError as e:
logger.warning(
"Bad REPORT request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
headers = {"Content-Type": "text/xml; charset=%s" % self._encoding}
return status, headers, self._xml_response(xml_answer)
try:
status, xml_answer = xml_report(
base_prefix, path, xml_content, collection, self._encoding,
lock_stack.close)
except ValueError as e:
logger.warning(
"Bad REPORT request on %r: %s", path, e, exc_info=True)
return httputils.BAD_REQUEST
headers = {"Content-Type": "text/xml; charset=%s" % self._encoding}
return status, headers, self._xml_response(xml_answer)
View file
@ -3,7 +3,7 @@
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2022 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -29,83 +29,29 @@ Take a look at the class ``BaseAuth`` if you want to implement your own.
"""
import hashlib
import os
import threading
import time
from typing import List, Sequence, Set, Tuple, Union, final
from typing import Sequence, Tuple, Union
from radicale import config, types, utils
from radicale.log import logger
INTERNAL_TYPES: Sequence[str] = ("none", "remote_user", "http_x_remote_user",
"denyall",
"htpasswd",
"ldap",
"imap",
"oauth2",
"pam",
"dovecot")
CACHE_LOGIN_TYPES: Sequence[str] = (
"dovecot",
"ldap",
"htpasswd",
"imap",
"oauth2",
"pam",
)
INSECURE_IF_NO_LOOPBACK_TYPES: Sequence[str] = (
"remote_user",
"http_x_remote_user",
)
AUTH_SOCKET_FAMILY: Sequence[str] = ("AF_UNIX", "AF_INET", "AF_INET6")
"htpasswd")
def load(configuration: "config.Configuration") -> "BaseAuth":
"""Load the authentication module chosen in configuration."""
_type = configuration.get("auth", "type")
if _type == "none":
logger.warning("No user authentication is selected: '[auth] type=none' (INSECURE)")
elif _type == "denyall":
logger.warning("All user authentication is blocked by: '[auth] type=denyall'")
elif _type in INSECURE_IF_NO_LOOPBACK_TYPES:
sgi = os.environ.get('SERVER_GATEWAY_INTERFACE') or None
if not sgi:
hosts: List[Tuple[str, int]] = configuration.get("server", "hosts")
localhost_only = True
address_lo = []
address = []
for address_port in hosts:
if address_port[0] in ["localhost", "localhost6", "127.0.0.1", "::1"]:
address_lo.append(utils.format_address(address_port))
else:
address.append(utils.format_address(address_port))
localhost_only = False
if localhost_only is False:
logger.warning("User authentication '[auth] type=%s' is selected but server is not only listen on loopback address (potentially INSECURE): %s", _type, " ".join(address))
if configuration.get("auth", "type") == "none":
logger.warning("No user authentication is selected: '[auth] type=none' (insecure)")
if configuration.get("auth", "type") == "denyall":
logger.warning("All access is blocked by: '[auth] type=denyall'")
return utils.load_plugin(INTERNAL_TYPES, "auth", "Auth", BaseAuth,
configuration)
class BaseAuth:
_ldap_groups: Set[str] = set([])
_lc_username: bool
_uc_username: bool
_strip_domain: bool
_auth_delay: float
_failed_auth_delay: float
_type: str
_cache_logins: bool
_cache_successful: dict # login -> (digest, time_ns)
_cache_successful_logins_expiry: int
_cache_failed: dict # digest_failed -> (time_ns, login)
_cache_failed_logins_expiry: int
_cache_failed_logins_salt_ns: int # persistent over runtime
_lock: threading.Lock
def __init__(self, configuration: "config.Configuration") -> None:
"""Initialize BaseAuth.
@ -117,45 +63,6 @@ class BaseAuth:
"""
self.configuration = configuration
self._lc_username = configuration.get("auth", "lc_username")
self._uc_username = configuration.get("auth", "uc_username")
self._strip_domain = configuration.get("auth", "strip_domain")
logger.info("auth.strip_domain: %s", self._strip_domain)
logger.info("auth.lc_username: %s", self._lc_username)
logger.info("auth.uc_username: %s", self._uc_username)
if self._lc_username is True and self._uc_username is True:
raise RuntimeError("auth.lc_username and auth.uc_username cannot be enabled together")
self._auth_delay = configuration.get("auth", "delay")
logger.info("auth.delay: %f", self._auth_delay)
self._failed_auth_delay = 0
self._lock = threading.Lock()
# cache_successful_logins
self._cache_logins = configuration.get("auth", "cache_logins")
self._type = configuration.get("auth", "type")
if (self._type in CACHE_LOGIN_TYPES) or (self._cache_logins is False):
logger.info("auth.cache_logins: %s", self._cache_logins)
else:
logger.info("auth.cache_logins: %s (but not required for type '%s' and disabled therefore)", self._cache_logins, self._type)
self._cache_logins = False
if self._cache_logins is True:
self._cache_successful_logins_expiry = configuration.get("auth", "cache_successful_logins_expiry")
if self._cache_successful_logins_expiry < 0:
raise RuntimeError("self._cache_successful_logins_expiry cannot be < 0")
self._cache_failed_logins_expiry = configuration.get("auth", "cache_failed_logins_expiry")
if self._cache_failed_logins_expiry < 0:
raise RuntimeError("self._cache_failed_logins_expiry cannot be < 0")
logger.info("auth.cache_successful_logins_expiry: %s seconds", self._cache_successful_logins_expiry)
logger.info("auth.cache_failed_logins_expiry: %s seconds", self._cache_failed_logins_expiry)
# cache init
self._cache_successful = dict()
self._cache_failed = dict()
self._cache_failed_logins_salt_ns = time.time_ns()
def _cache_digest(self, login: str, password: str, salt: str) -> str:
h = hashlib.sha3_512()
h.update(salt.encode())
h.update(login.encode())
h.update(password.encode())
return str(h.digest())
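A brief illustration of how this salted digest behaves; the credentials and salt values here are made up:

import hashlib

def cache_digest(login: str, password: str, salt: str) -> str:
    h = hashlib.sha3_512()
    h.update(salt.encode())
    h.update(login.encode())
    h.update(password.encode())
    return str(h.digest())

# Same credentials and salt -> same digest; changing the salt (e.g. the
# per-run time_ns value) invalidates all previously computed digests.
assert cache_digest("alice", "secret", "1") == cache_digest("alice", "secret", "1")
assert cache_digest("alice", "secret", "1") != cache_digest("alice", "secret", "2")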
def get_external_login(self, environ: types.WSGIEnviron) -> Union[
Tuple[()], Tuple[str, str]]:
@ -183,132 +90,5 @@ class BaseAuth:
raise NotImplementedError
def _sleep_for_constant_exec_time(self, time_ns_begin: int):
"""Sleep some time to reach a constant execution time for failed logins
Independent of time required by external backend or used digest methods
Increase final execution time in case initial limit exceeded
See also issue 591
"""
time_delta = (time.time_ns() - time_ns_begin) / 1000 / 1000 / 1000
with self._lock:
# avoid that another thread is changing global value at the same time
failed_auth_delay = self._failed_auth_delay
failed_auth_delay_old = failed_auth_delay
if time_delta > failed_auth_delay:
# set new
failed_auth_delay = time_delta
# store globally
self._failed_auth_delay = failed_auth_delay
if (failed_auth_delay_old != failed_auth_delay):
logger.debug("Failed login constant execution time need increase of failed_auth_delay: %.9f -> %.9f sec", failed_auth_delay_old, failed_auth_delay)
# sleep == 0
else:
sleep = failed_auth_delay - time_delta
logger.debug("Failed login constant exection time alignment, sleeping: %.9f sec", sleep)
time.sleep(sleep)
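The effect can be demonstrated with a tiny self-contained sketch (simplified: no locking or logging): every failed attempt is padded to the slowest failure observed so far:

import time

failed_auth_delay = 0.0  # grows to the slowest failure seen so far

def pad_failed_login(time_begin: float) -> None:
    global failed_auth_delay
    elapsed = time.monotonic() - time_begin
    if elapsed > failed_auth_delay:
        failed_auth_delay = elapsed              # raise the global floor
    else:
        time.sleep(failed_auth_delay - elapsed)  # pad fast failures

t0 = time.monotonic()
time.sleep(0.05)       # simulate a slow backend failure
pad_failed_login(t0)   # floor is now ~0.05 s
t1 = time.monotonic()
pad_failed_login(t1)   # an instant failure is padded to ~0.05 s as well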
@final
def login(self, login: str, password: str) -> Tuple[str, str]:
time_ns_begin = time.time_ns()
result_from_cache = False
if self._lc_username:
login = login.lower()
if self._uc_username:
login = login.upper()
if self._strip_domain:
login = login.split('@')[0]
if self._cache_logins is True:
# time_ns is also used as salt
result = ""
digest = ""
time_ns = time.time_ns()
# cleanup failed login cache to avoid out-of-memory
cache_failed_entries = len(self._cache_failed)
if cache_failed_entries > 0:
logger.debug("Login failed cache investigation start (entries: %d)", cache_failed_entries)
self._lock.acquire()
cache_failed_cleanup = dict()
for digest in self._cache_failed:
(time_ns_cache, login_cache) = self._cache_failed[digest]
age_failed = int((time_ns - time_ns_cache) / 1000 / 1000 / 1000)
if age_failed > self._cache_failed_logins_expiry:
cache_failed_cleanup[digest] = (login_cache, age_failed)
cache_failed_cleanup_entries = len(cache_failed_cleanup)
logger.debug("Login failed cache cleanup start (entries: %d)", cache_failed_cleanup_entries)
if cache_failed_cleanup_entries > 0:
for digest in cache_failed_cleanup:
(login_cache, age_failed) = cache_failed_cleanup[digest]
logger.debug("Login failed cache entry for user+password expired: '%s' (age: %d > %d sec)", login_cache, age_failed, self._cache_failed_logins_expiry)
del self._cache_failed[digest]
self._lock.release()
logger.debug("Login failed cache investigation finished")
# check for cache failed login
digest_failed = login + ":" + self._cache_digest(login, password, str(self._cache_failed_logins_salt_ns))
if self._cache_failed.get(digest_failed):
# login+password found in cache "failed" -> shortcut return
(time_ns_cache, login_cache) = self._cache_failed[digest_failed]
age_failed = int((time_ns - time_ns_cache) / 1000 / 1000 / 1000)
logger.debug("Login failed cache entry for user+password found: '%s' (age: %d sec)", login_cache, age_failed)
self._sleep_for_constant_exec_time(time_ns_begin)
return ("", self._type + " / cached")
if self._cache_successful.get(login):
# login found in cache "successful"
(digest_cache, time_ns_cache) = self._cache_successful[login]
digest = self._cache_digest(login, password, str(time_ns_cache))
if digest == digest_cache:
age_success = int((time_ns - time_ns_cache) / 1000 / 1000 / 1000)
if age_success > self._cache_successful_logins_expiry:
logger.debug("Login successful cache entry for user+password found but expired: '%s' (age: %d > %d sec)", login, age_success, self._cache_successful_logins_expiry)
# delete expired success from cache
del self._cache_successful[login]
digest = ""
else:
logger.debug("Login successful cache entry for user+password found: '%s' (age: %d sec)", login, age_success)
result = login
result_from_cache = True
else:
logger.debug("Login successful cache entry for user+password not matching: '%s'", login)
else:
# login not found in cache, calculate always to avoid timing attacks
digest = self._cache_digest(login, password, str(time_ns))
if result == "":
# verify login+password via configured backend
logger.debug("Login verification for user+password via backend: '%s'", login)
result = self._login(login, password)
if result != "":
logger.debug("Login successful for user+password via backend: '%s'", login)
if digest == "":
# successful login, but expired, digest must be recalculated
digest = self._cache_digest(login, password, str(time_ns))
# store successful login in cache
self._lock.acquire()
self._cache_successful[login] = (digest, time_ns)
self._lock.release()
logger.debug("Login successful cache for user set: '%s'", login)
if self._cache_failed.get(digest_failed):
logger.debug("Login failed cache for user cleared: '%s'", login)
del self._cache_failed[digest_failed]
else:
logger.debug("Login failed for user+password via backend: '%s'", login)
self._lock.acquire()
self._cache_failed[digest_failed] = (time_ns, login)
self._lock.release()
logger.debug("Login failed cache for user set: '%s'", login)
if result_from_cache is True:
if result == "":
self._sleep_for_constant_exec_time(time_ns_begin)
return (result, self._type + " / cached")
else:
if result == "":
self._sleep_for_constant_exec_time(time_ns_begin)
return (result, self._type)
else:
# self._cache_logins is False
result = self._login(login, password)
if result == "":
self._sleep_for_constant_exec_time(time_ns_begin)
return (result, self._type)
def login(self, login: str, password: str) -> str:
return self._login(login, password).lower() if self._lc_username else self._login(login, password)

View file
# This file is part of Radicale Server - Calendar Server
# Copyright © 2014 Giel van Schijndel
# Copyright © 2019 (GalaxyMaster)
# Copyright © 2025-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import base64
import itertools
import os
import socket
from contextlib import closing
from radicale import auth
from radicale.log import logger
class Auth(auth.BaseAuth):
def __init__(self, configuration):
super().__init__(configuration)
self.timeout = 5
self.request_id_gen = itertools.count(1)
config_family = configuration.get("auth", "dovecot_connection_type")
if config_family == "AF_UNIX":
self.family = socket.AF_UNIX
self.address = configuration.get("auth", "dovecot_socket")
logger.info("auth dovecot socket: %r", self.address)
return
self.address = configuration.get("auth", "dovecot_host"), configuration.get("auth", "dovecot_port")
logger.warning("auth dovecot address: %r (INSECURE, credentials are transmitted in clear text)", self.address)
if config_family == "AF_INET":
self.family = socket.AF_INET
else:
self.family = socket.AF_INET6
def _login(self, login, password):
"""Validate credentials.
Check if the ``login``/``password`` pair is valid according to Dovecot.
This implementation communicates with a Dovecot server through the
Dovecot Authentication Protocol v1.1.
https://dovecot.org/doc/auth-protocol.txt
"""
logger.info("Authentication request (dovecot): '{}'".format(login))
if not login or not password:
return ""
with closing(socket.socket(
self.family,
socket.SOCK_STREAM)
) as sock:
try:
sock.settimeout(self.timeout)
sock.connect(self.address)
buf = bytes()
supported_mechs = []
done = False
seen_part = [0, 0, 0]
# Upon the initial connection we only care about the
# handshake, which is usually just around 100 bytes long,
# e.g.
#
# VERSION 1 2
# MECH PLAIN plaintext
# SPID 22901
# CUID 1
# COOKIE 2dbe4116a30fb4b8a8719f4448420af7
# DONE
#
# Hence, we try to read just once with a buffer big
# enough to hold all of it.
buf = sock.recv(1024)
while b'\n' in buf and not done:
line, buf = buf.split(b'\n', 1)
parts = line.split(b'\t')
first, parts = parts[0], parts[1:]
if first == b'VERSION':
if seen_part[0]:
logger.warning(
"Server presented multiple VERSION "
"tokens, ignoring"
)
continue
version = parts
logger.debug("Dovecot server version: '{}'".format(
(b'.'.join(version)).decode()
))
if int(version[0]) != 1:
logger.fatal(
"Only Dovecot 1.x versions are supported!"
)
return ""
seen_part[0] += 1
elif first == b'MECH':
supported_mechs.append(parts[0])
seen_part[1] += 1
elif first == b'DONE':
seen_part[2] += 1
if not (seen_part[0] and seen_part[1]):
logger.fatal(
"An unexpected end of the server "
"handshake received!"
)
return ""
done = True
if not done:
logger.fatal("Encountered a broken server handshake!")
return ""
logger.debug(
"Supported auth methods: '{}'"
.format((b"', '".join(supported_mechs)).decode())
)
if b'PLAIN' not in supported_mechs:
logger.info(
"Authentication method 'PLAIN' is not supported, "
"but is required!"
)
return ""
# Handshake
logger.debug("Sending auth handshake")
sock.send(b'VERSION\t1\t1\n')
sock.send(b'CPID\t%u\n' % os.getpid())
request_id = next(self.request_id_gen)
logger.debug(
"Authenticating with request id: '{}'"
.format(request_id)
)
sock.send(
b'AUTH\t%u\tPLAIN\tservice=radicale\tresp=%b\n' %
(
request_id, base64.b64encode(
b'\0%b\0%b' %
(login.encode(), password.encode())
)
)
)
logger.debug("Processing auth response")
buf = sock.recv(1024)
line = buf.split(b'\n', 1)[0]
parts = line.split(b'\t')[:2]
resp, reply_id, params = (
parts[0], int(parts[1]),
dict(part.split('=', 1) for part in parts[2:])
)
logger.debug(
"Auth response: result='{}', id='{}', parameters={}"
.format(resp.decode(), reply_id, params)
)
if request_id != reply_id:
logger.fatal(
"Unexpected reply ID {} received (expected {})"
.format(
reply_id, request_id
)
)
return ""
if resp == b'OK':
return login
except socket.error as e:
logger.fatal(
"Failed to communicate with Dovecot: %s" %
(e)
)
return ""
View file
@ -3,7 +3,7 @@
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -36,7 +36,7 @@ pointed to by the ``htpasswd_filename`` configuration value while assuming
the password encryption method specified via the ``htpasswd_encryption``
configuration value.
The following htpasswd password encryption methods are supported by Radicale
out-of-the-box:
- plain-text (created by htpasswd -p ...) -- INSECURE
- MD5-APR1 (htpasswd -m ...) -- htpasswd's default method, INSECURE
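For orientation, an htpasswd file holds one login:digest pair per line; the digests below are truncated placeholders, not working hashes (the stated lengths match the autodetection rules further down):

# comment lines and blank lines are ignored
# MD5-APR1 (37 characters in total)
alice:$apr1$...
# bcrypt (60 characters)
bob:$2y$...
# SHA-256 (63 characters)
carol:$5$...
# SHA-512 (106 characters)
dave:$6$...
# plain text -- INSECURE
eve:secret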
@ -50,11 +50,7 @@ When bcrypt is installed:
import functools
import hmac
import os
import re
import threading
import time
from typing import Any, Tuple
from typing import Any
from passlib.hash import apr_md5_crypt, sha256_crypt, sha512_crypt
@ -65,202 +61,72 @@ class Auth(auth.BaseAuth):
_filename: str
_encoding: str
_htpasswd: dict # login -> digest
_htpasswd_mtime_ns: int
_htpasswd_size: int
_htpasswd_ok: bool
_htpasswd_not_ok_time: float
_htpasswd_not_ok_reminder_seconds: int
_htpasswd_bcrypt_use: int
_htpasswd_cache: bool
_has_bcrypt: bool
_encryption: str
_lock: threading.Lock
def __init__(self, configuration: config.Configuration) -> None:
super().__init__(configuration)
self._filename = configuration.get("auth", "htpasswd_filename")
logger.info("auth htpasswd file: %r", self._filename)
self._encoding = configuration.get("encoding", "stock")
logger.info("auth htpasswd file encoding: %r", self._encoding)
self._htpasswd_cache = configuration.get("auth", "htpasswd_cache")
logger.info("auth htpasswd cache: %s", self._htpasswd_cache)
self._encryption: str = configuration.get("auth", "htpasswd_encryption")
logger.info("auth htpasswd encryption is 'radicale.auth.htpasswd_encryption.%s'", self._encryption)
encryption: str = configuration.get("auth", "htpasswd_encryption")
self._has_bcrypt = False
self._htpasswd_ok = False
self._htpasswd_not_ok_reminder_seconds = 60 # currently hardcoded
(self._htpasswd_ok, self._htpasswd_bcrypt_use, self._htpasswd, self._htpasswd_size, self._htpasswd_mtime_ns) = self._read_htpasswd(True, False)
self._lock = threading.Lock()
logger.info("auth htpasswd encryption is 'radicale.auth.htpasswd_encryption.%s'", encryption)
if self._encryption == "plain":
if encryption == "plain":
self._verify = self._plain
elif self._encryption == "md5":
elif encryption == "md5":
self._verify = self._md5apr1
elif self._encryption == "sha256":
elif encryption == "sha256":
self._verify = self._sha256
elif self._encryption == "sha512":
elif encryption == "sha512":
self._verify = self._sha512
elif self._encryption == "bcrypt" or self._encryption == "autodetect":
elif encryption == "bcrypt" or encryption == "autodetect":
try:
import bcrypt
except ImportError as e:
if (self._encryption == "autodetect") and (self._htpasswd_bcrypt_use == 0):
logger.warning("auth htpasswd encryption is 'radicale.auth.htpasswd_encryption.%s' which can require bycrypt module, but currently no entries found", self._encryption)
else:
raise RuntimeError(
"The htpasswd encryption method 'bcrypt' or 'autodetect' requires "
"the bcrypt module (entries found: %d)." % self._htpasswd_bcrypt_use) from e
else:
self._has_bcrypt = True
if self._encryption == "autodetect":
if self._htpasswd_bcrypt_use == 0:
logger.info("auth htpasswd encryption is 'radicale.auth.htpasswd_encryption.%s' and bycrypt module found, but currently not required", self._encryption)
else:
logger.info("auth htpasswd encryption is 'radicale.auth.htpasswd_encryption.%s' and bycrypt module found (bcrypt entries found: %d)", self._encryption, self._htpasswd_bcrypt_use)
if self._encryption == "bcrypt":
raise RuntimeError(
"The htpasswd encryption method 'bcrypt' or 'autodetect' requires "
"the bcrypt module.") from e
if encryption == "bcrypt":
self._verify = functools.partial(self._bcrypt, bcrypt)
else:
self._verify = self._autodetect
if self._htpasswd_bcrypt_use:
self._verify_bcrypt = functools.partial(self._bcrypt, bcrypt)
self._verify_bcrypt = functools.partial(self._bcrypt, bcrypt)
else:
raise RuntimeError("The htpasswd encryption method %r is not "
"supported." % self._encryption)
"supported." % encryption)
def _plain(self, hash_value: str, password: str) -> tuple[str, bool]:
def _plain(self, hash_value: str, password: str) -> bool:
"""Check if ``hash_value`` and ``password`` match, plain method."""
return ("PLAIN", hmac.compare_digest(hash_value.encode(), password.encode()))
return hmac.compare_digest(hash_value.encode(), password.encode())
def _plain_fallback(self, method_orig, hash_value: str, password: str) -> tuple[str, bool]:
"""Check if ``hash_value`` and ``password`` match, plain method / fallback in case of hash length is not matching on autodetection."""
info = "PLAIN/fallback as hash length not matching for " + method_orig + ": " + str(len(hash_value))
return (info, hmac.compare_digest(hash_value.encode(), password.encode()))
def _bcrypt(self, bcrypt: Any, hash_value: str, password: str) -> bool:
return bcrypt.checkpw(password=password.encode('utf-8'), hashed_password=hash_value.encode())
def _bcrypt(self, bcrypt: Any, hash_value: str, password: str) -> tuple[str, bool]:
if self._encryption == "autodetect" and len(hash_value) != 60:
return self._plain_fallback("BCRYPT", hash_value, password)
else:
return ("BCRYPT", bcrypt.checkpw(password=password.encode('utf-8'), hashed_password=hash_value.encode()))
def _md5apr1(self, hash_value: str, password: str) -> bool:
return apr_md5_crypt.verify(password, hash_value.strip())
def _md5apr1(self, hash_value: str, password: str) -> tuple[str, bool]:
if self._encryption == "autodetect" and len(hash_value) != 37:
return self._plain_fallback("MD5-APR1", hash_value, password)
else:
return ("MD5-APR1", apr_md5_crypt.verify(password, hash_value.strip()))
def _sha256(self, hash_value: str, password: str) -> bool:
return sha256_crypt.verify(password, hash_value.strip())
def _sha256(self, hash_value: str, password: str) -> tuple[str, bool]:
if self._encryption == "autodetect" and len(hash_value) != 63:
return self._plain_fallback("SHA-256", hash_value, password)
else:
return ("SHA-256", sha256_crypt.verify(password, hash_value.strip()))
def _sha512(self, hash_value: str, password: str) -> bool:
return sha512_crypt.verify(password, hash_value.strip())
def _sha512(self, hash_value: str, password: str) -> tuple[str, bool]:
if self._encryption == "autodetect" and len(hash_value) != 106:
return self._plain_fallback("SHA-512", hash_value, password)
else:
return ("SHA-512", sha512_crypt.verify(password, hash_value.strip()))
def _autodetect(self, hash_value: str, password: str) -> tuple[str, bool]:
if hash_value.startswith("$apr1$", 0, 6):
def _autodetect(self, hash_value: str, password: str) -> bool:
if hash_value.startswith("$apr1$", 0, 6) and len(hash_value) == 37:
# MD5-APR1
return self._md5apr1(hash_value, password)
elif re.match(r"^\$2(a|b|x|y)?\$", hash_value):
elif hash_value.startswith("$2y$", 0, 4) and len(hash_value) == 60:
# BCRYPT
return self._verify_bcrypt(hash_value, password)
elif hash_value.startswith("$5$", 0, 3):
elif hash_value.startswith("$5$", 0, 3) and len(hash_value) == 63:
# SHA-256
return self._sha256(hash_value, password)
elif hash_value.startswith("$6$", 0, 3):
elif hash_value.startswith("$6$", 0, 3) and len(hash_value) == 106:
# SHA-512
return self._sha512(hash_value, password)
else:
# assumed plaintext
return self._plain(hash_value, password)
def _read_htpasswd(self, init: bool, suppress: bool) -> Tuple[bool, int, dict, int, int]:
"""Read htpasswd file
init == True: stop on error
init == False: warn/skip on error and set mark to log reminder every interval
suppress == True: suppress warnings, change info to debug (used in non-caching mode)
suppress == False: do not suppress warnings (used in caching mode)
"""
htpasswd_ok = True
bcrypt_use = 0
if (init is True) or (suppress is True):
info = "Read"
else:
info = "Re-read"
if suppress is False:
logger.info("%s content of htpasswd file start: %r", info, self._filename)
else:
logger.debug("%s content of htpasswd file start: %r", info, self._filename)
htpasswd: dict[str, str] = dict()
entries = 0
duplicates = 0
errors = 0
try:
with open(self._filename, encoding=self._encoding) as f:
line_num = 0
for line in f:
line_num += 1
line = line.rstrip("\n")
if line.lstrip() and not line.lstrip().startswith("#"):
try:
login, digest = line.split(":", maxsplit=1)
skip = False
if login == "" or digest == "":
if init is True:
raise ValueError("htpasswd file contains problematic line not matching <login>:<digest> in line: %d" % line_num)
else:
errors += 1
logger.warning("htpasswd file contains problematic line not matching <login>:<digest> in line: %d (ignored)", line_num)
htpasswd_ok = False
skip = True
else:
if htpasswd.get(login):
duplicates += 1
if init is True:
raise ValueError("htpasswd file contains duplicate login: '%s'", login, line_num)
else:
logger.warning("htpasswd file contains duplicate login: '%s' (line: %d / ignored)", login, line_num)
htpasswd_ok = False
skip = True
else:
if re.match(r"^\$2(a|b|x|y)?\$", digest) and len(digest) == 60:
if init is True:
bcrypt_use += 1
else:
if self._has_bcrypt is False:
logger.warning("htpasswd file contains bcrypt digest login: '%s' (line: %d / ignored because module is not loaded)", login, line_num)
skip = True
htpasswd_ok = False
if skip is False:
htpasswd[login] = digest
entries += 1
except ValueError as e:
if init is True:
raise RuntimeError("Invalid htpasswd file %r: %s" % (self._filename, e)) from e
except OSError as e:
if init is True:
raise RuntimeError("Failed to load htpasswd file %r: %s" % (self._filename, e)) from e
else:
logger.warning("Failed to load htpasswd file on re-read: %r" % self._filename)
htpasswd_ok = False
htpasswd_size = os.stat(self._filename).st_size
htpasswd_mtime_ns = os.stat(self._filename).st_mtime_ns
if suppress is False:
logger.info("%s content of htpasswd file done: %r (entries: %d, duplicates: %d, errors: %d)", info, self._filename, entries, duplicates, errors)
else:
logger.debug("%s content of htpasswd file done: %r (entries: %d, duplicates: %d, errors: %d)", info, self._filename, entries, duplicates, errors)
if htpasswd_ok is True:
self._htpasswd_not_ok_time = 0
else:
self._htpasswd_not_ok_time = time.time()
return (htpasswd_ok, bcrypt_use, htpasswd, htpasswd_size, htpasswd_mtime_ns)
def _login(self, login: str, password: str) -> str:
"""Validate credentials.
@ -268,52 +134,30 @@ class Auth(auth.BaseAuth):
hash (encrypted password) and check hash against password,
using the method specified in the Radicale config.
Optional: the content of the file is cached and live updates will be detected by
comparing mtime_ns and size
The content of the file is not cached because reading is generally a
very cheap operation, and it's useful to get live updates of the
htpasswd file.
"""
login_ok = False
digest: str
if self._htpasswd_cache is True:
# check and re-read file if required
with self._lock:
htpasswd_size = os.stat(self._filename).st_size
htpasswd_mtime_ns = os.stat(self._filename).st_mtime_ns
if (htpasswd_size != self._htpasswd_size) or (htpasswd_mtime_ns != self._htpasswd_mtime_ns):
(self._htpasswd_ok, self._htpasswd_bcrypt_use, self._htpasswd, self._htpasswd_size, self._htpasswd_mtime_ns) = self._read_htpasswd(False, False)
self._htpasswd_not_ok_time = 0
# log reminder of problematic file every interval
current_time = time.time()
if (self._htpasswd_ok is False):
if (self._htpasswd_not_ok_time > 0):
if (current_time - self._htpasswd_not_ok_time) > self._htpasswd_not_ok_reminder_seconds:
logger.warning("htpasswd file still contains issues (REMINDER, check warnings in the past): %r" % self._filename)
self._htpasswd_not_ok_time = current_time
else:
self._htpasswd_not_ok_time = current_time
if self._htpasswd.get(login):
digest = self._htpasswd[login]
login_ok = True
else:
# read file on every request
(htpasswd_ok, htpasswd_bcrypt_use, htpasswd, htpasswd_size, htpasswd_mtime_ns) = self._read_htpasswd(False, True)
if htpasswd.get(login):
digest = htpasswd[login]
login_ok = True
if login_ok is True:
try:
(method, password_ok) = self._verify(digest, password)
except ValueError as e:
logger.error("Login verification failed for user: '%s' (htpasswd/%s) with errror '%s'", login, self._encryption, e)
return ""
if password_ok:
logger.debug("Login verification successful for user: '%s' (htpasswd/%s/%s)", login, self._encryption, method)
return login
else:
logger.warning("Login verification failed for user: '%s' (htpasswd/%s/%s)", login, self._encryption, method)
else:
logger.warning("Login verification user not found (htpasswd): '%s'", login)
try:
with open(self._filename, encoding=self._encoding) as f:
for line in f:
line = line.rstrip("\n")
if line.lstrip() and not line.lstrip().startswith("#"):
try:
hash_login, hash_value = line.split(
":", maxsplit=1)
# Always compare both login and password to avoid
# timing attacks, see #591.
login_ok = hmac.compare_digest(
hash_login.encode(), login.encode())
password_ok = self._verify(hash_value, password)
if login_ok and password_ok:
return login
except ValueError as e:
raise RuntimeError("Invalid htpasswd file %r: %s" %
(self._filename, e)) from e
except OSError as e:
raise RuntimeError("Failed to load htpasswd file %r: %s" %
(self._filename, e)) from e
return ""
View file
@ -2,7 +2,7 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
View file
@ -1,73 +0,0 @@
# RadicaleIMAP IMAP authentication plugin for Radicale.
# Copyright © 2017, 2020 Unrud <unrud@outlook.com>
# Copyright © 2025-2025 Peter Bieringer <pb@bieringer.de>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import imaplib
import ssl
from radicale import auth
from radicale.log import logger
class Auth(auth.BaseAuth):
"""Authenticate user with IMAP."""
def __init__(self, configuration) -> None:
super().__init__(configuration)
self._host, self._port = self.configuration.get("auth", "imap_host")
logger.info("auth imap host: %r", self._host)
self._security = self.configuration.get("auth", "imap_security")
if self._security == "none":
logger.warning("auth imap security: %s (INSECURE, credentials are transmitted in clear text)", self._security)
else:
logger.info("auth imap security: %s", self._security)
if self._security == "tls":
if self._port is None:
self._port = 993
logger.info("auth imap port (autoselected): %d", self._port)
else:
logger.info("auth imap port: %d", self._port)
else:
if self._port is None:
self._port = 143
logger.info("auth imap port (autoselected): %d", self._port)
else:
logger.info("auth imap port: %d", self._port)
def _login(self, login, password) -> str:
try:
connection: imaplib.IMAP4 | imaplib.IMAP4_SSL
if self._security == "tls":
connection = imaplib.IMAP4_SSL(
host=self._host, port=self._port,
ssl_context=ssl.create_default_context())
else:
connection = imaplib.IMAP4(host=self._host, port=self._port)
if self._security == "starttls":
connection.starttls(ssl.create_default_context())
try:
connection.authenticate(
"PLAIN",
lambda _: "{0}\x00{0}\x00{1}".format(login, password).encode(),
)
except imaplib.IMAP4.error as e:
logger.warning("IMAP authentication failed for user %r: %s", login, e, exc_info=False)
return ""
connection.logout()
return login
except (OSError, imaplib.IMAP4.error) as e:
logger.error("Failed to communicate with IMAP server %r: %s" % ("[%s]:%d" % (self._host, self._port), e))
return ""
View file
@ -1,269 +0,0 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2022-2024 Peter Varkoly
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Authentication backend that checks credentials with a LDAP server.
The following parameters are needed in the configuration:
ldap_uri The LDAP URL to the server like ldap://localhost
ldap_base The baseDN of the LDAP server
ldap_reader_dn The DN of a LDAP user with read access to get the user accounts
ldap_secret The password of the ldap_reader_dn
ldap_secret_file The path of the file containing the password of the ldap_reader_dn
ldap_filter The search filter to find the user to authenticate by the username
ldap_user_attribute The attribute to be used as username after authentication
ldap_groups_attribute The attribute containing group memberships in the LDAP user entry
The following parameters control SSL connections:
ldap_use_ssl          Whether the connection to the LDAP server uses SSL
ldap_ssl_verify_mode  The certificate verification mode. NONE, OPTIONAL, default is REQUIRED
ldap_ssl_ca_file      The path of the CA certificate file used to verify the server certificate
"""
import ssl
from radicale import auth, config
from radicale.log import logger
class Auth(auth.BaseAuth):
_ldap_uri: str
_ldap_base: str
_ldap_reader_dn: str
_ldap_secret: str
_ldap_filter: str
_ldap_attributes: list[str] = []
_ldap_user_attr: str
_ldap_groups_attr: str
_ldap_module_version: int = 3
_ldap_use_ssl: bool = False
_ldap_ssl_verify_mode: int = ssl.CERT_REQUIRED
_ldap_ssl_ca_file: str = ""
def __init__(self, configuration: config.Configuration) -> None:
super().__init__(configuration)
try:
import ldap3
self.ldap3 = ldap3
except ImportError:
try:
import ldap
self._ldap_module_version = 2
self.ldap = ldap
except ImportError as e:
raise RuntimeError("LDAP authentication requires the ldap3 module") from e
self._ldap_ignore_attribute_create_modify_timestamp = configuration.get("auth", "ldap_ignore_attribute_create_modify_timestamp")
if self._ldap_ignore_attribute_create_modify_timestamp:
self.ldap3.utils.config._ATTRIBUTES_EXCLUDED_FROM_CHECK.extend(['createTimestamp', 'modifyTimestamp'])
logger.info("auth.ldap_ignore_attribute_create_modify_timestamp applied")
self._ldap_uri = configuration.get("auth", "ldap_uri")
self._ldap_base = configuration.get("auth", "ldap_base")
self._ldap_reader_dn = configuration.get("auth", "ldap_reader_dn")
self._ldap_secret = configuration.get("auth", "ldap_secret")
self._ldap_filter = configuration.get("auth", "ldap_filter")
self._ldap_user_attr = configuration.get("auth", "ldap_user_attribute")
self._ldap_groups_attr = configuration.get("auth", "ldap_groups_attribute")
ldap_secret_file_path = configuration.get("auth", "ldap_secret_file")
if ldap_secret_file_path:
with open(ldap_secret_file_path, 'r') as file:
self._ldap_secret = file.read().rstrip('\n')
if self._ldap_module_version == 3:
self._ldap_use_ssl = configuration.get("auth", "ldap_use_ssl")
if self._ldap_use_ssl:
self._ldap_ssl_ca_file = configuration.get("auth", "ldap_ssl_ca_file")
tmp = configuration.get("auth", "ldap_ssl_verify_mode")
if tmp == "NONE":
self._ldap_ssl_verify_mode = ssl.CERT_NONE
elif tmp == "OPTIONAL":
self._ldap_ssl_verify_mode = ssl.CERT_OPTIONAL
logger.info("auth.ldap_uri : %r" % self._ldap_uri)
logger.info("auth.ldap_base : %r" % self._ldap_base)
logger.info("auth.ldap_reader_dn : %r" % self._ldap_reader_dn)
logger.info("auth.ldap_filter : %r" % self._ldap_filter)
if self._ldap_user_attr:
logger.info("auth.ldap_user_attribute : %r" % self._ldap_user_attr)
else:
logger.info("auth.ldap_user_attribute : (not provided)")
if self._ldap_groups_attr:
logger.info("auth.ldap_groups_attribute: %r" % self._ldap_groups_attr)
else:
logger.info("auth.ldap_groups_attribute: (not provided)")
if ldap_secret_file_path:
logger.info("auth.ldap_secret_file_path: %r" % ldap_secret_file_path)
if self._ldap_secret:
logger.info("auth.ldap_secret : (from file)")
else:
logger.info("auth.ldap_secret_file_path: (not provided)")
if self._ldap_secret:
logger.info("auth.ldap_secret : (from config)")
if self._ldap_reader_dn and not self._ldap_secret:
logger.error("auth.ldap_secret : (not provided)")
raise RuntimeError("LDAP authentication requires ldap_secret for ldap_reader_dn")
logger.info("auth.ldap_use_ssl : %s" % self._ldap_use_ssl)
if self._ldap_use_ssl is True:
logger.info("auth.ldap_ssl_verify_mode : %s" % self._ldap_ssl_verify_mode)
if self._ldap_ssl_ca_file:
logger.info("auth.ldap_ssl_ca_file : %r" % self._ldap_ssl_ca_file)
else:
logger.info("auth.ldap_ssl_ca_file : (not provided)")
"""Extend attributes to to be returned in the user query"""
if self._ldap_groups_attr:
self._ldap_attributes.append(self._ldap_groups_attr)
if self._ldap_user_attr:
self._ldap_attributes.append(self._ldap_user_attr)
logger.info("ldap_attributes : %r" % self._ldap_attributes)
def _login2(self, login: str, password: str) -> str:
try:
"""Bind as reader dn"""
logger.debug(f"_login2 {self._ldap_uri}, {self._ldap_reader_dn}")
conn = self.ldap.initialize(self._ldap_uri)
conn.protocol_version = 3
conn.set_option(self.ldap.OPT_REFERRALS, 0)
conn.simple_bind_s(self._ldap_reader_dn, self._ldap_secret)
"""Search for the dn of user to authenticate"""
escaped_login = self.ldap.filter.escape_filter_chars(login)
logger.debug(f"_login2 login escaped for LDAP filters: {escaped_login}")
res = conn.search_s(
self._ldap_base,
self.ldap.SCOPE_SUBTREE,
filterstr=self._ldap_filter.format(escaped_login),
attrlist=self._ldap_attributes
)
if len(res) != 1:
"""User could not be found unambiguously"""
logger.debug(f"_login2 no unique DN found for '{login}'")
return ""
user_entry = res[0]
user_dn = user_entry[0]
logger.debug(f"_login2 found LDAP user DN {user_dn}")
"""Close LDAP connection"""
conn.unbind()
except Exception as e:
raise RuntimeError(f"Invalid LDAP configuration:{e}")
try:
"""Bind as user to authenticate"""
conn = self.ldap.initialize(self._ldap_uri)
conn.protocol_version = 3
conn.set_option(self.ldap.OPT_REFERRALS, 0)
conn.simple_bind_s(user_dn, password)
tmp: list[str] = []
if self._ldap_groups_attr:
tmp = []
for g in user_entry[1][self._ldap_groups_attr]:
"""Get group g's RDN's attribute value"""
try:
rdns = self.ldap.dn.explode_dn(g, notypes=True)
tmp.append(rdns[0])
except Exception:
tmp.append(g.decode('utf8'))
self._ldap_groups = set(tmp)
logger.debug("_login2 LDAP groups of user: %s", ",".join(self._ldap_groups))
if self._ldap_user_attr:
if user_entry[1][self._ldap_user_attr]:
tmplogin = user_entry[1][self._ldap_user_attr][0]
login = tmplogin.decode('utf-8')
logger.debug(f"_login2 user set to: '{login}'")
conn.unbind()
logger.debug(f"_login2 {login} successfully authenticated")
return login
except self.ldap.INVALID_CREDENTIALS:
return ""
def _login3(self, login: str, password: str) -> str:
"""Connect the server"""
try:
logger.debug(f"_login3 {self._ldap_uri}, {self._ldap_reader_dn}")
if self._ldap_use_ssl:
tls = self.ldap3.Tls(validate=self._ldap_ssl_verify_mode)
if self._ldap_ssl_ca_file != "":
tls = self.ldap3.Tls(
validate=self._ldap_ssl_verify_mode,
ca_certs_file=self._ldap_ssl_ca_file
)
server = self.ldap3.Server(self._ldap_uri, use_ssl=True, tls=tls)
else:
server = self.ldap3.Server(self._ldap_uri)
conn = self.ldap3.Connection(server, self._ldap_reader_dn, password=self._ldap_secret)
except self.ldap3.core.exceptions.LDAPSocketOpenError:
raise RuntimeError("Unable to reach LDAP server")
except Exception as e:
logger.debug(f"_login3 error 1 {e}")
pass
if not conn.bind():
logger.debug("_login3 cannot bind")
raise RuntimeError("Unable to read from LDAP server")
logger.debug(f"_login3 bind as {self._ldap_reader_dn}")
"""Search the user dn"""
escaped_login = self.ldap3.utils.conv.escape_filter_chars(login)
logger.debug(f"_login3 login escaped for LDAP filters: {escaped_login}")
conn.search(
search_base=self._ldap_base,
search_filter=self._ldap_filter.format(escaped_login),
search_scope=self.ldap3.SUBTREE,
attributes=self._ldap_attributes
)
if len(conn.entries) != 1:
"""User could not be found unambiguously"""
logger.debug(f"_login3 no unique DN found for '{login}'")
return ""
user_entry = conn.response[0]
conn.unbind()
user_dn = user_entry['dn']
logger.debug(f"_login3 found LDAP user DN {user_dn}")
try:
"""Try to bind as the user itself"""
conn = self.ldap3.Connection(server, user_dn, password=password)
if not conn.bind():
logger.debug(f"_login3 user '{login}' cannot be found")
return ""
tmp: list[str] = []
if self._ldap_groups_attr:
tmp = []
for g in user_entry['attributes'][self._ldap_groups_attr]:
"""Get group g's RDN's attribute value"""
try:
rdns = self.ldap3.utils.dn.parse_dn(g)
tmp.append(rdns[0][1])
except Exception:
tmp.append(g)
self._ldap_groups = set(tmp)
logger.debug("_login3 LDAP groups of user: %s", ",".join(self._ldap_groups))
if self._ldap_user_attr:
if user_entry['attributes'][self._ldap_user_attr]:
login = user_entry['attributes'][self._ldap_user_attr]
logger.debug(f"_login3 user set to: '{login}'")
conn.unbind()
logger.debug(f"_login3 {login} successfully authenticated")
return login
except Exception as e:
logger.debug(f"_login3 error 2 {e}")
pass
return ""
def _login(self, login: str, password: str) -> str:
"""Validate credentials.
In the first step a connection to the LDAP server is made with the ldap_reader_dn credentials.
In the next step the DN of the user to authenticate is looked up.
In the last step the user is authenticated by binding with the found DN.
"""
if self._ldap_module_version == 2:
return self._login2(login, password)
return self._login3(login, password)
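For illustration, a minimal standalone sketch of the same three-step flow using ldap3 directly (URI, DNs, filter and passwords are hypothetical):

import ldap3

server = ldap3.Server("ldap://localhost")
# Step 1: bind with the reader account
reader = ldap3.Connection(server, "cn=reader,dc=example,dc=com", password="reader-secret")
if reader.bind():
    # Step 2: search the DN of the user to authenticate
    reader.search(search_base="dc=example,dc=com",
                  search_filter="(cn=alice)",
                  search_scope=ldap3.SUBTREE)
    if len(reader.entries) == 1:
        user_dn = reader.response[0]["dn"]
        reader.unbind()
        # Step 3: bind as the found DN to verify the password
        user_conn = ldap3.Connection(server, user_dn, password="user-secret")
        print("authenticated" if user_conn.bind() else "rejected")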


@ -1,66 +0,0 @@
# This file is part of Radicale Server - Calendar Server
#
# Original from https://gitlab.mim-libre.fr/alphabet/radicale_oauth/
# Copyright © 2021-2022 Bruno Boiget
# Copyright © 2022-2022 Daniel Dehennin
#
# Since migration into upstream
# Copyright © 2025-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Authentication backend that checks credentials against an oauth2 server auth endpoint
"""
import requests
from radicale import auth
from radicale.log import logger
class Auth(auth.BaseAuth):
def __init__(self, configuration):
super().__init__(configuration)
self._endpoint = configuration.get("auth", "oauth2_token_endpoint")
if not self._endpoint:
logger.error("auth.oauth2_token_endpoint URL missing")
raise RuntimeError("OAuth2 token endpoint URL is required")
logger.info("auth OAuth2 token endpoint: %s" % (self._endpoint))
def _login(self, login, password):
"""Validate credentials.
Sends login credentials to the OAuth token endpoint and checks that a token is returned
"""
try:
# authenticate to authentication endpoint and return login if ok, else ""
req_params = {
"username": login,
"password": password,
"grant_type": "password",
"client_id": "radicale",
}
req_headers = {"Content-Type": "application/x-www-form-urlencoded"}
response = requests.post(
self._endpoint, data=req_params, headers=req_headers
)
if (
response.status_code == requests.codes.ok
and "access_token" in response.json()
):
return login
except OSError as e:
logger.critical("Failed to authenticate against OAuth2 server %s: %s" % (self._endpoint, e))
logger.warning("User failed to authenticate using OAuth2: %r" % login)
return ""


@ -1,105 +0,0 @@
# -*- coding: utf-8 -*-
#
# This file is part of Radicale Server - Calendar Server
# Copyright © 2011 Henry-Nicolas Tourneur
# Copyright © 2021-2021 Unrud <unrud@outlook.com>
# Copyright © 2025-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
PAM authentication.
Authentication using the ``pam-python`` module.
Important: the radicale user needs read access to /etc/shadow, e.g.
chgrp radicale /etc/shadow
chmod g+r /etc/shadow
"""
import grp
import pwd
from radicale import auth
from radicale.log import logger
class Auth(auth.BaseAuth):
def __init__(self, configuration) -> None:
super().__init__(configuration)
try:
import pam
self.pam = pam
except ImportError as e:
raise RuntimeError("PAM authentication requires the Python pam module") from e
self._service = configuration.get("auth", "pam_service")
logger.info("auth.pam_service: %s" % self._service)
self._group_membership = configuration.get("auth", "pam_group_membership")
if (self._group_membership):
logger.info("auth.pam_group_membership: %s" % self._group_membership)
else:
logger.warning("auth.pam_group_membership: (empty, nothing to check / INSECURE)")
def pam_authenticate(self, *args, **kwargs):
return self.pam.authenticate(*args, **kwargs)
def _login(self, login: str, password: str) -> str:
"""Check if ``user``/``password`` couple is valid."""
if login is None or password is None:
return ""
# Check whether the user exists in the PAM system
try:
pwd.getpwnam(login).pw_uid
except KeyError:
logger.debug("PAM user not found: %r" % login)
return ""
else:
logger.debug("PAM user found: %r" % login)
# Check whether the user has a primary group (mandatory)
try:
# Get user primary group
primary_group = grp.getgrgid(pwd.getpwnam(login).pw_gid).gr_name
logger.debug("PAM user %r has primary group: %r" % (login, primary_group))
except KeyError:
logger.debug("PAM user has no primary group: %r" % login)
return ""
# Obtain supplementary groups
members = []
if (self._group_membership):
try:
members = grp.getgrnam(self._group_membership).gr_mem
except KeyError:
logger.debug(
"PAM membership required group doesn't exist: %r" %
self._group_membership)
return ""
# Check whether the user belongs to the required group
# (primary or supplementary)
if (self._group_membership):
if (primary_group != self._group_membership) and (login not in members):
logger.warning("PAM user %r belongs not to the required group: %r" % (login, self._group_membership))
return ""
else:
logger.debug("PAM user %r belongs to the required group: %r" % (login, self._group_membership))
# Check the password
if self.pam_authenticate(login, password, service=self._service):
return login
else:
logger.debug("PAM authentication not successful for user: %r (service %r)" % (login, self._service))
return ""


@ -2,7 +2,7 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by


@ -3,7 +3,7 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2017-2020 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -104,29 +104,6 @@ def _convert_to_bool(value: Any) -> bool:
return RawConfigParser.BOOLEAN_STATES[value.lower()]
def imap_address(value):
if "]" in value:
pre_address, pre_address_port = value.rsplit("]", 1)
else:
pre_address, pre_address_port = "", value
if ":" in pre_address_port:
pre_address2, port = pre_address_port.rsplit(":", 1)
address = pre_address + pre_address2
else:
address, port = pre_address + pre_address_port, None
try:
return (address.strip(string.whitespace + "[]"),
None if port is None else int(port))
except ValueError:
raise ValueError("malformed IMAP address: %r" % value)
def imap_security(value):
if value not in ("tls", "starttls", "none"):
raise ValueError("unsupported IMAP security: %r" % value)
return value
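For reference, what the parser above yields for a few representative inputs (host names are hypothetical):

imap_address("localhost")             # -> ("localhost", None)
imap_address("imap.example.com:993")  # -> ("imap.example.com", 993)
imap_address("[2001:db8::1]:143")     # -> ("2001:db8::1", 143)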
def json_str(value: Any) -> dict:
if not value:
return {}
@ -164,14 +141,6 @@ DEFAULT_CONFIG_SCHEMA: types.CONFIG_SCHEMA = OrderedDict([
"aliases": ("-s", "--ssl",),
"opposite_aliases": ("-S", "--no-ssl",),
"type": bool}),
("protocol", {
"value": "",
"help": "SSL/TLS protocol (Apache SSLProtocol format)",
"type": str}),
("ciphersuite", {
"value": "",
"help": "SSL/TLS Cipher Suite (OpenSSL cipher list format)",
"type": str}),
("certificate", {
"value": "/etc/ssl/radicale.cert.pem",
"help": "set certificate file",
@ -187,10 +156,6 @@ DEFAULT_CONFIG_SCHEMA: types.CONFIG_SCHEMA = OrderedDict([
"help": "set CA certificate for validating clients",
"aliases": ("--certificate-authority",),
"type": filepath}),
("script_name", {
"value": "",
"help": "script name to strip from URI if called by reverse proxy (default taken from HTTP_X_SCRIPT_NAME or SCRIPT_NAME)",
"type": str}),
("_internal_server", {
"value": "False",
"help": "the internal server is used",
@ -206,51 +171,18 @@ DEFAULT_CONFIG_SCHEMA: types.CONFIG_SCHEMA = OrderedDict([
"type": str})])),
("auth", OrderedDict([
("type", {
"value": "denyall",
"help": "authentication method (" + "|".join(auth.INTERNAL_TYPES) + ")",
"value": "none",
"help": "authentication method",
"type": str_or_callable,
"internal": auth.INTERNAL_TYPES}),
("cache_logins", {
"value": "false",
"help": "cache successful/failed logins for until expiration time",
"type": bool}),
("cache_successful_logins_expiry", {
"value": "15",
"help": "expiration time for caching successful logins in seconds",
"type": int}),
("cache_failed_logins_expiry", {
"value": "90",
"help": "expiration time for caching failed logins in seconds",
"type": int}),
("htpasswd_filename", {
"value": "/etc/radicale/users",
"help": "htpasswd filename",
"type": filepath}),
("htpasswd_encryption", {
"value": "autodetect",
"value": "md5",
"help": "htpasswd encryption method",
"type": str}),
("htpasswd_cache", {
"value": "False",
"help": "enable caching of htpasswd file",
"type": bool}),
("dovecot_connection_type", {
"value": "AF_UNIX",
"help": "Connection type for dovecot authentication",
"type": str_or_callable,
"internal": auth.AUTH_SOCKET_FAMILY}),
("dovecot_socket", {
"value": "/var/run/dovecot/auth-client",
"help": "dovecot auth AF_UNIX socket",
"type": str}),
("dovecot_host", {
"value": "localhost",
"help": "dovecot auth AF_INET or AF_INET6 host",
"type": str}),
("dovecot_port", {
"value": "12345",
"help": "dovecot auth port",
"type": int}),
("realm", {
"value": "Radicale - Password Required",
"help": "message displayed when a password is needed",
@ -259,82 +191,6 @@ DEFAULT_CONFIG_SCHEMA: types.CONFIG_SCHEMA = OrderedDict([
"value": "1",
"help": "incorrect authentication delay",
"type": positive_float}),
("ldap_ignore_attribute_create_modify_timestamp", {
"value": "false",
"help": "Ignore modifyTimestamp and createTimestamp attributes. Need if Authentik LDAP server is used.",
"type": bool}),
("ldap_uri", {
"value": "ldap://localhost",
"help": "URI to the ldap server",
"type": str}),
("ldap_base", {
"value": "",
"help": "LDAP base DN of the ldap server",
"type": str}),
("ldap_reader_dn", {
"value": "",
"help": "the DN of a ldap user with read access to get the user accounts",
"type": str}),
("ldap_secret", {
"value": "",
"help": "the password of the ldap_reader_dn",
"type": str}),
("ldap_secret_file", {
"value": "",
"help": "path of the file containing the password of the ldap_reader_dn",
"type": str}),
("ldap_filter", {
"value": "(cn={0})",
"help": "the search filter to find the user DN to authenticate by the username",
"type": str}),
("ldap_user_attribute", {
"value": "",
"help": "the attribute to be used as username after authentication",
"type": str}),
("ldap_groups_attribute", {
"value": "",
"help": "attribute to read the group memberships from",
"type": str}),
("ldap_use_ssl", {
"value": "False",
"help": "Use ssl on the ldap connection",
"type": bool}),
("ldap_ssl_verify_mode", {
"value": "REQUIRED",
"help": "The certificate verification mode. NONE, OPTIONAL, default is REQUIRED",
"type": str}),
("ldap_ssl_ca_file", {
"value": "",
"help": "The path to the CA file in pem format which is used to certificate the server certificate",
"type": str}),
("imap_host", {
"value": "localhost",
"help": "IMAP server hostname: address|address:port|[address]:port|*localhost*",
"type": imap_address}),
("imap_security", {
"value": "tls",
"help": "Secure the IMAP connection: *tls*|starttls|none",
"type": imap_security}),
("oauth2_token_endpoint", {
"value": "",
"help": "OAuth2 token endpoint URL",
"type": str}),
("pam_group_membership", {
"value": "",
"help": "PAM group user should be member of",
"type": str}),
("pam_service", {
"value": "radicale",
"help": "PAM service",
"type": str}),
("strip_domain", {
"value": "False",
"help": "strip domain from username",
"type": bool}),
("uc_username", {
"value": "False",
"help": "convert username to uppercase, must be true for case-insensitive auth providers",
"type": bool}),
("lc_username", {
"value": "False",
"help": "convert username to lowercase, must be true for case-insensitive auth providers",
@ -349,10 +205,6 @@ DEFAULT_CONFIG_SCHEMA: types.CONFIG_SCHEMA = OrderedDict([
"value": "True",
"help": "permit delete of a collection",
"type": bool}),
("permit_overwrite_collection", {
"value": "True",
"help": "permit overwrite of a collection",
"type": bool}),
("file", {
"value": "/etc/radicale/rights",
"help": "file for rights management from_file",
@ -367,30 +219,6 @@ DEFAULT_CONFIG_SCHEMA: types.CONFIG_SCHEMA = OrderedDict([
"value": "/var/lib/radicale/collections",
"help": "path where collections are stored",
"type": filepath}),
("filesystem_cache_folder", {
"value": "",
"help": "path where cache of collections is stored in case of use_cache_subfolder_* options are active",
"type": filepath}),
("use_cache_subfolder_for_item", {
"value": "False",
"help": "use subfolder 'collection-cache' for 'item' cache file structure instead of inside collection folder",
"type": bool}),
("use_cache_subfolder_for_history", {
"value": "False",
"help": "use subfolder 'collection-cache' for 'history' cache file structure instead of inside collection folder",
"type": bool}),
("use_cache_subfolder_for_synctoken", {
"value": "False",
"help": "use subfolder 'collection-cache' for 'sync-token' cache file structure instead of inside collection folder",
"type": bool}),
("use_mtime_and_size_for_item_cache", {
"value": "False",
"help": "use mtime and file size instead of SHA256 for 'item' cache (improves speed)",
"type": bool}),
("folder_umask", {
"value": "",
"help": "umask for folder creation (empty: system default)",
"type": str}),
("max_sync_token_age", {
"value": "2592000", # 30 days
"help": "delete sync token that are older",
@ -460,26 +288,12 @@ DEFAULT_CONFIG_SCHEMA: types.CONFIG_SCHEMA = OrderedDict([
"value": "False",
"help": "log response content on level=debug",
"type": bool}),
("rights_rule_doesnt_match_on_debug", {
"value": "False",
"help": "log rights rules which doesn't match on level=debug",
"type": bool}),
("storage_cache_actions_on_debug", {
"value": "False",
"help": "log storage cache action on level=debug",
"type": bool}),
("mask_passwords", {
"value": "True",
"help": "mask passwords in logs",
"type": bool})])),
("headers", OrderedDict([
("_allow_extra", str)])),
("reporting", OrderedDict([
("max_freebusy_occurrence", {
"value": "10000",
"help": "number of occurrences per event when reporting",
"type": positive_int})]))
])
("_allow_extra", str)]))])
def parse_compound_paths(*compound_paths: Optional[str]
@ -531,7 +345,7 @@ def load(paths: Optional[Iterable[Tuple[str, bool]]] = None
config_source = "config file %r" % path
config: types.CONFIG
try:
with open(path) as f:
with open(path, "r") as f:
parser.read_file(f)
config = {s: {o: parser[s][o] for o in parser.options(s)}
for s in parser.sections()}


@ -3,23 +3,14 @@ from enum import Enum
from typing import Sequence
from radicale import pathutils, utils
from radicale.log import logger
INTERNAL_TYPES: Sequence[str] = ("none", "rabbitmq")
def load(configuration):
"""Load the storage module chosen in configuration."""
try:
return utils.load_plugin(
INTERNAL_TYPES, "hook", "Hook", BaseHook, configuration)
except Exception as e:
logger.warning(e)
logger.warning("Hook \"%s\" failed to load, falling back to \"none\"." % configuration.get("hook", "type"))
configuration = configuration.copy()
configuration.update({"hook": {"type": "none"}}, "hook", privileged=True)
return utils.load_plugin(
INTERNAL_TYPES, "hook", "Hook", BaseHook, configuration)
return utils.load_plugin(
INTERNAL_TYPES, "hook", "Hook", BaseHook, configuration)
class BaseHook:


@ -3,7 +3,7 @@
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2022 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -42,10 +42,7 @@ else:
import importlib.abc
from importlib import resources
if sys.version_info < (3, 13):
_TRAVERSABLE_LIKE_TYPE = Union[importlib.abc.Traversable, pathlib.Path]
else:
_TRAVERSABLE_LIKE_TYPE = Union[importlib.resources.abc.Traversable, pathlib.Path]
_TRAVERSABLE_LIKE_TYPE = Union[importlib.abc.Traversable, pathlib.Path]
NOT_ALLOWED: types.WSGIResponse = (
client.FORBIDDEN, (("Content-Type", "text/plain"),),
@ -79,9 +76,6 @@ REMOTE_DESTINATION: types.WSGIResponse = (
DIRECTORY_LISTING: types.WSGIResponse = (
client.FORBIDDEN, (("Content-Type", "text/plain"),),
"Directory listings are not supported.")
INSUFFICIENT_STORAGE: types.WSGIResponse = (
client.INSUFFICIENT_STORAGE, (("Content-Type", "text/plain"),),
"Insufficient Storage. Please contact the administrator.")
INTERNAL_SERVER_ERROR: types.WSGIResponse = (
client.INTERNAL_SERVER_ERROR, (("Content-Type", "text/plain"),),
"A server error occurred. Please contact the administrator.")
@ -152,7 +146,7 @@ def read_request_body(configuration: "config.Configuration",
if configuration.get("logging", "request_content_on_debug"):
logger.debug("Request content:\n%s", content)
else:
logger.debug("Request content: suppressed by config/option [logging] request_content_on_debug")
logger.debug("Request content: suppressed by config/option [auth] request_content_on_debug")
return content
@ -196,24 +190,6 @@ def _serve_traversable(
"%a, %d %b %Y %H:%M:%S GMT",
time.gmtime(traversable.stat().st_mtime))
answer = traversable.read_bytes()
if path == "/.web/index.html" or path == "/.web/":
# enable link on the fly in index.html if InfCloud index.html is existing
# class="infcloudlink-hidden" -> class="infcloudlink"
path_posix = str(traversable)
path_posix_infcloud = path_posix.replace("/internal_data/index.html", "/internal_data/infcloud/index.html")
if os.path.isfile(path_posix_infcloud):
# logger.debug("Enable InfCloud link in served page: %r", path)
answer = answer.replace(b"infcloudlink-hidden", b"infcloud")
elif path == "/.web/infcloud/config.js":
# adjust on the fly default config.js of InfCloud installation
# logger.debug("Adjust on-the-fly default InfCloud config.js in served page: %r", path)
answer = answer.replace(b"location.pathname.replace(RegExp('/+[^/]+/*(index\\.html)?$'),'')+", b"location.pathname.replace(RegExp('/\\.web\\.infcloud/(index\\.html)?$'),'')+")
answer = answer.replace(b"'/caldav.php/',", b"'/',")
answer = answer.replace(b"settingsAccount: true,", b"settingsAccount: false,")
elif path == "/.web/infcloud/main.js":
# adjust on the fly default main.js of InfCloud installation
logger.debug("Adjust on-the-fly default InfCloud main.js in served page: %r", path)
answer = answer.replace(b"'InfCloud - the open source CalDAV/CardDAV web client'", b"'InfCloud - the open source CalDAV/CardDAV web client - served through Radicale CalDAV/CardDAV server'")
return client.OK, headers, answer


@ -49,12 +49,6 @@ def read_components(s: str) -> List[vobject.base.Component]:
s = re.sub(r"^(PHOTO(?:;[^:\r\n]*)?;ENCODING=b(?:;[^:\r\n]*)?:)"
r"data:[^;,\r\n]*;base64,", r"\1", s,
flags=re.MULTILINE | re.IGNORECASE)
# Workaround for bug with malformed ICS files containing control codes
# Filter out all control codes except those we expect to find:
# * 0x09 Horizontal Tab
# * 0x0A Line Feed
# * 0x0D Carriage Return
s = re.sub(r'[\x00-\x08\x0B\x0C\x0E-\x1F]', '', s)
return list(vobject.readComponents(s, allowQP=True))
@ -304,7 +298,7 @@ def find_time_range(vobject_item: vobject.base.Component, tag: str
Returns a tuple (``start``, ``end``) where ``start`` and ``end`` are
POSIX timestamps.
This is intended to be used for matching against simplified prefilters.
"""
if not tag:


@ -2,8 +2,7 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2015 Guillaume Ayoub
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -49,34 +48,10 @@ def date_to_datetime(d: date) -> datetime:
if not isinstance(d, datetime):
d = datetime.combine(d, datetime.min.time())
if not d.tzinfo:
# NOTE: using vobject's UTC as it wasn't playing well with datetime's.
d = d.replace(tzinfo=vobject.icalendar.utc)
d = d.replace(tzinfo=timezone.utc)
return d
def parse_time_range(time_filter: ET.Element) -> Tuple[datetime, datetime]:
start_text = time_filter.get("start")
end_text = time_filter.get("end")
if start_text:
start = datetime.strptime(
start_text, "%Y%m%dT%H%M%SZ").replace(
tzinfo=timezone.utc)
else:
start = DATETIME_MIN
if end_text:
end = datetime.strptime(
end_text, "%Y%m%dT%H%M%SZ").replace(
tzinfo=timezone.utc)
else:
end = DATETIME_MAX
return start, end
def time_range_timestamps(time_filter: ET.Element) -> Tuple[int, int]:
start, end = parse_time_range(time_filter)
return (math.floor(start.timestamp()), math.ceil(end.timestamp()))
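A short usage sketch of the two helpers above, building a CalDAV time-range element by hand:

import xml.etree.ElementTree as ET

el = ET.Element("{urn:ietf:params:xml:ns:caldav}time-range",
                {"start": "20240101T000000Z", "end": "20240201T000000Z"})
start, end = parse_time_range(el)   # timezone-aware datetimes in UTC
lo, hi = time_range_timestamps(el)  # POSIX timestamps (floored/ceiled)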
def comp_match(item: "item.Item", filter_: ET.Element, level: int = 0) -> bool:
"""Check whether the ``item`` matches the comp ``filter_``.
@ -172,10 +147,21 @@ def time_range_match(vobject_item: vobject.base.Component,
"""Check whether the component/property ``child_name`` of
``vobject_item`` matches the time-range ``filter_``."""
if not filter_.get("start") and not filter_.get("end"):
start_text = filter_.get("start")
end_text = filter_.get("end")
if not start_text and not end_text:
return False
if start_text:
start = datetime.strptime(start_text, "%Y%m%dT%H%M%SZ")
else:
start = datetime.min
if end_text:
end = datetime.strptime(end_text, "%Y%m%dT%H%M%SZ")
else:
end = datetime.max
start = start.replace(tzinfo=timezone.utc)
end = end.replace(tzinfo=timezone.utc)
start, end = parse_time_range(filter_)
matched = False
def range_fn(range_start: datetime, range_end: datetime,
@ -195,35 +181,6 @@ def time_range_match(vobject_item: vobject.base.Component,
return matched
def time_range_fill(vobject_item: vobject.base.Component,
filter_: ET.Element, child_name: str, n: int = 1
) -> List[Tuple[datetime, datetime]]:
"""Create a list of ``n`` occurances from the component/property ``child_name``
of ``vobject_item``."""
if not filter_.get("start") and not filter_.get("end"):
return []
start, end = parse_time_range(filter_)
ranges: List[Tuple[datetime, datetime]] = []
def range_fn(range_start: datetime, range_end: datetime,
is_recurrence: bool) -> bool:
nonlocal ranges
if start < range_end and range_start < end:
ranges.append((range_start, range_end))
if n > 0 and len(ranges) >= n:
return True
if end < range_start and not is_recurrence:
return True
return False
def infinity_fn(range_start: datetime) -> bool:
return False
visit_time_ranges(vobject_item, child_name, range_fn, infinity_fn)
return ranges
def visit_time_ranges(vobject_item: vobject.base.Component, child_name: str,
range_fn: Callable[[datetime, datetime, bool], bool],
infinity_fn: Callable[[datetime], bool]) -> None:
@ -242,7 +199,7 @@ def visit_time_ranges(vobject_item: vobject.base.Component, child_name: str,
"""
# HACK: According to rfc5545-3.8.4.4 a recurrence that is rescheduled
# with Recurrence ID affects the recurrence itself and all following
# recurrences too. This is not respected and client don't seem to bother
# either.
@ -274,16 +231,13 @@ def visit_time_ranges(vobject_item: vobject.base.Component, child_name: str,
if hasattr(comp, "recurrence_id") and comp.recurrence_id.value:
recurrences.append(comp.recurrence_id.value)
if comp.rruleset:
if comp.rruleset._len is None:
logger.warning("Ignore empty RRULESET in item at RECURRENCE-ID with value '%s' and UID '%s'", comp.recurrence_id.value, comp.uid.value)
else:
# Prevent possible infinite loop
raise ValueError("Overwritten recurrence with RRULESET")
# Prevent possible infinite loop
raise ValueError("Overwritten recurrence with RRULESET")
rec_main = comp
yield comp, True, []
else:
if main is not None:
raise ValueError("Multiple main components. Got comp: {}".format(comp))
raise ValueError("Multiple main components")
main = comp
if main is None and len(recurrences) == 1:
main = rec_main
@ -589,7 +543,20 @@ def simplify_prefilters(filters: Iterable[ET.Element], collection_tag: str
if time_filter.tag != xmlutils.make_clark("C:time-range"):
simple = False
continue
start, end = time_range_timestamps(time_filter)
start_text = time_filter.get("start")
end_text = time_filter.get("end")
if start_text:
start = math.floor(datetime.strptime(
start_text, "%Y%m%dT%H%M%SZ").replace(
tzinfo=timezone.utc).timestamp())
else:
start = TIMESTAMP_MIN
if end_text:
end = math.ceil(datetime.strptime(
end_text, "%Y%m%dT%H%M%SZ").replace(
tzinfo=timezone.utc).timestamp())
else:
end = TIMESTAMP_MAX
return tag, start, end, simple
return tag, TIMESTAMP_MIN, TIMESTAMP_MAX, simple
return None, TIMESTAMP_MIN, TIMESTAMP_MAX, simple


@ -221,31 +221,18 @@ def setup() -> None:
logger.error("Invalid RADICALE_LOG_FORMAT: %r", format_name)
logger_display_backtrace_disabled: bool = False
logger_display_backtrace_enabled: bool = False
def set_level(level: Union[int, str], backtrace_on_debug: bool) -> None:
"""Set logging level for global logger."""
global logger_display_backtrace_disabled
global logger_display_backtrace_enabled
if isinstance(level, str):
level = getattr(logging, level.upper())
assert isinstance(level, int)
logger.setLevel(level)
if level > logging.DEBUG:
if logger_display_backtrace_disabled is False:
logger.info("Logging of backtrace is disabled in this loglevel")
logger_display_backtrace_disabled = True
logger.info("Logging of backtrace is disabled in this loglevel")
logger.addFilter(REMOVE_TRACEBACK_FILTER)
else:
if not backtrace_on_debug:
if logger_display_backtrace_disabled is False:
logger.debug("Logging of backtrace is disabled by option in this loglevel")
logger_display_backtrace_disabled = True
logger.debug("Logging of backtrace is disabled by option in this loglevel")
logger.addFilter(REMOVE_TRACEBACK_FILTER)
else:
if logger_display_backtrace_enabled is False:
logger.debug("Logging of backtrace is enabled by option in this loglevel")
logger_display_backtrace_enabled = True
logger.removeFilter(REMOVE_TRACEBACK_FILTER)
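Usage sketch for the helper above:

import logging

set_level("warning", False)     # accepts level names as strings ...
set_level(logging.DEBUG, True)  # ... or logging constants; keep backtraces on debug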


@ -32,7 +32,7 @@ Take a look at the class ``BaseRights`` if you want to implement your own.
"""
from typing import Sequence, Set
from typing import Sequence
from radicale import config, utils
@ -57,8 +57,6 @@ def intersect(a: str, b: str) -> str:
class BaseRights:
_user_groups: Set[str] = set([])
def __init__(self, configuration: "config.Configuration") -> None:
"""Initialize BaseRights.


@ -1,7 +1,6 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -23,7 +22,7 @@ config (section "rights", key "file").
The login is matched against the "user" key, and the collection path
is matched against the "collection" key. In the "collection" regex you can use
`{user}` and get groups from the "user" regex with `{0}`, `{1}`, etc.
In consequence of the parameter substitution you have to write `{{` and `}}`
if you want to use regular curly braces in the "user" and "collection" regexes.
For example, for the "user" key, ".+" means "authenticated user" and ".*"
@ -49,61 +48,40 @@ class Rights(rights.BaseRights):
def __init__(self, configuration: config.Configuration) -> None:
super().__init__(configuration)
self._filename = configuration.get("rights", "file")
self._log_rights_rule_doesnt_match_on_debug = configuration.get("logging", "rights_rule_doesnt_match_on_debug")
self._rights_config = configparser.ConfigParser()
try:
with open(self._filename, "r") as f:
self._rights_config.read_file(f)
logger.debug("Read rights file")
except Exception as e:
raise RuntimeError("Failed to load rights file %r: %s" %
(self._filename, e)) from e
def authorization(self, user: str, path: str) -> str:
user = user or ""
sane_path = pathutils.strip_path(path)
# Prevent "regex injection"
escaped_user = re.escape(user)
if not self._log_rights_rule_doesnt_match_on_debug:
logger.debug("logging of rules which doesn't match suppressed by config/option [logging] rights_rule_doesnt_match_on_debug")
for section in self._rights_config.sections():
group_match = None
user_match = None
rights_config = configparser.ConfigParser()
try:
with open(self._filename, "r") as f:
rights_config.read_file(f)
except Exception as e:
raise RuntimeError("Failed to load rights file %r: %s" %
(self._filename, e)) from e
for section in rights_config.sections():
try:
user_pattern = self._rights_config.get(section, "user", fallback="")
collection_pattern = self._rights_config.get(section, "collection")
allowed_groups = self._rights_config.get(section, "groups", fallback="").split(",")
try:
group_match = len(self._user_groups.intersection(allowed_groups)) > 0
except Exception:
pass
user_pattern = rights_config.get(section, "user")
collection_pattern = rights_config.get(section, "collection")
# Use empty format() for harmonized handling of curly braces
if user_pattern != "":
user_match = re.fullmatch(user_pattern.format(), user)
user_collection_match = user_match and re.fullmatch(
user_match = re.fullmatch(user_pattern.format(), user)
collection_match = user_match and re.fullmatch(
collection_pattern.format(
*(re.escape(s) for s in user_match.groups()),
user=escaped_user), sane_path)
group_collection_match = group_match and re.fullmatch(
collection_pattern.format(user=escaped_user), sane_path)
except Exception as e:
raise RuntimeError("Error in section %r of rights file %r: "
"%s" % (section, self._filename, e)) from e
if user_match and user_collection_match:
permission = self._rights_config.get(section, "permissions")
if user_match and collection_match:
permission = rights_config.get(section, "permissions")
logger.debug("Rule %r:%r matches %r:%r from section %r permission %r",
user, sane_path, user_pattern,
collection_pattern, section, permission)
return permission
if group_match and group_collection_match:
permission = self._rights_config.get(section, "permissions")
logger.debug("Rule %r:%r matches %r:%r from section %r permission %r by group membership",
user, sane_path, user_pattern,
collection_pattern, section, permission)
return permission
if self._log_rights_rule_doesnt_match_on_debug:
logger.debug("Rule %r:%r doesn't match %r:%r from section %r",
user, sane_path, user_pattern, collection_pattern,
section)
logger.debug("Rights: %r:%r doesn't match any section", user, sane_path)
logger.debug("Rule %r:%r doesn't match %r:%r from section %r",
user, sane_path, user_pattern, collection_pattern,
section)
logger.info("Rights: %r:%r doesn't match any section", user, sane_path)
return ""


@ -2,8 +2,8 @@
# Copyright © 2008 Nicolas Kandel
# Copyright © 2008 Pascal Halter
# Copyright © 2008-2017 Guillaume Ayoub
# Copyright © 2017-2023 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -34,7 +34,7 @@ from typing import (Any, Callable, Dict, List, MutableMapping, Optional, Set,
Tuple, Union)
from urllib.parse import unquote
from radicale import Application, config, utils
from radicale import Application, config
from radicale.log import logger
COMPAT_EAI_ADDRFAMILY: int
@ -58,7 +58,19 @@ elif sys.platform == "win32":
# IPv4 (host, port) and IPv6 (host, port, flowinfo, scopeid)
ADDRESS_TYPE = utils.ADDRESS_TYPE
ADDRESS_TYPE = Union[Tuple[Union[str, bytes, bytearray], int],
Tuple[str, int, int, int]]
def format_address(address: ADDRESS_TYPE) -> str:
host, port, *_ = address
if not isinstance(host, str):
raise NotImplementedError("Unsupported address format: %r" %
(address,))
if host.find(":") == -1:
return "%s:%d" % (host, port)
else:
return "[%s]:%d" % (host, port)
class ParallelHTTPServer(socketserver.ThreadingMixIn,
@ -155,8 +167,6 @@ class ParallelHTTPSServer(ParallelHTTPServer):
certfile: str = self.configuration.get("server", "certificate")
keyfile: str = self.configuration.get("server", "key")
cafile: str = self.configuration.get("server", "certificate_authority")
protocol: str = self.configuration.get("server", "protocol")
ciphersuite: str = self.configuration.get("server", "ciphersuite")
# Test if the files can be read
for name, filename in [("certificate", certfile), ("key", keyfile),
("certificate_authority", cafile)]:
@ -166,40 +176,15 @@ class ParallelHTTPSServer(ParallelHTTPServer):
if name == "certificate_authority" and not filename:
continue
try:
open(filename).close()
open(filename, "r").close()
except OSError as e:
raise RuntimeError(
"Invalid %s value for option %r in section %r in %s: %r "
"(%s)" % (type_name, name, "server", source, filename,
e)) from e
context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
logger.info("SSL load files certificate='%s' key='%s'", certfile, keyfile)
context.load_cert_chain(certfile=certfile, keyfile=keyfile)
if protocol:
logger.info("SSL set explicit protocols (maybe not all supported by underlying OpenSSL): '%s'", protocol)
context.options = utils.ssl_context_options_by_protocol(protocol, context.options)
context.minimum_version = utils.ssl_context_minimum_version_by_options(context.options)
if (context.minimum_version == 0):
raise RuntimeError("No SSL minimum protocol active")
context.maximum_version = utils.ssl_context_maximum_version_by_options(context.options)
if (context.maximum_version == 0):
raise RuntimeError("No SSL maximum protocol active")
else:
logger.info("SSL active protocols: (system-default)")
logger.debug("SSL minimum acceptable protocol: %s", context.minimum_version)
logger.debug("SSL maximum acceptable protocol: %s", context.maximum_version)
logger.info("SSL accepted protocols: %s", ' '.join(utils.ssl_get_protocols(context)))
if ciphersuite:
logger.info("SSL set explicit ciphersuite (maybe not all supported by underlying OpenSSL): '%s'", ciphersuite)
context.set_ciphers(ciphersuite)
else:
logger.info("SSL active ciphersuite: (system-default)")
cipherlist = []
for entry in context.get_ciphers():
cipherlist.append(entry["name"])
logger.info("SSL accepted ciphers: %s", ' '.join(cipherlist))
if cafile:
logger.info("SSL enable mandatory client certificate verification using CA file='%s'", cafile)
context.load_verify_locations(cafile=cafile)
context.verify_mode = ssl.CERT_REQUIRED
self.socket = context.wrap_socket(
@ -214,7 +199,7 @@ class ParallelHTTPSServer(ParallelHTTPServer):
except socket.timeout:
raise
except Exception as e:
raise RuntimeError("SSL handshake failed: %s client %s" % (e, str(client_address[0]))) from e
raise RuntimeError("SSL handshake failed: %s" % e) from e
except Exception:
try:
self.handle_error(request, client_address)
@ -250,9 +235,6 @@ class RequestHandler(wsgiref.simple_server.WSGIRequestHandler):
def get_environ(self) -> Dict[str, Any]:
env = super().get_environ()
if isinstance(self.connection, ssl.SSLSocket):
env["HTTPS"] = "on"
env["SSL_CIPHER"] = self.request.cipher()[0]
env["SSL_PROTOCOL"] = self.request.version()
# The certificate can be evaluated by the auth module
env["REMOTE_CERTIFICATE"] = self.connection.getpeercert()
# Parent class only tries latin1 encoding
@ -292,7 +274,7 @@ def serve(configuration: config.Configuration,
"""
logger.info("Starting Radicale (%s)", utils.packages_version())
logger.info("Starting Radicale")
# Copy configuration before modifying
configuration = configuration.copy()
configuration.update({"server": {"_internal_server": "True"}}, "server",
@ -309,20 +291,20 @@ def serve(configuration: config.Configuration,
try:
getaddrinfo = socket.getaddrinfo(address_port[0], address_port[1], 0, socket.SOCK_STREAM, socket.IPPROTO_TCP)
except OSError as e:
logger.warning("cannot retrieve IPv4 or IPv6 address of '%s': %s" % (utils.format_address(address_port), e))
logger.warn("cannot retrieve IPv4 or IPv6 address of '%s': %s" % (format_address(address_port), e))
continue
logger.debug("getaddrinfo of '%s': %s" % (utils.format_address(address_port), getaddrinfo))
logger.debug("getaddrinfo of '%s': %s" % (format_address(address_port), getaddrinfo))
for (address_family, socket_kind, socket_proto, socket_flags, socket_address) in getaddrinfo:
logger.debug("try to create server socket on '%s'" % (utils.format_address(socket_address)))
logger.debug("try to create server socket on '%s'" % (format_address(socket_address)))
try:
server = server_class(configuration, address_family, (socket_address[0], socket_address[1]), RequestHandler)
except OSError as e:
logger.warning("cannot create server socket on '%s': %s" % (utils.format_address(socket_address), e))
logger.warn("cannot create server socket on '%s': %s" % (format_address(socket_address), e))
continue
servers[server.socket] = server
server.set_app(application)
logger.info("Listening on %r%s",
utils.format_address(server.server_address),
format_address(server.server_address),
" with SSL" if use_ssl else "")
if not servers:
raise RuntimeError("No servers started")


@ -1,8 +1,7 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2022 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -27,8 +26,8 @@ Take a look at the class ``BaseCollection`` if you want to implement your own.
import json
import xml.etree.ElementTree as ET
from hashlib import sha256
from typing import (Callable, ContextManager, Iterable, Iterator, Mapping,
Optional, Sequence, Set, Tuple, Union, overload)
from typing import (Iterable, Iterator, Mapping, Optional, Sequence, Set,
Tuple, Union, overload)
import vobject
@ -36,19 +35,17 @@ from radicale import config
from radicale import item as radicale_item
from radicale import types, utils
from radicale.item import filter as radicale_filter
from radicale.log import logger
INTERNAL_TYPES: Sequence[str] = ("multifilesystem", "multifilesystem_nolock",)
# NOTE: change only if cache structure is modified to avoid cache invalidation on update
CACHE_VERSION_RADICALE = "3.3.1"
CACHE_VERSION: bytes = ("%s=%s;%s=%s;" % ("radicale", CACHE_VERSION_RADICALE, "vobject", utils.package_version("vobject"))).encode()
CACHE_DEPS: Sequence[str] = ("radicale", "vobject", "python-dateutil",)
CACHE_VERSION: bytes = "".join(
"%s=%s;" % (pkg, utils.package_version(pkg))
for pkg in CACHE_DEPS).encode()
def load(configuration: "config.Configuration") -> "BaseStorage":
"""Load the storage module chosen in configuration."""
logger.debug("storage cache version: %r", str(CACHE_VERSION))
return utils.load_plugin(INTERNAL_TYPES, "storage", "Storage", BaseStorage,
configuration)
@ -285,11 +282,8 @@ class BaseStorage:
"""
self.configuration = configuration
def discover(
self, path: str, depth: str = "0",
child_context_manager: Optional[
Callable[[str, Optional[str]], ContextManager[None]]] = None,
user_groups: Set[str] = set([])) -> Iterable["types.CollectionOrItem"]:
def discover(self, path: str, depth: str = "0") -> Iterable[
"types.CollectionOrItem"]:
"""Discover a list of collections under the given ``path``.
``path`` is sanitized.


@ -1,8 +1,7 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -25,12 +24,10 @@ Uses one folder per collection and one file per collection entry.
"""
import os
import sys
import time
from typing import ClassVar, Iterator, Optional, Type
from radicale import config
from radicale.log import logger
from radicale.storage.multifilesystem.base import CollectionBase, StorageBase
from radicale.storage.multifilesystem.cache import CollectionPartCache
from radicale.storage.multifilesystem.create_collection import \
@ -47,9 +44,6 @@ from radicale.storage.multifilesystem.sync import CollectionPartSync
from radicale.storage.multifilesystem.upload import CollectionPartUpload
from radicale.storage.multifilesystem.verify import StoragePartVerify
# 999 seconds, 999 ms, 999 us, 999 ns
MTIME_NS_TEST: int = 999999999999
class Collection(
CollectionPartDelete, CollectionPartMeta, CollectionPartSync,
@ -92,104 +86,6 @@ class Storage(
_collection_class: ClassVar[Type[Collection]] = Collection
def _analyse_mtime(self):
# calculate and display mtime resolution
path = os.path.join(self._get_collection_root_folder(), ".Radicale.mtime_test")
logger.debug("Storage item mtime resolution test with file: %r", path)
try:
with open(path, "w") as f:
f.write("mtime_test")
except Exception as e:
logger.warning("Storage item mtime resolution test not possible, cannot write file: %r (%s)", path, e)
raise
# set mtime_ns for tests
try:
os.utime(path, times=None, ns=(MTIME_NS_TEST, MTIME_NS_TEST))
except Exception as e:
logger.warning("Storage item mtime resolution test not possible, cannot set utime on file: %r (%s)", path, e)
os.remove(path)
raise
logger.debug("Storage item mtime resoultion test set: %d" % MTIME_NS_TEST)
mtime_ns = os.stat(path).st_mtime_ns
logger.debug("Storage item mtime resoultion test get: %d" % mtime_ns)
# start analysis
precision = 1
mtime_ns_test = MTIME_NS_TEST
while mtime_ns > 0:
if mtime_ns == mtime_ns_test:
break
factor = 2
if int(mtime_ns / factor) == int(mtime_ns_test / factor):
precision = precision * factor
break
factor = 5
if int(mtime_ns / factor) == int(mtime_ns_test / factor):
precision = precision * factor
break
precision = precision * 10
mtime_ns = int(mtime_ns / 10)
mtime_ns_test = int(mtime_ns_test / 10)
unit = "ns"
precision_unit = precision
if precision >= 1000000000:
precision_unit = int(precision / 1000000000)
unit = "s"
elif precision >= 1000000:
precision_unit = int(precision / 1000000)
unit = "ms"
elif precision >= 1000:
precision_unit = int(precision / 1000)
unit = "us"
os.remove(path)
return (precision, precision_unit, unit)
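A worked example of the digit-peeling loop above, simplified to powers of ten (the factor 2/5 refinement is omitted): on a filesystem that stores mtimes with 1 s resolution the test value is read back truncated, and both values are divided by 10 until they agree.

# read back 999000000000 instead of the 999999999999 that was set
mtime_ns, mtime_ns_test, precision = 999000000000, 999999999999, 1
while mtime_ns != mtime_ns_test:
    precision *= 10
    mtime_ns //= 10
    mtime_ns_test //= 10
print(precision)  # 1000000000 ns, reported as "1 s"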
def __init__(self, configuration: config.Configuration) -> None:
super().__init__(configuration)
logger.info("Storage location: %r", self._filesystem_folder)
if not os.path.exists(self._filesystem_folder):
logger.warning("Storage location: %r not existing, create now", self._filesystem_folder)
self._makedirs_synced(self._filesystem_folder)
logger.info("Storage location subfolder: %r", self._get_collection_root_folder())
if not os.path.exists(self._get_collection_root_folder()):
logger.warning("Storage location subfolder: %r not existing, create now", self._get_collection_root_folder())
self._makedirs_synced(self._get_collection_root_folder())
logger.info("Storage cache subfolder usage for 'item': %s", self._use_cache_subfolder_for_item)
logger.info("Storage cache subfolder usage for 'history': %s", self._use_cache_subfolder_for_history)
logger.info("Storage cache subfolder usage for 'sync-token': %s", self._use_cache_subfolder_for_synctoken)
logger.info("Storage cache use mtime and size for 'item': %s", self._use_mtime_and_size_for_item_cache)
try:
(precision, precision_unit, unit) = self._analyse_mtime()
if precision >= 100000000:
# >= 100 ms
logger.warning("Storage item mtime resolution test result: %d %s (VERY RISKY ON PRODUCTION SYSTEMS)" % (precision_unit, unit))
elif precision >= 10000000:
# >= 10 ms
logger.warning("Storage item mtime resolution test result: %d %s (RISKY ON PRODUCTION SYSTEMS)" % (precision_unit, unit))
else:
logger.info("Storage item mtime resolution test result: %d %s" % (precision_unit, unit))
if self._use_mtime_and_size_for_item_cache is False:
logger.info("Storage cache using mtime and size for 'item' may be an option in case of performance issues")
except Exception:
logger.warning("Storage item mtime resolution test result not successful")
logger.debug("Storage cache action logging: %s", self._debug_cache_actions)
if self._use_cache_subfolder_for_item is True or self._use_cache_subfolder_for_history is True or self._use_cache_subfolder_for_synctoken is True:
logger.info("Storage cache subfolder: %r", self._get_collection_cache_folder())
if not os.path.exists(self._get_collection_cache_folder()):
logger.warning("Storage cache subfolder: %r not existing, create now", self._get_collection_cache_folder())
self._makedirs_synced(self._get_collection_cache_folder())
if sys.platform != "win32":
if not self._folder_umask:
# retrieve current umask by setting a dummy umask
current_umask = os.umask(0o0022)
logger.info("Storage folder umask (from system): '%04o'", current_umask)
# reset to original
os.umask(current_umask)
else:
try:
config_umask = int(self._folder_umask, 8)
except Exception:
logger.critical("storage folder umask defined but invalid: '%s'", self._folder_umask)
raise
logger.info("storage folder umask defined: '%04o'", config_umask)
self._config_umask = config_umask
self._makedirs_synced(self._filesystem_folder)


@ -69,15 +69,7 @@ class StorageBase(storage.BaseStorage):
_collection_class: ClassVar[Type["multifilesystem.Collection"]]
_filesystem_folder: str
_filesystem_cache_folder: str
_filesystem_fsync: bool
_use_cache_subfolder_for_item: bool
_use_cache_subfolder_for_history: bool
_use_cache_subfolder_for_synctoken: bool
_use_mtime_and_size_for_item_cache: bool
_debug_cache_actions: bool
_folder_umask: str
_config_umask: int
def __init__(self, configuration: config.Configuration) -> None:
super().__init__(configuration)
@ -85,39 +77,10 @@ class StorageBase(storage.BaseStorage):
"storage", "filesystem_folder")
self._filesystem_fsync = configuration.get(
"storage", "_filesystem_fsync")
self._filesystem_cache_folder = configuration.get(
"storage", "filesystem_cache_folder")
self._use_cache_subfolder_for_item = configuration.get(
"storage", "use_cache_subfolder_for_item")
self._use_cache_subfolder_for_history = configuration.get(
"storage", "use_cache_subfolder_for_history")
self._use_cache_subfolder_for_synctoken = configuration.get(
"storage", "use_cache_subfolder_for_synctoken")
self._use_mtime_and_size_for_item_cache = configuration.get(
"storage", "use_mtime_and_size_for_item_cache")
self._folder_umask = configuration.get(
"storage", "folder_umask")
self._debug_cache_actions = configuration.get(
"logging", "storage_cache_actions_on_debug")
def _get_collection_root_folder(self) -> str:
return os.path.join(self._filesystem_folder, "collection-root")
def _get_collection_cache_folder(self) -> str:
if self._filesystem_cache_folder:
return os.path.join(self._filesystem_cache_folder, "collection-cache")
else:
return os.path.join(self._filesystem_folder, "collection-cache")
def _get_collection_cache_subfolder(self, path, folder, subfolder) -> str:
if (self._use_cache_subfolder_for_item is True) and (subfolder == "item"):
path = path.replace(self._get_collection_root_folder(), self._get_collection_cache_folder())
elif (self._use_cache_subfolder_for_history is True) and (subfolder == "history"):
path = path.replace(self._get_collection_root_folder(), self._get_collection_cache_folder())
elif (self._use_cache_subfolder_for_synctoken is True) and (subfolder == "sync-token"):
path = path.replace(self._get_collection_root_folder(), self._get_collection_cache_folder())
return os.path.join(path, folder, subfolder)
def _fsync(self, f: IO[AnyStr]) -> None:
if self._filesystem_fsync:
try:
@ -154,8 +117,6 @@ class StorageBase(storage.BaseStorage):
if os.path.isdir(filesystem_path):
return
parent_filesystem_path = os.path.dirname(filesystem_path)
if sys.platform != "win32" and self._folder_umask:
oldmask = os.umask(self._config_umask)
# Prevent infinite loop
if filesystem_path != parent_filesystem_path:
# Create parent dirs recursively
@ -163,5 +124,3 @@ class StorageBase(storage.BaseStorage):
# Possible race!
os.makedirs(filesystem_path, exist_ok=True)
self._sync_directory(parent_filesystem_path)
if sys.platform != "win32" and self._folder_umask:
os.umask(oldmask)


@ -1,8 +1,7 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -73,10 +72,6 @@ class CollectionPartCache(CollectionBase):
_hash.update(raw_text)
return _hash.hexdigest()
@staticmethod
def _item_cache_mtime_and_size(size: int, mtime: int) -> str:
return str(storage.CACHE_VERSION.decode()) + "size=" + str(size) + ";mtime=" + str(mtime)
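Under this scheme a cache token looks like, e.g., "radicale=3.3.1;vobject=0.9.6;size=1234;mtime=1700000000000000000" (package versions illustrative), so a cached item is revalidated whenever the file's size or mtime changes, without hashing its content.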
def _item_cache_content(self, item: radicale_item.Item) -> CacheContent:
return CacheContent(item.uid, item.etag, item.serialize(), item.name,
item.component_name, *item.time_range)
@ -84,12 +79,10 @@ class CollectionPartCache(CollectionBase):
def _store_item_cache(self, href: str, item: radicale_item.Item,
cache_hash: str = "") -> CacheContent:
if not cache_hash:
if self._storage._use_mtime_and_size_for_item_cache is True:
raise RuntimeError("_store_item_cache called without cache_hash is not supported if [storage] use_mtime_and_size_for_item_cache is True")
else:
cache_hash = self._item_cache_hash(
item.serialize().encode(self._encoding))
cache_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", "item")
cache_hash = self._item_cache_hash(
item.serialize().encode(self._encoding))
cache_folder = os.path.join(self._filesystem_path, ".Radicale.cache",
"item")
content = self._item_cache_content(item)
self._storage._makedirs_synced(cache_folder)
# Race: Other processes might have created and locked the file.
@ -102,21 +95,14 @@ class CollectionPartCache(CollectionBase):
def _load_item_cache(self, href: str, cache_hash: str
) -> Optional[CacheContent]:
cache_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", "item")
path = os.path.join(cache_folder, href)
cache_folder = os.path.join(self._filesystem_path, ".Radicale.cache",
"item")
try:
with open(path, "rb") as f:
with open(os.path.join(cache_folder, href), "rb") as f:
hash_, *remainder = pickle.load(f)
if hash_ and hash_ == cache_hash:
if self._storage._debug_cache_actions is True:
logger.debug("Item cache match : %r with hash %r", path, cache_hash)
return CacheContent(*remainder)
else:
if self._storage._debug_cache_actions is True:
logger.debug("Item cache no match : %r with hash %r", path, cache_hash)
except FileNotFoundError:
if self._storage._debug_cache_actions is True:
logger.debug("Item cache not found : %r with hash %r", path, cache_hash)
pass
except (pickle.UnpicklingError, ValueError) as e:
logger.warning("Failed to load item cache entry %r in %r: %s",
@ -124,7 +110,8 @@ class CollectionPartCache(CollectionBase):
return None
def _clean_item_cache(self) -> None:
cache_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", "item")
cache_folder = os.path.join(self._filesystem_path, ".Radicale.cache",
"item")
self._clean_cache(cache_folder, (
e.name for e in os.scandir(cache_folder) if not
os.path.isfile(os.path.join(self._filesystem_path, e.name))))

View file

@ -1,8 +1,7 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -23,7 +22,6 @@ from typing import Iterable, Optional, cast
import radicale.item as radicale_item
from radicale import pathutils
from radicale.log import logger
from radicale.storage import multifilesystem
from radicale.storage.multifilesystem.base import StorageBase
@ -38,7 +36,6 @@ class StoragePartCreateCollection(StorageBase):
# Path should already be sanitized
sane_path = pathutils.strip_path(href)
filesystem_path = pathutils.path_to_filesystem(folder, sane_path)
logger.debug("Create collection: %r" % filesystem_path)
if not props:
self._makedirs_synced(filesystem_path)
@ -50,31 +47,27 @@ class StoragePartCreateCollection(StorageBase):
self._makedirs_synced(parent_dir)
# Create a temporary directory with an unsafe name
try:
with TemporaryDirectory(prefix=".Radicale.tmp-", dir=parent_dir
) as tmp_dir:
# The temporary directory itself can't be renamed
tmp_filesystem_path = os.path.join(tmp_dir, "collection")
os.makedirs(tmp_filesystem_path)
col = self._collection_class(
cast(multifilesystem.Storage, self),
pathutils.unstrip_path(sane_path, True),
filesystem_path=tmp_filesystem_path)
col.set_meta(props)
if items is not None:
if props.get("tag") == "VCALENDAR":
col._upload_all_nonatomic(items, suffix=".ics")
elif props.get("tag") == "VADDRESSBOOK":
col._upload_all_nonatomic(items, suffix=".vcf")
with TemporaryDirectory(prefix=".Radicale.tmp-", dir=parent_dir
) as tmp_dir:
# The temporary directory itself can't be renamed
tmp_filesystem_path = os.path.join(tmp_dir, "collection")
os.makedirs(tmp_filesystem_path)
col = self._collection_class(
cast(multifilesystem.Storage, self),
pathutils.unstrip_path(sane_path, True),
filesystem_path=tmp_filesystem_path)
col.set_meta(props)
if items is not None:
if props.get("tag") == "VCALENDAR":
col._upload_all_nonatomic(items, suffix=".ics")
elif props.get("tag") == "VADDRESSBOOK":
col._upload_all_nonatomic(items, suffix=".vcf")
if os.path.lexists(filesystem_path):
pathutils.rename_exchange(tmp_filesystem_path, filesystem_path)
else:
os.rename(tmp_filesystem_path, filesystem_path)
self._sync_directory(parent_dir)
except Exception as e:
raise ValueError("Failed to create collection %r as %r %s" %
(href, filesystem_path, e)) from e
if os.path.lexists(filesystem_path):
pathutils.rename_exchange(tmp_filesystem_path, filesystem_path)
else:
os.rename(tmp_filesystem_path, filesystem_path)
self._sync_directory(parent_dir)
return self._collection_class(
cast(multifilesystem.Storage, self),
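The creation path above stages the new collection in a hidden temporary directory and publishes it in one step, so readers never observe a half-written collection. A hedged standalone sketch (populate is a hypothetical callback standing in for set_meta and _upload_all_nonatomic):

import os
from tempfile import TemporaryDirectory

def create_dir_atomically(parent_dir, filesystem_path, populate):
    # Build everything under an unsafe temporary name first ...
    with TemporaryDirectory(prefix=".Radicale.tmp-", dir=parent_dir) as tmp_dir:
        tmp_filesystem_path = os.path.join(tmp_dir, "collection")
        os.makedirs(tmp_filesystem_path)
        populate(tmp_filesystem_path)
        # ... then publish it with a single rename; os.rename is atomic on
        # POSIX (an existing non-empty target needs the exchange variant).
        os.rename(tmp_filesystem_path, filesystem_path)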

View file

@ -2,7 +2,6 @@
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -54,9 +53,3 @@ class CollectionPartDelete(CollectionPartHistory, CollectionBase):
# Track the change
self._update_history_etag(href, None)
self._clean_history()
# Remove item from cache
cache_folder = self._storage._get_collection_cache_subfolder(os.path.dirname(path), ".Radicale.cache", "item")
cache_file = os.path.join(cache_folder, os.path.basename(path))
if os.path.isfile(cache_file):
os.remove(cache_file)
self._storage._sync_directory(cache_folder)

View file

@ -16,10 +16,9 @@
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import base64
import os
import posixpath
from typing import Callable, ContextManager, Iterator, Optional, Set, cast
from typing import Callable, ContextManager, Iterator, Optional, cast
from radicale import pathutils, types
from radicale.log import logger
@ -36,10 +35,8 @@ def _null_child_context_manager(path: str,
class StoragePartDiscover(StorageBase):
def discover(
self, path: str, depth: str = "0",
child_context_manager: Optional[
Callable[[str, Optional[str]], ContextManager[None]]] = None,
user_groups: Set[str] = set([])
self, path: str, depth: str = "0", child_context_manager: Optional[
Callable[[str, Optional[str]], ContextManager[None]]] = None
) -> Iterator[types.CollectionOrItem]:
# assert isinstance(self, multifilesystem.Storage)
if child_context_manager is None:
@ -105,13 +102,3 @@ class StoragePartDiscover(StorageBase):
with child_context_manager(sane_child_path, None):
yield self._collection_class(
cast(multifilesystem.Storage, self), child_path)
for group in user_groups:
href = base64.b64encode(group.encode('utf-8')).decode('ascii')
logger.debug(f"searching for group calendar {group} {href}")
sane_child_path = f"GROUPS/{href}"
if not os.path.isdir(pathutils.path_to_filesystem(folder, sane_child_path)):
continue
child_path = f"/GROUPS/{href}/"
with child_context_manager(sane_child_path, None):
yield self._collection_class(
cast(multifilesystem.Storage, self), child_path)
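Group calendars above are looked up under hrefs derived from the group name by base64 encoding, which keeps arbitrary group names filesystem-safe. For example:

import base64

def group_href(group):
    # "family" -> "ZmFtaWx5"; the result contains only base64 characters,
    # so it is safe as a path component under GROUPS/.
    return base64.b64encode(group.encode('utf-8')).decode('ascii')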

View file

@ -80,20 +80,11 @@ class CollectionPartGet(CollectionPartCache, CollectionPartLock,
raise
# The hash of the component in the file system. This is used to check,
# if the entry in the cache is still valid.
if self._storage._use_mtime_and_size_for_item_cache is True:
cache_hash = self._item_cache_mtime_and_size(os.stat(path).st_size, os.stat(path).st_mtime_ns)
if self._storage._debug_cache_actions is True:
logger.debug("Item cache check for: %r with mtime and size %r", path, cache_hash)
else:
cache_hash = self._item_cache_hash(raw_text)
if self._storage._debug_cache_actions is True:
logger.debug("Item cache check for: %r with hash %r", path, cache_hash)
cache_hash = self._item_cache_hash(raw_text)
cache_content = self._load_item_cache(href, cache_hash)
if cache_content is None:
if self._storage._debug_cache_actions is True:
logger.debug("Item cache miss for: %r", path)
with self._acquire_cache_lock("item"):
# Lock the item cache to prevent multiple processes from
# Lock the item cache to prevent multpile processes from
# generating the same data in parallel.
# This improves the performance for multiple requests.
if self._storage._lock.locked == "r":
@ -108,8 +99,6 @@ class CollectionPartGet(CollectionPartCache, CollectionPartLock,
vobject_item, = vobject_items
temp_item = radicale_item.Item(
collection=self, vobject_item=vobject_item)
if self._storage._debug_cache_actions is True:
logger.debug("Item cache store for: %r", path)
cache_content = self._store_item_cache(
href, temp_item, cache_hash)
except Exception as e:
@ -124,9 +113,6 @@ class CollectionPartGet(CollectionPartCache, CollectionPartLock,
if not self._item_cache_cleaned:
self._item_cache_cleaned = True
self._clean_item_cache()
else:
if self._storage._debug_cache_actions is True:
logger.debug("Item cache hit for: %r", path)
last_modified = time.strftime(
"%a, %d %b %Y %H:%M:%S GMT",
time.gmtime(os.path.getmtime(path)))
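The two cache-validation strategies above can be summarized in a standalone sketch: a stat()-based token when use_mtime_and_size_for_item_cache is enabled, otherwise a hash over the raw content (sha256 here is an assumption; Radicale defines its own hash helper):

import hashlib
import os

def cache_key(path, use_mtime_and_size):
    if use_mtime_and_size:
        st = os.stat(path)  # a single stat() provides both fields
        return "size=%d;mtime=%d" % (st.st_size, st.st_mtime_ns)
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()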
@ -141,7 +127,7 @@ class CollectionPartGet(CollectionPartCache, CollectionPartLock,
def get_multi(self, hrefs: Iterable[str]
) -> Iterator[Tuple[str, Optional[radicale_item.Item]]]:
# It's faster to check for file name collisions here, because
# It's faster to check for file name collissions here, because
# we only need to call os.listdir once.
files = None
for href in hrefs:
@ -160,7 +146,7 @@ class CollectionPartGet(CollectionPartCache, CollectionPartLock,
def get_all(self) -> Iterator[radicale_item.Item]:
for href in self._list():
# We don't need to check for collisions, because the file names
# We don't need to check for collissions, because the file names
# are from os.listdir.
item = self._get(href, verify_href=False)
if item is not None:
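The collision comments above describe a small optimization: list the directory once and test membership in memory, instead of touching the filesystem once per href. A sketch:

import os

def iter_existing(folder, hrefs):
    # One os.listdir() call; per-href checks are then cheap set lookups.
    files = set(os.listdir(folder))
    for href in hrefs:
        yield href, href in files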

View file

@ -47,7 +47,8 @@ class CollectionPartHistory(CollectionBase):
string for deleted items) and a history etag, which is a hash over
the previous history etag and the etag separated by "/".
"""
history_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", "history")
history_folder = os.path.join(self._filesystem_path,
".Radicale.cache", "history")
try:
with open(os.path.join(history_folder, href), "rb") as f:
cache_etag, history_etag = pickle.load(f)
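Following the docstring above, each update chains the new etag onto the previous history etag with a "/" separator before hashing, so the token changes whenever any revision of the item changes. A sketch (sha256 is an assumed stand-in for Radicale's hash helper):

import hashlib

def next_history_etag(previous_history_etag, etag):
    # Hash over the previous history etag and the new etag, "/"-separated.
    combined = previous_history_etag + "/" + etag
    return hashlib.sha256(combined.encode()).hexdigest()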
@ -75,7 +76,8 @@ class CollectionPartHistory(CollectionBase):
def _get_deleted_history_hrefs(self):
"""Returns the hrefs of all deleted items that are still in the
history cache."""
history_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", "history")
history_folder = os.path.join(self._filesystem_path,
".Radicale.cache", "history")
with contextlib.suppress(FileNotFoundError):
for entry in os.scandir(history_folder):
href = entry.name
@ -87,6 +89,7 @@ class CollectionPartHistory(CollectionBase):
def _clean_history(self):
# Delete all expired history entries of deleted items.
history_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", "history")
history_folder = os.path.join(self._filesystem_path,
".Radicale.cache", "history")
self._clean_cache(history_folder, self._get_deleted_history_hrefs(),
max_age=self._max_sync_token_age)

View file

@ -1,8 +1,7 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2022 Unrud <unrud@outlook.com>
# Copyright © 2023-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -38,11 +37,10 @@ class CollectionPartLock(CollectionBase):
if self._storage._lock.locked == "w":
yield
return
cache_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", ns)
cache_folder = os.path.join(self._filesystem_path, ".Radicale.cache")
self._storage._makedirs_synced(cache_folder)
lock_path = os.path.join(cache_folder,
".Radicale.lock" + (".%s" % ns if ns else ""))
logger.debug("Lock file (CollectionPartLock): %r" % lock_path)
lock = pathutils.RwLock(lock_path)
with lock.acquire("w"):
yield
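The per-namespace lock file above ends in the namespace suffix (e.g. ".Radicale.lock.item"), so writers of different caches don't block each other. A POSIX-only sketch using fcntl in place of Radicale's RwLock:

import fcntl
import os
from contextlib import contextmanager

@contextmanager
def cache_write_lock(cache_folder, ns=""):
    lock_path = os.path.join(cache_folder,
                             ".Radicale.lock" + (".%s" % ns if ns else ""))
    with open(lock_path, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # exclusive: one writer per namespace
        try:
            yield
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)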
@ -56,12 +54,11 @@ class StoragePartLock(StorageBase):
def __init__(self, configuration: config.Configuration) -> None:
super().__init__(configuration)
lock_path = os.path.join(self._filesystem_folder, ".Radicale.lock")
logger.debug("Lock file (StoragePartLock): %r" % lock_path)
self._lock = pathutils.RwLock(lock_path)
self._hook = configuration.get("storage", "hook")
@types.contextmanager
def acquire_lock(self, mode: str, user: str = "", *args, **kwargs) -> Iterator[None]:
def acquire_lock(self, mode: str, user: str = "") -> Iterator[None]:
with self._lock.acquire(mode):
yield
# execute hook
@ -76,46 +73,29 @@ class StoragePartLock(StorageBase):
else:
# Process group is also used to identify child processes
preexec_fn = os.setpgrp
# optional argument
path = kwargs.get('path', "")
try:
command = self._hook % {
"path": shlex.quote(self._get_collection_root_folder() + path),
"cwd": shlex.quote(self._filesystem_folder),
"user": shlex.quote(user or "Anonymous")}
except KeyError as e:
logger.error("Storage hook contains not supported placeholder %s (skip execution of: %r)" % (e, self._hook))
return
logger.debug("Executing storage hook: '%s'" % command)
try:
p = subprocess.Popen(
command, stdin=subprocess.DEVNULL,
stdout=subprocess.PIPE if debug else subprocess.DEVNULL,
stderr=subprocess.PIPE if debug else subprocess.DEVNULL,
shell=True, universal_newlines=True, preexec_fn=preexec_fn,
cwd=self._filesystem_folder, creationflags=creationflags)
except Exception as e:
logger.error("Execution of storage hook not successful on 'Popen': %s" % e)
return
logger.debug("Executing storage hook started 'Popen'")
command = self._hook % {
"user": shlex.quote(user or "Anonymous")}
logger.debug("Running storage hook")
p = subprocess.Popen(
command, stdin=subprocess.DEVNULL,
stdout=subprocess.PIPE if debug else subprocess.DEVNULL,
stderr=subprocess.PIPE if debug else subprocess.DEVNULL,
shell=True, universal_newlines=True, preexec_fn=preexec_fn,
cwd=self._filesystem_folder, creationflags=creationflags)
try:
stdout_data, stderr_data = p.communicate()
except BaseException as e: # e.g. KeyboardInterrupt or SystemExit
logger.error("Execution of storage hook not successful on 'communicate': %s" % e)
except BaseException: # e.g. KeyboardInterrupt or SystemExit
p.kill()
p.wait()
return
raise
finally:
if sys.platform != "win32":
# Kill remaining children identified by process group
with contextlib.suppress(OSError):
os.killpg(p.pid, signal.SIGKILL)
logger.debug("Executing storage hook finished")
if stdout_data:
logger.debug("Captured stdout from storage hook:\n%s", stdout_data)
logger.debug("Captured stdout from hook:\n%s", stdout_data)
if stderr_data:
logger.debug("Captured stderr from storage hook:\n%s", stderr_data)
logger.debug("Captured stderr from hook:\n%s", stderr_data)
if p.returncode != 0:
logger.error("Execution of storage hook not successful: %s" % subprocess.CalledProcessError(p.returncode, p.args))
return
raise subprocess.CalledProcessError(p.returncode, p.args)
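The hook above substitutes shell-quoted placeholders before execution and skips the run when the configured template names an unsupported one. A standalone sketch of that substitution (paths in the usage example are illustrative):

import shlex

def build_hook_command(hook, collection_root, cwd, user, path=""):
    # An unknown placeholder in 'hook' raises KeyError, which the caller
    # logs and treats as "skip execution".
    return hook % {
        "path": shlex.quote(collection_root + path),
        "cwd": shlex.quote(cwd),
        "user": shlex.quote(user or "Anonymous")}

# e.g. build_hook_command("git add -A && git commit -m %(user)s",
#                         "/var/lib/radicale/collection-root",
#                         "/var/lib/radicale", "alice")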

View file

@ -1,8 +1,7 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -63,9 +62,6 @@ class CollectionPartMeta(CollectionBase):
def set_meta(self, props: Mapping[str, str]) -> None:
# TODO: better fix for "mypy"
try:
with self._atomic_write(self._props_path, "w") as fo: # type: ignore
f = cast(TextIO, fo)
json.dump(props, f, sort_keys=True)
except OSError as e:
raise ValueError("Failed to write meta data %r %s" % (self._props_path, e)) from e
with self._atomic_write(self._props_path, "w") as fo: # type: ignore
f = cast(TextIO, fo)
json.dump(props, f, sort_keys=True)
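set_meta above funnels OS errors into ValueError so callers see one consistent failure type; the atomic write itself can be sketched with a temporary file plus os.replace (a simplified stand-in for Radicale's _atomic_write):

import json
import os
import tempfile

def write_props(props_path, props):
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(props_path))
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(props, f, sort_keys=True)
        os.replace(tmp, props_path)  # atomic swap; no truncated props file
    except OSError as e:
        if os.path.exists(tmp):
            os.remove(tmp)
        raise ValueError("Failed to write meta data %r %s" % (props_path, e)) from e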

View file

@ -1,8 +1,7 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2021 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -21,7 +20,6 @@ import os
from radicale import item as radicale_item
from radicale import pathutils, storage
from radicale.log import logger
from radicale.storage import multifilesystem
from radicale.storage.multifilesystem.base import StorageBase
@ -35,28 +33,24 @@ class StoragePartMove(StorageBase):
assert isinstance(to_collection, multifilesystem.Collection)
assert isinstance(item.collection, multifilesystem.Collection)
assert item.href
move_from = pathutils.path_to_filesystem(item.collection._filesystem_path, item.href)
move_to = pathutils.path_to_filesystem(to_collection._filesystem_path, to_href)
try:
os.replace(move_from, move_to)
except OSError as e:
raise ValueError("Failed to move file %r => %r %s" % (move_from, move_to, e)) from e
os.replace(pathutils.path_to_filesystem(
item.collection._filesystem_path, item.href),
pathutils.path_to_filesystem(
to_collection._filesystem_path, to_href))
self._sync_directory(to_collection._filesystem_path)
if item.collection._filesystem_path != to_collection._filesystem_path:
self._sync_directory(item.collection._filesystem_path)
# Move the item cache entry
cache_folder = self._get_collection_cache_subfolder(item.collection._filesystem_path, ".Radicale.cache", "item")
to_cache_folder = self._get_collection_cache_subfolder(to_collection._filesystem_path, ".Radicale.cache", "item")
cache_folder = os.path.join(item.collection._filesystem_path,
".Radicale.cache", "item")
to_cache_folder = os.path.join(to_collection._filesystem_path,
".Radicale.cache", "item")
self._makedirs_synced(to_cache_folder)
move_from = os.path.join(cache_folder, item.href)
move_to = os.path.join(to_cache_folder, to_href)
try:
os.replace(move_from, move_to)
os.replace(os.path.join(cache_folder, item.href),
os.path.join(to_cache_folder, to_href))
except FileNotFoundError:
pass
except OSError as e:
logger.error("Failed to move cache file %r => %r %s" % (move_from, move_to, e))
pass
else:
self._makedirs_synced(to_cache_folder)
if cache_folder != to_cache_folder:

View file

@ -67,7 +67,8 @@ class CollectionPartSync(CollectionPartCache, CollectionPartHistory,
if token_name == old_token_name:
# Nothing changed
return token, ()
token_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", "sync-token")
token_folder = os.path.join(self._filesystem_path,
".Radicale.cache", "sync-token")
token_path = os.path.join(token_folder, token_name)
old_state = {}
if old_token_name:

View file

@ -1,8 +1,7 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2022 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -25,7 +24,6 @@ from typing import Iterable, Iterator, TextIO, cast
import radicale.item as radicale_item
from radicale import pathutils
from radicale.log import logger
from radicale.storage.multifilesystem.base import CollectionBase
from radicale.storage.multifilesystem.cache import CollectionPartCache
from radicale.storage.multifilesystem.get import CollectionPartGet
@ -39,28 +37,19 @@ class CollectionPartUpload(CollectionPartGet, CollectionPartCache,
) -> radicale_item.Item:
if not pathutils.is_safe_filesystem_path_component(href):
raise pathutils.UnsafePathError(href)
path = pathutils.path_to_filesystem(self._filesystem_path, href)
try:
with self._atomic_write(path, newline="") as fo: # type: ignore
f = cast(TextIO, fo)
f.write(item.serialize())
self._store_item_cache(href, item)
except Exception as e:
raise ValueError("Failed to store item %r in collection %r: %s" %
(href, self.path, e)) from e
# store cache file
if self._storage._use_mtime_and_size_for_item_cache is True:
cache_hash = self._item_cache_mtime_and_size(os.stat(path).st_size, os.stat(path).st_mtime_ns)
if self._storage._debug_cache_actions is True:
logger.debug("Item cache store for: %r with mtime and size %r", path, cache_hash)
else:
cache_hash = self._item_cache_hash(item.serialize().encode(self._encoding))
if self._storage._debug_cache_actions is True:
logger.debug("Item cache store for: %r with hash %r", path, cache_hash)
try:
self._store_item_cache(href, item, cache_hash)
except Exception as e:
raise ValueError("Failed to store item cache of %r in collection %r: %s" %
(href, self.path, e)) from e
path = pathutils.path_to_filesystem(self._filesystem_path, href)
# TODO: better fix for "mypy"
with self._atomic_write(path, newline="") as fo: # type: ignore
f = cast(TextIO, fo)
f.write(item.serialize())
# Clean the cache after the actual item is stored, or the cache entry
# will be removed again.
self._clean_item_cache()
# Track the change
self._update_history_etag(href, item)
self._clean_history()
@ -86,16 +75,20 @@ class CollectionPartUpload(CollectionPartGet, CollectionPartCache,
yield radicale_item.find_available_uid(
lambda href: not is_safe_free_href(href), suffix)
cache_folder = self._storage._get_collection_cache_subfolder(self._filesystem_path, ".Radicale.cache", "item")
cache_folder = os.path.join(self._filesystem_path,
".Radicale.cache", "item")
self._storage._makedirs_synced(cache_folder)
for item in items:
uid = item.uid
logger.debug("Store item from list with uid: '%s'" % uid)
cache_content = self._item_cache_content(item)
try:
cache_content = self._item_cache_content(item)
except Exception as e:
raise ValueError(
"Failed to store item %r in temporary collection %r: %s" %
(uid, self.path, e)) from e
for href in get_safe_free_hrefs(uid):
path = os.path.join(self._filesystem_path, href)
try:
f = open(path,
f = open(os.path.join(self._filesystem_path, href),
"w", newline="", encoding=self._encoding)
except OSError as e:
if (sys.platform != "win32" and e.errno == errno.EINVAL or
@ -107,31 +100,12 @@ class CollectionPartUpload(CollectionPartGet, CollectionPartCache,
else:
raise RuntimeError("No href found for item %r in temporary "
"collection %r" % (uid, self.path))
try:
with f:
f.write(item.serialize())
f.flush()
self._storage._fsync(f)
except Exception as e:
raise ValueError(
"Failed to store item %r in temporary collection %r: %s" %
(uid, self.path, e)) from e
# store cache file
if self._storage._use_mtime_and_size_for_item_cache is True:
cache_hash = self._item_cache_mtime_and_size(os.stat(path).st_size, os.stat(path).st_mtime_ns)
if self._storage._debug_cache_actions is True:
logger.debug("Item cache store for: %r with mtime and size %r", path, cache_hash)
else:
cache_hash = self._item_cache_hash(item.serialize().encode(self._encoding))
if self._storage._debug_cache_actions is True:
logger.debug("Item cache store for: %r with hash %r", path, cache_hash)
path_cache = os.path.join(cache_folder, href)
if self._storage._debug_cache_actions is True:
logger.debug("Item cache store into: %r", path_cache)
with f:
f.write(item.serialize())
f.flush()
self._storage._fsync(f)
with open(os.path.join(cache_folder, href), "wb") as fb:
pickle.dump((cache_hash, *cache_content), fb)
pickle.dump(cache_content, fb)
fb.flush()
self._storage._fsync(fb)
self._storage._sync_directory(cache_folder)
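Each cache entry written above is a pickled tuple whose first element is the hash that _load_item_cache later validates against; a standalone sketch of the write path with durability:

import os
import pickle

def store_cache_entry(cache_folder, href, cache_hash, cache_content):
    path = os.path.join(cache_folder, href)
    with open(path, "wb") as fb:
        # The first tuple element is the hash checked on load.
        pickle.dump((cache_hash, *cache_content), fb)
        fb.flush()
        os.fsync(fb.fileno())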

View file

@ -29,8 +29,6 @@ class StoragePartVerify(StoragePartDiscover, StorageBase):
def verify(self) -> bool:
item_errors = collection_errors = 0
logger.info("Disable fsync during storage verification")
self._filesystem_fsync = False
@types.contextmanager
def exception_cm(sane_path: str, href: Optional[str]

View file

@ -29,15 +29,13 @@ import wsgiref.util
import xml.etree.ElementTree as ET
from io import BytesIO
from typing import Any, Dict, List, Optional, Tuple, Union
from urllib.parse import quote
import defusedxml.ElementTree as DefusedET
import vobject
import radicale
from radicale import app, config, types, xmlutils
RESPONSES = Dict[str, Union[int, Dict[str, Tuple[int, ET.Element]], vobject.base.Component]]
RESPONSES = Dict[str, Union[int, Dict[str, Tuple[int, ET.Element]]]]
# Enable debug output
radicale.log.logger.setLevel(logging.DEBUG)
@ -109,11 +107,12 @@ class BaseTest:
def parse_responses(text: str) -> RESPONSES:
xml = DefusedET.fromstring(text)
assert xml.tag == xmlutils.make_clark("D:multistatus")
path_responses: RESPONSES = {}
path_responses: Dict[str, Union[
int, Dict[str, Tuple[int, ET.Element]]]] = {}
for response in xml.findall(xmlutils.make_clark("D:response")):
href = response.find(xmlutils.make_clark("D:href"))
assert href.text not in path_responses
prop_responses: Dict[str, Tuple[int, ET.Element]] = {}
prop_respones: Dict[str, Tuple[int, ET.Element]] = {}
for propstat in response.findall(
xmlutils.make_clark("D:propstat")):
status = propstat.find(xmlutils.make_clark("D:status"))
@ -122,22 +121,16 @@ class BaseTest:
for element in propstat.findall(
"./%s/*" % xmlutils.make_clark("D:prop")):
human_tag = xmlutils.make_human_tag(element.tag)
assert human_tag not in prop_responses
prop_responses[human_tag] = (status_code, element)
assert human_tag not in prop_respones
prop_respones[human_tag] = (status_code, element)
status = response.find(xmlutils.make_clark("D:status"))
if status is not None:
assert not prop_responses
assert not prop_respones
assert status.text.startswith("HTTP/1.1 ")
status_code = int(status.text.split(" ")[1])
path_responses[href.text] = status_code
else:
path_responses[href.text] = prop_responses
return path_responses
@staticmethod
def parse_free_busy(text: str) -> RESPONSES:
path_responses: RESPONSES = {}
path_responses[""] = vobject.readOne(text)
path_responses[href.text] = prop_respones
return path_responses
def get(self, path: str, check: Optional[int] = 200, **kwargs
@ -168,7 +161,7 @@ class BaseTest:
assert answer is not None
responses = self.parse_responses(answer)
if kwargs.get("HTTP_DEPTH", "0") == "0":
assert len(responses) == 1 and quote(path) in responses
assert len(responses) == 1 and path in responses
return status, responses
def proppatch(self, path: str, data: Optional[str] = None,
@ -184,18 +177,13 @@ class BaseTest:
return status, responses
def report(self, path: str, data: str, check: Optional[int] = 207,
is_xml: Optional[bool] = True,
**kwargs) -> Tuple[int, RESPONSES]:
status, _, answer = self.request("REPORT", path, data, check=check,
**kwargs)
if status < 200 or 300 <= status:
return status, {}
assert answer is not None
if is_xml:
parsed = self.parse_responses(answer)
else:
parsed = self.parse_free_busy(answer)
return status, parsed
return status, self.parse_responses(answer)
def delete(self, path: str, check: Optional[int] = 200, **kwargs
) -> Tuple[int, RESPONSES]:

View file

@ -29,7 +29,7 @@ from radicale import auth
class Auth(auth.BaseAuth):
def _login(self, login: str, password: str) -> str:
def login(self, login: str, password: str) -> str:
if login == "tmp":
return login
return ""

View file

@ -1,36 +0,0 @@
BEGIN:VCALENDAR
PRODID:-//Mozilla.org/NONSGML Mozilla Calendar V1.1//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Europe/Paris
X-LIC-LOCATION:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:19700329T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:19701025T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20130902T150157Z
LAST-MODIFIED:20130902T150158Z
DTSTAMP:20130902T150158Z
UID:event10
SUMMARY:Event
CATEGORIES:some_category1,another_category2
ORGANIZER:mailto:unclesam@example.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=TENTATIVE;CN=Jane Doe:MAILTO:janedoe@example.com
ATTENDEE;ROLE=REQ-PARTICIPANT;DELEGATED-FROM="MAILTO:bob@host.com";PARTSTAT=ACCEPTED;CN=John Doe:MAILTO:johndoe@example.com
DTSTART;TZID=Europe/Paris:20130901T180000
DTEND;TZID=Europe/Paris:20130901T190000
STATUS:CANCELLED
END:VEVENT
END:VCALENDAR

View file

@ -1,35 +0,0 @@
BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VTIMEZONE
LAST-MODIFIED:20040110T032845Z
TZID:US/Eastern
BEGIN:DAYLIGHT
DTSTART:20000404T020000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=4
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20001026T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=US/Eastern:20060102T120000
DURATION:PT1H
RRULE:FREQ=DAILY;COUNT=5
SUMMARY:Event #2
UID:event_daily_rrule_overridden
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=US/Eastern:20060104T140000
DURATION:PT1H
RECURRENCE-ID;TZID=US/Eastern:20060104T120000
SUMMARY:Event #2 bis
UID:event_daily_rrule_overridden
END:VEVENT
END:VCALENDAR

View file

@ -1,55 +0,0 @@
BEGIN:VCALENDAR
VERSION:2.0
PRODID:DAVx5/4.4.3.2-ose ical4j/3.2.19
BEGIN:VEVENT
DTSTAMP:20241125T195941Z
UID:9fb6578a-07a6-4c61-8406-69229713d40e
SEQUENCE:3
SUMMARY:Escalade
DTSTART;TZID=Europe/Paris:20240606T193000
DTEND;TZID=Europe/Paris:20240606T203000
RRULE:FREQ=WEEKLY;WKST=MO;BYDAY=TH
EXDATE;TZID=Europe/Paris:20240704T193000
CLASS:PUBLIC
STATUS:CONFIRMED
BEGIN:VALARM
TRIGGER:-P1D
ACTION:DISPLAY
DESCRIPTION:Escalade
END:VALARM
END:VEVENT
BEGIN:VEVENT
DTSTAMP:20241125T195941Z
UID:9fb6578a-07a6-4c61-8406-69229713d40e
RECURRENCE-ID;TZID=Europe/Paris:20241128T193000
SEQUENCE:1
SUMMARY:Escalade avec Romain
DTSTART;TZID=Europe/Paris:20241128T193000
DTEND;TZID=Europe/Paris:20241128T203000
EXDATE;TZID=Europe/Paris:20240704T193000
CLASS:PUBLIC
STATUS:CONFIRMED
BEGIN:VALARM
TRIGGER:-P1D
ACTION:DISPLAY
DESCRIPTION:Escalade avec Romain
END:VALARM
END:VEVENT
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:STANDARD
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
DTSTART:19961027T030000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
BEGIN:DAYLIGHT
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
DTSTART:19810329T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
END:VTIMEZONE
END:VCALENDAR

View file

@ -5,14 +5,14 @@ BEGIN:VTIMEZONE
LAST-MODIFIED:20040110T032845Z
TZID:US/Eastern
BEGIN:DAYLIGHT
DTSTART:20000404T020000
DTSTART:20000404
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=4
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20001026T020000
DTSTART:20001026
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:EST
TZOFFSETFROM:-0400

View file

@ -1,28 +0,0 @@
BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VTIMEZONE
LAST-MODIFIED:20040110T032845Z
TZID:US/Eastern
BEGIN:DAYLIGHT
DTSTART:20000404T020000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=4
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20001026T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=US/Eastern:20060321T150000
DURATION:PT1H
RRULE:FREQ=WEEKLY;COUNT=5
SUMMARY:Recurring event
UID:event_weekly_rrule
END:VEVENT
END:VCALENDAR

View file

@ -2,7 +2,7 @@
# Copyright © 2012-2016 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2022 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -22,8 +22,6 @@ Radicale tests with simple requests and authentication.
"""
import base64
import logging
import os
import sys
from typing import Iterable, Tuple, Union
@ -41,14 +39,6 @@ class TestBaseAuthRequests(BaseTest):
"""
# test for available bcrypt module
try:
import bcrypt
except ImportError:
has_bcrypt = 0
else:
has_bcrypt = 1
def _test_htpasswd(self, htpasswd_encryption: str, htpasswd_content: str,
test_matrix: Union[str, Iterable[Tuple[str, str, bool]]]
= "ascii") -> None:
@ -79,9 +69,6 @@ class TestBaseAuthRequests(BaseTest):
def test_htpasswd_plain(self) -> None:
self._test_htpasswd("plain", "tmp:bepo")
def test_htpasswd_plain_autodetect(self) -> None:
self._test_htpasswd("autodetect", "tmp:bepo")
def test_htpasswd_plain_password_split(self) -> None:
self._test_htpasswd("plain", "tmp:be:po", (
("tmp", "be:po", True), ("tmp", "bepo", False)))
@ -92,9 +79,6 @@ class TestBaseAuthRequests(BaseTest):
def test_htpasswd_md5(self) -> None:
self._test_htpasswd("md5", "tmp:$apr1$BI7VKCZh$GKW4vq2hqDINMr8uv7lDY/")
def test_htpasswd_md5_autodetect(self) -> None:
self._test_htpasswd("autodetect", "tmp:$apr1$BI7VKCZh$GKW4vq2hqDINMr8uv7lDY/")
def test_htpasswd_md5_unicode(self):
self._test_htpasswd(
"md5", "😀:$apr1$w4ev89r1$29xO8EvJmS2HEAadQ5qy11", "unicode")
@ -102,99 +86,20 @@ class TestBaseAuthRequests(BaseTest):
def test_htpasswd_sha256(self) -> None:
self._test_htpasswd("sha256", "tmp:$5$i4Ni4TQq6L5FKss5$ilpTjkmnxkwZeV35GB9cYSsDXTALBn6KtWRJAzNlCL/")
def test_htpasswd_sha256_autodetect(self) -> None:
self._test_htpasswd("autodetect", "tmp:$5$i4Ni4TQq6L5FKss5$ilpTjkmnxkwZeV35GB9cYSsDXTALBn6KtWRJAzNlCL/")
def test_htpasswd_sha512(self) -> None:
self._test_htpasswd("sha512", "tmp:$6$3Qhl8r6FLagYdHYa$UCH9yXCed4A.J9FQsFPYAOXImzZUMfvLa0lwcWOxWYLOF5sE/lF99auQ4jKvHY2vijxmefl7G6kMqZ8JPdhIJ/")
def test_htpasswd_sha512_autodetect(self) -> None:
self._test_htpasswd("autodetect", "tmp:$6$3Qhl8r6FLagYdHYa$UCH9yXCed4A.J9FQsFPYAOXImzZUMfvLa0lwcWOxWYLOF5sE/lF99auQ4jKvHY2vijxmefl7G6kMqZ8JPdhIJ/")
def test_htpasswd_bcrypt(self) -> None:
self._test_htpasswd("bcrypt", "tmp:$2y$05$oD7hbiQFQlvCM7zoalo/T.MssV3V"
"NTRI3w5KDnj8NTUKJNWfVpvRq")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_2a(self) -> None:
self._test_htpasswd("bcrypt", "tmp:$2a$10$Mj4A9vMecAp/K7.0fMKoVOk1SjgR.RBhl06a52nvzXhxlT3HB7Reu")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_2a_autodetect(self) -> None:
self._test_htpasswd("autodetect", "tmp:$2a$10$Mj4A9vMecAp/K7.0fMKoVOk1SjgR.RBhl06a52nvzXhxlT3HB7Reu")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_2b(self) -> None:
self._test_htpasswd("bcrypt", "tmp:$2b$12$7a4z/fdmXlBIfkz0smvzW.1Nds8wpgC/bo2DVOb4OSQKWCDL1A1wu")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_2b_autodetect(self) -> None:
self._test_htpasswd("autodetect", "tmp:$2b$12$7a4z/fdmXlBIfkz0smvzW.1Nds8wpgC/bo2DVOb4OSQKWCDL1A1wu")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_2y(self) -> None:
self._test_htpasswd("bcrypt", "tmp:$2y$05$oD7hbiQFQlvCM7zoalo/T.MssV3VNTRI3w5KDnj8NTUKJNWfVpvRq")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_2y_autodetect(self) -> None:
self._test_htpasswd("autodetect", "tmp:$2y$05$oD7hbiQFQlvCM7zoalo/T.MssV3VNTRI3w5KDnj8NTUKJNWfVpvRq")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_C10(self) -> None:
self._test_htpasswd("bcrypt", "tmp:$2y$10$bZsWq06ECzxqi7RmulQvC.T1YHUnLW2E3jn.MU2pvVTGn1dfORt2a")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_C10_autodetect(self) -> None:
self._test_htpasswd("bcrypt", "tmp:$2y$10$bZsWq06ECzxqi7RmulQvC.T1YHUnLW2E3jn.MU2pvVTGn1dfORt2a")
@pytest.mark.skipif(has_bcrypt == 0, reason="No bcrypt module installed")
def test_htpasswd_bcrypt_unicode(self) -> None:
self._test_htpasswd("bcrypt", "😀:$2y$10$Oyz5aHV4MD9eQJbk6GPemOs4T6edK6U9Sqlzr.W1mMVCS8wJUftnW", "unicode")
self._test_htpasswd("bcrypt", "😀:$2y$10$Oyz5aHV4MD9eQJbk6GPemOs4T6edK"
"6U9Sqlzr.W1mMVCS8wJUftnW", "unicode")
def test_htpasswd_multi(self) -> None:
self._test_htpasswd("plain", "ign:ign\ntmp:bepo")
# login cache successful
def test_htpasswd_login_cache_successful_plain(self, caplog) -> None:
caplog.set_level(logging.INFO)
self.configure({"auth": {"cache_logins": "True"}})
self._test_htpasswd("plain", "tmp:bepo", (("tmp", "bepo", True), ("tmp", "bepo", True)))
htpasswd_found = False
htpasswd_cached_found = False
for line in caplog.messages:
if line == "Successful login: 'tmp' (htpasswd)":
htpasswd_found = True
elif line == "Successful login: 'tmp' (htpasswd / cached)":
htpasswd_cached_found = True
if (htpasswd_found is False) or (htpasswd_cached_found is False):
raise ValueError("Logging misses expected log lines")
# login cache failed
def test_htpasswd_login_cache_failed_plain(self, caplog) -> None:
caplog.set_level(logging.INFO)
self.configure({"auth": {"cache_logins": "True"}})
self._test_htpasswd("plain", "tmp:bepo", (("tmp", "bepo1", False), ("tmp", "bepo1", False)))
htpasswd_found = False
htpasswd_cached_found = False
for line in caplog.messages:
if line == "Failed login attempt from unknown: 'tmp' (htpasswd)":
htpasswd_found = True
elif line == "Failed login attempt from unknown: 'tmp' (htpasswd / cached)":
htpasswd_cached_found = True
if (htpasswd_found is False) or (htpasswd_cached_found is False):
raise ValueError("Logging misses expected log lines")
# htpasswd file cache
def test_htpasswd_file_cache(self, caplog) -> None:
self.configure({"auth": {"htpasswd_cache": "True"}})
self._test_htpasswd("plain", "tmp:bepo")
# detection of broken htpasswd file entries
def test_htpasswd_broken(self) -> None:
for userpass in ["tmp:", ":tmp"]:
try:
self._test_htpasswd("plain", userpass)
except RuntimeError:
pass
else:
raise
@pytest.mark.skipif(sys.platform == "win32", reason="leading and trailing "
"whitespaces not allowed in file names")
def test_htpasswd_whitespace_user(self) -> None:
@ -210,21 +115,6 @@ class TestBaseAuthRequests(BaseTest):
def test_htpasswd_comment(self) -> None:
self._test_htpasswd("plain", "#comment\n #comment\n \ntmp:bepo\n\n")
def test_htpasswd_lc_username(self) -> None:
self.configure({"auth": {"lc_username": "True"}})
self._test_htpasswd("plain", "tmp:bepo", (
("tmp", "bepo", True), ("TMP", "bepo", True), ("tmp1", "bepo", False)))
def test_htpasswd_uc_username(self) -> None:
self.configure({"auth": {"uc_username": "True"}})
self._test_htpasswd("plain", "TMP:bepo", (
("tmp", "bepo", True), ("TMP", "bepo", True), ("TMP1", "bepo", False)))
def test_htpasswd_strip_domain(self) -> None:
self.configure({"auth": {"strip_domain": "True"}})
self._test_htpasswd("plain", "tmp:bepo", (
("tmp", "bepo", True), ("tmp@domain.example", "bepo", True), ("tmp1", "bepo", False)))
def test_remote_user(self) -> None:
self.configure({"auth": {"type": "remote_user"}})
_, responses = self.propfind("/", """\
@ -259,118 +149,6 @@ class TestBaseAuthRequests(BaseTest):
href_element = prop.find(xmlutils.make_clark("D:href"))
assert href_element is not None and href_element.text == "/test/"
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def _test_dovecot(
self, user, password, expected_status,
response=b'FAIL\n1\n', mech=[b'PLAIN'], broken=None):
import socket
from unittest.mock import DEFAULT, patch
self.configure({"auth": {"type": "dovecot",
"dovecot_socket": "./dovecot.sock"}})
if broken is None:
broken = []
handshake = b''
if "version" not in broken:
handshake += b'VERSION\t'
if "incompatible" in broken:
handshake += b'2'
else:
handshake += b'1'
handshake += b'\t2\n'
if "mech" not in broken:
handshake += b'MECH\t%b\n' % b' '.join(mech)
if "duplicate" in broken:
handshake += b'VERSION\t1\t2\n'
if "done" not in broken:
handshake += b'DONE\n'
with patch.multiple(
'socket.socket',
connect=DEFAULT,
send=DEFAULT,
recv=DEFAULT
) as mock_socket:
if "socket" in broken:
mock_socket["connect"].side_effect = socket.error(
"Testing error with the socket"
)
mock_socket["recv"].side_effect = [handshake, response]
status, _, answer = self.request(
"PROPFIND", "/",
HTTP_AUTHORIZATION="Basic %s" % base64.b64encode(
("%s:%s" % (user, password)).encode()).decode())
assert status == expected_status
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_no_user(self):
self._test_dovecot("", "", 401)
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_no_password(self):
self._test_dovecot("user", "", 401)
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_broken_handshake_no_version(self):
self._test_dovecot("user", "password", 401, broken=["version"])
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_broken_handshake_incompatible(self):
self._test_dovecot("user", "password", 401, broken=["incompatible"])
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_broken_handshake_duplicate(self):
self._test_dovecot(
"user", "password", 207, response=b'OK\t1',
broken=["duplicate"]
)
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_broken_handshake_no_mech(self):
self._test_dovecot("user", "password", 401, broken=["mech"])
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_broken_handshake_unsupported_mech(self):
self._test_dovecot("user", "password", 401, mech=[b'ONE', b'TWO'])
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_broken_handshake_no_done(self):
self._test_dovecot("user", "password", 401, broken=["done"])
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_broken_socket(self):
self._test_dovecot("user", "password", 401, broken=["socket"])
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_auth_good1(self):
self._test_dovecot("user", "password", 207, response=b'OK\t1')
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_auth_good2(self):
self._test_dovecot(
"user", "password", 207, response=b'OK\t1',
mech=[b'PLAIN\nEXTRA\tTERM']
)
self._test_dovecot("user", "password", 207, response=b'OK\t1')
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_auth_bad1(self):
self._test_dovecot("user", "password", 401, response=b'FAIL\t1')
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_auth_bad2(self):
self._test_dovecot("user", "password", 401, response=b'CONT\t1')
@pytest.mark.skipif(sys.platform == 'win32', reason="Not supported on Windows")
def test_dovecot_auth_id_mismatch(self):
self._test_dovecot("user", "password", 401, response=b'OK\t2')
def test_custom(self) -> None:
"""Custom authentication."""
self.configure({"auth": {"type": "radicale.tests.custom.auth"}})

View file

@ -1,7 +1,6 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2022 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -26,7 +25,6 @@ import posixpath
from typing import Any, Callable, ClassVar, Iterable, List, Optional, Tuple
import defusedxml.ElementTree as DefusedET
import vobject
from radicale import storage, xmlutils
from radicale.tests import RESPONSES, BaseTest
@ -44,26 +42,6 @@ class TestBaseRequests(BaseTest):
rights_file_path = os.path.join(self.colpath, "rights")
with open(rights_file_path, "w") as f:
f.write("""\
[permit delete collection]
user: .*
collection: test-permit-delete
permissions: RrWwD
[forbid delete collection]
user: .*
collection: test-forbid-delete
permissions: RrWwd
[permit overwrite collection]
user: .*
collection: test-permit-overwrite
permissions: RrWwO
[forbid overwrite collection]
user: .*
collection: test-forbid-overwrite
permissions: RrWwo
[allow all]
user: .*
collection: .*
@ -167,12 +145,6 @@ permissions: RrWw""")
event = get_file_content("event_mixed_datetime_and_date.ics")
self.put("/calendar.ics/event.ics", event)
def test_add_event_with_exdate_without_rrule(self) -> None:
"""Test event with EXDATE but not having RRULE."""
self.mkcalendar("/calendar.ics/")
event = get_file_content("event_exdate_without_rrule.ics")
self.put("/calendar.ics/event.ics", event)
def test_add_todo(self) -> None:
"""Add a todo."""
self.mkcalendar("/calendar.ics/")
@ -387,7 +359,7 @@ permissions: RrWw""")
self.get(path1, check=404)
self.get(path2)
def test_move_between_collections(self) -> None:
def test_move_between_colections(self) -> None:
"""Move a item."""
self.mkcalendar("/calendar1.ics/")
self.mkcalendar("/calendar2.ics/")
@ -400,7 +372,7 @@ permissions: RrWw""")
self.get(path1, check=404)
self.get(path2)
def test_move_between_collections_duplicate_uid(self) -> None:
def test_move_between_colections_duplicate_uid(self) -> None:
"""Move a item to a collection which already contains the UID."""
self.mkcalendar("/calendar1.ics/")
self.mkcalendar("/calendar2.ics/")
@ -416,7 +388,7 @@ permissions: RrWw""")
assert xml.tag == xmlutils.make_clark("D:error")
assert xml.find(xmlutils.make_clark("C:no-uid-conflict")) is not None
def test_move_between_collections_overwrite(self) -> None:
def test_move_between_colections_overwrite(self) -> None:
"""Move a item to a collection which already contains the item."""
self.mkcalendar("/calendar1.ics/")
self.mkcalendar("/calendar2.ics/")
@ -430,8 +402,8 @@ permissions: RrWw""")
self.request("MOVE", path1, check=204, HTTP_OVERWRITE="T",
HTTP_DESTINATION="http://127.0.0.1/"+path2)
def test_move_between_collections_overwrite_uid_conflict(self) -> None:
"""Move an item to a collection which already contains the item with
def test_move_between_colections_overwrite_uid_conflict(self) -> None:
"""Move a item to a collection which already contains the item with
a different UID."""
self.mkcalendar("/calendar1.ics/")
self.mkcalendar("/calendar2.ics/")
@ -466,33 +438,6 @@ permissions: RrWw""")
assert responses["/calendar.ics/"] == 200
self.get("/calendar.ics/", check=404)
def test_delete_collection_global_forbid(self) -> None:
"""Delete a collection (expect forbidden)."""
self.configure({"rights": {"permit_delete_collection": False}})
self.mkcalendar("/calendar.ics/")
event = get_file_content("event1.ics")
self.put("/calendar.ics/event1.ics", event)
_, responses = self.delete("/calendar.ics/", check=401)
self.get("/calendar.ics/", check=200)
def test_delete_collection_global_forbid_explicit_permit(self) -> None:
"""Delete a collection with permitted path (expect permit)."""
self.configure({"rights": {"permit_delete_collection": False}})
self.mkcalendar("/test-permit-delete/")
event = get_file_content("event1.ics")
self.put("/test-permit-delete/event1.ics", event)
_, responses = self.delete("/test-permit-delete/", check=200)
self.get("/test-permit-delete/", check=404)
def test_delete_collection_global_permit_explicit_forbid(self) -> None:
"""Delete a collection with permitted path (expect forbid)."""
self.configure({"rights": {"permit_delete_collection": True}})
self.mkcalendar("/test-forbid-delete/")
event = get_file_content("event1.ics")
self.put("/test-forbid-delete/event1.ics", event)
_, responses = self.delete("/test-forbid-delete/", check=401)
self.get("/test-forbid-delete/", check=200)
def test_delete_root_collection(self) -> None:
"""Delete the root collection."""
self.mkcalendar("/calendar.ics/")
@ -504,30 +449,6 @@ permissions: RrWw""")
self.get("/calendar.ics/", check=404)
self.get("/event1.ics", 404)
def test_overwrite_collection_global_forbid(self) -> None:
"""Overwrite a collection (expect forbid)."""
self.configure({"rights": {"permit_overwrite_collection": False}})
event = get_file_content("event1.ics")
self.put("/calender.ics/", event, check=401)
def test_overwrite_collection_global_forbid_explict_permit(self) -> None:
"""Overwrite a collection with permitted path (expect permit)."""
self.configure({"rights": {"permit_overwrite_collection": False}})
event = get_file_content("event1.ics")
self.put("/test-permit-overwrite/", event, check=201)
def test_overwrite_collection_global_permit(self) -> None:
"""Overwrite a collection (expect permit)."""
self.configure({"rights": {"permit_overwrite_collection": True}})
event = get_file_content("event1.ics")
self.put("/calender.ics/", event, check=201)
def test_overwrite_collection_global_permit_explict_forbid(self) -> None:
"""Overwrite a collection with forbidden path (expect forbid)."""
self.configure({"rights": {"permit_overwrite_collection": True}})
event = get_file_content("event1.ics")
self.put("/test-forbid-overwrite/", event, check=401)
def test_propfind(self) -> None:
calendar_path = "/calendar.ics/"
self.mkcalendar("/calendar.ics/")
@ -1439,45 +1360,10 @@ permissions: RrWw""")
</C:calendar-query>""")
assert len(responses) == 1
response = responses[event_path]
assert isinstance(response, dict)
assert not isinstance(response, int)
status, prop = response["D:getetag"]
assert status == 200 and prop.text
def test_report_free_busy(self) -> None:
"""Test free busy report on a few items"""
calendar_path = "/calendar.ics/"
self.mkcalendar(calendar_path)
for i in (1, 2, 10):
filename = "event{}.ics".format(i)
event = get_file_content(filename)
self.put(posixpath.join(calendar_path, filename), event)
code, responses = self.report(calendar_path, """\
<?xml version="1.0" encoding="utf-8" ?>
<C:free-busy-query xmlns:C="urn:ietf:params:xml:ns:caldav">
<C:time-range start="20130901T140000Z" end="20130908T220000Z"/>
</C:free-busy-query>""", 200, is_xml=False)
for response in responses.values():
assert isinstance(response, vobject.base.Component)
assert len(responses) == 1
vcalendar = list(responses.values())[0]
assert isinstance(vcalendar, vobject.base.Component)
assert len(vcalendar.vfreebusy_list) == 3
types = {}
for vfb in vcalendar.vfreebusy_list:
fbtype_val = vfb.fbtype.value
if fbtype_val not in types:
types[fbtype_val] = 0
types[fbtype_val] += 1
assert types == {'BUSY': 2, 'FREE': 1}
# Test max_freebusy_occurrence limit
self.configure({"reporting": {"max_freebusy_occurrence": 1}})
code, responses = self.report(calendar_path, """\
<?xml version="1.0" encoding="utf-8" ?>
<C:free-busy-query xmlns:C="urn:ietf:params:xml:ns:caldav">
<C:time-range start="20130901T140000Z" end="20130908T220000Z"/>
</C:free-busy-query>""", 400, is_xml=False)
def _report_sync_token(
self, calendar_path: str, sync_token: Optional[str] = None
) -> Tuple[str, RESPONSES]:
@ -1639,6 +1525,184 @@ permissions: RrWw""")
calendar_path, "http://radicale.org/ns/sync/INVALID")
assert not sync_token
def test_report_with_expand_property(self) -> None:
"""Test report with expand property"""
self.put("/calendar.ics/", get_file_content("event_daily_rrule.ics"))
req_body_without_expand = \
"""<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<C:calendar-data>
</C:calendar-data>
</D:prop>
<C:filter>
<C:comp-filter name="VCALENDAR">
<C:comp-filter name="VEVENT">
<C:time-range start="20060103T000000Z" end="20060105T000000Z"/>
</C:comp-filter>
</C:comp-filter>
</C:filter>
</C:calendar-query>
"""
_, responses = self.report("/calendar.ics/", req_body_without_expand)
assert len(responses) == 1
response_without_expand = responses['/calendar.ics/event_daily_rrule.ics']
assert not isinstance(response_without_expand, int)
status, element = response_without_expand["C:calendar-data"]
assert status == 200 and element.text
assert "RRULE" in element.text
assert "BEGIN:VTIMEZONE" in element.text
assert "RECURRENCE-ID" not in element.text
uids: List[str] = []
for line in element.text.split("\n"):
if line.startswith("UID:"):
uid = line[len("UID:"):]
assert uid == "event_daily_rrule"
uids.append(uid)
assert len(uids) == 1
req_body_with_expand = \
"""<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<C:calendar-data>
<C:expand start="20060103T000000Z" end="20060105T000000Z"/>
</C:calendar-data>
</D:prop>
<C:filter>
<C:comp-filter name="VCALENDAR">
<C:comp-filter name="VEVENT">
<C:time-range start="20060103T000000Z" end="20060105T000000Z"/>
</C:comp-filter>
</C:comp-filter>
</C:filter>
</C:calendar-query>
"""
_, responses = self.report("/calendar.ics/", req_body_with_expand)
assert len(responses) == 1
response_with_expand = responses['/calendar.ics/event_daily_rrule.ics']
assert not isinstance(response_with_expand, int)
status, element = response_with_expand["C:calendar-data"]
assert status == 200 and element.text
assert "RRULE" not in element.text
assert "BEGIN:VTIMEZONE" not in element.text
uids = []
recurrence_ids = []
for line in element.text.split("\n"):
if line.startswith("UID:"):
assert line == "UID:event_daily_rrule"
uids.append(line)
if line.startswith("RECURRENCE-ID:"):
assert line in ["RECURRENCE-ID:20060103T170000Z", "RECURRENCE-ID:20060104T170000Z"]
recurrence_ids.append(line)
if line.startswith("DTSTART:"):
assert line == "DTSTART:20060102T170000Z"
assert len(uids) == 2
assert len(set(recurrence_ids)) == 2
def test_report_with_expand_property_all_day_event(self) -> None:
"""Test report with expand property"""
self.put("/calendar.ics/", get_file_content("event_full_day_rrule.ics"))
req_body_without_expand = \
"""<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<C:calendar-data>
</C:calendar-data>
</D:prop>
<C:filter>
<C:comp-filter name="VCALENDAR">
<C:comp-filter name="VEVENT">
<C:time-range start="20060103T000000Z" end="20060105T000000Z"/>
</C:comp-filter>
</C:comp-filter>
</C:filter>
</C:calendar-query>
"""
_, responses = self.report("/calendar.ics/", req_body_without_expand)
assert len(responses) == 1
response_without_expand = responses['/calendar.ics/event_full_day_rrule.ics']
assert not isinstance(response_without_expand, int)
status, element = response_without_expand["C:calendar-data"]
assert status == 200 and element.text
assert "RRULE" in element.text
assert "RECURRENCE-ID" not in element.text
uids: List[str] = []
for line in element.text.split("\n"):
if line.startswith("UID:"):
uid = line[len("UID:"):]
assert uid == "event_full_day_rrule"
uids.append(uid)
assert len(uids) == 1
req_body_with_expand = \
"""<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<C:calendar-data>
<C:expand start="20060103T000000Z" end="20060105T000000Z"/>
</C:calendar-data>
</D:prop>
<C:filter>
<C:comp-filter name="VCALENDAR">
<C:comp-filter name="VEVENT">
<C:time-range start="20060103T000000Z" end="20060105T000000Z"/>
</C:comp-filter>
</C:comp-filter>
</C:filter>
</C:calendar-query>
"""
_, responses = self.report("/calendar.ics/", req_body_with_expand)
assert len(responses) == 1
response_with_expand = responses['/calendar.ics/event_full_day_rrule.ics']
assert not isinstance(response_with_expand, int)
status, element = response_with_expand["C:calendar-data"]
assert status == 200 and element.text
assert "RRULE" not in element.text
assert "BEGIN:VTIMEZONE" not in element.text
uids = []
recurrence_ids = []
for line in element.text.split("\n"):
if line.startswith("UID:"):
assert line == "UID:event_full_day_rrule"
uids.append(line)
if line.startswith("RECURRENCE-ID:"):
assert line in ["RECURRENCE-ID:20060103", "RECURRENCE-ID:20060104", "RECURRENCE-ID:20060105"]
recurrence_ids.append(line)
if line.startswith("DTSTART:"):
assert line == "DTSTART:20060102"
if line.startswith("DTEND:"):
assert line == "DTEND:20060103"
assert len(uids) == 3
assert len(set(recurrence_ids)) == 3
def test_propfind_sync_token(self) -> None:
"""Retrieve the sync-token with a propfind request"""
calendar_path = "/calendar.ics/"
@ -1714,7 +1778,6 @@ permissions: RrWw""")
assert status == 200 and prop.text == "text/vcard;charset=utf-8"
def test_authorization(self) -> None:
self.configure({"auth": {"type": "none"}})
_, responses = self.propfind("/", """\
<?xml version="1.0" encoding="utf-8"?>
<propfind xmlns="DAV:">
@ -1741,7 +1804,6 @@ permissions: RrWw""")
def test_principal_collection_creation(self) -> None:
"""Verify existence of the principal collection."""
self.configure({"auth": {"type": "none"}})
self.propfind("/user/", login="user:")
def test_authentication_current_user_principal_hack(self) -> None:

View file

@ -1,230 +0,0 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
"""
Radicale tests with expand requests.
"""
import os
from typing import ClassVar, List
from radicale.tests import BaseTest
from radicale.tests.helpers import get_file_content
ONLY_DATES = True
CONTAINS_TIMES = False
class TestExpandRequests(BaseTest):
"""Tests with expand requests."""
# Allow skipping sync-token tests when they are not fully supported by the backend
full_sync_token_support: ClassVar[bool] = True
def setup_method(self) -> None:
BaseTest.setup_method(self)
rights_file_path = os.path.join(self.colpath, "rights")
with open(rights_file_path, "w") as f:
f.write("""\
[permit delete collection]
user: .*
collection: test-permit-delete
permissions: RrWwD
[forbid delete collection]
user: .*
collection: test-forbid-delete
permissions: RrWwd
[permit overwrite collection]
user: .*
collection: test-permit-overwrite
permissions: RrWwO
[forbid overwrite collection]
user: .*
collection: test-forbid-overwrite
permissions: RrWwo
[allow all]
user: .*
collection: .*
permissions: RrWw""")
self.configure({"rights": {"file": rights_file_path,
"type": "from_file"}})
def _test_expand(self,
expected_uid: str,
start: str,
end: str,
expected_recurrence_ids: List[str],
expected_start_times: List[str],
expected_end_times: List[str],
only_dates: bool,
nr_uids: int) -> None:
self.put("/calendar.ics/", get_file_content(f"{expected_uid}.ics"))
req_body_without_expand = \
f"""<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<C:calendar-data>
</C:calendar-data>
</D:prop>
<C:filter>
<C:comp-filter name="VCALENDAR">
<C:comp-filter name="VEVENT">
<C:time-range start="{start}" end="{end}"/>
</C:comp-filter>
</C:comp-filter>
</C:filter>
</C:calendar-query>
"""
_, responses = self.report("/calendar.ics/", req_body_without_expand)
assert len(responses) == 1
response_without_expand = responses[f'/calendar.ics/{expected_uid}.ics']
assert not isinstance(response_without_expand, int)
status, element = response_without_expand["C:calendar-data"]
assert status == 200 and element.text
assert "RRULE" in element.text
if not only_dates:
assert "BEGIN:VTIMEZONE" in element.text
if nr_uids == 1:
assert "RECURRENCE-ID" not in element.text
uids: List[str] = []
for line in element.text.split("\n"):
if line.startswith("UID:"):
uid = line[len("UID:"):]
assert uid == expected_uid
uids.append(uid)
assert len(uids) == nr_uids
req_body_with_expand = \
f"""<?xml version="1.0" encoding="utf-8" ?>
<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
<D:prop>
<C:calendar-data>
<C:expand start="{start}" end="{end}"/>
</C:calendar-data>
</D:prop>
<C:filter>
<C:comp-filter name="VCALENDAR">
<C:comp-filter name="VEVENT">
<C:time-range start="{start}" end="{end}"/>
</C:comp-filter>
</C:comp-filter>
</C:filter>
</C:calendar-query>
"""
_, responses = self.report("/calendar.ics/", req_body_with_expand)
assert len(responses) == 1
response_with_expand = responses[f'/calendar.ics/{expected_uid}.ics']
assert not isinstance(response_with_expand, int)
status, element = response_with_expand["C:calendar-data"]
assert status == 200 and element.text
assert "RRULE" not in element.text
assert "BEGIN:VTIMEZONE" not in element.text
uids = []
recurrence_ids = []
for line in element.text.split("\n"):
if line.startswith("UID:"):
assert line == f"UID:{expected_uid}"
uids.append(line)
if line.startswith("RECURRENCE-ID:"):
assert line in expected_recurrence_ids
recurrence_ids.append(line)
if line.startswith("DTSTART:"):
assert line in expected_start_times
if line.startswith("DTEND:"):
assert line in expected_end_times
assert len(uids) == len(expected_recurrence_ids)
assert len(set(recurrence_ids)) == len(expected_recurrence_ids)
def test_report_with_expand_property(self) -> None:
"""Test report with expand property"""
self._test_expand(
"event_daily_rrule",
"20060103T000000Z",
"20060105T000000Z",
["RECURRENCE-ID:20060103T170000Z", "RECURRENCE-ID:20060104T170000Z"],
["DTSTART:20060103T170000Z", "DTSTART:20060104T170000Z"],
[],
CONTAINS_TIMES,
1
)
def test_report_with_expand_property_all_day_event(self) -> None:
"""Test report with expand property for all day events"""
self._test_expand(
"event_full_day_rrule",
"20060103T000000Z",
"20060105T000000Z",
["RECURRENCE-ID:20060103", "RECURRENCE-ID:20060104", "RECURRENCE-ID:20060105"],
["DTSTART:20060103", "DTSTART:20060104", "DTSTART:20060105"],
["DTEND:20060104", "DTEND:20060105", "DTEND:20060106"],
ONLY_DATES,
1
)
def test_report_with_expand_property_overridden(self) -> None:
"""Test report with expand property with overridden events"""
self._test_expand(
"event_daily_rrule_overridden",
"20060103T000000Z",
"20060105T000000Z",
["RECURRENCE-ID:20060103T170000Z", "RECURRENCE-ID:20060104T170000Z"],
["DTSTART:20060103T170000Z", "DTSTART:20060104T190000Z"],
[],
CONTAINS_TIMES,
2
)
def test_report_with_expand_property_timezone(self):
self._test_expand(
"event_weekly_rrule",
"20060320T000000Z",
"20060414T000000Z",
[
"RECURRENCE-ID:20060321T200000Z",
"RECURRENCE-ID:20060328T200000Z",
"RECURRENCE-ID:20060404T190000Z",
"RECURRENCE-ID:20060411T190000Z",
],
[
"DTSTART:20060321T200000Z",
"DTSTART:20060328T200000Z",
"DTSTART:20060404T190000Z",
"DTSTART:20060411T190000Z",
],
[],
CONTAINS_TIMES,
1
)

View file

@@ -30,10 +30,10 @@ class TestBaseRightsRequests(BaseTest):
def _test_rights(self, rights_type: str, user: str, path: str, mode: str,
expected_status: int, with_auth: bool = True) -> None:
assert mode in ("r", "w")
assert user in ("", "tmp", "user@domain.test")
assert user in ("", "tmp")
htpasswd_file_path = os.path.join(self.colpath, ".htpasswd")
with open(htpasswd_file_path, "w") as f:
f.write("tmp:bepo\nother:bepo\nuser@domain.test:bepo")
f.write("tmp:bepo\nother:bepo")
self.configure({
"rights": {"type": rights_type},
"auth": {"type": "htpasswd" if with_auth else "none",
@@ -42,9 +42,8 @@ class TestBaseRightsRequests(BaseTest):
for u in ("tmp", "other"):
# Indirect creation of principal collection
self.propfind("/%s/" % u, login="%s:bepo" % u)
os.makedirs(os.path.join(self.colpath, "collection-root", "domain.test"), exist_ok=True)
(self.propfind if mode == "r" else self.proppatch)(
path, check=expected_status, login="%s:bepo" % user if user else None)
path, check=expected_status, login="tmp:bepo" if user else None)
def test_owner_only(self) -> None:
self._test_rights("owner_only", "", "/", "r", 401)
@@ -111,23 +110,14 @@ permissions: RrWw
[custom]
user: .*
collection: custom(/.*)?
permissions: Rr
[read-domain-principal]
user: .+@([^@]+)
collection: {0}
permissions: R""")
permissions: Rr""")
self.configure({"rights": {"file": rights_file_path}})
self._test_rights("from_file", "", "/other/", "r", 401)
self._test_rights("from_file", "tmp", "/tmp/", "r", 207)
self._test_rights("from_file", "tmp", "/other/", "r", 403)
self._test_rights("from_file", "", "/custom/sub", "r", 404)
self._test_rights("from_file", "tmp", "/custom/sub", "r", 404)
self._test_rights("from_file", "", "/custom/sub", "w", 401)
self._test_rights("from_file", "tmp", "/custom/sub", "w", 403)
self._test_rights("from_file", "user@domain.test", "/domain.test/", "r", 207)
self._test_rights("from_file", "user@domain.test", "/tmp/", "r", 403)
self._test_rights("from_file", "user@domain.test", "/other/", "r", 403)
def test_from_file_limited_get(self):
rights_file_path = os.path.join(self.colpath, "rights")
@@ -143,7 +133,6 @@ collection: public/[^/]*
permissions: i""")
self.configure({"rights": {"type": "from_file",
"file": rights_file_path}})
self.configure({"auth": {"type": "none"}})
self.mkcalendar("/tmp/calendar", login="tmp:bepo")
self.mkcol("/public", login="tmp:bepo")
self.mkcalendar("/public/calendar", login="tmp:bepo")
@@ -166,7 +155,6 @@ permissions: i""")
Items are allowed at "/.../.../...".
"""
self.configure({"auth": {"type": "none"}})
self.mkcalendar("/", check=401)
self.mkcalendar("/user/", check=401)
self.mkcol("/user/")
@@ -177,7 +165,6 @@ permissions: i""")
def test_put_collections_and_items(self) -> None:
"""Test rights for creation of calendars and items with PUT."""
self.configure({"auth": {"type": "none"}})
self.put("/user/", "BEGIN:VCALENDAR\r\nEND:VCALENDAR", check=401)
self.mkcol("/user/")
self.put("/user/calendar/", "BEGIN:VCALENDAR\r\nEND:VCALENDAR")

View file

@@ -141,19 +141,13 @@ class TestBaseServerRequests(BaseTest):
def test_bind_fail(self) -> None:
for address_family, address in [(socket.AF_INET, "::1"),
(socket.AF_INET6, "127.0.0.1")]:
try:
with socket.socket(address_family, socket.SOCK_STREAM) as sock:
if address_family == socket.AF_INET6:
# Only allow IPv6 connections to the IPv6 socket
sock.setsockopt(server.COMPAT_IPPROTO_IPV6,
socket.IPV6_V6ONLY, 1)
with pytest.raises(OSError) as exc_info:
sock.bind((address, 0))
except OSError as e:
if e.errno in (errno.EADDRNOTAVAIL, errno.EAFNOSUPPORT,
errno.EPROTONOSUPPORT):
continue
raise
with socket.socket(address_family, socket.SOCK_STREAM) as sock:
if address_family == socket.AF_INET6:
# Only allow IPv6 connections to the IPv6 socket
sock.setsockopt(server.COMPAT_IPPROTO_IPV6,
socket.IPV6_V6ONLY, 1)
with pytest.raises(OSError) as exc_info:
sock.bind((address, 0))
# See ``radicale.server.serve``
assert (isinstance(exc_info.value, socket.gaierror) and
exc_info.value.errno in (

View file

@@ -1,7 +1,6 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2022 Unrud <unrud@outlook.com>
# Copyright © 2024-2024 Peter Bieringer <pb@bieringer.de>
# Copyright © 2017-2019 Unrud <unrud@outlook.com>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -77,14 +76,13 @@ class TestMultiFileSystem(BaseTest):
"""Verify that the hooks runs when a new user is created."""
self.configure({"storage": {"hook": "mkdir %s" % os.path.join(
"collection-root", "created_by_hook")}})
self.configure({"auth": {"type": "none"}})
self.propfind("/", login="user:")
self.propfind("/created_by_hook/")
def test_hook_fail(self) -> None:
"""Verify that a request succeeded if the hook still fails (anyhow no rollback implemented)."""
"""Verify that a request fails if the hook fails."""
self.configure({"storage": {"hook": "exit 1"}})
self.mkcalendar("/calendar.ics/", check=201)
self.mkcalendar("/calendar.ics/", check=500)
def test_item_cache_rebuild(self) -> None:
"""Delete the item cache and verify that it is rebuild."""
@@ -101,38 +99,6 @@ class TestMultiFileSystem(BaseTest):
assert answer1 == answer2
assert os.path.exists(os.path.join(cache_folder, "event1.ics"))
def test_item_cache_rebuild_subfolder(self) -> None:
"""Delete the item cache and verify that it is rebuild."""
self.configure({"storage": {"use_cache_subfolder_for_item": "True"}})
self.mkcalendar("/calendar.ics/")
event = get_file_content("event1.ics")
path = "/calendar.ics/event1.ics"
self.put(path, event)
_, answer1 = self.get(path)
cache_folder = os.path.join(self.colpath, "collection-cache",
"calendar.ics", ".Radicale.cache", "item")
assert os.path.exists(os.path.join(cache_folder, "event1.ics"))
shutil.rmtree(cache_folder)
_, answer2 = self.get(path)
assert answer1 == answer2
assert os.path.exists(os.path.join(cache_folder, "event1.ics"))
def test_item_cache_rebuild_mtime_and_size(self) -> None:
"""Delete the item cache and verify that it is rebuild."""
self.configure({"storage": {"use_mtime_and_size_for_item_cache": "True"}})
self.mkcalendar("/calendar.ics/")
event = get_file_content("event1.ics")
path = "/calendar.ics/event1.ics"
self.put(path, event)
_, answer1 = self.get(path)
cache_folder = os.path.join(self.colpath, "collection-root",
"calendar.ics", ".Radicale.cache", "item")
assert os.path.exists(os.path.join(cache_folder, "event1.ics"))
shutil.rmtree(cache_folder)
_, answer2 = self.get(path)
assert answer1 == answer2
assert os.path.exists(os.path.join(cache_folder, "event1.ics"))
def test_put_whole_calendar_uids_used_as_file_names(self) -> None:
"""Test if UIDs are used as file names."""
_TestBaseRequests.test_put_whole_calendar(

View file

@@ -15,9 +15,9 @@
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import contextlib
import sys
from typing import (Any, Callable, ContextManager, Iterator, List, Mapping,
MutableMapping, Protocol, Sequence, Tuple, TypeVar, Union,
runtime_checkable)
MutableMapping, Sequence, Tuple, TypeVar, Union)
WSGIResponseHeaders = Union[Mapping[str, str], Sequence[Tuple[str, str]]]
WSGIResponse = Tuple[int, WSGIResponseHeaders, Union[None, str, bytes]]
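# Illustrative value matching the aliases above (not from the diff):
#   response: WSGIResponse = (200, {"Content-Type": "text/plain"}, "OK")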
@@ -41,17 +41,20 @@ def contextmanager(func: Callable[..., Iterator[_T]]
return result
@runtime_checkable
class InputStream(Protocol):
def read(self, size: int = ...) -> bytes: ...
if sys.version_info >= (3, 8):
from typing import Protocol, runtime_checkable
@runtime_checkable
class InputStream(Protocol):
def read(self, size: int = ...) -> bytes: ...
@runtime_checkable
class ErrorStream(Protocol):
def flush(self) -> object: ...
def write(self, s: str) -> object: ...
@runtime_checkable
class ErrorStream(Protocol):
def flush(self) -> object: ...
def write(self, s: str) -> object: ...
else:
ErrorStream = Any
InputStream = Any
from radicale import item, storage # noqa:E402 isort:skip

View file

@@ -2,7 +2,6 @@
# Copyright © 2014 Jean-Marc Martins
# Copyright © 2012-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -17,29 +16,20 @@
# You should have received a copy of the GNU General Public License
# along with Radicale. If not, see <http://www.gnu.org/licenses/>.
import ssl
import sys
from importlib import import_module, metadata
from typing import Callable, Sequence, Tuple, Type, TypeVar, Union
from importlib import import_module
from typing import Callable, Sequence, Type, TypeVar, Union
from radicale import config
from radicale.log import logger
if sys.version_info < (3, 8):
import pkg_resources
else:
from importlib import metadata
_T_co = TypeVar("_T_co", covariant=True)
RADICALE_MODULES: Sequence[str] = ("radicale", "vobject", "passlib", "defusedxml",
"dateutil",
"bcrypt",
"pika",
"ldap",
"ldap3",
"pam")
# IPv4 (host, port) and IPv6 (host, port, flowinfo, scopeid)
ADDRESS_TYPE = Union[Tuple[Union[str, bytes, bytearray], int],
Tuple[str, int, int, int]]
def load_plugin(internal_types: Sequence[str], module_name: str,
class_name: str, base_class: Type[_T_co],
@@ -62,155 +52,6 @@ def load_plugin(internal_types: Sequence[str], module_name: str,
def package_version(name):
if sys.version_info < (3, 8):
return pkg_resources.get_distribution(name).version
return metadata.version(name)
def packages_version():
versions = []
versions.append("python=%s.%s.%s" % (sys.version_info[0], sys.version_info[1], sys.version_info[2]))
for pkg in RADICALE_MODULES:
try:
versions.append("%s=%s" % (pkg, package_version(pkg)))
except Exception:
try:
versions.append("%s=%s" % (pkg, package_version("python-" + pkg)))
except Exception:
versions.append("%s=%s" % (pkg, "n/a"))
return " ".join(versions)
def format_address(address: ADDRESS_TYPE) -> str:
host, port, *_ = address
if not isinstance(host, str):
raise NotImplementedError("Unsupported address format: %r" %
(address,))
if host.find(":") == -1:
return "%s:%d" % (host, port)
else:
return "[%s]:%d" % (host, port)
def ssl_context_options_by_protocol(protocol: str, ssl_context_options):
logger.debug("SSL protocol string: '%s' and current SSL context options: '0x%x'", protocol, ssl_context_options)
# disable any protocol by default
logger.debug("SSL context options, disable ALL by default")
ssl_context_options |= ssl.OP_NO_SSLv2
ssl_context_options |= ssl.OP_NO_SSLv3
ssl_context_options |= ssl.OP_NO_TLSv1
ssl_context_options |= ssl.OP_NO_TLSv1_1
ssl_context_options |= ssl.OP_NO_TLSv1_2
ssl_context_options |= ssl.OP_NO_TLSv1_3
logger.debug("SSL cleared SSL context options: '0x%x'", ssl_context_options)
for entry in protocol.split():
entry = entry.strip('+') # remove any leading/trailing '+'
if entry == "ALL":
logger.debug("SSL context options, enable ALL (some maybe not supported by underlying OpenSSL, SSLv2 not enabled at all)")
ssl_context_options &= ~ssl.OP_NO_SSLv3
ssl_context_options &= ~ssl.OP_NO_TLSv1
ssl_context_options &= ~ssl.OP_NO_TLSv1_1
ssl_context_options &= ~ssl.OP_NO_TLSv1_2
ssl_context_options &= ~ssl.OP_NO_TLSv1_3
elif entry == "SSLv2":
logger.warning("SSL context options, ignore SSLv2 (totally insecure)")
elif entry == "SSLv3":
ssl_context_options &= ~ssl.OP_NO_SSLv3
logger.debug("SSL context options, enable SSLv3 (maybe not supported by underlying OpenSSL)")
elif entry == "TLSv1":
ssl_context_options &= ~ssl.OP_NO_TLSv1
logger.debug("SSL context options, enable TLSv1 (maybe not supported by underlying OpenSSL)")
elif entry == "TLSv1.1":
logger.debug("SSL context options, enable TLSv1.1 (maybe not supported by underlying OpenSSL)")
ssl_context_options &= ~ssl.OP_NO_TLSv1_1
elif entry == "TLSv1.2":
logger.debug("SSL context options, enable TLSv1.2")
ssl_context_options &= ~ssl.OP_NO_TLSv1_2
elif entry == "TLSv1.3":
logger.debug("SSL context options, enable TLSv1.3")
ssl_context_options &= ~ssl.OP_NO_TLSv1_3
elif entry == "-ALL":
logger.debug("SSL context options, disable ALL")
ssl_context_options |= ssl.OP_NO_SSLv2
ssl_context_options |= ssl.OP_NO_SSLv3
ssl_context_options |= ssl.OP_NO_TLSv1
ssl_context_options |= ssl.OP_NO_TLSv1_1
ssl_context_options |= ssl.OP_NO_TLSv1_2
ssl_context_options |= ssl.OP_NO_TLSv1_3
elif entry == "-SSLv2":
ssl_context_options |= ssl.OP_NO_SSLv2
logger.debug("SSL context options, disable SSLv2")
elif entry == "-SSLv3":
ssl_context_options |= ssl.OP_NO_SSLv3
logger.debug("SSL context options, disable SSLv3")
elif entry == "-TLSv1":
logger.debug("SSL context options, disable TLSv1")
ssl_context_options |= ssl.OP_NO_TLSv1
elif entry == "-TLSv1.1":
logger.debug("SSL context options, disable TLSv1.1")
ssl_context_options |= ssl.OP_NO_TLSv1_1
elif entry == "-TLSv1.2":
logger.debug("SSL context options, disable TLSv1.2")
ssl_context_options |= ssl.OP_NO_TLSv1_2
elif entry == "-TLSv1.3":
logger.debug("SSL context options, disable TLSv1.3")
ssl_context_options |= ssl.OP_NO_TLSv1_3
else:
raise RuntimeError("SSL protocol config contains unsupported entry '%s'" % (entry))
logger.debug("SSL resulting context options: '0x%x'", ssl_context_options)
return ssl_context_options
def ssl_context_minimum_version_by_options(ssl_context_options):
logger.debug("SSL calculate minimum version by context options: '0x%x'", ssl_context_options)
ssl_context_minimum_version = ssl.TLSVersion.SSLv3 # default
if ((ssl_context_options & ssl.OP_NO_SSLv3) and (ssl_context_minimum_version == ssl.TLSVersion.SSLv3)):
ssl_context_minimum_version = ssl.TLSVersion.TLSv1
if ((ssl_context_options & ssl.OP_NO_TLSv1) and (ssl_context_minimum_version == ssl.TLSVersion.TLSv1)):
ssl_context_minimum_version = ssl.TLSVersion.TLSv1_1
if ((ssl_context_options & ssl.OP_NO_TLSv1_1) and (ssl_context_minimum_version == ssl.TLSVersion.TLSv1_1)):
ssl_context_minimum_version = ssl.TLSVersion.TLSv1_2
if ((ssl_context_options & ssl.OP_NO_TLSv1_2) and (ssl_context_minimum_version == ssl.TLSVersion.TLSv1_2)):
ssl_context_minimum_version = ssl.TLSVersion.TLSv1_3
if ((ssl_context_options & ssl.OP_NO_TLSv1_3) and (ssl_context_minimum_version == ssl.TLSVersion.TLSv1_3)):
ssl_context_minimum_version = 0 # all disabled
logger.debug("SSL context options: '0x%x' results in minimum version: %s", ssl_context_options, ssl_context_minimum_version)
return ssl_context_minimum_version
def ssl_context_maximum_version_by_options(ssl_context_options):
logger.debug("SSL calculate maximum version by context options: '0x%x'", ssl_context_options)
ssl_context_maximum_version = ssl.TLSVersion.TLSv1_3 # default
if ((ssl_context_options & ssl.OP_NO_TLSv1_3) and (ssl_context_maximum_version == ssl.TLSVersion.TLSv1_3)):
ssl_context_maximum_version = ssl.TLSVersion.TLSv1_2
if ((ssl_context_options & ssl.OP_NO_TLSv1_2) and (ssl_context_maximum_version == ssl.TLSVersion.TLSv1_2)):
ssl_context_maximum_version = ssl.TLSVersion.TLSv1_1
if ((ssl_context_options & ssl.OP_NO_TLSv1_1) and (ssl_context_maximum_version == ssl.TLSVersion.TLSv1_1)):
ssl_context_maximum_version = ssl.TLSVersion.TLSv1
if ((ssl_context_options & ssl.OP_NO_TLSv1) and (ssl_context_maximum_version == ssl.TLSVersion.TLSv1)):
ssl_context_maximum_version = ssl.TLSVersion.SSLv3
if ((ssl_context_options & ssl.OP_NO_SSLv3) and (ssl_context_maximum_version == ssl.TLSVersion.SSLv3)):
ssl_context_maximum_version = 0
logger.debug("SSL context options: '0x%x' results in maximum version: %s", ssl_context_options, ssl_context_maximum_version)
return ssl_context_maximum_version
def ssl_get_protocols(context):
protocols = []
if not (context.options & ssl.OP_NO_SSLv3):
if (context.minimum_version < ssl.TLSVersion.TLSv1):
protocols.append("SSLv3")
if not (context.options & ssl.OP_NO_TLSv1):
if (context.minimum_version < ssl.TLSVersion.TLSv1_1) and (context.maximum_version >= ssl.TLSVersion.TLSv1):
protocols.append("TLSv1")
if not (context.options & ssl.OP_NO_TLSv1_1):
if (context.minimum_version < ssl.TLSVersion.TLSv1_2) and (context.maximum_version >= ssl.TLSVersion.TLSv1_1):
protocols.append("TLSv1.1")
if not (context.options & ssl.OP_NO_TLSv1_2):
if (context.minimum_version <= ssl.TLSVersion.TLSv1_2) and (context.maximum_version >= ssl.TLSVersion.TLSv1_2):
protocols.append("TLSv1.2")
if not (context.options & ssl.OP_NO_TLSv1_3):
if (context.minimum_version <= ssl.TLSVersion.TLSv1_3) and (context.maximum_version >= ssl.TLSVersion.TLSv1_3):
protocols.append("TLSv1.3")
return protocols
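Taken together, these four helpers can derive a consistent TLS configuration from a protocol string. A minimal usage sketch (assumed, not part of the repository):
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.options = ssl_context_options_by_protocol("ALL -SSLv3 -TLSv1 -TLSv1.1", ctx.options)
ctx.minimum_version = ssl_context_minimum_version_by_options(ctx.options)  # TLSv1.2
ctx.maximum_version = ssl_context_maximum_version_by_options(ctx.options)  # TLSv1.3
print(ssl_get_protocols(ctx))  # expected: ['TLSv1.2', 'TLSv1.3']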

View file

@@ -39,17 +39,6 @@ main{
color: #484848;
}
#loginscene .infcloudlink{
margin: 0;
width: 100%;
text-align: center;
color: #484848;
}
#loginscene .infcloudlink-hidden{
visibility: hidden;
}
#loginscene input{
}

View file

@@ -1,8 +1,6 @@
/**
* This file is part of Radicale Server - Calendar Server
* Copyright © 2017-2024 Unrud <unrud@outlook.com>
* Copyright © 2023-2024 Matthew Hana <matthew.hana@gmail.com>
* Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -396,7 +394,7 @@ function create_edit_collection(user, password, collection, create, callback) {
calendar_color = escape_xml(collection.color + (collection.color ? "ff" : ""));
calendar_description = escape_xml(collection.description);
resourcetype = '<CS:subscribed />';
calendar_source = escape_xml(collection.source);
calendar_source = collection.source;
} else {
calendar_color = escape_xml(collection.color + (collection.color ? "ff" : ""));
calendar_description = escape_xml(collection.description);
@@ -875,7 +873,8 @@ function UploadCollectionScene(user, password, collection) {
upload_btn.onclick = upload_start;
uploadfile_form.onchange = onfileschange;
href_form.value = "";
let href = random_uuid();
href_form.value = href;
/** @type {?number} */ let scene_index = null;
/** @type {?XMLHttpRequest} */ let upload_req = null;
@@ -927,7 +926,7 @@ function UploadCollectionScene(user, password, collection) {
if(files.length > 1 || href.length == 0){
href = random_uuid();
}
let upload_href = collection.href + href + "/";
let upload_href = collection.href + "/" + href + "/";
upload_req = upload_collection(user, password, upload_href, file, function(result) {
upload_req = null;
results.push(result);
@@ -993,12 +992,10 @@ function UploadCollectionScene(user, password, collection) {
hreflimitmsg_html.classList.remove("hidden");
href_form.classList.add("hidden");
href_label.classList.add("hidden");
href_form.value = random_uuid(); // dummy, will be replaced on upload
}else{
hreflimitmsg_html.classList.add("hidden");
href_form.classList.remove("hidden");
href_label.classList.remove("hidden");
href_form.value = files[0].name.replace(/\.(ics|vcf)$/, '');
}
return false;
}
@@ -1007,12 +1004,6 @@ function UploadCollectionScene(user, password, collection) {
scene_index = scene_stack.length - 1;
html_scene.classList.remove("hidden");
close_btn.onclick = onclose;
if(error){
error_form.textContent = "Error: " + error;
error_form.classList.remove("hidden");
}else{
error_form.classList.add("hidden");
}
};
this.hide = function() {
@@ -1221,7 +1212,7 @@ function CreateEditCollectionScene(user, password, collection) {
alert("You must enter a valid HREF");
return false;
}
href = collection.href + newhreftxtvalue + "/";
href = collection.href + "/" + newhreftxtvalue + "/";
}
displayname = displayname_form.value;
description = description_form.value;
@@ -1325,12 +1316,6 @@ function CreateEditCollectionScene(user, password, collection) {
fill_form();
submit_btn.onclick = onsubmit;
cancel_btn.onclick = oncancel;
if(error){
error_form.textContent = "Error: " + error;
error_form.classList.remove("hidden");
}else{
error_form.classList.add("hidden");
}
};
this.hide = function() {
read_form();
@@ -1362,10 +1347,8 @@ function cleanHREFinput(a) {
href_form = a.target;
}
let currentTxtVal = href_form.value.trim().toLowerCase();
//Clean the HREF to remove characters that are not permitted
currentTxtVal = currentTxtVal.replace(/(?![0-9a-z\-\_\.])./g, '');
//Clean the HREF to remove leading . (would result in hidden directory)
currentTxtVal = currentTxtVal.replace(/^\./, '');
//Clean the HREF to keep only lowercase letters, digits, '-' and '_'
currentTxtVal = currentTxtVal.replace(/(?![0-9a-z\-\_])./g, '');
href_form.value = currentTxtVal;
}
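The sanitizer above is JavaScript; a rough Python transcription of the master-side logic (illustrative only, the function name is invented) makes the two regexes easier to read:
import re

def clean_href(value: str) -> str:
    value = value.strip().lower()
    # keep only the permitted characters [0-9a-z-_.]
    value = re.sub(r"(?![0-9a-z\-\_\.]).", "", value)
    # drop a leading "." (it would create a hidden directory)
    return re.sub(r"^\.", "", value)

print(clean_href("My Calendar.ICS"))  # -> "mycalendar.ics"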

View file

@@ -1,10 +1,4 @@
<!DOCTYPE html>
<!--
* Copyright © 2018-2020 Unrud <unrud@outlook.com>
* Copyright © 2023-2023 Henning <github@henning-ullrich.de>
* Copyright © 2023-2024 Matthew Hana <matthew.hana@gmail.com>
* Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
-->
<html lang="en">
<head>
<meta charset="utf-8">
@@ -33,15 +27,8 @@
</section>
<section id="loginscene" class="container hidden">
<div class="infcloudlink-hidden">
<form action="infcloud/" method="get" target="_blank">
<button class="blue" type="submit">Collection content<br>(InfCloud web client)</button>
</form>
</div>
<div class="logocontainer">
<img src="css/logo.svg" alt="Radicale">
<br>
Collection management
</div>
<h1>Sign in</h1>
<br>
@@ -129,8 +116,6 @@
<button type="submit" class="green" data-name="submit">Save</button>
<button type="button" class="red" data-name="cancel">Cancel</button>
</form>
<span class="error hidden" data-name="error"></span>
<br>
</section>
<section id="createcollectionscene" class="container hidden">
@@ -164,8 +149,6 @@
<button type="submit" class="green" data-name="submit">Create</button>
<button type="button" class="red" data-name="cancel">Cancel</button>
</form>
<span class="error hidden" data-name="error"></span>
<br>
</section>
<section id="uploadcollectionscene" class="container hidden">
@@ -189,8 +172,6 @@
<button type="submit" class="green" data-name="submit">Upload</button>
<button type="button" class="red" data-name="close">Close</button>
</form>
<span class="error hidden" data-name="error"></span>
<br>
</section>
<section id="deletecollectionscene" class="container hidden">

24
rights
View file

@@ -1,29 +1,5 @@
# -*- mode: conf -*-
# vim:ft=cfg
# Allow all rights for the Administrator
#[root]
#user: Administrator
#collection: .*
#permissions: RW
# Allow reading principal collection (same as username)
#[principal]
#user: .+
#collection: {user}
#permissions: R
# Allow reading and writing private collection (same as username)
#[private]
#user: .+
#collection: {user}/private/
#permissions: RW
# Allow reading calendars and address books that are direct
# children of the principal collection for other users
#[calendarsReader]
#user: .+
#collection: {user}/[^/]+
#permissions: r
# Rights management file for Radicale - A simple calendar server
#

View file

@@ -1,6 +1,57 @@
[tool:pytest]
addopts = --typeguard-packages=radicale
[tox:tox]
[testenv]
extras = test
deps =
flake8
isort
# mypy installation fails with pypy<3.9
mypy; implementation_name!='pypy' or python_version>='3.9'
types-setuptools
pytest-cov
commands =
flake8 .
isort --check --diff .
# Run mypy if it's installed
python -c 'import importlib.util, subprocess, sys; \
importlib.util.find_spec("mypy") \
and sys.exit(subprocess.run(["mypy", "."]).returncode) \
or print("Skipped: mypy is not installed")'
pytest -r s --cov --cov-report=term --cov-report=xml .
[tool:isort]
known_standard_library = _dummy_thread,_thread,abc,aifc,argparse,array,ast,asynchat,asyncio,asyncore,atexit,audioop,base64,bdb,binascii,binhex,bisect,builtins,bz2,cProfile,calendar,cgi,cgitb,chunk,cmath,cmd,code,codecs,codeop,collections,colorsys,compileall,concurrent,configparser,contextlib,contextvars,copy,copyreg,crypt,csv,ctypes,curses,dataclasses,datetime,dbm,decimal,difflib,dis,distutils,doctest,dummy_threading,email,encodings,ensurepip,enum,errno,faulthandler,fcntl,filecmp,fileinput,fnmatch,formatter,fpectl,fractions,ftplib,functools,gc,getopt,getpass,gettext,glob,grp,gzip,hashlib,heapq,hmac,html,http,imaplib,imghdr,imp,importlib,inspect,io,ipaddress,itertools,json,keyword,lib2to3,linecache,locale,logging,lzma,macpath,mailbox,mailcap,marshal,math,mimetypes,mmap,modulefinder,msilib,msvcrt,multiprocessing,netrc,nis,nntplib,ntpath,numbers,operator,optparse,os,ossaudiodev,parser,pathlib,pdb,pickle,pickletools,pipes,pkgutil,platform,plistlib,poplib,posix,posixpath,pprint,profile,pstats,pty,pwd,py_compile,pyclbr,pydoc,queue,quopri,random,re,readline,reprlib,resource,rlcompleter,runpy,sched,secrets,select,selectors,shelve,shlex,shutil,signal,site,smtpd,smtplib,sndhdr,socket,socketserver,spwd,sqlite3,sre,sre_compile,sre_constants,sre_parse,ssl,stat,statistics,string,stringprep,struct,subprocess,sunau,symbol,symtable,sys,sysconfig,syslog,tabnanny,tarfile,telnetlib,tempfile,termios,test,textwrap,threading,time,timeit,tkinter,token,tokenize,trace,traceback,tracemalloc,tty,turtle,turtledemo,types,typing,unicodedata,unittest,urllib,uu,uuid,venv,warnings,wave,weakref,webbrowser,winreg,winsound,wsgiref,xdrlib,xml,xmlrpc,zipapp,zipfile,zipimport,zlib
known_third_party = defusedxml,passlib,pkg_resources,pytest,vobject
[flake8]
# Only enable default tests (https://github.com/PyCQA/flake8/issues/790#issuecomment-812823398)
# DNE: DOES-NOT-EXIST
select = E,F,W,C90,DNE000
ignore = E121,E123,E126,E226,E24,E704,W503,W504,DNE000,E501,E261
ignore = E121,E123,E126,E226,E24,E704,W503,W504,DNE000,E501
extend-exclude = build
[mypy]
ignore_missing_imports = True
show_error_codes = True
exclude = (^|/)build($|/)
[coverage:run]
branch = True
source = radicale
omit = tests/*,*/tests/*
[coverage:report]
# Regexes for lines to exclude from consideration
exclude_lines =
# Have to re-enable the standard pragma
pragma: no cover
# Don't complain if tests don't hit defensive assertion code:
raise AssertionError
raise NotImplementedError
# Don't complain if non-runnable code isn't run:
if __name__ == .__main__.:

View file

@@ -1,62 +0,0 @@
[tool:pytest]
[tox:tox]
min_version = 4.0
envlist = py, flake8, isort, mypy
[testenv]
extras =
test
deps =
pytest
pytest-cov
commands = pytest -r s --cov --cov-report=term --cov-report=xml .
[testenv:flake8]
deps = flake8==7.1.0
commands = flake8 .
skip_install = True
[testenv:isort]
deps = isort==5.13.2
commands = isort --check --diff .
skip_install = True
[testenv:mypy]
deps = mypy==1.11.0
commands = mypy --install-types --non-interactive .
skip_install = True
[tool:isort]
known_standard_library = _dummy_thread,_thread,abc,aifc,argparse,array,ast,asynchat,asyncio,asyncore,atexit,audioop,base64,bdb,binascii,binhex,bisect,builtins,bz2,cProfile,calendar,cgi,cgitb,chunk,cmath,cmd,code,codecs,codeop,collections,colorsys,compileall,concurrent,configparser,contextlib,contextvars,copy,copyreg,crypt,csv,ctypes,curses,dataclasses,datetime,dbm,decimal,difflib,dis,distutils,doctest,dummy_threading,email,encodings,ensurepip,enum,errno,faulthandler,fcntl,filecmp,fileinput,fnmatch,formatter,fpectl,fractions,ftplib,functools,gc,getopt,getpass,gettext,glob,grp,gzip,hashlib,heapq,hmac,html,http,imaplib,imghdr,imp,importlib,inspect,io,ipaddress,itertools,json,keyword,lib2to3,linecache,locale,logging,lzma,macpath,mailbox,mailcap,marshal,math,mimetypes,mmap,modulefinder,msilib,msvcrt,multiprocessing,netrc,nis,nntplib,ntpath,numbers,operator,optparse,os,ossaudiodev,parser,pathlib,pdb,pickle,pickletools,pipes,pkgutil,platform,plistlib,poplib,posix,posixpath,pprint,profile,pstats,pty,pwd,py_compile,pyclbr,pydoc,queue,quopri,random,re,readline,reprlib,resource,rlcompleter,runpy,sched,secrets,select,selectors,shelve,shlex,shutil,signal,site,smtpd,smtplib,sndhdr,socket,socketserver,spwd,sqlite3,sre,sre_compile,sre_constants,sre_parse,ssl,stat,statistics,string,stringprep,struct,subprocess,sunau,symbol,symtable,sys,sysconfig,syslog,tabnanny,tarfile,telnetlib,tempfile,termios,test,textwrap,threading,time,timeit,tkinter,token,tokenize,trace,traceback,tracemalloc,tty,turtle,turtledemo,types,typing,unicodedata,unittest,urllib,uu,uuid,venv,warnings,wave,weakref,webbrowser,winreg,winsound,wsgiref,xdrlib,xml,xmlrpc,zipapp,zipfile,zipimport,zlib
known_third_party = defusedxml,passlib,pkg_resources,pytest,vobject
[flake8]
# Only enable default tests (https://github.com/PyCQA/flake8/issues/790#issuecomment-812823398)
# DNE: DOES-NOT-EXIST
select = E,F,W,C90,DNE000
ignore = E121,E123,E126,E226,E24,E704,W503,W504,DNE000,E501
extend-exclude = build
[mypy]
ignore_missing_imports = True
show_error_codes = True
exclude = (^|/)build($|/)
[coverage:run]
branch = True
source = radicale
omit = tests/*,*/tests/*
[coverage:report]
# Regexes for lines to exclude from consideration
exclude_lines =
# Have to re-enable the standard pragma
pragma: no cover
# Don't complain if tests don't hit defensive assertion code:
raise AssertionError
raise NotImplementedError
# Don't complain if non-runnable code isn't run:
if __name__ == .__main__.:
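# Illustrative invocations for the tox.ini above (assumed, not from the diff):
#   tox -e py                 # run the test environment with coverage
#   tox -e flake8,isort,mypy  # run the lint and type-check environments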

View file

@@ -1,7 +1,6 @@
# This file is part of Radicale - CalDAV and CardDAV server
# Copyright © 2009-2017 Guillaume Ayoub
# Copyright © 2017-2018 Unrud <unrud@outlook.com>
# Copyright © 2024-2025 Peter Bieringer <pb@bieringer.de>
#
# This library is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -20,7 +19,7 @@ from setuptools import find_packages, setup
# When the version is updated, a new section in the CHANGELOG.md file must be
# added too.
VERSION = "3.5.1.dev"
VERSION = "3.dev"
with open("README.md", encoding="utf-8") as f:
long_description = f.read()
@@ -37,12 +36,11 @@ web_files = ["web/internal_data/css/icon.png",
"web/internal_data/index.html"]
install_requires = ["defusedxml", "passlib", "vobject>=0.9.6",
"python-dateutil>=2.7.3",
"pika>=1.1.0",
"requests",
]
"setuptools; python_version<'3.9'"]
bcrypt_requires = ["bcrypt"]
ldap_requires = ["ldap3"]
test_requires = ["pytest>=7", "waitress", *bcrypt_requires]
test_requires = ["pytest>=7", "typeguard<4.3", "waitress", *bcrypt_requires]
setup(
name="Radicale",
@@ -60,9 +58,9 @@ setup(
package_data={"radicale": [*web_files, "py.typed"]},
entry_points={"console_scripts": ["radicale = radicale.__main__:run"]},
install_requires=install_requires,
extras_require={"test": test_requires, "bcrypt": bcrypt_requires, "ldap": ldap_requires},
extras_require={"test": test_requires, "bcrypt": bcrypt_requires},
keywords=["calendar", "addressbook", "CalDAV", "CardDAV"],
python_requires=">=3.9.0",
python_requires=">=3.8.0",
classifiers=[
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
@@ -72,11 +70,11 @@ setup(
"License :: OSI Approved :: GNU General Public License (GPL)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Office/Business :: Groupware"])