From: Christian Heller
Date: Sat, 29 Aug 2020 17:12:36 +0000 (+0200)
Subject: Use scrape.py to update daily infections instead of unreliable CSV URL.
X-Git-Url: https://plomlompom.com/repos/?a=commitdiff_plain;h=c284e595f130dc4ca3354d474932c59c305e8149;p=berlin-corona-table

Use scrape.py to update daily infections instead of unreliable CSV URL.
---

diff --git a/README.txt b/README.txt
index 3385d6d..2887eee 100644
--- a/README.txt
+++ b/README.txt
@@ -2,7 +2,7 @@ Daily updated table of development of Berlin's Corona virus infections.
 
 How this works:
 
-Each day, the Berlin health ministry publishes by-district data of the day's
+Each day, the Berlin health ministry publishes new by-district data of the day's
 registered new Corona infections at [1]. ./update.sh crawls this and appends
 the day's data as a single line to ./daily_infections_table.txt, then calls
 ./enhance_table.py which outputs an enhanced version of the data to
@@ -11,7 +11,7 @@ A systemd timer whose files are provided as ./berlin-corona-table.service and
 ./berlin-corona-table.timer calls ./update.sh once per day, when the new daily
 data are expected to be found at [1].
 
-[1] https://www.berlin.de/lageso/_assets/gesundheit/publikationen/corona/bezirkstabelle.csv
+[1] https://www.berlin.de/sen/gpg/service/presse/2020/
 [2] https://plomlompom.com/berlin_corona.txt & https://plomlompom.com/berlin_corona.html
 
 Set-up:
diff --git a/update.sh b/update.sh
index 008f716..fee5ef5 100755
--- a/update.sh
+++ b/update.sh
@@ -1,21 +1,10 @@
 #!/bin/sh
 set -e
 
-CSV_URL=https://www.berlin.de/lageso/_assets/gesundheit/publikationen/corona/bezirkstabelle.csv
 table_path=daily_infections_table.txt
 
-# If we don't have a table file yet, we need to provide its header.
-header=" CW FK Li MH Mi Ne Pa Re Sp SZ TS TK sum"
-if [ ! -f "${table_path}" ]; then
-  echo "${header}" > "${table_path}"
-fi
-
-# Parse Lageso day table of new infections by district into new line for history table.
-today="$(date +%Y-%m-%d)"
-curl "${CSV_URL}" \
-| awk 'BEGIN { FS=";"; ORS=""; print "'${today}'" }; '\
-'!/^"Bezirk"/ { printf "%4d", $3 }; '\
-'END { printf "\n" }' >> "${table_path}"
+# Re-build infections table.
+./scrape.py > "${table_path}"
 
 # Write enhanced table output to directory served by web server.
 #
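The README describes a systemd timer driving ./update.sh once per day. The repo ships ./berlin-corona-table.service and ./berlin-corona-table.timer, whose contents are not shown in this diff; the following is only a hypothetical sketch of such a service/timer pair (paths and the trigger time are placeholder assumptions):

```ini
# berlin-corona-table.service (hypothetical sketch)
[Unit]
Description=Update berlin-corona-table

[Service]
Type=oneshot
WorkingDirectory=/path/to/berlin-corona-table
ExecStart=/path/to/berlin-corona-table/update.sh

# berlin-corona-table.timer (hypothetical sketch)
[Unit]
Description=Daily berlin-corona-table update

[Timer]
# Fire once a day, around when the new data are expected at [1].
OnCalendar=*-*-* 17:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enabling the .timer unit (rather than the .service) makes systemd start the service on the schedule; Persistent=true catches up on a missed run after downtime.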
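For context on what the replaced pipeline produced: the removed awk command built one history-table line per day, an ISO date followed by each district's count right-aligned in four characters, matching the header " CW FK Li MH Mi Ne Pa Re Sp SZ TS TK sum". A minimal Python sketch of that line-building step (the function and district ordering here are illustrative assumptions; the actual scrape.py additionally fetches the counts from the press-release pages at [1]):

```python
from datetime import date

# District column order assumed from the removed table header:
#  CW FK Li MH Mi Ne Pa Re Sp SZ TS TK sum
DISTRICTS = ["CW", "FK", "Li", "MH", "Mi", "Ne",
             "Pa", "Re", "Sp", "SZ", "TS", "TK"]


def table_line(day: date, counts: dict) -> str:
    """Format one day's new infections as a history-table line.

    Mirrors the removed awk command: ISO date, then each district's
    count right-aligned in 4 characters, then their sum.
    """
    values = [counts[d] for d in DISTRICTS]
    cells = "".join("%4d" % v for v in values + [sum(values)])
    return day.isoformat() + cells
```

Each call yields one fixed-width line suitable for appending to daily_infections_table.txt.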