Release SecureDrop 1.0.0 #4724

Closed
22 of 23 tasks
redshiftzero opened this issue Aug 30, 2019 · 12 comments

redshiftzero commented Aug 30, 2019

This is a tracking issue for the upcoming release of SecureDrop 1.0.0 - tasks may get added or modified.

String and feature freeze: 2019-08-29 (1700 PDT)
String comment period: 2019-08-29 (1700 PDT) to 2019-09-02 (2000 PDT)
Feature freeze: 2019-08-30 (1700 PDT) (release branch will be cut AM Eastern time on September 3rd)
Translation period: 2019-09-03 (1700 PDT) to 2019-09-15 (1700 PDT)
Pre-release announcement: 2019-09-10
Translation freeze: 2019-09-15 (1700 PDT)
Release date: 2019-09-17

Release manager: @kushaldas
Deputy release manager: @emkll
Localization manager: @rmol
Deputy localization manager: @redshiftzero

SecureDrop maintainers and testers: As you QA 1.0.0, please report back your testing results as comments on this ticket. File GitHub issues for any problems found, tag them "QA: Release", and associate them with the 1.0.0 milestone for tracking (or ask a maintainer to do so).

Test Debian packages will be posted on https://apt-test.freedom.press, signed with the test key. An Ansible playbook testing the upgrade path is here.

QA Matrix for 1.0.0

Test Plan for 1.0.0

Prepare release candidate (1.0.0~rc1)

Other pre-release tasks

  • Prepare and distribute pre-release messaging - @eloquence

Prepare release candidate (1.0.0~rc2)

Note: For SecureDrop 1.0.0, we will cut at least two release candidates. Additional release candidates may follow if issues are found in rc2.

Prepare release candidate (1.0.0~rc3)

  • Prepare 1.0.0-rc3 release changelog - @emkll
  • Prepare 1.0.0~rc3 - @kushaldas
  • Build debs and put up 1.0.0~rc3 on test apt server - @kushaldas

After each test, please update the QA matrix and post details for Basic Server Testing, Application Acceptance Testing and 1.0.0-specific testing below in comments to this ticket.

Final release

  • Ensure builder in release branch is updated and/or update builder image - @emkll
  • Merge final translations - @rmol
  • Push signed tag - @emkll
  • Build final Debian packages for 1.0.0 - @conorsch
  • Upload package build logs to wiki - @conorsch
  • Upload Debian packages to apt QA server - @conorsch
  • Pre-Flight: Test install and upgrade (both cron-apt on Trusty, and Ansible on Xenial) of 1.0.0 works w/ prod repo debs, test updater logic in Tails - @zenmonkeykstop, @emkll, @rmol
  • Prepare and distribute release messaging - @eloquence

Post release

redshiftzero added this to the 1.0.0 milestone Aug 30, 2019
emkll pinned this issue Sep 4, 2019

emkll commented Sep 6, 2019

Clean install - VMs with V3 onion URLs (Complete)

Environment

  • Install target: VMs
  • Tails version: 3.16
  • Test Scenario: Clean install
  • SSH over Tor: Yes
  • Onion service version: V3
  • Release candidate: RC2
  • General notes:

Basic Server Testing

  • I can access both the source and journalist interfaces
  • I can SSH into both machines over Tor
  • AppArmor is loaded on app
    • 0 processes are running unconfined
  • AppArmor is loaded on mon
    • 0 processes are running unconfined
  • Both servers are running grsec kernels
  • iptables rules loaded
  • OSSEC emails begin to flow after install
  • OSSEC emails are decrypted to correct key and I am able to decrypt them
  • QA Matrix checks pass

Command Line User Generation

  • Can successfully add admin user and login

Administration

  • I have backed up and successfully restored the app server following the documentation here: https://docs.securedrop.org/en/latest/backup_and_restore.html
  • If doing upgrade testing, make a backup on 0.14.0 and restore this backup on 1.0.0
  • "Send Test OSSEC Alert" button in the journalist triggers an OSSEC alert and an email is sent.

Application Acceptance Testing

Source Interface

Landing page base cases
  • JS warning bar does not appear when using Security Slider High
  • JS warning bar does appear when using Security Slider Low
First submission base cases
  • On generate page, refreshing codename produces a new 7-word codename
  • On submit page, empty submissions produce flashed message
  • On submit page, short message submitted successfully
  • On submit page, file greater than 500 MB produces "The connection was reset" in Tor Browser quickly before the entire file is uploaded
  • On submit page, file less than 500 MB submitted successfully
Returning source base cases
  • Nonexistent codename cannot log in
  • Empty codename cannot log in
  • Legitimate codename can log in
  • Returning user can view journalist replies - need to log into journalist interface to test

Journalist Interface

Login base cases
  • Can log in with 2FA tokens
  • Incorrect password cannot log in
  • Invalid 2FA token cannot log in
  • Immediate 2FA token reuse cannot log in
Index base cases
  • Filter by codename works
  • Starring and unstarring works
  • Clicking select all selects all submissions
  • Selecting all and clicking "Download" works
Individual source page
  • You can submit a reply; a flashed message and a new row appear
  • You cannot submit an empty reply
  • Clicking "Delete Source And Submissions" deletes the source and docs
  • You can click on a document and successfully decrypt it using the application private key

Basic Tails Testing

Updater GUI

After updating to this release candidate and running securedrop-admin tailsconfig

  • The Updater GUI appears on boot
  • Updating occurs without issue

1.0.0-specific changes

Note that a single tester is not expected to cover every one of the Tor onion services scenarios; please just indicate which scenarios you covered in your comment on the release ticket and in the row at the end of the QA matrix (please fill in the QA matrix as you begin QA so that work is not duplicated).

From a 1.0.0 install:

Tor onion services: upgrade to v2

  • Do not rerun ./securedrop-admin sdconfig. Using the same site-specific file as before your upgrade to 1.0.0, run ./securedrop-admin install. v2 should still be enabled, and v3 should not be.

Tor onion services: upgrade to v2+v3

Precondition:

  • Save the site-specific from v2 only. This will be used in a test towards the end of this section.
  • Perform a backup on v2.
  • Rerun ./securedrop-admin sdconfig to enable v2 and v3 onion services, then run ./securedrop-admin install. Then run ./securedrop-admin tailsconfig and check that the source and journalist desktop shortcuts have working v3 onion addresses.
  • Now disable SSH over Tor, rerun securedrop-admin install and ./securedrop-admin tailsconfig:
    • Verify that ~/.ssh/config contains IP addresses rather than Onion service addresses for the app and mon hosts
    • Verify that ssh app and ssh mon work as expected.
  • Use make self-signed-https-certs to generate self-signed certificates for testing, and run ./securedrop-admin sdconfig enabling HTTPS:
    • Verify that a warning is shown to the user indicating that they should update their certificate prior to sharing their v3 onion URL with users.
  • Test multi-admin behavior (see the sketch after this list). Conduct this test step after v3 is enabled on the server:
    • Back up site-specific, and copy the version from before the upgrade into place instead. Re-run ./securedrop-admin install, and verify that it fails with a user-friendly message, due to v3_onion_services=False.
    • Restore the v3 site-specific and move the tor_v3_keys.json file out of install_files/ansible-base. Re-run ./securedrop-admin install and verify that it fails with a message due to the missing keys file.
  • Restore the backup from v2. The v3 onions should not be disabled.
  • Now run the backup and restore again to a new v2+v3 install. The v3 onions should be enabled.
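
The multi-admin failure modes above reduce to a consistency check between the admin workstation's local files and the state of the instance. A minimal sketch of that kind of preflight validation, with hypothetical names (the real checks live in securedrop-admin):

import json
import os

def validate_v3_config(site_specific: dict, ansible_base: str) -> None:
    # Hypothetical sketch of the preflight checks described above; these
    # names are invented for illustration.
    if not site_specific.get("v3_onion_services", False):
        # A stale site-specific (v3_onion_services=False) must not be able
        # to downgrade an instance that already serves v3 onions.
        raise SystemExit(
            "The server has v3 onion services enabled, but your "
            "site-specific file has v3_onion_services=False. Update your "
            "configuration before re-running ./securedrop-admin install."
        )
    keys_file = os.path.join(ansible_base, "tor_v3_keys.json")
    if not os.path.exists(keys_file):
        # Without the saved v3 keys, the install would regenerate the
        # onion addresses, so abort instead.
        raise SystemExit(
            keys_file + " is missing; copy it from the admin workstation "
            "that enabled v3, then re-run ./securedrop-admin install."
        )
    with open(keys_file) as f:
        json.load(f)  # the keys file must at least be valid JSON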

Tor onion service: v3 only, no HTTPS

❗ Note: I don't recall previous install runs throwing a timeout when running ./securedrop-admin install to disable SSH over Tor, but this run produced:

RUNNING HANDLER [tor-hidden-services : Waiting for SSH connection (slow)...] ***********************************
fatal: [mon -> localhost]: FAILED! => {"changed": false, "elapsed": 300, "msg": "Timeout when waiting for search string OpenSSH in myawesomev3onionurl1.onion:22"}
fatal: [app -> localhost]: FAILED! => {"changed": false, "elapsed": 301, "msg": "Timeout when waiting for search string OpenSSH in myawesomev3onionurl2.onion:22"}

Tor onion service: adding v3 interfaces with SSH over LAN

  • From a v2-only instance using SSH over LAN, upgrade to v3 only. You should continue to be able to SSH over LAN, and the v2 and v3 source and journalist interfaces should be available.

Deletion functionality

Testing detection and correction of disconnected submissions

Visit the source interface and send two messages. First we'll test a disconnected database record.

In your www-data shell:

cd /var/lib/securedrop/store
ls -laR

You should see the two message files. Remove one with rm.

cd /var/www/securedrop
  • ./manage.py check-disconnected-db-submissions should report There are submissions in the database with no corresponding files. Run "manage.py list-disconnected-db-submissions" for details.

  • ./manage.py list-disconnected-db-submissions should list the ID of the deleted submission, e.g. 2.

  • ./manage.py delete-disconnected-db-submissions should prompt you with Enter 'y' to delete all submissions missing files: -- reply y and you should see Removing submission IDs [2]... (the ID may differ).

Now we'll delete the remaining database record and verify that its disconnected file is detected. Still in your www-data shell:

sqlite3 /var/lib/securedrop/db.sqlite

Delete the submission record for the remaining message (substitute your filename):

delete from submissions where filename = '1-exhausted_overmantel-msg.gpg';
  • ./manage.py check-disconnected-fs-submissions should report There are files in the submission area with no corresponding records in the database. Run "manage.py list-disconnected-fs-submissions" for details..
  • ./manage.py list-disconnected-fs-submissions should show a list like:
    /var/lib/securedrop/store/B3A5GPU4OHPQK736R76HKJUP5VONIOMKZLXK77GPTGNW7EJ63AY5YBX27P3DB2X4DZBXPX3LGBBXAJZYG3HQRHE4B6UE5YYBPGDYZOA=/1-exhausted_overmantel-msg.gpg
  • ./manage.py delete-disconnected-fs-submissions should prompt you to delete that file. Do so and it should be deleted.
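
Both directions of the checks above reduce to a set comparison between the submissions table and the files in the store. A minimal sketch of the idea, assuming SecureDrop's submissions/sources schema (the real logic lives in manage.py):

import os
import sqlite3

STORE = "/var/lib/securedrop/store"

def disconnect_report(db_path="/var/lib/securedrop/db.sqlite"):
    conn = sqlite3.connect(db_path)
    # Each submission's file lives at <store>/<source filesystem_id>/<filename>.
    rows = conn.execute(
        "SELECT s.id, src.filesystem_id, s.filename "
        "FROM submissions s JOIN sources src ON s.source_id = src.id"
    ).fetchall()
    db_paths = {os.path.join(STORE, fsid, fn): sid for sid, fsid, fn in rows}

    fs_paths = set()
    for dirpath, _, filenames in os.walk(STORE):
        fs_paths.update(os.path.join(dirpath, fn) for fn in filenames)

    # Records with no file: check/list/delete-disconnected-db-submissions.
    disconnected_db = sorted(sid for path, sid in db_paths.items()
                             if path not in fs_paths)
    # Files with no record: check/list/delete-disconnected-fs-submissions.
    disconnected_fs = sorted(fs_paths - set(db_paths))
    return disconnected_db, disconnected_fs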

Testing automatic requeuing of interrupted deletions

Establish two SSH connections to the app server. In one, become root with sudo su - and in the other become www-data with sudo -u www-data bash. In the www-data shell:

Activate the securedrop-app-code virtualenv: . /opt/venvs/securedrop-app-code/bin/activate

cd /var/www/securedrop

Create a big file that will take a while to delete: dd if=/dev/zero of=/var/lib/securedrop/store/bigfile bs=1M count=1000

Submit a job to delete it:

python3
>>> import rm        # SecureDrop's secure deletion helpers
>>> import worker    # sets up the app's rq work queue
>>> q = worker.create_queue()
>>> q.enqueue(rm.secure_delete, "/var/lib/securedrop/store/bigfile")

Exit Python.

In the root shell:
Reboot, then reconnect.

Look at the rqrequeue log: less /var/log/securedrop_worker/rqrequeue.err -- at the end you should see lines like this:

2019-08-08 17:31:01,118 INFO Running every 60 seconds.
2019-08-08 17:31:01,141 INFO Requeuing job <Job 1082e71f-7581-448c-b84b-027e55b4ef8e: rm.secure_delete('/var/lib/securedrop/store/bigfile')>
2019-08-08 17:32:01,192 INFO Skipping job 1082e71f-7581-448c-b84b-027e55b4ef8e, which is already being run by worker rq:worker:6a6b548310f948e291fa954743b8094f

That indicates the interrupted job was found and restarted, but was left alone at the next check because it was already running. The job should run to completion, /var/lib/securedrop/store/bigfile should be deleted, and the rqrequeue log should start saying:

2019-08-08 17:33:01,253 INFO No interrupted jobs found in started job registry.
  • Verified the requeue behavior
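
For reference, the behavior verified above amounts to scanning rq's started-job registry every minute and re-enqueuing any job that no live worker claims. A rough sketch against the public rq API (this is not SecureDrop's actual rqrequeue script):

import logging
import time

from redis import Redis
from rq import Queue, Worker
from rq.registry import StartedJobRegistry

logging.basicConfig(level=logging.INFO)

def requeue_interrupted_jobs(conn):
    queue = Queue("default", connection=conn)
    registry = StartedJobRegistry(queue.name, connection=conn)
    job_ids = registry.get_job_ids()
    if not job_ids:
        logging.info("No interrupted jobs found in started job registry.")
        return
    # Jobs claimed by a live worker are fine; anything else was interrupted
    # (e.g. by the reboot) and should be run again.
    active = {w.get_current_job_id() for w in Worker.all(connection=conn)}
    for job_id in job_ids:
        if job_id in active:
            logging.info("Skipping job %s, which is already being run", job_id)
            continue
        job = queue.fetch_job(job_id)
        if job is not None:
            logging.info("Requeuing job %s", job_id)
            queue.enqueue_job(job)

if __name__ == "__main__":
    conn = Redis()
    while True:  # "Running every 60 seconds."
        requeue_interrupted_jobs(conn)
        time.sleep(60)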

Testing OSSEC reporting of disconnects

Create a file under /var/lib/securedrop/store with touch /var/lib/securedrop/store/testfile. If you don't feel like waiting a day for the OSSEC report, you can edit /var/ossec/etc/ossec.conf, look for check-disconnect, and reduce the <frequency>, then service ossec restart.

  • An OSSEC alert was sent indicating a disconnect had occurred.
    ❗ The alert is Level 1; perhaps we should consider changing the level?
  • I otherwise got no manage.py notifications about this functionality.

Miscellaneous other changes

  • Python 2 should not be used anywhere on the system. Inspect the version of python that is used by running ps -aux | grep python and verify that /opt/venvs/securedrop-app-code/bin/python is used instead of /usr/bin/python.

❗ supervisor uses system python but that's because it's managed upstream:

root      1295  0.1  2.5  59068 12416 ?        Ss   15:47   0:10 /usr/bin/python /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf
www-data  1502  0.0  3.1  70864 15548 ?        S    15:47   0:02 /opt/venvs/securedrop-app-code/bin/python /opt/venvs/securedrop-app-code/bin/rqworker
www-data  1503  0.1  2.9  70884 14452 ?        S    15:47   0:10 /opt/venvs/securedrop-app-code/bin/python /var/www/securedrop/scripts/rqrequeue --interval 60
vagrant  10038  0.0  0.1  11284   896 pts/1    S+   17:55   0:00 grep --color=auto python

@eloquence

WIP pre-release messaging here, first comments welcome: https://docs.google.com/document/d/1Rc7Z-WsFZUWjTaDGta2_lz4MBo51Tb4Z49GQPPOSM4I/edit#


kushaldas commented Sep 10, 2019

QA plan

  • NUC5s
  • NUC7s
  • Mac Minis
  • 1U servers in SF

1.0.0 QA Checklist

For both upgrades and fresh installs, here is a list of functionality that requires testing. You can use this for copy/pasting into your QA report. Feel free to edit this message to update the plan as appropriate.

If you have submitted a QA report already for a 1.0.0 release candidate with successful basic server testing and application acceptance testing sections, then you can skip these sections in subsequent reports, unless otherwise indicated by the Release Manager. This is to ensure that you focus your QA effort on the 1.0.0-specific changes as well as changes since the previous release candidate.

Environment

  • Install target: NUC5
  • Tails version: 3.16
  • Test Scenario:
  • SSH over Tor: No
  • Onion service version: v2 + v3
  • Release candidate: rc2
  • General notes:

Basic Server Testing

  • I can access both the source and journalist interfaces
  • I can SSH into both machines over Tor
  • AppArmor is loaded on app
    • 0 processes are running unconfined
  • AppArmor is loaded on mon
    • 0 processes are running unconfined
  • Both servers are running grsec kernels
  • iptables rules loaded
  • OSSEC emails begin to flow after install
  • OSSEC emails are decrypted to correct key and I am able to decrypt them
  • QA Matrix checks pass

Command Line User Generation

  • Can successfully add admin user and login

Administration

  • I have backed up and successfully restored the app server following the documentation here: https://docs.securedrop.org/en/latest/backup_and_restore.html
  • If doing upgrade testing, make a backup on 0.14.0 and restore this backup on 1.0.0
  • "Send Test OSSEC Alert" button in the journalist triggers an OSSEC alert and an email is sent.

Application Acceptance Testing

Source Interface

Landing page base cases
  • JS warning bar does not appear when using Security Slider High
  • JS warning bar does appear when using Security Slider Low
First submission base cases
  • On generate page, refreshing codename produces a new 7-word codename
  • On submit page, empty submissions produce flashed message
  • On submit page, short message submitted successfully
  • On submit page, file greater than 500 MB produces "The connection was reset" in Tor Browser quickly before the entire file is uploaded
  • On submit page, file less than 500 MB submitted successfully
Returning source base cases
  • Nonexistent codename cannot log in
  • Empty codename cannot log in
  • Legitimate codename can log in
  • Returning user can view journalist replies - need to log into journalist interface to test

Journalist Interface

Login base cases
  • Can log in with 2FA tokens
  • Incorrect password cannot log in
  • Invalid 2FA token cannot log in
  • Immediate 2FA token reuse cannot log in
Index base cases
  • Filter by codename works
  • Starring and unstarring works
  • Clicking select all selects all submissions
  • Selecting all and clicking "Download" works
Individual source page
  • You can submit a reply; a flashed message and a new row appear
  • You cannot submit an empty reply
  • Clicking "Delete Source And Submissions" deletes the source and docs
  • You can click on a document and successfully decrypt it using the application private key

Basic Tails Testing

Updater GUI

After updating to this release candidate and running securedrop-admin tailsconfig

  • The Updater GUI appears on boot
  • Updating occurs without issue

1.0.0-specific changes

Note that a single tester is not expected to cover every one of the Tor onion services scenarios; please just indicate which scenarios you covered in your comment on the release ticket and in the row at the end of the QA matrix (please fill in the QA matrix as you begin QA so that work is not duplicated).

From a 1.0.0 install:

Tor onion services: upgrade to v2

  • Do not rerun ./securedrop-admin sdconfig. Using the same site-specific file as before your upgrade to 1.0.0, run ./securedrop-admin install. v2 should still be enabled, and v3 should not be.

Tor onion services: upgrade to v2+v3

Precondition:

  • Save the site-specific from v2 only. This will be used in a test towards the end of this section.
  • Perform a backup on v2.
  • Rerun ./securedrop-admin sdconfig to enable v2 and v3 onion services, then run ./securedrop-admin install. Then run ./securedrop-admin tailsconfig and check that the source and journalist desktop shortcuts have working v3 onion addresses.
  • Now disable SSH over Tor, rerun ./securedrop-admin install and ./securedrop-admin tailsconfig:
    • Verify that ~/.ssh/config contains IP addresses rather than Onion service addresses for the app and mon hosts
    • Verify that ssh app and ssh mon work as expected.
  • Use make self-signed-https-certs to generate self-signed certificates for testing, and run ./securedrop-admin sdconfig enabling HTTPS:
    • Verify that a warning is shown to the user indicating that they should update their certificate prior to sharing their v3 onion URL with users.
  • Test multi-admin behavior. Conduct this test step after v3 is enabled on the server:
    • Back up site-specific, and copy the version from before the upgrade into place instead. Re-run ./securedrop-admin install, and verify that it fails with a user-friendly message, due to v3_onion_services=False.
    • Restore the v3 site-specific and move the tor_v3_keys.json file out of install_files/ansible-base. Re-run ./securedrop-admin install and verify that it fails with a message due to the missing keys file.
  • Restore the backup from v2. The v3 onions should not be disabled.
  • Now run the backup and restore again to a new v2+v3 install. The v3 onions should be enabled.

Tor onion service: v3 only, no HTTPS

Tor onion service: adding v3 interfaces with SSH over LAN

  • From a v2-only instance using SSH over LAN, upgrade to v3 only. You should continue to be able to SSH over LAN, and the v2 and v3 source and journalist interfaces should be available.

Deletion functionality

Testing detection and correction of disconnected submissions

Visit the source interface and send two messages. First we'll test a disconnected database record.

In your www-data shell:

cd /var/lib/securedrop/store
ls -laR

You should see the two message files. Remove one with rm.

cd /var/www/securedrop
  • ./manage.py check-disconnected-db-submissions should report There are submissions in the database with no corresponding files. Run "manage.py list-disconnected-db-submissions" for details.

  • ./manage.py list-disconnected-db-submissions should list the ID of the deleted submission, e.g. 2.

  • ./manage.py delete-disconnected-db-submissions should prompt you with Enter 'y' to delete all submissions missing files: -- reply y and you should see Removing submission IDs [2]... (the ID may differ).

Now we'll delete the remaining database record and verify that its disconnected file is detected. Still in your www-data shell:

sqlite3 /var/lib/securedrop/db.sqlite

Delete the submission record for the remaining message (substitute your filename):

delete from submissions where filename = '1-exhausted_overmantel-msg.gpg';
  • ./manage.py check-disconnected-fs-submissions should report There are files in the submission area with no corresponding records in the database. Run "manage.py list-disconnected-fs-submissions" for details..
  • ./manage.py list-disconnected-fs-submissions should show a list like:
    /var/lib/securedrop/store/B3A5GPU4OHPQK736R76HKJUP5VONIOMKZLXK77GPTGNW7EJ63AY5YBX27P3DB2X4DZBXPX3LGBBXAJZYG3HQRHE4B6UE5YYBPGDYZOA=/1-exhausted_overmantel-msg.gpg
  • ./manage.py delete-disconnected-fs-submissions should prompt you to delete that file. Do so and it should be deleted.

Testing automatic requeuing of interrupted deletions

Establish two SSH connections to the app server. In one, become root with sudo su - and in the other become www-data with sudo -u www-data bash. In the www-data shell:

Activate the securedrop-app-code virtualenv: . /opt/venvs/securedrop-app-code/bin/activate

cd /var/www/securedrop

Create a big file that will take a while to delete: dd if=/dev/zero of=/var/lib/securedrop/store/bigfile bs=1M count=1000

Submit a job to delete it:

python3
>>> import rm
>>> import worker
>>> q = worker.create_queue()
>>> q.enqueue(rm.secure_delete, "/var/lib/securedrop/store/bigfile")

Exit Python.

In the root shell:
Reboot, then reconnect.

Look at the rqrequeue log: less /var/log/securedrop_worker/rqrequeue.err -- at the end you should see lines like this:

2019-08-08 17:31:01,118 INFO Running every 60 seconds.
2019-08-08 17:31:01,141 INFO Requeuing job <Job 1082e71f-7581-448c-b84b-027e55b4ef8e: rm.secure_delete('/var/lib/securedrop/store/bigfile')>
2019-08-08 17:32:01,192 INFO Skipping job 1082e71f-7581-448c-b84b-027e55b4ef8e, which is already being run by worker rq:worker:6a6b548310f948e291fa954743b8094f

That indicates the interrupted job was found and restarted, but was left alone at the next check because it was already running. The job should run to completion, /var/lib/securedrop/store/bigfile should be deleted, and the rqrequeue log should start saying:

2019-08-08 17:33:01,253 INFO No interrupted jobs found in started job registry.
  • Verified the requeue behavior

Testing OSSEC reporting of disconnects

Create a file under /var/lib/securedrop/store with touch /var/lib/securedrop/store/testfile. If you don't feel like waiting a day for the OSSEC report, you can edit /var/ossec/etc/ossec.conf, look for check-disconnect, and reduce the <frequency>, then service ossec restart.

  • An OSSEC alert was sent indicating a disconnect had occurred.
  • I otherwise got no manage.py notifications about this functionality.

Miscellaneous other changes

  • Python 2 should not be used anywhere on the system. Inspect the version of python that is used by running ps -aux | grep python and verify that /opt/venvs/securedrop-app-code/bin/python is used instead of /usr/bin/python.
  • Journalist notifications continue to work as expected on 1.0.0.
  • Check that both app and mon servers are running Tor version 0.4.x (Move to tor 0.4.x release series #4658).
  • Log in as a journalist, then reset that journalist's password via another admin account; this should invalidate the journalist's session, forcing the journalist to log in again (Invalidate Session When Admin Resets Journalist Password #4679)
  • There are no linux-image generic packages installed (non-grsec kernels) (apt list --installed | grep linux-image) (Remove old kernels as part of common role and not via grsecurity role #4641)
  • Shortly after uploading a file via the Source Interface, a checksum is added to the submissions table (managed via rq); a spot-check is sketched below
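
One way to run that spot-check, assuming the submissions table gains a checksum column holding values like sha256:<hexdigest> (a sketch under those assumptions, not an official verification procedure):

import hashlib
import os
import sqlite3

STORE = "/var/lib/securedrop/store"

conn = sqlite3.connect("/var/lib/securedrop/db.sqlite")
# Grab the most recent submission and its stored checksum.
fsid, filename, checksum = conn.execute(
    "SELECT src.filesystem_id, s.filename, s.checksum "
    "FROM submissions s JOIN sources src ON s.source_id = src.id "
    "ORDER BY s.id DESC LIMIT 1"
).fetchone()

with open(os.path.join(STORE, fsid, filename), "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("stored:  ", checksum)
print("computed:", "sha256:" + digest)  # the two should match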

Preflight

  • Ensure the builder image is up-to-date on release day

These tests should be performed the day of release prior to live debian packages on apt.freedom.press

Basic testing

  • Install or upgrade occurs without error
  • Source interface is available and version string indicates it is 1.0.0
  • A message can be successfully submitted

Tails

  • The updater GUI appears on boot
  • The update successfully occurs to 1.0.0
  • After reboot, updater GUI no longer appears

@redshiftzero

1.0.0 QA Checklist

Environment

  • Tails version: 4.0-beta2
  • Test Scenario: Fresh install on prod VMs
  • SSH over Tor: On
  • Onion service version: v2 only
  • Release candidate: 1.0.0-rc2

Basic Server Testing

  • I can access both the source and journalist interfaces
  • I can SSH into both machines over Tor - DID NOT TEST (did most testing via console)
  • AppArmor is loaded on app
    • 0 processes are running unconfined
  • AppArmor is loaded on mon
    • 0 processes are running unconfined
  • Both servers are running grsec kernels
  • iptables rules loaded
  • OSSEC emails begin to flow after install
  • OSSEC emails are decrypted to correct key and I am able to decrypt them
  • QA Matrix checks pass - not true, see QA matrix

Command Line User Generation

  • Can successfully add admin user and login

Administration

Application Acceptance Testing

Source Interface

Landing page base cases
  • JS warning bar does not appear when using Security Slider High
  • JS warning bar does appear when using Security Slider Low
First submission base cases
  • On generate page, refreshing codename produces a new 7-word codename
  • On submit page, empty submissions produce flashed message
  • On submit page, short message submitted successfully
  • On submit page, file greater than 500 MB produces "The connection was reset" in Tor Browser quickly before the entire file is uploaded
  • On submit page, file less than 500 MB submitted successfully
Returning source base cases
  • Nonexistent codename cannot log in
  • Empty codename cannot log in
  • Legitimate codename can log in
  • Returning user can view journalist replies

Journalist Interface

Login base cases
  • Can log in with 2FA tokens
  • Incorrect password cannot log in
  • Invalid 2FA token cannot log in
  • Immediate 2FA token reuse cannot log in
Index base cases
  • Filter by codename works
  • Starring and unstarring works
  • Clicking select all selects all submissions
  • Selecting all and clicking "Download" works
Individual source page
  • You can submit a reply; a flashed message and a new row appear
  • You cannot submit an empty reply
  • Clicking "Delete Source And Submissions" deletes the source and docs
  • You can click on a document and successfully decrypt it using the application private key

Basic Tails Testing

Updater GUI

After updating to this release candidate and running securedrop-admin tailsconfig

  • The Updater GUI appears on boot
  • Updating occurs without issue - I hit a keyserver timeout

1.0.0-specific changes

From a 1.0.0 install:

Tor onion services: Fresh install on v2 (used a site-specific from the previous release and did not re-run sdconfig)

[I DID NOT TEST]

  • Rerun ./securedrop-admin sdconfig to enable v2 and v3 onion services, then run ./securedrop-admin install. Then run ./securedrop-admin tailsconfig and check that the source and journalist desktop shortcuts have working v3 onion addresses.
  • Use make self-signed-https-certs to generate self-signed certificates for testing, and run ./securedrop-admin sdconfig enabling HTTPS:
    • Verify that a warning is shown to the user indicating that they should update their certificate prior to sharing their v3 onion URL with users.
  • Restore the v3 site-specific and move the tor_v3_keys.json file out of install_files/ansible-base. Re-run ./securedrop-admin install and verify that it fails with a message due to the missing keys file.

Testing detection and correction of disconnected submissions

(also made a reply before all this to test #4734)

  • ./manage.py check-disconnected-db-submissions should report There are submissions in the database with no corresponding files. Run "manage.py list-disconnected-db-submissions" for details.
  • ./manage.py list-disconnected-db-submissions should list the ID of the deleted submission, e.g. 2.
  • ./manage.py delete-disconnected-db-submissions should prompt you with Enter 'y' to delete all submissions missing files: -- reply y and you should see Removing submission IDs [2]... (the ID may differ).
  • ./manage.py check-disconnected-fs-submissions should report There are files in the submission area with no corresponding records in the database. Run "manage.py list-disconnected-fs-submissions" for details..
  • ./manage.py list-disconnected-fs-submissions should show a list like:
    /var/lib/securedrop/store/B3A5GPU4OHPQK736R76HKJUP5VONIOMKZLXK77GPTGNW7EJ63AY5YBX27P3DB2X4DZBXPX3LGBBXAJZYG3HQRHE4B6UE5YYBPGDYZOA=/1-exhausted_overmantel-msg.gpg
  • ./manage.py delete-disconnected-fs-submissions should prompt you to delete that file. Do so and it should be deleted.

Testing automatic requeuing of interrupted deletions

  • Verified the requeue behavior

Testing OSSEC reporting of disconnects

Create a file under /var/lib/securedrop/store with touch /var/lib/securedrop/store/testfile. If you don't feel like waiting a day for the OSSEC report, you can edit /var/ossec/etc/ossec.conf, look for check-disconnect, and reduce the <frequency>, then service ossec restart.

  • An OSSEC alert was sent indicating a disconnect had occurred.
  • I otherwise got no manage.py notifications about this functionality.

Miscellaneous other changes

@eloquence

Pre-release announcement is live:
https://securedrop.org/news/securedrop-100-pre-release-announcement/

Tweeted here:
https://twitter.com/SecureDrop/status/1171587448360460288

Redmine bulk distribution completed.


zenmonkeykstop commented Sep 11, 2019

QA plan IN PROGRESS

  • NUC5s
  • NUC7s
  • Mac Minis
  • 1U servers in SF

1.0.0 QA Checklist

For both upgrades and fresh installs, here is a list of functionality that requires testing. You can use this for copy/pasting into your QA report. Feel free to edit this message to update the plan as appropriate.

If you have submitted a QA report already for a 1.0.0 release candidate with successful basic server testing and application acceptance testing sections, then you can skip these sections in subsequent reports, unless otherwise indicated by the Release Manager. This is to ensure that you focus your QA effort on the 1.0.0-specific changes as well as changes since the previous release candidate.

Environment

  • Install target: Mac Mini
  • Tails version: 3.16
  • Test Scenario: fresh
  • SSH over Tor: yes
  • Onion service version: v2 initially
  • Release candidate: rc2
  • General notes:

Basic Server Testing

  • I can access both the source and journalist interfaces
  • I can SSH into both machines over Tor
  • AppArmor is loaded on app
    • 0 processes are running unconfined
  • AppArmor is loaded on mon
    • 0 processes are running unconfined
  • Both servers are running grsec kernels
  • iptables rules loaded
  • OSSEC emails begin to flow after install
  • OSSEC emails are decrypted to correct key and I am able to decrypt them
  • QA Matrix checks pass

Command Line User Generation

  • Can successfully add admin user and login

Administration

  • I have backed up and successfully restored the app server following the documentation here: https://docs.securedrop.org/en/latest/backup_and_restore.html
  • If doing upgrade testing, make a backup on 0.14.0 and restore this backup on 1.0.0
  • "Send Test OSSEC Alert" button in the journalist triggers an OSSEC alert and an email is sent. n/a

Application Acceptance Testing

Source Interface

Landing page base cases
  • JS warning bar does not appear when using Security Slider High
  • JS warning bar does appear when using Security Slider Low
First submission base cases
  • On generate page, refreshing codename produces a new 7-word codename
  • On submit page, empty submissions produce flashed message
  • On submit page, short message submitted successfully
  • On submit page, file greater than 500 MB produces "The connection was reset" in Tor Browser quickly before the entire file is uploaded
  • On submit page, file less than 500 MB submitted successfully
Returning source base cases
  • Nonexistent codename cannot log in
  • Empty codename cannot log in
  • Legitimate codename can log in
  • Returning user can view journalist replies - need to log into journalist interface to test

Journalist Interface

Login base cases
  • Can log in with 2FA tokens
  • Incorrect password cannot log in
  • Invalid 2FA token cannot log in
  • Immediate 2FA token reuse cannot log in
Index base cases
  • Filter by codename works
  • Starring and unstarring works
  • Clicking select all selects all submissions
  • Selecting all and clicking "Download" works
Individual source page
  • You can submit a reply; a flashed message and a new row appear
  • You cannot submit an empty reply
  • Clicking "Delete Source And Submissions" deletes the source and docs
  • You can click on a document and successfully decrypt it using the application private key

Basic Tails Testing

Updater GUI

After updating to this release candidate and running securedrop-admin tailsconfig

  • The Updater GUI appears on boot
  • Updating occurs without issue

1.0.0-specific changes

Note that a single tester is not expected to cover every one of the Tor onion services scenarios; please just indicate which scenarios you covered in your comment on the release ticket and in the row at the end of the QA matrix (please fill in the QA matrix as you begin QA so that work is not duplicated).

From a 1.0.0 install:

Tor onion services: upgrade to v2

  • Do not rerun ./securedrop-admin sdconfig. Using the same site-specific file as before your upgrade to 1.0.0, run ./securedrop-admin install. v2 should still be enabled, and v3 should not be.

Tor onion services: upgrade to v2+v3

Precondition:

  • Save the site-specific from v2 only. This will be used in a test towards the end of this section.
  • Perform a backup on v2.
  • Rerun ./securedrop-admin sdconfig to enable v2 and v3 onion services, then run ./securedrop-admin install. Then run ./securedrop-admin tailsconfig and check that the source and journalist desktop shortcuts have working v3 onion addresses.
  • Now disable SSH over Tor, rerun securedrop-admin install and ./securedrop-admin tailsconfig:
    • Verify that ~/.ssh/config contains IP addresses rather than Onion service addresses for the app and mon hosts FAIL, install stalls on "Force reboot" task. On second run, install fails and admin is locked out from SSH.
    • Verify that ssh app and ssh mon work as expected. FAIL, not updated because of stall above
  • Use make self-signed-https-certs to generate self-signed certificates for testing, and run ./securedrop-admin sdconfig enabling HTTPS:
    • Verify that a warning is shown to the user indicating that they should update their certificate prior to sharing their v3 onion URL with users.
  • Test multi-admin behavior. Conduct this test step after v3 is enabled on the server:
    • Back up site-specific, and copy the version from before the upgrade into place instead. Re-run ./securedrop-admin install, and verify that it fails with a user-friendly message, due to v3_onion_services=False.
    • Restore the v3 site-specific and move the tor_v3_keys.json file out of install_files/ansible-base. Re-run ./securedrop-admin install and verify that it fails with a message due to the missing keys file.
  • Restore the backup from v2. The v3 onions should not be disabled. FAIL - restore stalls after tor restart, because v3 onions are disabled by the /etc/tor/torrc from the backup
  • Now run the backup and restore again to a new v2+v3 install. The v3 onions should be enabled.

Tor onion service: v3 only, no HTTPS

Tor onion service: adding v3 interfaces with SSH over LAN

  • From a v2-only instance using SSH over LAN, upgrade to v3 only. You should continue to be able to SSH over LAN, and the v2 and v3 source and journalist interfaces should be available.

Deletion functionality

Testing detection and correction of disconnected submissions

Visit the source interface and send two messages. First we'll test a disconnected database record.

In your www-data shell:

cd /var/lib/securedrop/store
ls -laR

You should see the two message files. Remove one with rm.

cd /var/www/securedrop
  • ./manage.py check-disconnected-db-submissions should report There are submissions in the database with no corresponding files. Run "manage.py list-disconnected-db-submissions" for details.

  • ./manage.py list-disconnected-db-submissions should list the ID of the deleted submission, e.g. 2.

  • ./manage.py delete-disconnected-db-submissions should prompt you with Enter 'y' to delete all submissions missing files: -- reply y and you should see Removing submission IDs [2]... (the ID may differ).

Now we'll delete the remaining database record and verify that its disconnected file is detected. Still in your www-data shell:

sqlite3 /var/lib/securedrop/db.sqlite

Delete the submission record for the remaining message (substitute your filename):

delete from submissions where filename = '1-exhausted_overmantel-msg.gpg';
  • ./manage.py check-disconnected-fs-submissions should report There are files in the submission area with no corresponding records in the database. Run "manage.py list-disconnected-fs-submissions" for details..
  • ./manage.py list-disconnected-fs-submissions should show a list like:
    /var/lib/securedrop/store/B3A5GPU4OHPQK736R76HKJUP5VONIOMKZLXK77GPTGNW7EJ63AY5YBX27P3DB2X4DZBXPX3LGBBXAJZYG3HQRHE4B6UE5YYBPGDYZOA=/1-exhausted_overmantel-msg.gpg
  • ./manage.py delete-disconnected-fs-submissions should prompt you to delete that file. Do so and it should be deleted.

Testing automatic requeuing of interrupted deletions

Establish two SSH connections to the app server. In one, become root with sudo su - and in the other become www-data with sudo -u www-data bash. In the www-data shell:

Activate the securedrop-app-code virtualenv: . /opt/venvs/securedrop-app-code/bin/activate

cd /var/www/securedrop

Create a big file that will take a while to delete: dd if=/dev/zero of=/var/lib/securedrop/store/bigfile bs=1M count=1000

Submit a job to delete it:

python3
>>> import rm
>>> import worker
>>> q = worker.create_queue()
>>> q.enqueue(rm.secure_delete, "/var/lib/securedrop/store/bigfile")

Exit Python.

In the root shell:
Reboot, then reconnect.

Look at the rqrequeue log: less /var/log/securedrop_worker/rqrequeue.err -- at the end you should see lines like this:

2019-08-08 17:31:01,118 INFO Running every 60 seconds.
2019-08-08 17:31:01,141 INFO Requeuing job <Job 1082e71f-7581-448c-b84b-027e55b4ef8e: rm.secure_delete('/var/lib/securedrop/store/bigfile')>
2019-08-08 17:32:01,192 INFO Skipping job 1082e71f-7581-448c-b84b-027e55b4ef8e, which is already being run by worker rq:worker:6a6b548310f948e291fa954743b8094f

That indicates the interrupted job was found and restarted, but was left alone at the next check because it was already running. The job should run to completion, /var/lib/securedrop/store/bigfile should be deleted, and the rqrequeue log should start saying:

2019-08-08 17:33:01,253 INFO No interrupted jobs found in started job registry.
  • Verified the requeue behavior

Testing OSSEC reporting of disconnects

Create a file under /var/lib/securedrop/store with touch /var/lib/securedrop/store/testfile. If you don't feel like waiting a day for the OSSEC report, you can edit /var/ossec/etc/ossec.conf, look for check-disconnect, and reduce the <frequency>, then service ossec restart.

  • An OSSEC alert was sent indicating a disconnect had occurred.
  • I otherwise got no manage.py notifications about this functionality.

Miscellaneous other changes

  • Python 2 should not be used anywhere on the system. Inspect the version of python that is used by running ps -aux | grep python and verify that /opt/venvs/securedrop-app-code/bin/python is used instead of /usr/bin/python. FAIL - supervisord run by python 2.7
  • Journalist notifications continue to work as expected on 1.0.0.
  • Check that both app and mon servers are running Tor version 0.4.x (Move to tor 0.4.x release series #4658).
  • Log in as a journalist, then reset that journalist's password via another admin account; this should invalidate the journalist's session, forcing the journalist to log in again (Invalidate Session When Admin Resets Journalist Password #4679). Not tested; no separate Admin WS available
  • There are no linux-image generic packages installed (non-grsec kernels) (apt list --installed | grep linux-image) (Remove old kernels as part of common role and not via grsecurity role #4641)
  • Shortly after uploading a file via the Source Interface, a checksum is added to the submissions table (managed via rq)

Preflight

  • Ensure the builder image is up-to-date on release day

These tests should be performed the day of release prior to live debian packages on apt.freedom.press

Basic testing

  • Install or upgrade occurs without error
  • Source interface is available and version string indicates it is 1.0.0
  • A message can be successfully submitted

Tails

  • The updater GUI appears on boot
  • The update successfully occurs to 1.0.0
  • After reboot, updater GUI no longer appears

emkll mentioned this issue Sep 16, 2019

rmol commented Sep 16, 2019

Environment

  • Install target: NUC 7i5BNH
  • Tails version: 3.15
  • Test Scenario: cron-apt update
  • SSH over Tor: yes
  • Onion service version: v2
  • Release candidate: 1.0.0~rc3
  • General notes: Installed at 0.14.0, upgraded to 1.0.0~rc3

Basic Server Testing

  • I can access both the source and journalist interfaces
  • I can SSH into both machines over Tor
  • AppArmor is loaded on app
    • 0 processes are running unconfined
  • AppArmor is loaded on mon
    • 0 processes are running unconfined
  • Both servers are running grsec kernels
  • iptables rules loaded
  • OSSEC emails begin to flow after install
  • OSSEC emails are decrypted to correct key and I am able to decrypt them
  • QA Matrix checks pass

Command Line User Generation

  • Can successfully add admin user and login

Administration

  • I have backed up and successfully restored the app server following the documentation here: https://docs.securedrop.org/en/latest/backup_and_restore.html
  • If doing upgrade testing, make a backup on 0.14.0 and restore this backup on 1.0.0
  • "Send Test OSSEC Alert" button in the journalist triggers an OSSEC alert and an email is sent.

Application Acceptance Testing

Source Interface

Landing page base cases
  • JS warning bar does not appear when using Security Slider High
  • JS warning bar does appear when using Security Slider Low
First submission base cases
  • On generate page, refreshing codename produces a new 7-word codename
  • On submit page, empty submissions produce flashed message
  • On submit page, short message submitted successfully
  • On submit page, file greater than 500 MB produces "The connection was reset" in Tor Browser quickly before the entire file is uploaded
  • On submit page, file less than 500 MB submitted successfully
Returning source base cases
  • Nonexistent codename cannot log in
  • Empty codename cannot log in
  • Legitimate codename can log in
  • Returning user can view journalist replies - need to log into journalist interface to test

Journalist Interface

Login base cases
  • Can log in with 2FA tokens
  • Incorrect password cannot log in
  • Invalid 2FA token cannot log in
  • Immediate 2FA token reuse cannot log in
Index base cases
  • Filter by codename works
  • Starring and unstarring works
  • Clicking select all selects all submissions
  • Selecting all and clicking "Download" works
Individual source page
  • You can submit a reply; a flashed message and a new row appear
  • You cannot submit an empty reply
  • Clicking "Delete Source And Submissions" deletes the source and docs
  • You can click on a document and successfully decrypt it using the application private key

Basic Tails Testing

Updater GUI

After updating to this release candidate and running securedrop-admin tailsconfig

  • The Updater GUI appears on boot
  • Updating occurs without issue

Deletion functionality

  • Upgrade test: Prior to upgrading to 1.0.0, run the QA loader:
    https://docs.securedrop.org/en/release-0.14.0/development/database_migrations.html#release-testing-migrations. Then, after upgrading, ensure that there are no submissions with either NULL sources or sources that no longer exist in the database. The corresponding files on disk should also be gone. (A quick check is sketched below.)

    NOTE: This went cleanly with lower values of qa_loader.py --multiplier, like 1 or 5. With the default value of 25, I have seen some files not being deleted, but the database is always properly cleaned; those files will be detected by the daily submission cleanup checks and can then be removed with manage.py delete-disconnected-fs-submissions.
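
A quick way to run that check from a www-data shell, assuming the standard submissions/sources schema (a sketch, not an official manage.py command):

import sqlite3

conn = sqlite3.connect("/var/lib/securedrop/db.sqlite")
# Submissions whose source_id is NULL or points at a deleted source.
orphans = conn.execute(
    "SELECT s.id FROM submissions s "
    "LEFT JOIN sources src ON s.source_id = src.id "
    "WHERE s.source_id IS NULL OR src.id IS NULL"
).fetchall()
print("orphaned submissions:", orphans)  # expect an empty list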

Testing detection and correction of disconnected submissions

Visit the source interface and send two messages. First we'll test a disconnected database record.

In your www-data shell:

cd /var/lib/securedrop/store
ls -laR

You should see the two message files. Remove one with rm.

cd /var/www/securedrop
  • ./manage.py check-disconnected-db-submissions should report There are submissions in the database with no corresponding files. Run "manage.py list-disconnected-db-submissions" for details.

  • ./manage.py list-disconnected-db-submissions should list the ID of the deleted submission, e.g. 2.

  • ./manage.py delete-disconnected-db-submissions should prompt you with Enter 'y' to delete all submissions missing files: -- reply y and you should see Removing submission IDs [2]... (the ID may differ).

Now we'll delete the remaining database record and verify that its disconnected file is detected. Still in your www-data shell:

sqlite3 /var/lib/securedrop/db.sqlite

Delete the submission record for the remaining message (substitute your filename):

delete from submissions where filename = '1-exhausted_overmantel-msg.gpg';
  • ./manage.py check-disconnected-fs-submissions should report There are files in the submission area with no corresponding records in the database. Run "manage.py list-disconnected-fs-submissions" for details..
  • ./manage.py list-disconnected-fs-submissions should show a list like:
    /var/lib/securedrop/store/B3A5GPU4OHPQK736R76HKJUP5VONIOMKZLXK77GPTGNW7EJ63AY5YBX27P3DB2X4DZBXPX3LGBBXAJZYG3HQRHE4B6UE5YYBPGDYZOA=/1-exhausted_overmantel-msg.gpg
  • ./manage.py delete-disconnected-fs-submissions should prompt you to delete that file. Do so and it should be deleted.

Testing automatic requeuing of interrupted deletions

Establish two SSH connections to the app server. In one, become root with sudo su - and in the other become www-data with sudo -u www-data bash. In the www-data shell:

Activate the securedrop-app-code virtualenv: . /opt/venvs/securedrop-app-code/bin/activate

cd /var/www/securedrop

Create a big file that will take a while to delete: dd if=/dev/zero of=/var/lib/securedrop/store/bigfile bs=1M count=1000

Submit a job to delete it:

python3
>>> import rm
>>> import worker
>>> q = worker.create_queue()
>>> q.enqueue(rm.secure_delete, "/var/lib/securedrop/store/bigfile")

Exit Python.

In the root shell:
Reboot, then reconnect.

Look at the rqrequeue log: less /var/log/securedrop_worker/rqrequeue.err -- at the end you should see lines like this:

2019-08-08 17:31:01,118 INFO Running every 60 seconds.
2019-08-08 17:31:01,141 INFO Requeuing job <Job 1082e71f-7581-448c-b84b-027e55b4ef8e: rm.secure_delete('/var/lib/securedrop/store/bigfile')>
2019-08-08 17:32:01,192 INFO Skipping job 1082e71f-7581-448c-b84b-027e55b4ef8e, which is already being run by worker rq:worker:6a6b548310f948e291fa954743b8094f

That indicates the interrupted job was found and restarted, but was left alone at the next check because it was already running. The job should run to completion, /var/lib/securedrop/store/bigfile should be deleted, and the rqrequeue log should start saying:

2019-08-08 17:33:01,253 INFO No interrupted jobs found in started job registry.
  • Verified the requeue behavior

Testing OSSEC reporting of disconnects

Create a file under /var/lib/securedrop/store with touch /var/lib/securedrop/store/testfile. If you don't feel like waiting a day for the OSSEC report, you can edit /var/ossec/etc/ossec.conf, look for check-disconnect, and reduce the <frequency>, then service ossec restart.

  • An OSSEC alert was sent indicating a disconnect had occurred.
  • I otherwise got no manage.py notifications about this functionality.

Design changes:

  • The design changes from Design updates for SecureDrop 1.0.0, rev1 #4634 are present on the Source Interface
    • The default logo is updated
    • Text colours are updated.
  • Desktop shortcut icons are updated in the Admin Workstation.
  • The updater logo and icon have been updated in the Admin Workstation

Miscellaneous other changes

  • On the Application Server, inspect the version of Python by running ps -aux | grep python

    • /opt/venvs/securedrop-app-code/bin/python is used by the application instead of /usr/bin/python.
    • /opt/venvs/securedrop-app-code/bin/python version is Python 3.5
    • Other running python processes use Python 3 with the exception of supervisord
  • Journalist notifications continue to work as expected on 1.0.0.

  • Both app and mon servers are running Tor version 0.4.x (Move to tor 0.4.x release series #4658).

  • Log in as a journalist, then reset that journalist's password via another admin account; this should invalidate the journalist's session, forcing the journalist to log in again (Invalidate Session When Admin Resets Journalist Password #4679)

  • There are no linux-image generic packages installed (non-grsec kernels) (apt list --installed | grep linux-image) (Remove old kernels as part of common role and not via grsecurity role #4641)

  • Shortly after uploading a file via the Source Interface, a checksum is added to the submissions table (managed via rq)


rmol commented Sep 17, 2019

Environment

  • Install target: NUC 7i5BNH
  • Tails version: 3.16
  • Test Scenario: clean install
  • SSH over Tor: yes
  • Onion service version: v2+v3
  • Release candidate: 1.0.0~rc4
  • General notes:

NOTE: Skipped basic server and application and some upgrade tests; will do those in rc4 cron-apt upgrade scenario.

1.0.0-specific changes

Note that a single tester is not expected to cover every one of the Tor onion services scenarios; please just indicate which scenarios you covered in your comment on the release ticket and in the row at the end of the QA matrix (please fill in the QA matrix as you begin QA so that work is not duplicated).

From a 1.0.0 install:

Tor onion services: upgrade to v2+v3

Precondition:

  • Save the site-specific from v2 only. This will be used in a test towards the end of this section.
  • Perform a backup on v2.
  • Rerun ./securedrop-admin sdconfig to enable v2 and v3 onion services, then run ./securedrop-admin install. Then run ./securedrop-admin tailsconfig and check that the source and journalist desktop shortcuts have working v3 onion addresses.
  • Now disable SSH over Tor, rerun securedrop-admin install and ./securedrop-admin tailsconfig:
    • Verify that ~/.ssh/config contains IP addresses rather than Onion service addresses for the app and mon hosts
    • Verify that ssh app and ssh mon work as expected.
  • Use make self-signed-https-certs to generate self-signed certificates for testing, and run ./securedrop-admin sdconfig enabling HTTPS:
    • Verify that a warning is shown to the user indicating that they should update their certificate prior to sharing their v3 onion URL with users.
  • Use ./securedrop-admin restore to attempt to restore the v2 backup file created earlier.
    • Verify that the restore command fails with a message indicating that a v2 backup can't be used with an instance with v3 enabled.
  • Test multi-admin behavior. Conduct this test step after v3 is enabled on the server:
    • Back up site-specific, and copy the version from before the upgrade into place instead. Re-run ./securedrop-admin install, and verify that it fails with a user-friendly message, due to v3_onion_services=False.
    • Restore the v3 site-specific and move the tor_v3_keys.json file out of install_files/ansible-base. Re-run ./securedrop-admin install and verify that it fails with a message due to the missing keys file.
  • Restore the tor_v3_keys.json file. Perform a backup and restore against the v2+v3 install. The v3 onions should still be enabled after the restore.

Tor onion service: v3 only, no HTTPS

Deletion functionality

Testing detection and correction of disconnected submissions

Visit the source interface and send two messages. First we'll test a disconnected database record.

In your www-data shell:

cd /var/lib/securedrop/store
ls -laR

You should see the two message files. Remove one with rm.

cd /var/www/securedrop
  • ./manage.py check-disconnected-db-submissions should report There are submissions in the database with no corresponding files. Run "manage.py list-disconnected-db-submissions" for details.

  • ./manage.py list-disconnected-db-submissions should list the ID of the deleted submission, e.g. 2.

  • ./manage.py delete-disconnected-db-submissions should prompt you with Enter 'y' to delete all submissions missing files: -- reply y and you should see Removing submission IDs [2]... (the ID may differ).

Now we'll delete the remaining database record and verify that its disconnected file is detected. Still in your www-data shell:

sqlite3 /var/lib/securedrop/db.sqlite

Delete the submission record for the remaining message (substitute your filename):

delete from submissions where filename = '1-exhausted_overmantel-msg.gpg';
  • ./manage.py check-disconnected-fs-submissions should report There are files in the submission area with no corresponding records in the database. Run "manage.py list-disconnected-fs-submissions" for details..
  • ./manage.py list-disconnected-fs-submissions should show a list like:
    /var/lib/securedrop/store/B3A5GPU4OHPQK736R76HKJUP5VONIOMKZLXK77GPTGNW7EJ63AY5YBX27P3DB2X4DZBXPX3LGBBXAJZYG3HQRHE4B6UE5YYBPGDYZOA=/1-exhausted_overmantel-msg.gpg
  • ./manage.py delete-disconnected-fs-submissions should prompt you to delete that file. Do so and it should be deleted.

Testing automatic requeuing of interrupted deletions

Establish two SSH connections to the app server. In one, become root with sudo su - and in the other become www-data with sudo -u www-data bash. In the www-data shell:

Activate the securedrop-app-code virtualenv: . /opt/venvs/securedrop-app-code/bin/activate

cd /var/www/securedrop

Create a big file that will take a while to delete: dd if=/dev/zero of=/var/lib/securedrop/store/bigfile bs=1M count=1000

Submit a job to delete it:

python3
>>> import rm
>>> import worker
>>> q = worker.create_queue()
>>> q.enqueue(rm.secure_delete, "/var/lib/securedrop/store/bigfile")

Exit Python.

In the root shell:
Reboot, then reconnect.

Look at the rqrequeue log: less /var/log/securedrop_worker/rqrequeue.err -- at the end you should see lines like this:

2019-08-08 17:31:01,118 INFO Running every 60 seconds.
2019-08-08 17:31:01,141 INFO Requeuing job <Job 1082e71f-7581-448c-b84b-027e55b4ef8e: rm.secure_delete('/var/lib/securedrop/store/bigfile')>
2019-08-08 17:32:01,192 INFO Skipping job 1082e71f-7581-448c-b84b-027e55b4ef8e, which is already being run by worker rq:worker:6a6b548310f948e291fa954743b8094f

That indicates the interrupted job was found and restarted, but was left alone at the next check because it was already running. The job should run to completion, /var/lib/securedrop/store/bigfile should be deleted, and the rqrequeue log should start saying:

2019-08-08 17:33:01,253 INFO No interrupted jobs found in started job registry.
  • Verified the requeue behavior

Testing OSSEC reporting of disconnects

Create a file under /var/lib/securedrop/store with touch /var/lib/securedrop/store/testfile. If you don't feel like waiting a day for the OSSEC report, you can edit /var/ossec/etc/ossec.conf, look for check-disconnect, and reduce the <frequency>, then service ossec restart.

  • An OSSEC alert was sent indicating a disconnect had occurred.
  • I otherwise got no manage.py notifications about this functionality.

Design changes:

  • The design changes from Design updates for SecureDrop 1.0.0, rev1 #4634 are present on the Source Interface
    • The default logo is updated
    • Text colours are updated.
  • Desktop shortcut icons are updated in the Admin Workstation.
  • The updater logo and icon have been updated in the Admin Workstation

Miscellaneous other changes

  • On the Application Server, inspect the version of Python by running ps -aux | grep python

    • /opt/venvs/securedrop-app-code/bin/python is used by the application instead of /usr/bin/python.
    • /opt/venvs/securedrop-app-code/bin/python version is Python 3.5
    • Other running python processes use Python 3 with the exception of supervisord
  • Journalist notifications continue to work as expected on 1.0.0.

  • Both app and mon servers are running Tor version 0.4.x (Move to tor 0.4.x release series #4658).

  • Log in as a journalist, then reset that journalist's password via another admin account; this should invalidate the journalist's session, forcing the journalist to log in again (Invalidate Session When Admin Resets Journalist Password #4679)

  • There are no linux-image generic packages installed (non-grsec kernels) (apt list --installed | grep linux-image) (Remove old kernels as part of common role and not via grsecurity role #4641)

  • Shortly after uploading a file via the Source Interface, a checksum is added to the submissions table (managed via rq)

@eloquence

Release blog post:
https://securedrop.org/news/securedrop-100-released/

Tweet:
https://twitter.com/SecureDrop/status/1174092303834599424

All Redmine instances notified and previous pre-release messaging closed out.


emkll commented Sep 23, 2019

Final item on the checklist was closed in #4857

emkll closed this as completed Sep 23, 2019
@conorsch

> Upload package build logs to wiki

That task was assigned to me, and I neglected to complete it during final release procedures. For future reference, the procedure is documented here: https://github.com/freedomofpress/securedrop/wiki/Build-logs
