
Update documentation (content, presentation and images) (#702)

* Update documentation (content, presentation and images)

* Add some links

* Revert image change for now

Co-authored-by: baarkerlounger <db@slothlife.xyz>
Branch: pull/703/head
Paul Robert Lloyd committed 3 years ago (via GitHub)
Commit: 84169aa93f
1. README.md (21 lines changed)
2. docs/developer_setup.md (142 lines changed)
3. docs/exports.md (12 lines changed)
4. docs/form_builder.md (119 lines changed)
5. docs/form_runner.md (18 lines changed)
6. docs/frontend.md (12 lines changed)
7. docs/images/logs_list.png (binary)
8. docs/images/organisational_relationships.png (binary)
9. docs/images/service.png (binary)
10. docs/images/user_log_permissions.png (binary)
11. docs/infrastructure.md (141 lines changed)
12. docs/monitoring.md (12 lines changed)
13. docs/organisation_relationships.md (23 lines changed)
14. docs/organisations.md (25 lines changed)
15. docs/schemes.md (8 lines changed)
16. docs/service_overview.md (2 lines changed)
17. docs/testing.md (9 lines changed)
18. docs/user_roles.md (13 lines changed)
19. docs/users.md (15 lines changed)

README.md (21 lines changed)

@@ -8,20 +8,21 @@ Ruby on Rails app that handles the submission of lettings and sales of social ho

## Domain documentation

- [Service overview](docs/service_overview.md)
- [Organisations](docs/organisations.md)
- [Users and roles](docs/users.md)
- [Supported housing schemes](docs/schemes.md)

## Technical Documentation

- [Developer setup](docs/developer_setup.md)
- [Frontend](docs/frontend.md)
- [Testing strategy](docs/testing.md)
- [Form Builder](docs/form_builder.md)
- [Form Runner](docs/form_runner.md)
- [Infrastructure](docs/infrastructure.md)
- [Monitoring](docs/monitoring.md)
- [Exporting to CDS](docs/exports)
- [Application decision records](docs/adr)

## API documentation

@@ -33,4 +34,4 @@ API documentation can be found here: <https://communitiesuk.github.io/submit-soc

## User interface

![View of the logs list](docs/images/service.png)

docs/developer_setup.md (142 lines changed)

@@ -1,39 +1,43 @@

# Developing locally on host machine

The most common way to run a development version of the application is to run it with local dependencies.

Dependencies:

- [Ruby](https://www.ruby-lang.org/en/)
- [Rails](https://rubyonrails.org/)
- [PostgreSQL](https://www.postgresql.org/)
- [NodeJS](https://nodejs.org/en/)
- [Gecko driver](https://github.com/mozilla/geckodriver/releases) [for running Selenium tests]

We recommend using [RBenv](https://github.com/rbenv/rbenv) to manage Ruby versions.
1. Install PostgreSQL
macOS:
```bash
brew install postgresql
brew services start postgresql
```
Linux (Debian):
```bash
sudo apt install -y postgresql postgresql-contrib libpq-dev
sudo systemctl start postgresql
```
2. Create a Postgres user
```bash
sudo su - postgres -c "createuser <username> -s -P"
```
3. Install RBenv and Ruby-build
macOS:
```bash
brew install rbenv
rbenv init
@@ -42,6 +46,7 @@ We recommend using RBenv to manage Ruby versions.
```
Linux (Debian):
```bash
sudo apt install -y rbenv git
rbenv init
@@ -50,7 +55,7 @@ We recommend using RBenv to manage Ruby versions.
git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
```
4. Install Ruby and Bundler
```bash
rbenv install 3.1.2
@@ -59,15 +64,17 @@ We recommend using RBenv to manage Ruby versions.
gem install bundler
```
5. Install JavaScript dependencies

macOS:
```bash
brew install node
brew install yarn
```
Linux (Debian):
```bash
curl -sL https://deb.nodesource.com/setup_16.x | sudo bash -
sudo apt -y install nodejs
@@ -80,73 +87,108 @@ We recommend using RBenv to manage Ruby versions.
```
6. Clone the repo
```bash
git clone https://github.com/communitiesuk/submit-social-housing-lettings-and-sales-data.git
```
## Application setup

1. Copy the `.env.example` to `.env` and replace the database credentials with your local postgres user credentials.

2. Install the dependencies:

```bash
bundle install && yarn install
```

3. Create the database & run migrations:

```bash
bundle exec rake db:create db:migrate
```

4. Seed the database if required:

```bash
bundle exec rake db:seed
```
5. Start the dev servers

a. Using Foreman:

```bash
./bin/dev
```

b. Individually:

Rails:

```bash
bundle exec rails s
```

JavaScript (for hot reloading):

```bash
yarn build --mode=development --watch
```

If you’re not modifying front end assets you can bundle them as a one off task:

```bash
yarn build --mode=development
```

Development mode will target the latest versions of Chrome, Firefox and Safari for transpilation while production mode will target older browsers.

The Rails server will start on <http://localhost:3000>.
6. Install Gecko Driver

Linux (Debian):

```bash
wget https://github.com/mozilla/geckodriver/releases/download/v0.31.0/geckodriver-v0.31.0-linux64.tar.gz
tar -xvzf geckodriver-v0.31.0-linux64.tar.gz
rm geckodriver-v0.31.0-linux64.tar.gz
chmod +x geckodriver
sudo mv geckodriver /usr/local/bin/
```

Running the test suite (front end assets need to be built or server needs to be running):

```bash
bundle exec rspec
```

## Using Docker

1. Build the image:

```bash
docker-compose build
```

2. Run the database migrations:

```bash
docker-compose run --rm app /bin/bash -c 'rake db:migrate'
```

3. Seed the database if required:

```bash
docker-compose run --rm app /bin/bash -c 'rake db:seed'
```

4. To be able to debug with Pry, run the app using:

```bash
docker-compose run --service-ports app
```

If this is not needed you can run `docker-compose up` as normal.

docs/exports.md (12 lines changed)

@@ -1,15 +1,17 @@

# Exporting to CDS

All data collected by the application needs to be exported to the Consolidated Data Store (CDS), which is a data warehouse based on MS SQL running in the DAP (Data Analytics Platform).

This is done via XML exports saved in an S3 bucket located in the DAP VPC using dedicated credentials shared out of band. The data mapping for this export can be found in `app/services/exports/case_log_export_service.rb`.

Initially the application database field names and field types were chosen to match the existing CDS data as closely as possible to minimise the amount of transformation needed. This has led to a less than optimal data model though, and increasingly we should look to transform at the mapping layer where beneficial for our application.
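As an illustration of transforming at the mapping layer, here is a hypothetical sketch of renaming application fields to CDS column names while building the export XML. The field names below are invented for illustration; the real mapping lives in `app/services/exports/case_log_export_service.rb`.

```ruby
require "rexml/document"

# Hypothetical application-field => CDS-column mapping (invented names)
FIELD_MAPPING = {
  "tenancy_start_date" => "StartDate",
  "property_postcode"  => "Postcode",
}.freeze

# Build an XML fragment for one case log, renaming fields at the mapping layer
def to_export_xml(case_log)
  doc = REXML::Document.new
  root = doc.add_element("CaseLog")
  FIELD_MAPPING.each do |app_field, cds_field|
    root.add_element(cds_field).text = case_log[app_field].to_s
  end
  out = +""
  doc.write(out)
  out
end

puts to_export_xml("tenancy_start_date" => "2022-04-01", "property_postcode" => "SW1A 1AA")
```

Keeping the rename in a single mapping hash keeps the application's data model independent of CDS naming.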
The export service is triggered nightly using [Gov PaaS tasks](https://docs.cloudfoundry.org/devguide/using-tasks.html). These tasks are triggered from a GitHub action, as Gov PaaS does not currently support the Cloud Foundry Task Scheduler.

The S3 bucket is located in the DAP VPC rather than the application VPC as DAP runs in an AWS account directly, so access to the S3 bucket can be restricted to only the IPs used by the application. This is not possible the other way around as [Gov PaaS does not support restricting S3 access by IP](https://github.com/alphagov/paas-roadmap/issues/107).

## Other options previously considered
- CDC replication using a managed service such as [AWS DMS](https://aws.amazon.com/dms/)
- Would require VPC peering which Gov PaaS does not currently support (https://github.com/alphagov/paas-roadmap/issues/105)
- Would require VPC peering which [Gov PaaS does not currently support](https://github.com/alphagov/paas-roadmap/issues/105)
- Would require CDS to make changes to their ingestion model

docs/form_builder.md (119 lines changed)

@@ -1,49 +1,66 @@

# Form Builder

## Background

Social housing lettings and sales data is collected in annual collection windows that run from 1st April to 1st April.

During this window the form and questions generally stay constant. The form will generally change by small amounts between each collection window. Typical changes are adding new questions, adding or removing answer options from questions, or tweaking question wording for clarity.

A paper form is produced for guidance and to help data providers collect the data offline, and a bulk upload template is circulated which needs to match the online form.

Data is accepted for a collection window for up to 3 months after it’s finished to allow for late data submission. This means that between April and July two versions of the form run simultaneously.

Other considerations that went into our design are being able to re-use as much of this solution for other data collections, and possibly having the ability to generate the form and/or form changes from a UI.

We haven’t used micro-services, preferring to deploy a single application, but we have modelled the form itself as configuration in the form of a JSON structure that acts as a sort of DSL/form builder for the form.

The idea is to decouple the code that creates the required routes, controller methods, views etc to display the form from the actual wording of questions or order of pages, such that it becomes possible to make changes to the form with little or no code changes.

This should also mean that in the future it could be possible to create an interface that can construct the JSON config, which would open up the ability to make form changes to a wider audience. Doing this fully would require generating and running the necessary migrations for data storage, generating the required ActiveRecord methods to validate the data server side, and generating/updating API endpoints and documentation. All of this is likely to be beyond the scope of initial MVP but could be looked at in the future.

Since initially the JSON config will not create database migrations or ActiveRecord model validations, it will instead assume that these have been correctly created for the config provided. The reasoning for this is the following assumptions:

- The form will be tweaked regularly (amending question wording, changing the order of questions or the page a question is displayed on)
- The actual data collected will change very infrequently. Time series continuity is very important to ADD (Analysis and Data Directorate) so the actual data collected should stay largely consistent, i.e. in general we can change the question wording in ways that make the intent clearer or easier to understand, but not in ways that would make the data provider give a different answer.

A form parser class will parse this config into Ruby objects/methods that can be used as an API by the rest of the application, such that we could change the underlying config if needed (for example swap JSON for YAML or for database objects) without needing to change the rest of the application. We’ll call this the Form Runner part of the application.
## Setup this log

The setup this log section is treated slightly differently from the rest of the form. It is more accurately viewed as providing metadata about the form than as being part of the form itself. It also needs to know far more about the application specific context than other parts of the form, such as who the current user is, what organisation they’re part of and what role they have.

As a result it’s not modelled as part of the config but rather as code. It still uses the same Form Runner components though.
## Features the Form Config supports

- Defining sections, subsections, pages and questions that fit the GOV.UK task list pattern
- Auto-generated routes – URLs are automatically created from dasherized page names
- Data persistence requires a database field to exist which matches the name/id for each question (and answer option for checkbox questions)
- Text, numeric, date, radio, select and checkbox question types
- Conditional questions (`conditional_for`) – radio and checkbox questions can support conditional text or numeric questions that show/hide on the same page when the triggering option is selected
- Routing (`depends_on`) – all pages can specify conditions (attributes of the case log) that determine whether or not they’re shown to the user
- Methods can be chained (i.e. you can have conditions in the form `{ owning_organisation.provider_type: "local_authority" }`) which will call `case_log.owning_organisation.provider_type` and compare the result to the provided value.
- Numeric questions support math expression `depends_on` conditions such as `{ age2: ">16" }`
- By default questions on pages that are not routed to are assumed to be invalid and are cleared. This can be prevented by setting `derived: true` on a question.
- Questions can be optionally hidden from the check answers page of each section by setting `hidden_in_check_answers: true`. This can also take a condition.
- Questions can be set as being inferred from other answers. This is similar to derived, with the difference being that derived questions can be derived from anything, not just other form question answers, and inferred answers are cleared when the answers they depend on change, whereas derived questions aren’t.
- Soft validation interruption pages can be included
- For complex HTML guidance partials can be referenced
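To make the `depends_on` features above concrete, here is a minimal sketch (hypothetical, not the app's actual Form Runner code) of evaluating a single condition, covering both chained methods and numeric maths expressions:

```ruby
# Minimal sketch of depends_on condition evaluation (hypothetical implementation)
Organisation = Struct.new(:provider_type)
CaseLog = Struct.new(:age2, :owning_organisation)

def condition_met?(case_log, key, expected)
  # A chained key like "owning_organisation.provider_type" calls each method in turn
  value = key.to_s.split(".").reduce(case_log) { |obj, meth| obj.public_send(meth) }

  # Numeric questions may use maths expressions such as ">16" or "== 6"
  if expected.is_a?(String) && (m = expected.match(/\A(==|[<>]=?)\s*(-?\d+)\z/))
    value.public_send(m[1].to_sym, m[2].to_i)
  else
    value == expected
  end
end

log = CaseLog.new(17, Organisation.new("local_authority"))
puts condition_met?(log, "age2", ">16")                                          # prints "true"
puts condition_met?(log, "owning_organisation.provider_type", "local_authority") # prints "true"
```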
## JSON Config
The form for this is driven by a JSON file in `/config/forms/{start_year}_{end_year}.json`
@@ -105,47 +122,69 @@ The JSON should follow the structure:
Assumptions made by the format:

- All forms have at least 1 section
- All sections have at least 1 subsection
- All subsections have at least 1 page
- All pages have at least 1 question
- The ActiveRecord case log model has a field for each question name (must match). In the case of checkbox questions it must have one field for every answer option (again names must match).
- Text not required by a page/question such as a header or hint text should be passed as an empty string
- For conditionally shown questions, conditions that have been implemented and can be used are:
  - Radio question answer option selected matches one of conditional e.g. `["answer-options-1-string", "answer-option-3-string"]`
  - Numeric question value matches condition e.g. `[">2"]`, `["<7"]` or `["== 6"]`
- When the top level question is a radio button and the conditional question is a numeric, text or date field then the conditional question is shown inline
- When the conditional question is a radio, checkbox or select field it should be displayed on its own page and `depends_on` should be used rather than `conditional_for`

### Page routing

Form navigation works by stepping sequentially through every page defined in the JSON form definition for the given subsection. For every page it checks if it has `depends_on` conditions. If it does, it evaluates them to determine whether that page should be shown or not.
In this way we can build up whole branches by having:

```jsonc
"page_1": { "questions": { "question_1": { "answer_options": ["A", "B"] } } },
"page_2": { "questions": { "question_2": { "answer_options": ["C", "D"] } }, "depends_on": [{ "question_1": "A" }] },
"page_3": { "questions": { "question_3": { "answer_options": ["E", "F"] } }, "depends_on": [{ "question_1": "A" }] },
"page_4": { "questions": { "question_4": { "answer_options": ["G", "H"] } }, "depends_on": [{ "question_1": "B" }] },
```
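The sequential stepping over a branch like the one above can be sketched in a few lines of Ruby (a simplified illustration, not the app's real navigation code):

```ruby
# Walk pages in definition order, skipping any whose depends_on conditions fail
pages = [
  { "id" => "page_1" },
  { "id" => "page_2", "depends_on" => [{ "question_1" => "A" }] },
  { "id" => "page_3", "depends_on" => [{ "question_1" => "A" }] },
  { "id" => "page_4", "depends_on" => [{ "question_1" => "B" }] },
]

def routed_to?(page, case_log)
  deps = page["depends_on"]
  # A page with no conditions is always shown; otherwise any one condition set may match
  deps.nil? || deps.any? { |set| set.all? { |question, expected| case_log[question] == expected } }
end

def next_page(pages, current_id, case_log)
  index = pages.index { |page| page["id"] == current_id }
  pages[(index + 1)..].find { |page| routed_to?(page, case_log) }&.fetch("id")
end

puts next_page(pages, "page_1", { "question_1" => "A" }) # prints "page_2"
puts next_page(pages, "page_1", { "question_1" => "B" }) # prints "page_4"
```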
## JSON form validation against Schema

To validate the form JSON against the schema you can run:

```bash
rake form_definition:validate["config/forms/2021_2022.json"]
```

Note: you may have to escape square brackets in zsh:

```bash
rake form_definition:validate\["config/forms/2021_2022.json"\]
```

This will validate the given form definition against the schema in `config/forms/schema/generic.json`.

You can also run:

```bash
rake form_definition:validate_all
```

This will validate all forms in the directories `["config/forms", "spec/fixtures/forms"]`.
## Improvements that could be made
- JSON schema definition could be expanded such that we can better automatically validate that a given config is valid and internally consistent
- Generators could parse a given valid JSON form and generate the required database migrations to ensure all the expected fields exist and are of a compatible type
- The parsed form could be visualised using something like GraphViz to help manually verify the coded config meets requirements

docs/form_runner.md (18 lines changed)

@@ -1,19 +1,21 @@
# Form Runner
The Form Runner is composed of:

Ruby classes:

- A singleton form handler that instantiates an instance of each form definition (config file we have) combined with the setup section that is common to all forms. This is created at Rails boot time. (`app/models/form_handler.rb`)
- A `Form` class that is the entry point for parsing a form definition and handles most of the associated logic (`app/models/form.rb`)
- `Section`, `Subsection`, `Page` and `Question` classes (`app/models/form/`)
- Setup subsection specific instances (subclasses) of `Section`, `Subsection`, `Page` and `Question` (`app/form/setup/`)

ERB templates:
- The page view which is the main view for each form page (`app/views/form/page.html.erb`)
- Partials for each question type (radio, checkbox, select, text, numeric, date) (`app/views/form/`)
- Partials for specific question guidance (`app/views/form/guidance`)
- The check answers page which is the view for the answer summary page of each section (`app/views/form/check_answers.html.erb`)
Routes for each form page are generated by looping over each `Page` instance in each `Form` instance held by the form handler and defining a `GET` path. The corresponding controller method is also auto-generated with metaprogramming via the same looping in `app/controllers/form_controller.rb`.
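That looping can be pictured with a small sketch. The page names are hypothetical, and a plain hash stands in for the Rails routing DSL:

```ruby
# Build a GET route per page by dasherizing its id (simplified illustration)
Page = Struct.new(:id)
Form = Struct.new(:pages)

forms = [Form.new([Page.new("property_information"), Page.new("tenant_code")])]

routes = {}
forms.each do |form|
  form.pages.each do |page|
    path = "/case-logs/:id/#{page.id.tr('_', '-')}"
    routes[path] = page.id # in the real app a controller method is metaprogrammed here
  end
end

puts routes.keys
# /case-logs/:id/property-information
# /case-logs/:id/tenant-code
```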
All form pages submit to the same controller method (`app/controllers/form_controller.rb#submit_form`) which validates and persists the data, and then redirects to the next form page that identifies as `routed_to` given the current case log state.

docs/frontend.md (12 lines changed)

@@ -1,6 +1,6 @@

# Frontend

## GOV.UK Design System components
This service follows the guidance and recommendations from the [GOV.UK Design System](https://design-system.service.gov.uk). This is achieved using the following libraries:
@@ -18,7 +18,7 @@ This service follows the guidance and recommendations from the [GOV.UK Design Sy
[GitHub](https://github.com/DFE-Digital/govuk-formbuilder) ·
[RubyDoc](https://www.rubydoc.info/gems/govuk_design_system_formbuilder)
## Service-specific components
Service-specific components are built using the [ViewComponent](https://viewcomponent.org) framework, and can be found in `app/components`.
@@ -38,12 +38,12 @@ The general pattern is:
- Register a controller in `/app/frontend/controllers/index.js` - be sure to use kebab case
- Create that controller in `app/frontend/controllers/` - be sure to use underscore case
- Attach the controller to the HTML element that should trigger its functionality
### Asset bundling and compilation
- We use [Webpack](https://webpack.js.org/) via [jsbundling-rails](https://github.com/rails/jsbundling-rails) to bundle JavaScript, CSS and images. The configuration can be found in `webpack.config.js`.
- We use [Propshaft](https://github.com/rails/propshaft) as our asset pipeline to serve the assets bundled/compiled by webpack
- We use [Babel](https://babeljs.io/) to transpile JavaScript down to ES5 for Internet Explorer compatibility. The configuration can be found in `babel.config.js`
- We use [browserslist](https://github.com/browserslist/browserslist) to specify the browsers we want to transpile for. The configuration can be found in `package.json`
- We include a number of polyfills to support Internet Explorer. These can be found in `app/frontend/application.js`

docs/images/logs_list.png (binary)

Binary file not shown. Before: 286 KiB.

docs/images/organisational_relationships.png (binary)

Binary file not shown. Before: 147 KiB. After: 88 KiB.

docs/images/service.png (binary)

Binary file not shown. After: 404 KiB.

docs/images/user_log_permissions.png (binary)

Binary file not shown. Before: 197 KiB. After: 118 KiB.

docs/infrastructure.md (141 lines changed)

@@ -1,4 +1,6 @@

# Infrastructure
## Deployment
This application is running on [GOV.UK PaaS](https://www.cloud.service.gov.uk/). To deploy you need to:
@@ -6,32 +8,49 @@ This application is running on [GOV.UK PaaS](https://www.cloud.service.gov.uk/).
2. [Install the Cloud Foundry CLI](https://docs.cloudfoundry.org/cf-cli/install-go-cli.html)
3. Login:

```bash
cf login -a api.london.cloud.service.gov.uk -u <your_username>
```

4. Set your deployment target (staging/production):

```bash
cf target -o dluhc-core -s <deploy_environment>
```

5. Deploy:

```bash
cf push dluhc-core --strategy rolling
```

This will use the [manifest file](staging_manifest.yml).

Once the app is deployed:

1. Get a Rails console:

```bash
cf ssh dluhc-core-staging -t -c "/tmp/lifecycle/launcher /home/vcap/app 'rails console' ''"
```

2. Check logs:

```bash
cf logs dluhc-core-staging --recent
```
### Troubleshooting deployments
A failed GitHub deployment action will occasionally leave a Cloud Foundry deployment in a broken state. As a result all subsequent GitHub deployment actions will also fail with the message `Cannot update this process while a deployment is in flight`. To resolve this, cancel the stuck deployment:

```bash
cf cancel-deployment dluhc-core
```

You would then need to check the logs and fix the issue that caused the initial deployment to fail.
## CI/CD
@@ -42,64 +61,88 @@ When a commit is made to `main` the following GitHub action jobs are triggered:
When a pull request is opened to `main` only the Test stage runs.
## Setting up infrastructure for a new environment
### Staging
1. Login:

   ```bash
   cf login -a api.london.cloud.service.gov.uk -u <your_username>
   ```

2. Set your deployment target (staging):

   ```bash
   cf target -o dluhc-core -s staging
   ```

3. Create the required Postgres and S3 bucket backing services (these will take ~15 minutes to finish creating):

   ```bash
   cf create-service postgres tiny-unencrypted-13 dluhc-core-staging-postgres
   cf create-service aws-s3-bucket default dluhc-core-staging-import-bucket
   cf create-service aws-s3-bucket default dluhc-core-staging-export-bucket
   ```

4. Deploy manifest:

   ```bash
   cf push dluhc-core-staging --strategy rolling
   ```

5. Bind S3 services to the app:

   ```bash
   cf bind-service dluhc-core-staging dluhc-core-staging-import-bucket -c '{"permissions": "read-only"}'
   cf bind-service dluhc-core-staging dluhc-core-staging-export-bucket -c '{"permissions": "read-write"}'
   ```

6. Create service keys for accessing the S3 buckets from outside Gov PaaS:

   ```bash
   cf create-service-key dluhc-core-staging-import-bucket data-import -c '{"allow_external_access": true}'
   cf create-service-key dluhc-core-staging-export-bucket data-export -c '{"allow_external_access": true, "permissions": "read-only"}'
   ```
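Service creation is asynchronous, so before binding or sharing credentials it can be worth confirming each service reports `create succeeded` (a sketch using standard CF CLI commands):

```bash
# Check the provisioning status of a backing service
cf service dluhc-core-staging-postgres

# Inspect the credentials generated for a service key
cf service-key dluhc-core-staging-import-bucket data-import
```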
### Production
1. Login:

   ```bash
   cf login -a api.london.cloud.service.gov.uk -u <your_username>
   ```

2. Set your deployment target (production):

   ```bash
   cf target -o dluhc-core -s production
   ```

3. Create the required Postgres and S3 bucket backing services (these will take ~15 minutes to finish creating):

   ```bash
   cf create-service postgres small-ha-13 dluhc-core-production-postgres
   cf create-service aws-s3-bucket default dluhc-core-production-import-bucket
   cf create-service aws-s3-bucket default dluhc-core-production-export-bucket
   ```

4. Deploy manifest:

   ```bash
   cf push dluhc-core-production --strategy rolling
   ```

5. Bind S3 services to the app:

   ```bash
   cf bind-service dluhc-core-production dluhc-core-production-import-bucket -c '{"permissions": "read-only"}'
   cf bind-service dluhc-core-production dluhc-core-production-export-bucket -c '{"permissions": "read-write"}'
   ```

6. Create service keys for accessing the S3 buckets from outside Gov PaaS:

   ```bash
   cf create-service-key dluhc-core-production-import-bucket data-import -c '{"allow_external_access": true}'
   cf create-service-key dluhc-core-production-export-bucket data-export -c '{"allow_external_access": true, "permissions": "read-only"}'
   ```
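After a production deploy, a quick smoke check with the CF CLI can confirm everything is wired up (illustrative):

```bash
# List backing services and confirm the app is bound to both buckets
cf services

# Confirm the app started and check instance health
cf app dluhc-core-production
```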

---

`docs/monitoring.md`
# Monitoring
We use self-hosted Prometheus and Grafana for monitoring infrastructure metrics. These run in a dedicated Gov PaaS space called "monitoring" and are deployed as Docker images using GitHub Actions pipelines. The repository, along with more information, is here: [dluhc-data-collection-monitoring](https://github.com/communitiesuk/dluhc-data-collection-monitoring).
## Performance monitoring and alerting
For application error and performance monitoring we use managed [Sentry](https://sentry.io/organizations/dluhc-core). You will need to be added to the DLUHC account to access this. It triggers Slack notifications to the #team-data-collection-alerts channel for all application errors in staging and production, and for any controller endpoints that have a P95 transaction duration > 250ms over a 24-hour period.
## Logs
For log persistence we use a managed ELK (Elasticsearch, Logstash, Kibana) stack provided by [Logit](https://logit.io/). You will need to be added to the DLUHC account to access this. Logs are retained for 14 days with a daily limit of 2GB.
Logs are also available from Gov PaaS directly via the CLI:

```bash
cf logs <gov-paas-space-name> --recent
```
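The `--recent` output can be noisy; piping it through standard shell tools is often enough for quick triage (illustrative filter, assuming the default Gov PaaS log format):

```bash
# Show only application log lines (not router traffic) that mention errors
cf logs dluhc-core-staging --recent | grep "\[APP" | grep -i error
```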

---

`docs/organisations.md`
# Organisational relationships
## Definitions
- **Stock owning organisation**: An organisation that owns housing stock. It may manage the allocation of people in and out of their accommodation, or it may contract this out to managing agents.
- **Managing agent**: In scenarios where one organisation owns stock and another organisation is contracted to manage the stock and tenants, the latter organisation is often called a ‘managing agent’. Managing agents are responsible for the allocation of people in and out of the accommodation, and/or responsible for the services provided to support those people in the accommodation (in the case of supported housing).
## Permissions
Organisations that own stock can contract out the management of that stock to another organisation. This relationship is often referred to as a parent/child relationship. This is a useful analogy as a parent can have multiple children, and a child can have many parents. A child organisation can also be a parent, and a parent organisation can also be a child organisation:
![Organisational relationships](images/organisational_relationships.png)
The case logs that a user can see depend on their role:

- Customer support users can access any case log
- Data coordinators can access any case log that the organisation they work for is ultimately responsible for, meaning they can see logs managed by a child organisation
- Data providers can only access case logs that their organisation manages (or directly owns)
Taking the relationships from the above diagram, and looking at which logs each user can access:
![User log access permissions](images/user_log_permissions.png)

---

`docs/schemes.md`
# Supported housing schemes
## Schemes

Groups of similar properties in the same location, intended for similar tenants with the same type of support needs, and managed in the same way. As some of the information we need about a new tenancy is the same for all new tenancies in the ‘scheme’, users can set up a ‘scheme’ in the CORE system by completing the information once. In Supported Housing forms, the user then just supplies the appropriate scheme, so providers do not have to complete identical information multiple times in each CORE form. Effectively we model these as templates or predefined answer sets.

## Management groups

Schemes are often managed together as part of a ‘management group’. An organisation may have multiple management groups, and each management group may have multiple schemes. For Supported Housing logs, users must select the management group first, then select the scheme.

---

`docs/service_overview.md`
# Service overview
All lettings and sales of social housing in England need to be logged with the Department for Levelling Up, Housing and Communities (DLUHC). This is done by Local Authorities and Housing Associations, who are the primary users of this service. Data is collected via a form that runs on an annual data collection window basis. Form changes are made annually to add new questions, remove any that are no longer needed, or adjust wording or answer options. Each data collection window runs from 1 April to 1 April the following year, plus an extra 3 months to allow for late submissions. This means that between April and July, two collection windows are open simultaneously and logs can be submitted for either.
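The overlap between windows can be made concrete with a small sketch (the `open_windows` helper is hypothetical, not part of the service):

```bash
# Hypothetical helper: given a year and month, print which collection
# windows (labelled by their start/end years) are open for submissions.
# A window opens on 1 April and accepts logs until 1 July the following year.
open_windows() {
  local year=$1 month=$2
  if (( month >= 4 && month <= 6 )); then
    # April–June: last year's window is still accepting late submissions
    echo "$((year - 1))/$year and $year/$((year + 1))"
  elif (( month >= 7 )); then
    echo "$year/$((year + 1))"
  else
    echo "$((year - 1))/$year"
  fi
}

open_windows 2022 5   # May 2022: both the 2021/2022 and 2022/2023 windows are open
open_windows 2022 10  # October 2022: only the 2022/2023 window is open
```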

---

`docs/testing.md`
# Testing strategy
- We use [RSpec](https://rspec.info/) and [Capybara](https://teamcapybara.github.io/capybara/)
- Capybara is used for our feature tests. These use the Rack driver by default (faster), or the Gecko driver (installation required) when the `js: true` option is passed for a test.
- Capybara is configured to run in headless mode, but this can be toggled by commenting out `app/spec/rails_helper.rb#L14`
- Capybara is configured to use the Gecko driver for JavaScript tests, as Chrome is more commonly used and so naturally more likely to be better tested, but this can be switched to the Chrome driver by changing `app/spec/rails_helper.rb#L13`
- Feature specs are written sparingly as they’re also the slowest; where possible a request spec is preferred, as this still tests a large surface area (route, controller, model, view) without the performance impact. Request specs are not suitable for tests that need to run JavaScript, or for testing that a specific set of interaction events triggers a specific set of requests (with high confidence).
- Test data is created with [FactoryBot](https://github.com/thoughtbot/factory_bot) wherever possible
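Assuming the standard Rails/RSpec layout described above, the suite can be run selectively from the app directory (paths are illustrative):

```bash
# Run the full suite
bundle exec rspec

# Run only the request specs (preferred over feature specs where possible)
bundle exec rspec spec/requests

# Run only the feature specs (slowest; some require the Gecko driver)
bundle exec rspec spec/features
```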

---

`docs/users.md`
# User roles
## External users
The primary users of the system are external data providing organisations: Local Authorities and Private Registered Providers (Housing Associations). These have two main user types:
- Data coordinators – administrators for their own organisation, can also complete logs
- Data providers – complete the logs
Additionally, there are Data Protection Officers (DPOs). At some organisations this is a separate role, but in our codebase it is modelled as an attribute of the user (i.e. a data coordinator or provider can additionally be a DPO). They are responsible for ensuring the organisation has signed the data sharing agreement.
## Internal users
- Customer support (help desk) – can administer all organisations
- ADD statisticians – primary consumers of the data collected via CDS/DAP