* feat: add 22/23 year validation
* feat: wip commit
* feat: update i18n
* feat: add one year gap exchange/completion validations
* tests: add tests for new validations
* test: add setup validations tests
* test: update factory to pass saledate validation
* feat: update seeds
* feat: update tests to reflect that sales logs shouldn't exist in the new service before 2022
* feat: merge with main, improve date behaviour
* refactor: cleanup
* feat: update tests
* feat: enforce saledate not after 22/23 collection year end
* feat: check date is valid
* feat: add hodate/saledate hard validation to saledate as well
* feat: add HandoverDate check to saledate question
* test: update
* db: update
* test: update
* feat: wip po updates
* feat: add soft_validation to setup
* feat: fix bug by making sale date soft validation optional
* feat: add tests
* test: update
* refactor: linting
* remove duplicate bulk validations
- validation was being performed at both the CSV level and the log level,
causing a duplicate validation error to appear
* bulk upload validation errors now store the message
- previously this stored just the error type, and we have no mechanism to
pipe error types back to user-readable error messages
* refactor bulk upload with extract method
* add bulk upload mailer test
* bulk upload emails users to fix logs on the website
* bulk upload email: change resume link
* refactor local variable name
* CYA summary list tweaks for bulk upload
- When checking answers for a bulk-upload-related log, the copy is
tweaked and made red as per the designs
* different CYA treatment for bulk upload
* bulk upload CYA missing answers always red
- whereas previously this only happened if the user was filtering logs via
a bulk upload
* create logs iff the log itself is valid
* do not create logs if the setup section is not complete
* bulk upload with 60% errors will not create logs
* extract magic number to constant
* add bulk upload absolute threshold error rate
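A hedged sketch of the threshold check these commits describe; the constant and method names are assumptions rather than the service's actual code:

```ruby
# Assumed constant: if this proportion of rows (or more) have errors,
# no logs are created from the upload.
ERROR_THRESHOLD_PERCENT = 60

def error_rate_too_high?(rows_with_errors_count, total_rows_count)
  return false if total_rows_count.zero?

  (rows_with_errors_count.to_f / total_rows_count) * 100 >= ERROR_THRESHOLD_PERCENT
end

error_rate_too_high?(3, 5) # => true (3 of 5 rows, i.e. 60%, have errors)
```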
* refactor with extract method
* fix bulk upload age data types
* bulk upload handles both with and without headers
- headers are from the spreadsheet template
- otherwise assume cell A1 is start of the dataset
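A hedged sketch of one way to handle both cases; the header heuristic and method names are assumptions:

```ruby
require "csv"

# Illustrative heuristic: if the first row comes from the spreadsheet template
# it contains text headings; otherwise cell A1 is already part of the dataset.
def body_rows(csv_string)
  rows = CSV.parse(csv_string)
  looks_like_header = rows.first&.first.to_s.match?(/[A-Za-z]{2,}/)
  looks_like_header ? rows.drop(1) : rows
end
```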
* add class to create logs from bulk upload
* create logs when processing bulk uploads
* remove bulk_upload_id from csv output
* create bulk upload logs only if all valid
- this will be changed later to allow for partial logs
- and only to create logs when a threshold has been met
* add method to blank invalid non setup fields
* bulk upload log creation blanks invalid fields
* fix incorrect logic for bulk upload renewal
* fix linting
* bulk upload log creation failures are logged to Sentry
* fix bulk upload line ending parsing
* extract bulk uploading csv parsing to class
* use csv parser in log creator
* change handle line endings mechanism
- we now replace all Windows line endings with Unix line endings
- this normalises the input, making parsing simpler
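In practice that normalisation can be a single substitution before parsing; a minimal sketch:

```ruby
# Replace Windows (CRLF) line endings with Unix (LF) ones before parsing,
# so the CSV parser only ever sees one style of line ending.
def normalised_line_endings(raw)
  raw.gsub("\r\n", "\n")
end
```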
* able to view lettings bulk upload errors
* fix linting
* call service correctly in test
* add bulk upload sales questions mapping
* appease linter
* bulk upload error shows correct question
- depending on the log type it will show the relevant question for the
field concerned
* improve namespacing of classes
* add job to process bulk uploads
* move validation from parser to model
* add validations for field_1
* add validation for field_4
* pending tests for field_4
* convert field_mapping to array of hashes
* validate nulls based on form question
* actually load forms when toggling between forms
* validate null for startdate
* row parser has access to bulk upload
* csv upload validates first form section
* add postcode validation
* Refactor error mappings for row parser
* Add unittype question
* Fix null error setting and add builtype
* add wchair to bulk upload
* Add beds to bulk upload
* Add joint to bulk upload
* Add startertenancy to the bulk upload
* Add tenancy for bulk upload
* Add declaration to the bulk upload
* Add age1 and age1_known to bulk upload
* add ages to bulk upload
* add sex1 to bulk upload
* add ethnic_group and ethnic to bulk upload
* add national to bulk upload
* add ecstat1 to bulk upload
* add military related fields to bulk upload
* add preg_occ to bulk upload
* add housingneeds to bulk upload
* add illness to bulk upload
* add layear to bulk upload
* add waityear to bulk upload
* add reason to bulk upload
* add prevten to bulk upload
* add homeless to bulk upload
* add previous postcode to bulk upload
* add reasonable preferences to bulk upload
* add allocations system to bulk upload
* add referral to bulk upload
* add net_income_known to bulk upload
* add hb to bulk upload
* add benefits to bulk upload
* add rent fields to bulk upload
* add hhmemb to bulk upload
* use 2022 csv fixtures for bulk upload
* fix renewal mapping for bulk upload
* placeholder test for bulk upload validation
* fix bulk upload mapping for homeless field
* fix leftreg mapping for bulk upload
* fix user associations in bulk upload tests
* add gender fields for bulk upload
* add ecstatN fields to bulk upload
* add #relatN fields to bulk upload
* extract old_visible_id in factory to trait
* map net_income_known correctly for bulk upload
* fix income bugs for bulk upload
* add unitletas to bulk upload
* add #rsnvac to bulk upload
* add #sheltered to bulk upload
* add illness fields to bulk upload
* add #irproduct_other to bulk upload
* infer renewal from rsnvac for bulk upload
* add #tenancyother to bulk upload
* add #tenancylength to bulk upload
* bulk upload earnings accepts pennies but rounds
* add #reasonother to bulk upload
* fix mapping of #ppcodenk for bulk upload
* add #household_charge to bulk upload
* add #chcharge to bulk upload
* add #tcharge to bulk upload
* add #supcharg to bulk upload
* add pscharge to bulk upload
* add #scharge to bulk upload
* use case statement for bulk upload allocation
* add offered to bulk upload
* add propcode to bulk upload
* add major repair fields to bulk upload
* add #voiddate to bulk upload
* support YY year format for bulk upload
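A hedged illustration of accepting two- or four-digit years; the century handling is an assumption:

```ruby
# Accept "23" as well as "2023" in year columns; two-digit values are
# assumed to fall in the 2000s for this sketch.
def full_year(value)
  year = value.to_s.strip.to_i
  year < 100 ? year + 2000 : year
end

full_year("23")   # => 2023
full_year("2023") # => 2023
```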
* test postcode strips whitespace for bulk upload
* add #la to bulk upload
* add previous la to bulk upload
* fix failing test
* remove duplicate line from rebase
* add first time social housing to bulk upload
* make methods private
* fix field_4 validation for bulk upload
- the null check was inverted by mistake
Co-authored-by: Kat <katrina@kosiak.co.uk>
* lockdown bulk upload routes
* able to view lettings bulk upload errors
* add error count to bulk upload results
* coverage for bulk upload filename on results
* group bulk upload errors by row on results
* able to view bulk upload sales results
* scope lettings and sales bulk upload results
* fix linting
* call service correctly in test
* add bulk upload sales questions mapping
* appease linter
* bulk upload error shows correct question
- depending on the log type it will show the relevant question for the
field concerned
* use local disk for bulk upload for dev env
- this saves the need to connect to S3 to play with bulk upload in the dev
environment
* improve namespacing of classes
* add job to process bulk uploads
* use local disk storage for dev file upload
* fix test
* use inline active job queue_adapter for dev
* use test active job queue adapter for test env
* remove rubocop violation
* delete bulk upload from disk after processing
* populate errors with cell, row + metadata
* update error message with something meaningful
* shim in sales validator
* able to parse sales bulk uploads
* change migration to add purchase_code to errors
* bulk upload error component renders purchaser code
- when a sales log
* populate purchaser_code for bulk upload errors
- when log type is sales
* remove superfluous private method
* Validate that the user belongs to either the managing or owning organisation
* do not reset created_by and remove user_organisation_chosen?
* Do not default managing organisation to owning organisation
* validate user belongs to organisation
* update tests to specify created_by
* clear created_by for support users
* refactor
* typo
* rebase lint
* Add `updated_by` to logs to track who was the last person to update a log
- Reset `created_by` automatically when a form is updated and the owner does not belong to the managing or owning organisation
* move reset_invalidated_dependent_fields!, update schema file and fix tests
Co-authored-by: James Rose <james@jbpr.net>
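A hedged sketch of the `created_by` reset described two commits above; the callback name and the organisation columns are assumptions rather than the app's actual code:

```ruby
class LettingsLog < ApplicationRecord
  belongs_to :created_by, class_name: "User", optional: true # assumed association

  before_save :reset_created_by_if_outside_organisations

  private

  # Illustrative only: clear created_by when the current owner belongs to
  # neither the owning nor the managing organisation of the log.
  def reset_created_by_if_outside_organisations
    return if created_by.nil?
    return if [owning_organisation_id, managing_organisation_id].include?(created_by.organisation_id)

    self.created_by = nil
  end
end
```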
* Mark log as impacted by deactivation when location is deactivated
* Display affected logs in the table
* Route affected logs to tenancy start date question
* Update routes to get the tenancy start date page from form
* update next_page_redirect_path
* rename column
* Fix tests
* Add next_unresolved_page_id to pages
* Update unresolved when the log is corrected
* Mark logs as unresolved when scheme gets deactivated
* display correct content when there are no unresolved logs
* mark logs resolved after the user leaves check answers page
* Display link in success banner and reset banner when the link is clicked
* display inset hint text for unresolved log questions
* Display unresolved logs banner
* update banner message
* Persist the link in the banner
* update inset text
* Update success banner text
* Add unresolved and created_by scopes
* rename method
* add unresolved_log_redirect_page_id to form, plus typo fix and route
* Add UnresolvedLogHelper and extract flash notice message
* pluralize and return early
* remove flash[:notice] = nil
* keep it consistent for sales logs
* Extract unresolved path and fix a link
* extract resolve method and fix attribute name
* update path
* typo
We import XML files that define logs from the previous CORE service. For an unknown reason, some of these files don't include the required namespace declarations to parse `user` values. Instead of changing the source, this change introduces a new method that forces the namespace declaration for user fields.
These fields are set using IDs from the previous CORE service. There was an incorrect assumption that these fields were integers so we were casting them as such. We have discovered that these fields are strings (e.g., `027`) so this change adjusts our types.
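In other words, the integer cast is lossy for identifiers with leading zeros:

```ruby
"027".to_i # => 27, the integer cast silently drops the leading zero
"027"      # => "027", kept intact when the attribute type stays a string
```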
We were previously blocking the import of locations that belonged to schemes where the end date was before the current date. This meant that we were unable to import logs that are associated with schemes that _were_ valid, but have since expired.
We import XML files that define logs from the previous CORE service. For an unknown reason, some of these files don't include the required namespace declarations to parse `meta` values. Instead of changing the source, this change introduces a new method that forces the namespace declaration for meta fields.
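A hedged illustration of one way to force a missing namespace declaration before parsing; the root element name, namespace URI, and element names here are assumptions, not the importer's actual values (the same approach applies to the `user` fields above):

```ruby
require "nokogiri"

# Illustrative only: add an xmlns:meta declaration to the root element if the
# source file omitted it, so that meta-prefixed fields can be resolved.
def force_meta_namespace(xml_string)
  return xml_string if xml_string.include?("xmlns:meta")

  xml_string.sub("<form", '<form xmlns:meta="https://example.org/meta"')
end

doc = Nokogiri::XML(force_meta_namespace(File.read("log.xml")))
doc.at_xpath("//meta:status", "meta" => "https://example.org/meta")
```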
Only import fields from the old CORE service that are logically valid. If a log is `saved` or `submitted-invalid` then it might have fields that are logically invalid at import. We can safely blank these out for the user to re-input.
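A hedged sketch of blanking logically invalid, non-setup answers at import time; `setup_field?` and the overall flow are assumptions:

```ruby
# Illustrative only: run validations, then clear any non-setup attribute that
# failed so the user can re-enter it in the new service.
def blank_invalid_non_setup_fields!(log)
  return if log.valid?

  log.errors.map(&:attribute).uniq.each do |attribute|
    next if setup_field?(attribute) # assumed helper naming the setup-section fields

    log.public_send("#{attribute}=", nil)
  end
end
```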
* Remove an unused ethnic_other column
* remove sale_completion_date from lettings logs
* Add collection start year
* Remove sale_or_letting column
* Rename rent_type to rent_type_detail in the export
* Format dates
* refactor
* Fix test
* Set age and relat to nil when inconsistent
* Set homeless and reasonable preference due to homelessness to nil if inconsistent
* Age not answered case
* Refused is also valid
* Check relationship has been answered
While migrating users from the previous service to the new one we discovered that email addresses are not unique in the previous service. This means that one user in the new service might relate to multiple users in the previous service.
This change:
- Adds a new LegacyUser model that can belong to a User. Each LegacyUser model has one old_user_id that corresponds to the user ID in the legacy service.
- Updates the user import service so that we create this association for new users
- Creates a Rake script to backfill the association for existing users
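A minimal sketch of that association; the uniqueness constraint is an assumption:

```ruby
class LegacyUser < ApplicationRecord
  belongs_to :user

  # old_user_id holds the user's ID in the previous CORE service.
  validates :old_user_id, presence: true, uniqueness: true # uniqueness assumed
end

class User < ApplicationRecord
  has_many :legacy_users
end
```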
* Add previous, current and next forms to form handler
* Add current, previous and next sales forms to form handler
* Implement current_lettings_form, current_sales_form and store year and form type in form
* refactor lettings_forms
* Use current, previous and next forms in lettings log model
* Use current, previous and next forms in sales log model
* use current, previous and next forms in csv service
* Remove "startyear_endyear" forms from form handler
* Remove name from form initializer and add an optional start year
* refactor get_form
* update csv test, fix form initialize
* rebase fix
* Refactor form_name_from_start_year method out
* remove unused variable
* fix typo, add date tests
* rebase, fix tests
* add comment to before test block
* Change the FormHandler back to only contain the form objects
* extract name
* Add abstract log class and sales log class
Created a parent Log class for the sales log and lettings log. Anything common
to both sales and lettings can live in the parent class. As the sales log
functionality is built up, any commonalities with the lettings log can be
extracted into the parent Log class. The sales log model is set up without a
JSON form; instead the form is defined in code, like the setup section of the
lettings log.
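A minimal sketch of that structure, with the shared parent marked abstract; the `created_by` association here is an assumption:

```ruby
# Anything common to lettings and sales logs lives in the shared parent.
class Log < ApplicationRecord
  self.abstract_class = true

  belongs_to :created_by, class_name: "User", optional: true # assumed association
end

class LettingsLog < Log
end

class SalesLog < Log
  # The sales form is defined in code rather than JSON, like the lettings
  # setup section.
end
```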
* update sales logs controller
* update lettings controller specs
* update filter method name
* update organisations controller
* use lettings method
* Add deleted tests back
* lint
Co-authored-by: Kat <katrina@madetech.com>
Co-authored-by: Kat <kosiak.katrina@gmail.com>
* Replaced log CSV direct download with email
* Tidy up authorization of organisations controller
- We already have a method to authenticate the scope of the user, so we can reuse that.
* Use Rails routes instead of absolute paths for CSV download links
* Introduce base NotifyMailer to abstract away shared Notify mail functionality
* Fix mailer spec name
* Add worker instance to PaaS manifest
* Add CSV download bucket instance name into environment
* Update tests for improved search term handling
* Fix download mailer tests
* Clarifying comments
Co-authored-by: natdeanlewissoftwire <nat.dean-lewis@softwire.com>
Co-authored-by: James Rose <james@jbpr.net>
* feat: display label columns for la and prevloc, and show codes for previous la and prevloc fields
* refactor: combine duplicate hash keys
* test: add new columns
* test: add new columns post merge
* tests: update
* feat: relocate redundant methods
* Add BOMs before CSV info
* add BOM to tests
* DRYing
* remove added blank line
* add scheme and location columns to exported csv logs, remove scheme_id and location_id
* reformat
* update tests
* linting
* linting
* use delegate to simplify code
* update column names in expected csvs for tests
* update to use csv_case_log_service
* update tests
* delegate scheme_owning_ and scheme_managing_organisation_names
* update tests
* update spec variable names
* feat: remove scheme_id, location_id from support users' exports as well
* fix: revert postcode change from other branch