* Copy 23 BU files to 24
* Renumber bulk upload fields for 2024
* Add prepare your file
* Update max columns
* Update fields in first_record_start_date
* Update managing org
* Rebase changes
* Validate that the correct template for the year is used for sales (with headers); see the sketch after this list
* Check the correct template is being used without headers
* Check the correct template is used for lettings
* Update csv parser on sales
* Remove redundant methods
* Extract form years
* Reverse year check method
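
A minimal sketch of the template-year check these commits describe, assuming the year can be inferred from the template's column count; `COLUMN_COUNTS_BY_YEAR`, its values, and `template_matches_year?` are illustrative, not the service's actual code.

```ruby
# Assumed column counts per collection-year template (placeholder values).
COLUMN_COUNTS_BY_YEAR = { 2023 => 134, 2024 => 142 }.freeze

# True when the row width matches the template for the selected year.
def template_matches_year?(row, year)
  row.size == COLUMN_COUNTS_BY_YEAR[year]
end

template_matches_year?(Array.new(142, ""), 2024) # => true
```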
# Context
- https://digital.dclg.gov.uk/jira/browse/CLDC-2344
- The reported bug and these changes do not match 100%
- The reported bug should already have been resolved by an earlier change, since the error message that was reported has already been removed from the system
- These changes remove the check from sales, plus a few minor tidy-ups
# Changes
- Remove dead code around error threshold which is no longer used or needed
- Invert the negative predicate to a positive one, `incorrect_field_count` => `correct_field_count` (see the sketch after this list)
- Remove `validate_min_columns` from sales bulk upload
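
A hypothetical before/after for the predicate inversion; the real method lives in the bulk upload validator and its exact body may differ.

```ruby
# Before: a negative predicate forces double negatives at call sites,
# e.g. `unless incorrect_field_count?(...)`.
def incorrect_field_count?(row, expected)
  row.size != expected
end

# After: the positive form reads naturally, e.g. `if correct_field_count?(...)`.
def correct_field_count?(row, expected)
  row.size == expected
end
```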
# Context
- https://digital.dclg.gov.uk/jira/browse/CLDC-2316
- Implement bulk upload sales for new collection year 2023
- This is a first-pass implementation and will probably have some bugs, which we can address over time
# Changes
- Add CSV parser for sales 2023 to handle CSV structure
- Tweak collection window validation so the error is now contextual to the year selected for upload
- Handle arbitrary ordering of CSV columns (see the sketch after this list)
- Fix ordering of errors in the report; they are now ordered by cell
- Add an `Upload your file again` link styled as a button on the error report to match the lettings experience
- Update tooling to convert logs to 2023 CSV rows, with support for random column ordering
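
A sketch of the two parsing behaviours above, arbitrary column order and cell-based error ordering; `SalesCsvSketch`, `field`, and `cell_for` are illustrative names, not the actual parser API.

```ruby
require "csv"

class SalesCsvSketch
  def initialize(csv_text)
    @table = CSV.parse(csv_text, headers: true)
  end

  # Columns may appear in any order: look values up by header name, not index.
  def field(row_index, header)
    @table[row_index][header]
  end

  # Spreadsheet-style cell reference for an error, used to sort the report.
  def cell_for(row_index, header)
    col = @table.headers.index(header)
    "#{column_letter(col)}#{row_index + 2}" # +2: 1-based rows plus header row
  end

  private

  # Convert a zero-based column index to a column letter (A, B, ..., AA).
  def column_letter(index)
    letters = ""
    loop do
      letters = ((index % 26) + "A".ord).chr + letters
      index = index / 26 - 1
      break if index.negative?
    end
    letters
  end
end

parser = SalesCsvSketch.new("field_2,field_1\nb,a\n")
parser.field(0, "field_1")    # => "a", despite the swapped column order
parser.cell_for(0, "field_1") # => "B2"
```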
# Known issues
- There seem to be some issues with how UPRN is handled when the UPRN cannot be validated.
- For the above, I think there is a dependency on https://github.com/communitiesuk/submit-social-housing-lettings-and-sales-data/pull/1570, which should clear any errored fields so users can continue to create logs and fix them within the service
* feat: add validation for number of field labels
* feat: check field labels are numbers
* feat: remove validations for number of columns
* refactor: rename non_blank_fields_count to valid_field_numbers_count
* test: add functions to generate custom field labels/values
* test: that extra invalid field labels don't cause issues
* test: that removing a valid field label reduces count by 1
* test: that adding a valid field label increases count by 1
* refactor: rename validate_fields_count and wrong_field_count
* feat: add validation that max col count not exceeded when no headers
* fix: convert numbers to strings in default_field_numbers
* feat: add leniency to max cols count, in case of 1 extra col (see the sketch after this list)
* test: explicitly set year of bulk upload to be 2022
* test: add/update tests in validator_spec
* chore: lint
* test: remove tests from csv_parser_spec that were moved to validator_spec
* feat: update 2022 csv_parser to work with new validations
* refactor: define number of valid 2022 fields in one place
* refactor: remove redundant headers definition
* feat: update 2022 csv parser to have col flexibility like 2023
* test: for validator 2022 as well as 2023
* test: simplify 2022/2023 logic and improve layout
* chore: lint
* test: ensure context descriptions start with "when"
* refactor: check fields/columns count within csv parser, not validator
* test: update 2022 csv parser tests to work like 2023
* chore: lint
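
A sketch, with assumed names and an illustrative maximum, of the checks this list adds: field labels must be numeric, and the column count may exceed the maximum by at most one (one plausible cause being a trailing empty column).

```ruby
MAX_COLUMNS = 134 # illustrative, not the real template width

# Count labels that look like field numbers ("1", "2", ...), matching the
# valid_field_numbers_count rename above.
def valid_field_numbers_count(labels)
  labels.count { |label| label.to_s.match?(/\A\d+\z/) }
end

# Allow one extra column beyond the maximum, per the leniency commit.
def too_many_columns?(column_count)
  column_count > MAX_COLUMNS + 1
end

valid_field_numbers_count(["1", "2", "banana"]) # => 2
too_many_columns?(135) # => false, within the one-column leniency
too_many_columns?(136) # => true
```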
* refactor questions from validator to row parser
* able to switch between bulk upload parsers (see the sketch after this list)
- depending on which year we are processing
* spec tooling to support bulk upload multi year
* row parser now has year in namespacing
* add static data for 2023 row parser
* add placeholder to log to csv for specs
* bulk upload arbitrary 23/24 column ordering works
* bulk upload supports 23/24 without headers
* bulk upload 23/24 supports BOM + invalid chars
* dupe tests
* port 23/24 attributes_for_log
* port 23/24 bulk upload validations
* force crossover period
* tweak max permitted columns
* able to return column for given field
* work out column for field for errors
* add field_4 as 23/24 setup field
* remove duplicate method
* map schemes and locations correctly
* handle arbitrary number of header rows
* add missing fields to bulk upload support
* fix bulk upload age data types
* bulk upload handles both with and without headers (sketched after this list)
- headers are from the spreadsheet template
- otherwise assume cell A1 is the start of the dataset
* add class to create logs from bulk upload
* create logs when processing bulk uploads
* remove bulk_upload_id from csv output
* create bulk upload logs only if all valid
- this will be changed later to allow for partial logs
- and to create logs only when a threshold has been met
* add method to blank invalid non-setup fields (sketched after this list)
* bulk upload log creation blanks invalid fields
* fix incorrect logic for bulk upload renewal
* fix linting
* bulk upload log creation failures log to Sentry
* fix bulk upload line ending parsing
* extract bulk uploading csv parsing to class
* use csv parser in log creator
* change line ending handling mechanism
- we now replace all Windows line endings with Unix line endings (sketched below)
- this normalises the input, which keeps parsing simpler
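
A minimal sketch of the line-ending change in the last item: rewrite Windows (CRLF) endings as Unix (LF) before parsing, so the rest of the pipeline only ever sees one form.

```ruby
def normalise_line_endings(text)
  text.gsub("\r\n", "\n")
end

normalise_line_endings("field_1,field_2\r\n1,2\r\n") # => "field_1,field_2\n1,2\n"
```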
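For switching between bulk upload parsers by year, a sketch with hypothetical class names, assuming one row parser namespace per collection year as in "row parser now has year in namespacing".

```ruby
module BulkUpload
  module Year2022; class RowParser; end; end
  module Year2023; class RowParser; end; end

  # Choose the row parser for the collection year being processed.
  def self.row_parser_class(year)
    case year
    when 2022 then Year2022::RowParser
    when 2023 then Year2023::RowParser
    else raise ArgumentError, "no row parser for collection year #{year}"
    end
  end
end

BulkUpload.row_parser_class(2023) # => BulkUpload::Year2023::RowParser
```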
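For handling uploads both with and without headers, a sketch that assumes header rows can be recognised by a marker cell from the spreadsheet template; `TEMPLATE_HEADER_MARKER` is an assumption, not the real marker.

```ruby
TEMPLATE_HEADER_MARKER = "Field number" # assumed first cell of template header rows

# Drop any leading header rows copied from the template; if none are present,
# cell A1 is treated as the start of the dataset.
def data_rows(rows)
  rows.drop_while { |row| row.first.to_s.start_with?(TEMPLATE_HEADER_MARKER) }
end

data_rows([["Field number", "1", "2"], ["val1", "val2", "val3"]]).size # => 1
data_rows([["val1", "val2", "val3"]]).size                             # => 1
```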
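And for blanking invalid non-setup fields before log creation, a sketch with a hypothetical field shape: errored values on non-setup fields are cleared so the log can still be created, while setup fields are left untouched because a log cannot exist without them.

```ruby
Field = Struct.new(:name, :value, :setup, :errored) # hypothetical shape

def blank_invalid_non_setup_fields!(fields)
  fields.each do |field|
    field.value = nil if field.errored && !field.setup
  end
end

fields = [
  Field.new("field_4", "2023-04-01", true, false), # setup field, kept
  Field.new("field_30", "banana", false, true),    # errored non-setup, blanked
]
blank_invalid_non_setup_fields!(fields)
fields.last.value # => nil
```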