There are two types of submissions:
- Nightly prediction submissions (optional). Teams have the opportunity to submit a daily result that we score to compute a daily leaderboard. It is not mandatory, but it will help you gauge where your code stands relative to other teams.
- The final code submission. Code is reviewed for readability and reproducibility as we finalize the top three winning teams.
In both cases, submissions must be made via the team's dedicated environment. Please verify that your environment is working properly.
Accessing the Compute Environment
Once all the team members have successfully signed the NDA, each team will be granted access to a Jupyter server to develop and test code. Each team will be assigned unique credentials to be shared among the team members.
Each team member will receive an email with credentials and instructions to connect to the Jupyter server.
Please do not share connection credentials with anybody outside your team, and do not post code, credentials, or data on public websites. Amazon scans public websites (Stack Overflow, Quora, GitHub, etc.) on a daily basis and will contact us immediately if it finds user credentials or data there.
Daily Prediction Submission
Each team is invited to make a daily submission of their predictions on the evaluation dataset. Every night an automated job will take the latest submission (if present), run the scoring, and populate the leaderboard on the TracHack website the next morning.
A team can make a submission for the day by placing their predictions as a CSV file, named with the submission date in the YYYY-MM-DD format, inside the submission folder within their environment. For example, the file submission/2022-04-01.csv will contain the team's submission for April 1, 2022. While teams are not required to submit every day, regular submissions are very useful for getting feedback on how the team is progressing.
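As a minimal sketch of the naming convention above, the date-stamped filename can be generated automatically so it always matches the required YYYY-MM-DD format (the two customer IDs below are taken from the sample data; your own predictions DataFrame would replace them):

```python
from datetime import date
from pathlib import Path

import pandas as pd

# Hypothetical predictions with the required columns for TracHack 22.1.
predictions = pd.DataFrame(
    {
        "customer_id": [
            "7e3881ca33cb03b77d40dde9288c5f7f1cfff11c",
            "fc75767d4287c23bf0ee6e87366e41e51c25926f",
        ],
        "nps_class": [0, 2],
    }
)

# date.today().isoformat() yields YYYY-MM-DD, e.g. submission/2022-04-01.csv.
out_path = Path("submission") / f"{date.today().isoformat()}.csv"
out_path.parent.mkdir(exist_ok=True)
predictions.to_csv(out_path, index=False)
```

Writing with `index=False` keeps the file to exactly the two required columns.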
TracHack 22.1: For TracHack 22.1, the CSV must contain comma-separated values with the header customer_id,nps_class. It should consist of exactly one entry for each customer from the evaluation dataset with the predicted nps_class, where 0 is "detractor", 1 is "passive", and 2 is "promoter".
customer_id,nps_class
7e3881ca33cb03b77d40dde9288c5f7f1cfff11c,0
fc75767d4287c23bf0ee6e87366e41e51c25926f,2
d09a45cb0b81980231746918ec7f772e1f7f069d,2
35929687b68cc0bc8f38bd622b4cd9452effb3b8,2
e6141d7f5a570ff80b8d8f2c63ff941210443d93,0
46e362663bf795ce33c6e4a601be8a9f4a7f41de,1
d3dbdf9fbf667b6f7af02f84f06fa56e86b7d401,0
d95c7c38f8056b533e28bb6643c0d71400401a7d,1
f39fe6c4244f77b3b6364124aab2438531cebf7b,2
3a9b2d6fdaaa390330815f0384513fd00244263e,0
etc.
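If your model predicts the NPS label as a string, the mapping to the required integer class can be kept in one place. This is a sketch, not part of the official submission tooling; the `encode_nps` helper name is ours:

```python
# Mapping from predicted NPS label to the integer class required in the CSV.
NPS_CLASS = {"detractor": 0, "passive": 1, "promoter": 2}

def encode_nps(label: str) -> int:
    """Convert a predicted NPS label to its submission class (0, 1, or 2)."""
    return NPS_CLASS[label.lower()]
```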
TracHack 22.2: For TracHack 22.2, the CSV must contain comma-separated values with the header customer_id,ebb_eligible. It should consist of exactly one entry for each customer from the evaluation dataset with the predicted ebb_eligible flag: either 1 (EBB eligible) or 0 (not EBB eligible).
customer_id,ebb_eligible
f51707b140d8fa3b0d6859c49112f60357131de7,1
ac774c4943118044e3ddbee2b36b141afeb1f7a0,0
e0c8a12f5daed65cfe2d433301fcbb629327368f,0
6675321499dd530d3b4390e271310503884e36f9,1
704f7c7d12f01bd6c611d04e9a1cbbd4440ebb87,1
e0de51552bc30f3922a32bc7b3fedb88e817f8cf,0
2d41975b9822abe3da1e96701fa3fbb8c64985e8,1
48e23686428c704d10fe7d937f218b2f40055bd8,0
2249549952a441a7c54fac4819f322fbe39356c5,0
c7527cd7fd2c98f1973e60d62ce4c1eb67ddd3ef,1
etc.
NOTE: You must make a prediction for ALL customer_ids in the evaluation dataset for the submission to be valid.
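Before placing a file in the submission folder, it is worth checking the completeness requirement above locally. The sketch below assumes the evaluation dataset is available as a CSV with a customer_id column; the `validate_submission` helper is ours, not part of the TracHack tooling:

```python
from io import StringIO

import pandas as pd

def validate_submission(submission_csv, eval_csv, target_col):
    """Raise AssertionError unless the submission has the right header, no
    duplicate customers, and a prediction for every evaluation customer."""
    sub = pd.read_csv(submission_csv)
    assert list(sub.columns) == ["customer_id", target_col], "unexpected header"
    assert not sub["customer_id"].duplicated().any(), "duplicate customer_id rows"
    missing = set(pd.read_csv(eval_csv)["customer_id"]) - set(sub["customer_id"])
    assert not missing, f"{len(missing)} evaluation customers have no prediction"

# Tiny in-memory example; real runs would pass file paths instead.
validate_submission(
    StringIO("customer_id,ebb_eligible\na,1\nb,0\n"),
    StringIO("customer_id\na\nb\n"),
    "ebb_eligible",
)
```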
Final Code Submission
When it is time to get ready for the final submission for TracHack, there are a couple of things you should do:
1. Submit your final predictions as a CSV file in the submission folder, named with the date in the usual format plus a -final suffix: yyyy-mm-dd-final.csv.
2. Consolidate all of your code (data prep, feature selection, model training and prediction, etc.) into a single Jupyter notebook, call it mlcode.ipynb, and save it inside the code folder. It is VERY important that your code reproduces the submission: we will use that notebook to regenerate your submitted predictions, and if the results don't match, your submission will not be considered valid.
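Before the deadline, you can check the reproducibility requirement yourself by rerunning mlcode.ipynb and comparing its output with the file you submitted. A minimal sketch (the `reproduces` helper and the idea of comparing with row order ignored are ours):

```python
from io import StringIO

import pandas as pd

def reproduces(submitted_csv, regenerated_csv):
    """Return True if two prediction files contain identical predictions,
    ignoring row order."""
    a = pd.read_csv(submitted_csv).sort_values("customer_id").reset_index(drop=True)
    b = pd.read_csv(regenerated_csv).sort_values("customer_id").reset_index(drop=True)
    return a.equals(b)
```

For example, `reproduces("submission/yyyy-mm-dd-final.csv", "rerun.csv")` should be True before you finalize, where `rerun.csv` is a hypothetical name for the notebook's regenerated output.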