Reviewer: Amy Heather


@MHowells Thank you for inviting me to review your code! I hope this is helpful. 🙂
📋 Review Summary
Nice work Matthew!
You've done a fantastic job making your code open and accessible, such as by making it a public repository with a permissive licence. It's great that you have your ORCID there, so it's easy to link the work with you. The detailed dependency management is great, as are the docstrings you have used, which are clear and helpful.
There were a few build/run issues, and some extra steps that could make your project even more reproducible and user-friendly; these are detailed below.
🏃 Running the code
First is some feedback from my attempt to build your environment and run the code.
Dependency management
Thank you for clearly specifying your Python version, providing a `requirements.txt` file with pinned dependencies, and including setup instructions in your README. These are all excellent!
When reconstructing your environment, I opted to use conda as this allowed me to control my Python version. I encountered a few minor issues which may be helpful to address:
- Python version compatibility. Matplotlib 3.9.4 requires PySide6 ≥6.7.2, which in turn requires Python ≥3.10. As a result, I upgraded from Python 3.9.7 (as stated in your documentation) to Python 3.10.* (specifically, 3.10.17).
- Missing dependencies. I found that `pexpect` and `decorator` were required for the kernel to start, and that `scipy` was listed as an import but not in the environment file.
To create and activate the conda environment, I used the following commands:

```shell
conda env create --file environment.yaml
conda activate hybrid-sim-model
```
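For reference, an `environment.yaml` along these lines would capture the fixes above (this is a sketch - the exact pins should come from your `requirements.txt`):

```yaml
# Illustrative environment.yaml sketch reflecting the issues noted above.
# Pins here are examples; mirror the full pinned list from requirements.txt.
name: hybrid-sim-model
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pip
  - pip:
      - matplotlib==3.9.4
      - scipy
      - pexpect
      - decorator
```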
💡 Recommendations:
Code troubleshooting
When initialising `SD()`, a `TypeError` occurred due to a missing `gatekeeping_type` argument:

```
TypeError: SD.__init__() missing 1 required positional argument: 'gatekeeping_type'
```
I tested both `gatekeeping_type` values (`proportion` and `fixed`) and confirmed that `proportion` aligns with the arrival rate results in the chart in your `.ipynb` file on GitHub.
I also amended the docstrings in `SD` accordingly: I added a `gatekeeping_type` docstring (based on that in `lambda_function`) and removed the old `birth_rate` docstring.
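As a minimal stand-in sketch of the fix (the real `SD` class in your repository takes more arguments than shown here), the idea is simply to make the new argument explicit at construction:

```python
# Minimal stand-in sketch, not the repository's actual SD class,
# which takes additional arguments.
class SD:
    def __init__(self, gatekeeping_type):
        # Either "proportion" or "fixed"; "proportion" matched
        # the arrival rate chart in the notebook on GitHub.
        self.gatekeeping_type = gatekeeping_type

model = SD(gatekeeping_type="proportion")
```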
After applying these fixes, everything ran smoothly and I was able to successfully reproduce the charts and results from your repository, which is great to see! 🎉
💡 Recommendations:
🌟 STARS
This section is a review against the STARS framework, which focuses on making code and materials reusable (i.e. can others run the code and then use and adapt it to their own contexts). This framework is described in:
Thomas Monks, Alison Harper, and Navonil Mustafee. 2024. “Towards Sharing Tools and Artefacts for Reusable Simulations in Healthcare.” Journal of Simulation 0 (0): 1–20. https://doi.org/10.1080/17477778.2024.2347882.
I've included some detailed feedback after the checklist.
✅ Checklist
| Item | Status |
|------|--------|
| **Essential components** | |
| Open licence | ✅ |
| Dependency management | ✅ |
| FOSS model | ✅ |
| Minimum documentation | ⬜ |
| ORCID | ✅ |
| Citation information | ⬜ |
| Remote code repository | ✅ |
| Open science archive | ⬜ |
| **Optional components** | |
| Enhanced documentation | ⬜ |
| Documentation hosting | ⬜ |
| Online coding environment | ⬜ |
| Model interface | ⬜ |
| Web app hosting | ⬜ |
Minimum documentation
Within the essential STARS criteria, it is recommended that minimum documentation covers:
- (a) what the model does
- (b) how to install and run the model to obtain results
- (c) how to vary parameters to run new experiments
Your current `README.md` is fab! The overview is great, and the dependency instructions, funding statement, and author ORCID are all brilliant.
We would suggest expanding it to also cover points (a)-(c), to help make it easy for someone coming across your repository to understand the model and how to use it.
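As an illustration (the section names here are just suggestions), an expanded README outline might look like:

```markdown
<!-- Illustrative outline covering points (a)-(c); headings are suggestions -->
## Model overview
What the hybrid SD/DES model represents and what questions it answers.

## Installation and running the model
How to create the environment and execute the notebook to obtain results.

## Varying parameters
Which parameters can be changed, and where, to run new experiments.
```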
💡 Recommendations:
Citation information
We'd suggest including clear instructions in your `README.md` for how to cite your repository. For example:
Matthew Howells (2025) Hybrid Simulation Modelling for Orthopaedics. GitHub. https://github.com/MHowells/HybridSimModel.
You could also generate a `CITATION.cff` file. This is a text file that is recognised by GitHub and will generate a "Cite this repository" option in the about panel for your repository. There is a fabulous website, cff-init, to which you provide the relevant information, and it will generate the file for you.
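For illustration, a minimal `CITATION.cff` might look like this (a sketch - cff-init will produce a complete, validated version):

```yaml
# Illustrative CITATION.cff sketch; generate a full one via cff-init
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: Howells
    given-names: Matthew
title: "Hybrid Simulation Modelling for Orthopaedics"
repository-code: "https://github.com/MHowells/HybridSimModel"
```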
💡 Recommendations:
Code archiving
If you are able to, archiving your repository will ensure long-term preservation. Unlike GitHub, which does not guarantee permanent storage, archiving services provide persistent access and assign a DOI, making your code easily citable and discoverable. This could also be included in the citation section of the README.md - for example:
Matthew Howells (2025) Hybrid Simulation Modelling for Orthopaedics. Zenodo. {DOI}.
Examples of archives include Figshare and Zenodo. We often use Zenodo as it integrates nicely with GitHub. You simply log in to Zenodo using your GitHub account, go to My account > GitHub, and toggle your repository to be synced with Zenodo. Then, whenever you create a GitHub release, it will be pushed to Zenodo. Check out: https://tutorials.inbo.be/tutorials/git_zenodo/.
💡 Recommendations:
Optional STARS criteria
These are all "nice to have" features. They may take a bit more time and effort, so consider them if you have the interest and time.
Online coding environment. You already have a `.ipynb` and `requirements.txt`, so adding an online coding environment (like Binder) is relatively straightforward. This lets others run your code in the cloud with no set-up. See:
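As one concrete option (assuming the repository path above and Binder's standard badge convention), a launch badge in the README might look like:

```markdown
[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/MHowells/HybridSimModel/HEAD)
```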
Enhanced documentation and hosting. Tools like Quarto or Jupyter Book can be used to build interactive documentation. You can host docs freely via GitHub Pages. Check out:
Web apps. You could consider building a web app interface (e.g. with streamlit). See:
- Monks T, Harper A. Improving the usability of open health service delivery simulation models using Python and web apps. NIHR Open Res. 2023 Dec 15;3:48. doi: 10.3310/nihropenres.13467.1. PMID: 37881450; PMCID: PMC10593330.
💡 Recommendations:
🌟 Reproducibility
This section reviews against recommendations to support the reproducibility of healthcare discrete-event simulation models (i.e. can others run your code and get the same results as you). These recommendations are described in:
Amy Heather, Thomas Monks, Alison Harper, Navonil Mustafee, Andrew Mayne. 2025. On the reproducibility of discrete-event simulation studies in health research: an empirical study using open models. ArXiv. https://doi.org/10.48550/arXiv.2501.13137.
I've included some detailed feedback after the checklist - but for any duplicate items already covered above (e.g. including run instructions), these are not repeated below.
✅ Checklist
First are recommendations to support reproduction.
| Recommendation | Status |
|----------------|--------|
| **Set-up** | |
| Share code with an open licence (⭐) | ✅ |
| Link publication to a specific version of the code | N/A |
| List dependencies and versions | ✅ |
| **Running the model** | |
| Provide code for all scenarios and sensitivity analyses (⭐) | N/A |
| Ensure model parameters are correct (⭐) | N/A |
| Control randomness | ⬜ |
| **Outputs** | |
| Include code to calculate all required model outputs (⭐) | ✅ |
| Include code to generate the tables, figures, and other reported results (⭐) | ⬜ |
Next are recommendations to support troubleshooting and reuse (which come in handy when attempting to reproduce something, but it's not quite working!).
| Recommendation | Status |
|----------------|--------|
| **Design** | |
| Separate model code from applications | N/A |
| Avoid hard-coded parameters | ✅ |
| Minimise code duplication | ✅ |
| **Clarity** | |
| Comment sufficiently | 🟡 |
| Ensure clarity and consistency in the model results tables | ✅ |
| Include run instructions | ⬜ |
| State run times and machine specifications | ⬜ |
| **Functionality** | |
| Optimise model run time | N/A |
| Save outputs to a file | ✅ |
| Avoid excessive output files | N/A |
| Address large file sizes | N/A |
Link publication to a specific version of the code
I'm not aware of a publication, but if you do write one, it would be great to:
- Ensure the repository includes the exact code used to generate the figures and tables presented in the paper.
- Clearly label which outputs correspond to published results.
- Create versioned releases of the code (e.g., at submission, during review, and upon acceptance) to ensure reproducibility.
This is less relevant to your poster, which just had illustrative charts - though it could be nice to add a link to the poster, for reference.
💡 Recommendations:
Control randomness
Currently, the DES results vary slightly each time the code is run (as can be seen from the output table). Introducing seed control would allow for consistent, reproducible results across runs.
In `ciw`, this can be done by calling `ciw.seed()` (see the ciw docs).
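To illustrate the general pattern (a sketch using the standard library rather than your repository's code - in `ciw` the equivalent call is `ciw.seed(<value>)`):

```python
import random

# Fix the seed once, before the simulation runs, so repeated
# runs draw the same random numbers and give identical results.
random.seed(42)
first_run = [random.random() for _ in range(3)]

random.seed(42)
second_run = [random.random() for _ in range(3)]

assert first_run == second_run  # identical draws across runs
```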
💡 Recommendation:
Include code to generate the tables, figures, and other reported results
I was able to reproduce the charts in your `.ipynb` file on GitHub, which is excellent.
I wasn't sure how to generate the versions from the poster - including the code/parameters for those, if you have them (though I can see they were just illustrative), would be great. Ideally, each figure or table in your poster or publication should be traceable back to the exact code and data that produced it!
💡 Recommendation:
Comment sufficiently
Love the docstrings on the system dynamics components! They are fab. Would be great to include some for the classes in the discrete-event simulation components too.
💡 Recommendations:
State run times and machine specifications
Your code is very quick to run, so this is not super essential! However, it can still be nice to know from looking at the repository that "oh, this will be nice and quick" - and that, if someone else does find it running for a long time, something is wrong! As an example, a machine specification could read: "Intel Core i7-12700H with 32GB RAM running Ubuntu 24.04.1 Linux."
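A simple way to capture both pieces of information in the notebook (a sketch - the placeholder comment stands in for your actual model run):

```python
import platform
import time

start = time.time()
# ... run the simulation here (placeholder for the actual model call) ...
elapsed = time.time() - start

# Report run time alongside basic machine details for the README
print(f"Run time: {elapsed:.2f} s")
print(f"Machine: {platform.machine()}, {platform.system()} {platform.release()}")
```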
💡 Recommendations: