Retrospective analyses #675
Hi @rpwildermuth-NOAA, the retrospective setting just removes recent observed data from the likelihood without changing the range of years in the model. Thus, the forecast is applied after the original ending year of the model. If I'm understanding your question correctly, you would like to know, for instance, how a 1-year forecast applied to a model ending in 2022 compares to the dynamics estimated for a model that ends in 2023. To do that, you could write a script that iteratively changes the ending year of the model, which would cause the forecast to automatically apply to the subsequent year. I think the main difference between the two approaches would come from the harvest control rule being applied to set the catch in the forecast year, as opposed to using the true catch observed in that year. Let me know if I'm misinterpreting the question.
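For illustration, a rough sketch of that kind of loop is below. It assumes a recent r4ss version with `SS_read()`, `SS_write()`, and `run()` (names and arguments differ in older versions), and the directory names are hypothetical; observations after each new ending year would also need to be removed from the data list before the model will run cleanly.

```r
library(r4ss)

base_dir <- "base_model"    # hypothetical directory holding the full model
ending_years <- 2022:2019   # hypothetical ending years to test

for (endyr in ending_years) {
  new_dir <- file.path("endyr_runs", paste0("endyr_", endyr))
  dir.create(new_dir, recursive = TRUE, showWarnings = FALSE)

  # read starter, data, control, and forecast files as a list
  inputs <- SS_read(base_dir)

  # shorten the model period; data rows after the new end year would
  # also need to be dropped from inputs$dat before writing
  inputs$dat$endyr <- endyr

  SS_write(inputs, dir = new_dir, overwrite = TRUE)
  run(dir = new_dir, exe = "ss3", skipfinished = FALSE)
}
```

With the forecast file left in place, the forecast would then apply to the year after each shortened ending year, and each run's forecast estimates could be compared against the longer reference model.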
Hello,
I have a question about the implementation of retrospective peels in SS. I'm using the r4ss::retro() function to test the skill of retrospective forecasts informed by external environmental data. I understand that retrospective peels to create model runs for calculating skill (e.g., Mohn's rho) can be set up in the starter.ss file by setting a negative value for the retrospective year relative to the end year. This effectively cuts off any data between that retrospective year and the end year of the base reference assessment.
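For context, a minimal sketch of the workflow being described (directory names are hypothetical, and the exact arguments and summary function names may differ by r4ss version):

```r
library(r4ss)

# retro() copies the model, edits the retrospective-year setting in
# starter.ss for each peel (0, -1, ..., -5), and reruns SS
retro(dir = "base_model", years = 0:-5, exe = "ss3")

# read and summarize the peel runs (retro() writes them to
# base_model/retrospectives/retro0, retro-1, ..., retro-5)
retroModels <- SSgetoutput(
  dirvec = file.path("base_model", "retrospectives", paste0("retro", 0:-5))
)
retroSummary <- SSsummarize(retroModels)
```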
I understand that true retrospective model formulation is a bit more complicated because of blocks and recruitment formulations (see the discussion on r4ss retrospective calibration). What I would like to know is how calculations for the late/forecast period are structured, since the model seems to continue producing estimates out to the end year of the reference assessment. In my test application, using `retro()` and `SSgetsummary()` leads to time series of biomass and recruitment estimates from each retrospective peel for the full model period (e.g., `retroSummary[["recruits"]]` is a full data frame with no missing values for later years in the bottom right of the data frame).

I think this probably hasn't come up because the typical application of Mohn's rho compares the last assessment year of each peel to the reference assessment (lines 155-160 in my example). But our assessment specifies a 1-year forecast, which can be compared to the last assessment year (and before) of the reference assessment (lines 146-152 in the example). Using this skill test is contingent on the model using the forecast specified in the model files, rather than just extending the model dynamics throughout the full reference period. Is there a way to know which of these is happening? Is there a way to ensure that the retrospective year flag in the starter.ss file is doing what I'm expecting?
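For illustration, the 1-year-ahead comparison being described might be set up roughly as below, assuming `retroSummary` is an `SSsummarize()`-style object in which model 1 is the reference run (0-year peel) and the remaining models are the peels; the element and column names (`SpawnBio`, `replist1`, etc.) come from that summary structure and may differ in other setups.

```r
sb <- retroSummary[["SpawnBio"]]            # one column of estimates per model
base_endyr <- retroSummary[["endyrs"]][1]   # the retro setting does not change endyr

n_peels <- retroSummary[["n"]] - 1
skill <- do.call(rbind, lapply(seq_len(n_peels), function(i) {
  peel_terminal <- base_endyr - i   # last year with data in peel i
  fore_yr <- peel_terminal + 1      # the 1-year-ahead estimate being skill-tested
  data.frame(
    peel = -i,
    forecast_yr = fore_yr,
    peel_est = sb[sb$Yr == fore_yr, paste0("replist", i + 1)],
    ref_est = sb[sb$Yr == fore_yr, "replist1"]
  )
}))
skill
```

Whether `peel_est` here reflects the forecast settings in the model files or simply the model dynamics continued past the peeled years is exactly the ambiguity raised above.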
Thanks! ~ RW