Messages - Jonathan S. Clough

#1
Using SLAMM / Blank Map on SLAMM 6.7?
August 07, 2023, 08:04:39 AM
When creating your simulation, make sure you did not select "California categories" (raster classes 101-128, defined on page 43 of the Tech Doc) while still trying to use rasters with the traditional classes 1-26 (page 40).

https://warrenpinnacle.com/prof/SLAMM6/SLAMM_6.7_Technical_Documentation.pdf  (page 43 vs page 40)

If this is the problem with your simulation, you need to recreate it with "traditional categories" -- that is, select "no" when asked about the California categories.
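If you're not sure which scheme a given land-cover raster uses, checking its category codes will tell you.  Here is a minimal Python sketch, assuming the raster has been exported as an ESRI ASCII grid (the filename is a placeholder):

```python
import numpy as np

# Skip the six-line ESRI ASCII grid header (ncols, nrows, xllcorner,
# yllcorner, cellsize, NODATA_value) and read the cell values.
data = np.loadtxt("land_cover.asc", skiprows=6)

codes = np.unique(data[data >= 0])  # ignore negative NODATA values
if codes.max() > 100:
    print("California categories (classes 101-128):", codes)
else:
    print("Traditional SLAMM categories (classes 1-26):", codes)
```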

My apologies -- that part of the interface could be more user-friendly; the importance of this choice when starting a simulation should be emphasized.
#2
I'm sorry, that feature is not yet available.

Eventually I'd like to make SLAMM compatible with compressed GeoTIFF files, but that would likely require porting the whole application to a different development platform where those libraries are available.  The model is stuck on Delphi XE3 for now, until some funds become available for porting.
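In the meantime, one possible workaround is to convert a compressed GeoTIFF to an ESRI ASCII grid outside of SLAMM -- for example, with GDAL's gdal_translate -of AAIGrid input.tif output.asc -- and then load the resulting ASCII raster as usual.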
#3
Using SLAMM / Re: Model calibration
June 22, 2020, 08:28:52 AM
Hi:  The SLB files are in a SLAMM binary format, not readable by GIS software.  You need to either re-run and export to a non-binary format (ASCII rasters, readable by ESRI and QGIS) or convert the binary files.

To do the former, see the user's manual, pages 14-15, under "Data to Save" (the GIS file options on the file execution screen).

To convert the binary files, go to SLAMM File Setup and click the "Conv Binary Files to ASCII" button.  You can then drag an SLB file onto the interface and it will be converted to an ASCII raster.
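Once converted, the ASCII raster is plain text: a six-line header followed by the cell values.  As a rough illustration (the filename is a placeholder), it can be read in Python like so:

```python
import numpy as np

def read_ascii_raster(path):
    """Read an ESRI ASCII grid: six header lines, then the cell values."""
    header = {}
    with open(path) as f:
        for _ in range(6):
            key, value = f.readline().split()
            header[key.lower()] = float(value)
        data = np.loadtxt(f)
    return header, data

header, data = read_ascii_raster("converted_output.asc")
print(int(header["ncols"]), int(header["nrows"]), header["cellsize"])
```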

The output CSV is indeed in hectares (an embarrassing omission from the user's manual!).

Best regards -- Jonathan

#4
Using SLAMM / Re: Model calibration
June 16, 2020, 06:22:33 AM
The time-zero run is a very important step.  To the extent possible, its results should not differ from the model's initial conditions.  It tests that the conceptual model, the wetland coverage, the elevation data, and the tide range data are all consistent.  In the results, the initial condition is shown as "0" and the "time zero" result is shown as the first date of the simulation.

From the tech doc:
SLAMM can also simulate a "time zero" step, in which the conceptual model can be validated against the data inputs for your site. The time-zero model predicts the changes in the landscape given specified model tide ranges, elevation data, and land-cover data. Any discrepancy in time-zero results can provide a partial sense of the uncertainty of the model. There will almost always be some minor changes predicted at time zero due to horizontal off-sets between the land-cover and elevation data-sets, general data uncertainty, or other local conditions that make a portion of your site not conform perfectly to the conceptual model. However, large discrepancies could reflect an error in model parameterization with regards to tide ranges or dike locations, for example, and should be closely investigated.

I would suggest setting up the model to run for a couple of years (photo date, photo date + 1, photo date + 2) with minimal or zero SLR.  Any changes predicted should be understood as much as possible in terms of data error (issues with the tide model, DEM, or wetland coverage), lack of dike or seawall accounting, etc.  You should also examine the inundation frequency maps to ensure that the wetlands are being regularly wetted.  Consider using the results from the time-zero run as the initial condition for your projection runs, to ensure that the effects you see in the simulation come from the SLR signal and not from uncertainty or imprecision in your input data sets.
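To quantify what changed at time zero, one simple approach is to compare the initial-condition and time-zero category rasters cell by cell.  A minimal sketch, assuming both were exported as ESRI ASCII grids (the filenames are placeholders):

```python
import numpy as np

initial = np.loadtxt("initial_condition.asc", skiprows=6)
time_zero = np.loadtxt("time_zero.asc", skiprows=6)

changed = initial != time_zero
print(f"{changed.sum()} of {initial.size} cells changed at time zero")

# Tally category transitions to see where the model disagrees with the data.
for from_cat, to_cat in sorted({(a, b) for a, b in zip(initial[changed], time_zero[changed])}):
    n = np.sum((initial == from_cat) & (time_zero == to_cat))
    print(f"  {int(from_cat)} -> {int(to_cat)}: {n} cells")
```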

Sorry about the long delay in response.  -- Jonathan
#5
Yes, it's the first question you are asked when creating a new SLAMM simulation.

"Use California Categories?"  Answer "no" to use "classic" SLAMM categories.

#6
Here are some references:

Start   End     Rate                  Source
1900    2000    1.7 mm/year ± 0.5     IPCC 2007a §5.5.2.1, http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch5s5-5-2.html
1961    2003    1.8 mm/year ± 0.5     IPCC 2007a §5.5.2.1, http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch5s5-5-2.html
1993    2003    3.1 mm/year           Grinsted et al. (2009), historical data
1993    2003    3.3 mm/year           Grinsted et al. (2009), satellite altimetry

Also see http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch5s5-5-2-5.html

So maybe assume 1.8 mm/year from 1948-2003 and then 3.3 mm/year from 2003-2015?  That works out to something like 2.1 mm/year for 1948-2015.
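(For the record, the weighted average is (55 yr × 1.8 mm/yr + 12 yr × 3.3 mm/yr) / 67 yr ≈ 2.07 mm/yr, hence the 2.1 mm/year above.)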

There may be some newer references -- I haven't done a literature search recently.
#7
SLAMM often uses global future SLR estimates and then applies these locally based on the difference between local (RSLR) and global (eustatic) SLR.  [If the future SLR scenarios you are planning to use are localized (RSLR) then set these two parameters to the same value (e.g. 5.2 mm/year each) and move on.]

To estimate future RSLR from global SLR, SLAMM takes the difference between the local historic trend and the eustatic historic trend and assumes that difference remains constant into the future.

However, the eustatic trend value depends on the period of measurement.  You want to match the time period of the eustatic historic trend with the time period of the local historic trend.

What time period is the 5.2 mm/year based on?

The eustatic rate of SLR is estimated at 1.7 mm/year from 1900 to 2000, but it would be greater over 1970-2015, for example.

Here is another time I tried to explain these parameters:

Quote: The historic trend is assumed to be the [global eustatic SLR] plus or minus [local factors] such as subsidence, uplift, or other local meteorological conditions that could cause the difference.  If you are not using a local SLR scenario (i.e., running with a eustatic SLR scenario), then those [local factors] are assumed to remain consistent throughout the projection.

Therefore, if the local historical trend is 2 mm/year greater than the eustatic trend due to subsidence, then [local factors] is calculated as 2 mm/year, and the future SLR is predicted to be 2 mm/year greater than the eustatic SLR projections.

Historic Eustatic Trend = global eustatic SLR.  (According to the technical documentation, use a rate of 1.7 mm/yr from 1900 to 2000.)

What time period would be best to use for the Historic Trend?

The reason that parameter is there at all is to match the time period of the local historic trend.  Therefore, if your local data are from 1970-2000, you would probably want to use a historic eustatic trend higher than 1.7 mm/year.

And is it correct that the Historic Eustatic Trend is subtracted from the Historic Trend to identify the isostatic/local trend in SLR?

Yes, that is the whole point of those two parameters.  That is why their time periods should match if possible.

Say you had a local trend of 4 mm/year from 1960-2010, and data showing that the historic eustatic trend over 1960-2010 is 2.5 mm/year (a rough estimate off the top of my head).  Then SLAMM will assume that the 1.5 mm/year difference is due to local factors and should be added to any eustatic SLR scenario.

On the other hand, if you have local SLR projections, like many states are producing, then these two parameters should be set to the same value to indicate that you do not want to adjust the future SLR projections, as local factors have already been accounted for.  (For example, when using NY-specific SLR scenarios, we set both the historic trend and the historic eustatic trend to 1.7 mm/year.)
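In code form, the arithmetic amounts to the following (a minimal sketch of my understanding; the function and parameter names are for illustration only, not SLAMM internals):

```python
def adjusted_slr_rate(eustatic_projection, historic_trend, historic_eustatic_trend):
    """Adjust a eustatic SLR projection (mm/yr) for local factors.

    The local factor (subsidence, uplift, etc.) is the difference between
    the local and eustatic historic trends, measured over matching time
    periods, and is assumed to remain constant into the future.
    """
    local_factor = historic_trend - historic_eustatic_trend
    return eustatic_projection + local_factor

# Worked example from above: local 4.0 vs eustatic 2.5 mm/yr over 1960-2010
print(adjusted_slr_rate(10.0, 4.0, 2.5))  # 10 mm/yr eustatic -> 11.5 mm/yr locally

# Localized (RSLR) scenario: set both trends equal, so no adjustment is made
print(adjusted_slr_rate(10.0, 1.7, 1.7))  # -> 10.0 mm/yr
```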

Hope this is clear.  -- Jonathan
#8
That code predates my involvement, which started with SLAMM 3 in 1998.  Even if I had it, you would need to find a historical compiler to run it (probably some version of Borland Pascal).

However, the constructs that govern applying the model to large cell sizes remain in the model (the capability to represent multiple categories within a single cell; SLAMM currently tracks up to three land types per modeled cell).  However, as cell sizes tend to be small, model initialization is based on a one-category-per-cell assumption.

You would have to be a coder.  It seems to me that the simplest steps would be to:

increase the NUM_CAT_COMPRESS variable, which currently allows only 3 classes per cell.

Then the difficulty would be to initialize the model with minimum elevations, widths, and slopes for each category you wish to model in each large cell.

The mapping and GIS linkages wouldn't be particularly useful, as they only display the dominant category per cell.

It would take some work in the code -- I'm afraid this type of application is not supported at this time unless someone wants to pay for model development...

Best regards -- Jonathan
#9
Using SLAMM / Re: Backtrack past conditions?
April 03, 2020, 07:23:29 AM
Greetings:

Backtracking, also called "hindcasting," can be done, but it is often tricky due to the lack of high-quality historical elevation data.  Another significant issue is whether enough local SLR has occurred since the mid-1990s for the model to predict any significant change in marshes due to the SLR signal (which is the only real perturbation in a SLAMM simulation).

Please see this thread for some examples and let me know if you have additional questions:

http://warrenpinnacle.com/SLAMMFORUM/index.php?topic=293.msg937
#10
Using SLAMM / Re: DEM setup
February 07, 2020, 08:18:49 AM
Absent other information, you can assume that MSL (the average of continuous measurements, or an inferred continuous curve) equals MTL (the midpoint between MHHW and MLLW).

Therefore you can leave the MTL-NAVD88 parameter at zero.
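If you do have local tide-datum information (e.g., published datums for a nearby tide gauge), the parameter can be computed directly.  A minimal sketch with made-up values:

```python
# Tide datums relative to NAVD88, in meters (hypothetical values --
# substitute the published datums for your nearest tide gauge).
mhhw_navd88 = 0.65
mllw_navd88 = -0.72

# MTL is the midpoint between MHHW and MLLW.
mtl_navd88 = (mhhw_navd88 + mllw_navd88) / 2.0
print(f"MTL-NAVD88 parameter: {mtl_navd88:.3f} m")  # -0.035 m
```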

Do pay attention to the vertical accuracy of your elevation data, as that is an important consideration (is the data set precise enough to model the impacts of incremental changes in SLR?).

Good luck!
#11
Using SLAMM / Re: DEM setup
February 03, 2020, 08:55:41 AM
It has been a while since we have done any in-depth user training or tutorials.  Here is one done in Mexico in 2013.  These materials may be useful to you:

https://www.dropbox.com/sh/eozby193epijjyh/AACCxl1OzLaJow23J1kkiipea?dl=0

I believe that you are specifying a horizontal datum for your data.  What is the vertical datum of your elevations?  That is what must be converted to an MTL basis in some manner.

Best regards! -- Jonathan
#12
Hello:

The scale on which SLAMM can be applied is an interesting question.

Early versions of the model (mid-1980s) were designed to be run with very large cell sizes (500 m x 500 m, for example).  Within those cells, strips of wetlands and dry lands were classified and characterized by their widths; each wetland and dry-land strip had an elevation and slope, among other characteristics.

Set up in this manner (this was pre-GIS, so from what I have heard the inputs were entered by hand), an application covering 20% of the coastal US was prepared, along with a report to Congress:
http://risingsea.net/papers/federal_reports/rtc_park_wetlands.pdf

Newer versions of the model have kept some of that architecture, but with the advent of GIS mapping, smaller cell sizes were desired.  In order to conserve memory, the maximum number of classes per cell is down to 3 in the latest version, so it would not support larger cell sizes.  On the other hand, modeling the entire globe at cell sizes of 30 meters or less exceeds the computational and memory capacity of most machines -- and that is likely an understatement.

So I guess it would be possible to perform such a run, but the source code would need to revert to an older version that supports larger cell sizes, and processing the data for model inputs would be quite tricky.

Hope this is useful.

#13
General Discussion / Re: model validation
January 30, 2020, 06:31:24 AM
A validation exercise was done for Louisiana in Glick et al. (2013):

https://www.lacoast.gov/crms2/crms_public_data/publications/Glick%20et%20al%202013.pdf

Some other validation attempts were made in the Gulf of Mexico; see "Hindcast results" in these reports:

Great White Heron NWR, FL
Ten Thousand Islands NWR, FL
Lower Suwannee NWR, FL
MS Sandhill Crane NWR, MS
San Bernard and Big Boggy NWRs, TX

Validation is often confounded by a lack of adequate historical SLR to cause impacts to wetlands; a lack of high-quality historic land-cover and (especially) elevation datasets; and other anthropogenic changes to wetland cover (non-SLR losses).

#14
With regards to Question 1:

Thank you for your question; I'm not sure this is adequately addressed in the technical documentation.

The way the "include dikes" option and the dike location raster work is that the model uses its connectivity algorithm to check whether there is a connection between the land behind the dike and a "salt water source" -- specifically, the categories "riverine tidal," "estuarine water," "tidal creek," and "open ocean."  If any of these appear behind the dike, the dike location raster will have no effect and the results will be unchanged.

You may check the connectivity of your site using the connectivity map option when setting up your model.
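To illustrate the idea (a conceptual sketch only, not SLAMM's actual implementation -- the category codes here are hypothetical), a connectivity check of this kind can be written as a flood fill outward from the salt-water cells:

```python
from collections import deque
import numpy as np

SALT_WATER = {15, 17, 18, 19}  # hypothetical codes for the salt-water source categories

def connected_to_salt_water(categories, dikes):
    """Flood-fill from salt-water cells, stopping at diked cells.

    categories: 2D array of land-cover category codes
    dikes:      2D boolean array, True where a dike blocks connectivity
    Returns a boolean array marking cells reachable from a salt-water source;
    land enclosed by an intact dike ring stays unmarked unless a salt-water
    category already appears behind the dike.
    """
    rows, cols = categories.shape
    connected = np.isin(categories, list(SALT_WATER)) & ~dikes
    queue = deque(zip(*np.nonzero(connected)))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not connected[nr, nc] and not dikes[nr, nc]:
                connected[nr, nc] = True
                queue.append((nr, nc))
    return connected
```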

With regards to Question 2:

The time-zero run is a very important step.  To the extent possible, its results should not differ from the model's initial conditions.  It tests that the conceptual model, the wetland coverage, the elevation data, and the tide range data are all consistent.  In the results, the initial condition is shown as "0" and the "time zero" result is shown as the first date of the simulation.

From the tech doc:
SLAMM can also simulate a "time zero" step, in which the conceptual model can be validated against the data inputs for your site. The time-zero model predicts the changes in the landscape given specified model tide ranges, elevation data, and land-cover data. Any discrepancy in time-zero results can provide a partial sense of the uncertainty of the model. There will almost always be some minor changes predicted at time zero due to horizontal off-sets between the land-cover and elevation data-sets, general data uncertainty, or other local conditions that make a portion of your site not conform perfectly to the conceptual model. However, large discrepancies could reflect an error in model parameterization with regards to tide ranges or dike locations, for example, and should be closely investigated.
#15
There will always be outliers in the relationship between elevation, wetland type, and tide range.  (Groundwater influences can cause fresh marsh to occur lower, and wind tides can cause salt marsh to occur higher.  Elevation data can also be uncertain, especially in areas of high vertical relief, and wetland maps can have horizontal error.  So we very often have outliers outside of the modeled range for wetland categories.)

In general, you don't need to include these elevation outliers in your accretion-to-elevation relationship.  Wetlands that fall below the modeled lower bound (we often set this to approximately the fifth percentile of the wetland elevations) will be lost at "time zero."  You can then check against satellite imagery whether this is reasonable and make changes if required.

Wetlands that fall above the modeled upper bound will be assigned the accretion rate modeled at the highest elevation.  We usually do not model those wetlands as "drying out" because they are too high; we assume some other local factor has them perched at that elevation.  So they will generally remain persistent unless the SLR becomes extreme.

Overall, the accretion model should try to match the majority of the data (the 5th-to-95th-percentile range, say) and not worry about the outliers.
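As a rough illustration of that trimming step (a sketch only; the filename is a placeholder), in Python:

```python
import numpy as np

# Elevations of cells for one wetland category, relative to MTL (placeholder file)
elevations = np.loadtxt("salt_marsh_elevations.txt")

# Fit the accretion-to-elevation relationship over the central 90% of the data
lower, upper = np.percentile(elevations, [5, 95])
in_range = (elevations >= lower) & (elevations <= upper)
print(f"Fitting over [{lower:.2f}, {upper:.2f}] m; excluding {np.sum(~in_range)} outlier cells")
```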

Hope this is helpful, thanks!