Computing/Getting ERDDAP Running
ERDDAP is software for serving up scientific data. To run and configure it, the Docker approach works fairly well (even running on a Synology NAS).
Installation
- docker pull axiom/docker-erddap:latest-jdk17-openjdk
Configuration
- Read https://coastwatch.pfeg.noaa.gov/erddap/download/setupDatasetsXml.html very carefully.
- It's long. It's thorough.
 
- Read through https://github.com/unidata/tomcat-docker to see if any Tomcat bits apply (unlikely, will use Caddy2 to front this Tomcat)
- Read through https://registry.hub.docker.com/r/axiom/docker-erddap/ and (source) https://github.com/axiom-data-science/docker-erddap
- Run the container at least once and extract /usr/local/tomcat/content/erddap/setup.xml and datasets.xml (a docker cp sketch follows this list)
- Tune the XML files
- Even though Axiom's docker can do everything with environment variables, setup.xml must exist
 
- Re-run the docker instance with file pass-throughs (see the docker run sketch after this list)
- Don't forget to pass through the bigParentDirectory, because the logs are there
- In fact, there's a ton of stuff there that is needed for persistent operation
 
- A sample JSON file is needed at the location where the dataset definition says its files will exist
- The record should be deletable once the system is running and has real data flowing
 
- Use the DasDds.sh script in the WEB-INF directory to validate the datasets.xml file
- Given this is docker, either docker exec -it <name> /bin/bash or invoke it with sufficient pass-throughs to run the script directly from the host (a docker exec sketch follows this list)
 
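To pull the stock XML files out of a freshly started container for tuning, something along these lines works (the container name is illustrative; the paths are the ones noted above):
docker run -d --name erddap axiom/docker-erddap:latest-jdk17-openjdk
docker cp erddap:/usr/local/tomcat/content/erddap/setup.xml .
docker cp erddap:/usr/local/tomcat/content/erddap/datasets.xml .
docker stop erddap && docker rm erddap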
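The re-run with pass-throughs might then look roughly like this. The host paths and port mapping are illustrative (Synology-style), and it assumes setup.xml points bigParentDirectory at /erddapData:
docker run -d --name erddap -p 8080:8080 \
  -v /volume1/erddap/setup.xml:/usr/local/tomcat/content/erddap/setup.xml \
  -v /volume1/erddap/datasets.xml:/usr/local/tomcat/content/erddap/datasets.xml \
  -v /volume1/erddap/data:/erddapData \
  axiom/docker-erddap:latest-jdk17-openjdk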
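And to validate datasets.xml from the host, a docker exec along these lines should do; the WEB-INF path assumes the standard Tomcat webapp layout, and DasDds.sh prompts for the datasetID interactively:
docker exec -it erddap bash -c 'cd /usr/local/tomcat/webapps/erddap/WEB-INF && bash DasDds.sh'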
On restart, the service shows the default landing page, but shortly thereafter replaces it with the modified content. This implies that the passed-through XML files don't take effect until ERDDAP has re-read them some time after Docker startup.
Logs
- Catalina logs are uninteresting
- Application logs are not exported to docker's log system
- Application logs are in the bigParentDirectory passed-through volume
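Assuming bigParentDirectory is mounted at /volume1/erddap/data as in the docker run sketch above, the main log can be followed from the host with:
tail -f /volume1/erddap/data/logs/log.txt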
HTTPGet errors
- sourceName does not do case translations to destinationName. They must be identical.
- Configuration of the dataset attributes uses (effectively) the destinationName spelling
- time, longitude, latitude must be spelt that way
- author must be the last field on the request, and it points to one of the configured keys (username_password)
- command must exist in the dataset definition, but is never set on the request
- Yes, it's HTTP GET -> data write happens. It's terrible and goes against the RFC semantics for web server implementations. A sample request is sketched below.
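Putting those rules together, a hand-rolled insert looks something like the following. The dataset name, field values, and key are illustrative (they match the script below), author comes last, command is omitted, and a real request would normally carry the dataset's full set of columns, as the script below does:
curl 'https://erddap.home.arpa/erddap/tabledap/datasetName.insert?time=2024-01-01T00:00:00Z&Station_ID=Dublin_Bay_Buoy&latitude=53.3&longitude=-6.1&Wind_Speed=12.5&author=testuser_somepassword'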
AIS-catcher integration
The whole goal of this setup is to grab AIS data from the meteorological sensors in Dublin Bay. A simple proof-of-concept parser follows.
#!/usr/bin/env python3
"""Proof-of-concept: replay captured AIS-catcher JSON into ERDDAP's HTTPGet insert endpoint."""
import json
import urllib.parse
import urllib.request
from datetime import datetime

URL = "https://erddap.home.arpa/erddap/tabledap/datasetName.insert"

# Map AIS-catcher JSON field names to the dataset's destinationName columns.
MAPPER = {
    "signalpower": "Signal_Power",
    "wspeed": "Wind_Speed",
    "wgust": "Wind_Gust_Speed",
    "wdir": "Wind_Direction",
    "wgustdir": "Wind_Gust_Direction",
    "lon": "longitude",
    "lat": "latitude",
    "waveheight": "Wave_Height",
    "waveperiod": "Wave_Period",
    "mmsi": "MMSI",
}

with open("ais.capture", "r") as f:
    lines = f.readlines()

# Keep only records that carry wind data and mention the station of interest.
records = [json.loads(x) for x in lines if "wspeed" in x and "01301" in x]

for record in records:
    kv = {}
    # ERDDAP wants ISO 8601 UTC; the capture stores receive time as YYYYMMDDHHMMSS.
    rxtime = datetime.strptime(record["rxtime"], "%Y%m%d%H%M%S")
    kv["time"] = rxtime.strftime("%Y-%m-%dT%H:%M:%SZ")
    kv["Station_ID"] = "Dublin_Bay_Buoy"
    for field, dest in MAPPER.items():
        kv[dest] = record[field]
    # author must be the last parameter, and must match a key configured in datasets.xml.
    kv["author"] = "testuser_somepassword"
    url = f"{URL}?{urllib.parse.urlencode(kv)}"
    with urllib.request.urlopen(url) as response:
        print(response.read().decode("utf-8"))
A more thorough implementation is found at https://github.com/cricalix/erddap-feeder - a small Rust program that listens for HTTP POST requests from AIS-catcher and transforms them into HTTP GET requests against the ERDDAP instance.

