This is a continuation of AI on the High Seas (Part 2): Building an Onboard AI System for Your Sailboat.
This article includes practical code examples to help you implement key AI features on your sailboat. The snippets target a Raspberry Pi running Python and Home Assistant, optionally with a local LLM or a voice interface.
1. Intrusion Detection Script
Detects motion with a PIR sensor and notifies Home Assistant via a webhook.
import RPi.GPIO as GPIO
import time
import requests

PIR_PIN = 17  # BCM pin the PIR sensor's output is wired to
HA_WEBHOOK = "http://your-homeassistant.local:8123/api/webhook/intruder_alert"  # 8123 is the default Home Assistant port

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)

try:
    while True:
        if GPIO.input(PIR_PIN):
            print("Motion detected!")
            requests.post(HA_WEBHOOK)
            time.sleep(5)  # delay to avoid spamming alerts
        time.sleep(0.5)  # poll the sensor twice per second
except KeyboardInterrupt:
    GPIO.cleanup()
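On a boat, the link between the Pi and Home Assistant can drop. As a minimal sketch (assuming the same HA_WEBHOOK as above, with a hypothetical send_alert helper), you can wrap the webhook call so a network hiccup does not crash the detection loop:

import requests

def send_alert(url, payload=None, timeout=5):
    """Post to the Home Assistant webhook without letting a network error kill the loop."""
    try:
        requests.post(url, json=payload or {}, timeout=timeout)
    except requests.RequestException as err:
        print(f"Alert not delivered: {err}")  # the next motion event will try again

# Inside the detection loop, replace requests.post(HA_WEBHOOK) with:
# send_alert(HA_WEBHOOK, {"source": "pir_cabin", "event": "motion"})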
2. Barometric Pressure Drop Warning
Monitors a BME280 pressure sensor and warns if storm conditions may be forming.
import time
from smbus2 import SMBus
from bme280 import BME280

sensor = BME280(i2c_dev=SMBus(1))
pressure_history = []  # rolling window of the last six readings (one hour at 10-minute intervals)

while True:
    pressure = sensor.get_pressure()  # hPa
    pressure_history.append(pressure)
    if len(pressure_history) > 6:
        del pressure_history[0]
    if len(pressure_history) == 6:
        drop = pressure_history[0] - pressure_history[-1]
        if drop > 4:  # more than 4 hPa lost in an hour
            print("Storm warning: Pressure dropping rapidly!")
    time.sleep(600)  # sample every 10 minutes
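The one-hour window above is a quick check; marine forecasts usually quote the three-hour pressure tendency. Here is a sketch using collections.deque to keep timestamped readings and report the change over roughly three hours (the -6 hPa threshold is illustrative, not an official figure):

import time
from collections import deque

# (timestamp, pressure_hPa) pairs; 19 samples at 10-minute intervals span about 3 hours
readings = deque(maxlen=19)

def record(pressure_hpa):
    readings.append((time.time(), pressure_hpa))

def three_hour_change():
    """Return the pressure change (hPa) over the stored window, or None if there is too little data."""
    if len(readings) < 2:
        return None
    return readings[-1][1] - readings[0][1]

# Example: call record(sensor.get_pressure()) every 10 minutes, then
# change = three_hour_change()
# if change is not None and change <= -6:  # illustrative threshold
#     print(f"Pressure fell {abs(change):.1f} hPa in about 3 hours.")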
3. Maintenance Reminder Based on Days Passed
Tracks the date of the last service in a JSON file and flags when 30 days have passed.
import datetime
import json

STATE_FILE = "maintenance_log.json"
CHECK_INTERVAL_DAYS = 30

now = datetime.datetime.now()

try:
    with open(STATE_FILE, "r") as f:
        last_check = datetime.datetime.fromisoformat(json.load(f)["last"])
except (FileNotFoundError, KeyError, ValueError):
    # First run or unreadable file: start the clock now and create the log
    last_check = now
    with open(STATE_FILE, "w") as f:
        json.dump({"last": now.isoformat()}, f)

days_passed = (now - last_check).days

if days_passed >= CHECK_INTERVAL_DAYS:
    print("Maintenance due!")
    with open(STATE_FILE, "w") as f:
        json.dump({"last": now.isoformat()}, f)  # reset the clock once the reminder fires
else:
    print(f"{CHECK_INTERVAL_DAYS - days_passed} days until next check.")
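A single timestamp only covers one task. Here is a sketch of the same idea extended to several items with different intervals (the task names, intervals, and maintenance_tasks.json filename are just examples):

import datetime
import json

STATE_FILE = "maintenance_tasks.json"
TASKS = {"engine_oil": 90, "impeller_check": 180, "winch_service": 365}  # days between services

now = datetime.datetime.now()
try:
    with open(STATE_FILE) as f:
        log = json.load(f)
except (FileNotFoundError, ValueError):
    log = {}

for task, interval in TASKS.items():
    last = log.get(task)
    if last is None or (now - datetime.datetime.fromisoformat(last)).days >= interval:
        print(f"Maintenance due: {task}")
        log[task] = now.isoformat()  # assumes the reminder prompts the work

with open(STATE_FILE, "w") as f:
    json.dump(log, f)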
4. Voice Output (Using Mycroft or pyttsx3)
Text-to-speech notification for system alerts.
import pyttsx3

engine = pyttsx3.init()  # on Raspberry Pi OS this typically uses the eSpeak backend
engine.say("Freshwater level is below 20 percent. Please refill.")
engine.runAndWait()
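If several scripts need to speak, a small helper keeps the voice settings in one place (the rate and volume values below are just starting points):

import pyttsx3

def speak(message, rate=160, volume=1.0):
    """Speak a message aloud; rate is words per minute, volume ranges 0.0-1.0."""
    engine = pyttsx3.init()
    engine.setProperty("rate", rate)
    engine.setProperty("volume", volume)
    engine.say(message)
    engine.runAndWait()

# Example: reuse the same voice for any alert
# speak("Storm warning. Pressure is dropping rapidly.")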
5. Calling a Local LLM for Advice (Ollama)
Query your local model via HTTP with curl or Python. Setting stream to false returns a single JSON object instead of a token-by-token stream.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Should I reef the sails with wind at 25 knots?", "stream": false}'
You can also call this from Python:
import requests

res = requests.post("http://localhost:11434/api/generate", json={
    "model": "llama2",
    "prompt": "Should I reef the sails with wind at 25 knots?",
    "stream": False,  # a single JSON reply instead of a stream
})
print(res.json()["response"])
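The advice gets more useful when the prompt carries real readings. Here is a sketch that folds current wind and pressure values into the question (wind_knots and pressure_hpa are placeholders for whatever your sensors report):

import requests

def ask_advisor(wind_knots, pressure_hpa, model="llama2"):
    """Ask the local model for sail-handling advice based on current conditions."""
    prompt = (
        f"Current wind speed is {wind_knots} knots and barometric pressure is "
        f"{pressure_hpa:.1f} hPa. Should I reef the sails? Answer briefly."
    )
    res = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,  # small models on a Pi can take a while
    )
    return res.json()["response"]

# Example:
# print(ask_advisor(25, 1002.4))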
Next Steps
These examples give you a solid foundation to build out more advanced automations. Once your system is stable, consider integrating logging, dashboards, and crew interaction tools for a fully responsive onboard AI.
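As a starting point for the logging piece, here is a minimal sketch that appends sensor readings to a CSV file any dashboard tool can read later (the column names are just an example):

import csv
import datetime
from pathlib import Path

LOG_FILE = Path("boat_log.csv")

def log_reading(pressure_hpa, water_pct, battery_v):
    """Append one timestamped row of sensor readings to the onboard log."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp", "pressure_hpa", "freshwater_pct", "battery_v"])
        writer.writerow([datetime.datetime.now().isoformat(), pressure_hpa, water_pct, battery_v])

# Example: call this from each monitoring loop
# log_reading(1008.2, 65, 12.6)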