Compare commits


No commits in common. "main" and "v0.0.2" have entirely different histories.
main ... v0.0.2

1662 changed files with 266545 additions and 266766 deletions


@@ -7,7 +7,7 @@ tmp_dir = "tmp"
bin = "./tmp/nidus-sync"
cmd = "go build -o ./tmp/nidus-sync ."
delay = 1000
-exclude_dir = ["templates", "static", "cmd", "tmp"]
+exclude_dir = ["templates", "static", "tmp"]
exclude_file = []
exclude_regex = ["_test.go"]
exclude_unchanged = false
@@ -25,7 +25,7 @@ tmp_dir = "tmp"
rerun = false
rerun_delay = 500
send_interrupt = true
-stop_on_error = false
+stop_on_error = true
[color]
app = ""


@@ -1,9 +0,0 @@
on: [push]
jobs:
golint:
runs-on: nixos
steps:
- name: checkout code
uses: actions/checkout@v4
- name: golangci-lint
run: golangci-lint run

.gitignore (vendored, 28 lines changed)

@@ -1,26 +1,2 @@
.env
.sass-cache/
cmd/geocode-test/geocode-test
cmd/passwordgen/passwordgen
/db/jet/jet
districts/
flogo.log
lob/cmd/letter-create/letter-create
lob/cmd/letter-list/letter-list
lob/cmd/address-create/address-create
lob/cmd/address-list/address-list
/nidus-sync
/nidus-sync.log
node_modules/
postgrid/cmd/send-pdf/send-pdf
result
stadia/cmd/bulk-geocode/bulk-geocode
stadia/cmd/geocode-autocomplete/geocode-autocomplete
stadia/cmd/geocode-bygid/geocode-bygid
stadia/cmd/reverse-geocode/reverse-geocode
stadia/cmd/structured-geocode/structured-geocode
stadia/cmd/tile-raster/tile-raster
static/gen/
temp/
ts/gen
vite/*/.vite/
nidus-sync
tmp/

.gitmodules (vendored, 3 lines changed)

@@ -4,3 +4,6 @@
[submodule "go-geojson2h3"]
path = go-geojson2h3
url = git@github.com:Gleipnir-Technology/go-geojson2h3.git
+[submodule "bob"]
+path = bob
+url = git@github.com:Gleipnir-Technology/bob.git


@@ -1,12 +0,0 @@
{
"plugins": ["/nix/store/6kfm5qrd2bckffxphb5ylvbg3sz1657r-prettier-plugin-go-template-0.0.15-unstable-2023-07-26/lib/node_modules/prettier-plugin-go-template/lib/index.js"],
"useTabs": true,
"overrides": [
{
"files": ["*.html"],
"options": {
"parser": "go-template",
},
},
],
}

AGENTS.md (173 lines changed)

@@ -1,173 +0,0 @@
# AGENTS.md — Nidus Sync Codebase Guide
This file captures conventions, patterns, and gotchas for anyone working on this codebase. It was produced during a lint cleanup pass (May 2026) to document lessons learned.
## Project Overview
**Module:** `github.com/Gleipnir-Technology/nidus-sync`
**Language:** Go 1.24
**Build:** Nix (`flake.nix`) + standard Go toolchain
**ORM:** Bob (legacy) + Jet (new, partial migration)
**Frontend:** Vue SPA (Vite) replacing Go HTML templates
The app serves two hosts from a single binary:
- **Sync** (`sync/`) — internal dashboard for mosquito control districts
- **RMO** (`rmo/`) — public-facing "Report Mosquitoes Online" site
Both are migrating from Go `html/template` rendered pages to Vue SPAs served by `static.SinglePageApp()`.
## Build & Lint Commands
```bash
# Build everything
go build ./...
# Run linter
golangci-lint run
# Build a specific package
go build ./api/
go build ./platform/
```
## Lint Helpers (`lint/error.go`)
The `lint/` package provides helpers for common error-handling patterns. **Always use these instead of bare calls** to avoid errcheck lint failures:
| Helper | Use for | Example |
|--------|---------|---------|
| `lint.Fprintf(w, fmt, args...)` | `fmt.Fprintf` to writers where errors are non-critical | `lint.Fprintf(w, "ok")` |
| `lint.Fprint(w, args...)` | `fmt.Fprint` to writers | `lint.Fprint(w, "User-agent: *\n")` |
| `lint.Write(w, p []byte)` | `w.Write(p)` — HTTP response bodies | `lint.Write(w, body)` |
| `lint.LogOnErr(f, msg)` | Deferred `Close()` calls | `defer lint.LogOnErr(file.Close, "close file")` |
| `lint.LogOnErrCtx(f, ctx, msg)` | `txn.Commit(ctx)` or other ctx funcs | `lint.LogOnErrCtx(txn.Commit, ctx, "commit")` |
| `lint.LogOnErrRollback(f, ctx, msg)` | Deferred `txn.Rollback(ctx)` | `defer lint.LogOnErrRollback(txn.Rollback, ctx, "rollback")` |
**Key rule:** `LogOnErrRollback` silently ignores `"sql: transaction has already been committed or rolled back"` errors, which occur when a deferred rollback fires after a successful commit. Always use it for deferred rollbacks.
### For DB transactions, use this pattern:
```go
txn, err := db.PGInstance.BobDB.BeginTx(ctx, nil)
if err != nil {
return fmt.Errorf("begin: %w", err)
}
defer lint.LogOnErrRollback(txn.Rollback, ctx, "rollback")
// ... do work ...
if err := txn.Commit(ctx); err != nil {
return fmt.Errorf("commit: %w", err)
}
return nil
```
### For HTTP handlers that render HTML:
```go
if err := renderShim(w, r, errRender(err)); err != nil {
http.Error(w, fmt.Sprintf("render shim: %v", err), http.StatusInternalServerError)
}
```
## Architecture Notes
### Two hosts, one binary
`main.go` creates two `gorilla/mux` routers and supports three modes via CLI flags:
- `-sync` — serve the Sync dashboard
- `-report` — serve the RMO public site
- `-all` — serve both (default)
Each host has its own route registration in `sync/routes.go` and `rmo/routes.go`.
### RMO package — all handlers are dead
**All** route registrations in `rmo/routes.go` are commented out. The file now only serves the Vue SPA via `static.SinglePageApp("static/gen/rmo")`. During cleanup (May 2026), all handler files were deleted:
```
rmo/compliance.go rmo/error.go rmo/nuisance.go rmo/report.go
rmo/district.go rmo/image.go rmo/quick.go rmo/root.go
rmo/email.go rmo/mailer.go rmo/notification.go rmo/scss.go
rmo/mock.go rmo/status.go rmo/water.go
```
Only `rmo/routes.go` remains. **Do not add new Go template handlers here** — the RMO host is pure Vue SPA now.
### Sync package — partially live
Many route registrations in `sync/routes.go` are active. Files deleted during cleanup were those with zero active registrations:
```
sync/admin.go sync/download.go sync/operations.go sync/pool.go
sync/cell.go sync/intelligence.go sync/parcel.go sync/radar.go
sync/communication.go sync/messages.go sync/planning.go sync/review.go
sync/dash.go sync/mock.go sync/notification.go sync/service-request.go
sync/signin.go sync/sms.go sync/text.go sync/tile.go
```
### api/ vs resource/ — two handler layers
The codebase has two HTTP handler patterns:
1. **`api/`** — route registration (`api/routes.go`) + `http.HandlerFunc` handlers. Handles signin, webhooks (Twilio, VoIP.ms), media uploads, configuration POSTs.
2. **`resource/`** — typed resource handlers with `List`, `Get`, `Create`, etc. methods. Each resource has a struct embedding `*router` for URI generation. This is the newer, preferred pattern.
The split is not clean — some `api/` files contain substantial business logic. New handlers should use the `resource/` pattern.
### DB access — Bob vs Jet
Two ORMs coexist:
- **Bob** (`github.com/Gleipnir-Technology/bob`) — legacy, used by most queries. Models in `db/models/*.bob.go` (103 files).
- **Jet** (`db/jet/`) — new, generated queries in `db/query/public/`, `db/query/publicreport/`, `db/query/arcgis/`. Only 3 schemas partially ported.
The `db.PGInstance` singleton holds both `BobDB` and `PGXPool`. Jet uses PGXPool directly; Bob uses BobDB.
### db/prepared.go & db/fieldseeker.go
`db/prepared.go` contains utility functions (`pointOrNull`, `lineOrNull`, `queryStoredProcedure`, etc.) that are **only** called from `db/fieldseeker.go`. That file is **entirely commented out** (`/* ... */`). The 9 unused-prepared-funcs lint warnings are expected — do not delete them unless you're also deleting or uncommenting fieldseeker.go.
## Lint Cleanup Context (May 2026)
### What was fixed
- **errcheck (36→0):** All unchecked error returns eliminated using `lint/` helpers or explicit checks.
- **unused (50→9):** ~60 functions/types deleted across ~30 files. Remaining 9 are in `db/prepared.go` (see above).
### golangci-lint reporting cap
golangci-lint caps unused reports at **50 items**. During cleanup, each batch of deletions exposed previously hidden items. If you see 50 unused items, there are almost certainly more hidden behind the cap. Delete the visible ones, re-run lint, and repeat.
### What was NOT fixed (remaining lint categories)
- **govet (26):** printf format mismatches, copylocks, lostcancel — some are real bugs
- **ineffassign (9):** dead assignments that may indicate logic errors
- **staticcheck (29):** deprecated `io/ioutil`, redundant returns, error string conventions, comparison always-true bugs
### Deleted by file count
| Directory | Files deleted | Reason |
|-----------|--------------|--------|
| `rmo/` | 15 | All handlers unused — routes commented out |
| `sync/` | 17 | Unregistered handlers |
| `api/` | 2 | `compliance.go`, `debug.go` — unused handlers |
| `platform/` | 3 | `text/db.go`, `dashboard.go`, `publicreport/address.go` |
| Other | 1 | `tomtom/` (prior cleanup) |
## Commit Conventions
Commits during the cleanup followed a consistent pattern:
```
lint: fix errcheck in api/api.go debug log writes
lint: remove unused code from sync/ package
```
Each commit fixes one category of issue in a small set of related files. Build verification (`go build ./...`) was performed before each commit.
## See Also
- `CLEANUP.md` — broader cleanup roadmap (Bob→Jet migration, html/ package removal, etc.)
- `HISTORY.md` — project history and architectural decisions
- `README.md` — administration and build-from-source instructions


@@ -1,390 +0,0 @@
# nidus-sync — Cleanup Tasks
This file lists code, files, and patterns that are remnants of older architectural approaches. These should be removed to reduce complexity, maintenance burden, and confusion.
---
## 1. Bob → Jet Migration (Incomplete)
**Status:** Bob is still the primary ORM. Jet was introduced May 2026 but only covers 3 schemas partially.
### 1a. Port remaining schemas from Bob to Jet
Jet-based queries exist for:
- `db/query/public/` — address, communication, communication_log_entry, compliance_report_request, feature, feature_pool, job, lead, signal, site
- `db/query/publicreport/` — compliance, image, image_exif, nuisance, report, report_image, report_log, water
- `db/query/arcgis/` — account, oauth, service_feature, service_map, user, user_privileges
Still using Bob directly (not yet ported to Jet queries):
- `platform/report/notification.go` (13 bob references)
- `platform/background/background.go` (8)
- `platform/arcgis.go` (8)
- `platform/text/send.go` (7)
- `platform/report/some_report.go` (6)
- `platform/site.go` (5)
- `platform/csv/flyover.go` (7)
- `platform/csv/pool.go` (5)
- `platform/csv/csv.go` (4)
- `platform/text/report.go` (4)
- `platform/text/phone_number.go` (3)
- `platform/publicreport/log.go` (3)
- `platform/mailer.go` (3)
- `platform/email/template.go` (2)
- `db/connection.go` (4 — bob.Tx types)
- `db/prepared.go` (2)
- `resource/review_task.go` (2)
- `rmo/status.go` (2)
- `rmo/report.go` (1)
- `rmo/mailer.go` (1)
- Plus many api/* files
### 1b. Remove Bob-generated models after migration
Once all queries are ported to Jet, delete the 103 `.bob.go` files in `db/models/`:
```
db/models/*.bob.go
```
### 1c. Remove Bob-specific helper files
These are Bob-specific and can be removed once Bob is fully replaced:
- `db/dberrors/` — Bob error types (still referenced)
- `db/dbinfo/` — Bob type info (still referenced)
- `db/models/bob_loaders.bob.go`
- `db/models/bob_where.bob.go`
### 1d. Remove Bob from go.mod and dependencies
After all Bob code is gone:
- Remove `github.com/Gleipnir-Technology/bob` from `go.mod`
- Run `go mod tidy`
### 1e. Remove Bob codegen scripts
- `db/bobgen.sh`
- `db/bobgen.yaml`
### 1f. Regenerate Jet output
The `db/jet/main.go` generator outputs to `db/gen/` but no output is currently checked in. Run the generator and ensure generated code is usable:
```bash
cd db/jet && go run .
```
---
## 2. Go HTML Templates → Vue SPA (Mostly Complete)
**Status:** Nearly all Go template routes are commented out in `sync/routes.go` and `rmo/routes.go`. Both hosts serve Vue SPAs via `static.SinglePageApp()`. Some Go template routes remain active.
### 2a. Remaining active Go template routes (sync)
These routes in `sync/routes.go` still render Go templates:
- `/oauth/arcgis/begin` → `getArcgisOauthBegin` (redirect, no template but in Go)
- `/oauth/arcgis/callback` → `getArcgisOauthCallback`
- `/mailer/pool/random` → `getMailerPoolRandom`
- `/mailer/mode-1` → `getMailer1` (generates PDF)
- `/mailer/mode-2` → `getMailer2` (generates PDF)
- `/mailer/mode-3/{code}` → `getMailer3` (generates PDF)
- `/mailer/mode-1/preview` → `getMailer1Preview`
- `/mailer/mode-2/preview` → `getMailer2Preview`
- `/mailer/mode-3/{code}/preview` → `getMailer3Preview`
- `/privacy` → `getPrivacy`
The mailer routes use `platform/pdf` which in turn uses headless Chrome (`chromedp`) to render HTML to PDF. This is legitimate server-side functionality, not just a template remnant. However, the PDF templates themselves may be candidates for migration to the Vue ecosystem.
### 2b. Remove all commented-out routes
Both `sync/routes.go` and `rmo/routes.go` have large blocks of commented-out route registrations. Remove these once migration is confirmed complete.
### 2c. Remove unused Go template files
Once all routes are ported or confirmed dead, remove the entire `html/template/` directory. The `html/` package (`html/embed.go`, `html/filesystem.go`, `html/func.go`, etc.) should also be removed once nothing references it.
### 2d. Reduce the html/ package surface
**Note:** The `html/` package is still actively imported by 40+ Go files. It provides:
- Template rendering (`html/embed.go`, `html/filesystem.go`) — mostly for mailer PDFs and privacy page
- `html.ContentConfig` — used extensively in sync/routes (mailer previews, admin pages)
- `html.MakeGet`, `html.MakePost` — HTTP handler wrappers (used by active `sync/` routes)
- `html.RespondError` — HTTP error responses
- Form parsing, image upload handling, URL building
**Short-term:** Remove the template rendering portion once mailer PDFs and privacy page are migrated.
**Long-term:** The full `html/` package can be removed only after all server-rendered pages are gone and handler wrappers are replaced with the `resource/` pattern.
---
## 3. esbuild (`build.js`) — Removed ✅
*(Completed 2026-05-09: `build.js` removed and `pkgs.esbuild` dropped from flake.nix devShell — Vite is the build tool)*
---
## 4. Legacy Static JavaScript Files
**Status:** `static/js/` contains 20 plain JavaScript files written as custom HTML elements and standalone scripts for the Go template era. These are referenced by old Go HTML templates but most of those templates are now unused.
### 4a. Files in static/js/
```
address-display.js
address-or-report-suggestion.js
address-suggestion.js
events.js
geocode.js
location.js
map-admin.js
map-aggregate.js
map-arcgis-tile.js
map-cell.js
map-locator.js
map-locator-ro.js
map-multipoint.js
map-proxied-arcgis-tile.js
map-routing.js
map-service-area.js
photo-upload.js
table-report.js
table-site.js
time-relative.js
user-selector.js
```
### 4b. Determine which are still used
The remaining active Go templates (mailer, oauth, privacy) may reference some of these. Check each active template for `<script src="/static/js/...">` references. Templates that are confirmed unused:
- All templates in `html/template/sync/` (dashboard, cell, communication-root, district, intelligence, layout, operations-root, planning-root, radar, review, sudo, upload-*) — these are replaced by Vue SPAs
- Most templates in `html/template/rmo/` — RMO routes are all commented out
### 4c. Migrate any still-needed functionality
The map-locator, address-suggestion, and photo-upload functionality has Vue equivalents in `ts/components/`. The remaining custom element patterns should be fully replaced by Vue components.
---
## 5. TomTom Integration — Removed ✅
*(Completed 2026-05-09: `tomtom/` directory removed — zero imports outside itself, Stadia Maps is now the geocoding/tile provider)*
---
## 6. Postgrid — Alternate Mail Provider
**Status:** `postgrid/` contains a single CLI tool (`cmd/send-pdf`) and a `postgrid` Go package reference in `main.go`. Lob is now the mail provider, with its own integration in `lob/`.
### 6a. Investigate and remove if unused
- Check if Postgrid is actually being used in production vs Lob
- If Lob is the chosen provider, remove `postgrid/` entirely
- Remove any Postgrid configuration references
---
## 7. Duplicate Architecture: `api/` vs `resource/`
**Status:** The `api/` package contains both route registration (`api/routes.go`) and handler functions (`api/signin.go`, `api/publicreport.go`, `api/compliance.go`, etc.). The `resource/` package provides typed resource handlers that expose `List`, `Get`, `Create`, etc. Some functionality exists in both layers.
### 7a. Consolidate handler functions
Functions in `api/` that directly handle business logic should be moved to `resource/`:
- `api/signin.go` — `postSignin`, `postSignout`, `postSignup`
- `api/compliance.go` — various compliance handlers
- `api/publicreport.go` — `postPublicreportInvalid`, `postPublicreportSignal`, `postPublicreportMessage`
- `api/sudo.go` — `postSudoEmail`, `postSudoSMS`, `postSudoSSE`
- `api/configuration.go` — `postConfigurationIntegrationArcgis`
- `api/review.go` — `postReviewPool`
- `api/twilio.go`, `api/voipms.go` — webhook handlers
- `api/audio.go`, `api/image.go` — media upload handlers
- `api/tile.go`, `api/debug.go` — utilities
### 7b. Standardize on resource pattern
Either move everything to `resource/` or keep both but clearly define responsibilities:
- `resource/` — domain resource CRUD + URI generation
- `api/` — route registration + HTTP concerns only
Currently the split is unclear and some `api/` files do substantial business logic.
---
## 8. `arcgis-go` Submodule — Not Checked Out
**Status:** The `arcgis-go` submodule (referenced in `.gitmodules`) is not checked out (empty directory). The external `github.com/Gleipnir-Technology/arcgis-go` package is used via `go.mod` instead.
### 8a. Remove submodule
```bash
git submodule deinit arcgis-go
git rm arcgis-go
```
Verify that all code references use the external package, not a local path.
---
## 9. `go-geojson2h3` Local Copy
**Status:** `go-geojson2h3/` is also a submodule. The external package `github.com/Gleipnir-Technology/go-geojson2h3/v2` is imported in `go.mod`. Only `h3utils/h3.go` references it.
### 9a. Consolidate
- If the local copy isn't needed, remove the submodule
- If local modifications exist, merge upstream or maintain intentionally with documentation
---
## 10. Old Generated Files & Artifacts
### 10a. `query.go` at project root — Removed ✅
### 10b. `db/sql/` directory
Contains `.bob.go` and `.bob.sql` files — these are Bob-style named queries. Once Bob is removed, these can be cleaned up or migrated to Jet equivalents.
### 10c. `static/gen/main.js`
A leftover built artifact. The new build output goes to `static/gen/sync/` and `static/gen/rmo/` via Vite. Ensure `static/gen/` is in `.gitignore` and the stale `main.js` is removed.
### 10d. `static/css/placeholder`
Empty placeholder file. Remove.
---
## 11. Nix devShell Cleanup
**Status:** `flake.nix` devShell includes several tools from older workflows:
### 11a. Potentially unnecessary devShell packages
- `pkgs.esbuild` — replaced by Vite (keep only if `build.js` is retained)
- `pkgs.dart-sass` — Vue/Vite uses the `sass` npm package; check if Go code invokes dart-sass directly
- `pkgs.autoprefixer` — may not be needed with Vite's built-in PostCSS
---
## 12. Start Scripts — Consolidate
**Status:** Four start scripts exist:
| Script | Purpose |
|--------|---------|
| `start-air.sh` | Development with air (live reload) |
| `start-flogo.sh` | Unknown (references `flogo`) |
| `start-nidus-sync.sh` | Production-like direct run |
| `start-nix-built.sh` | Run Nix-built output |
`start-flogo.sh` may be a remnant. Investigate and remove if unused.
---
## 13. Normalize Query Function Signatures to `db.Ex`
**Status:** The `db/query/` packages have inconsistent transaction parameter conventions. Some functions accept `db.Ex`, some accept `db.Tx` (concrete type), some accept `bob.Tx`, and some accept no transaction parameter at all (using the global `db.PGInstance` singleton). This blocks transaction-based testing and creates inconsistent patterns.
### 13a. Functions missing transaction parameter (use global pool)
These functions have no `txn` parameter and call `db.ExecuteOne`/`db.ExecuteMany`/`db.ExecuteNone` which use the global `PGInstance.PGXPool`:
**`db/query/public/communication.go`:**
- `CommunicationFromID(ctx, comm_id int64)` → add `txn db.Ex`, switch to `ExecuteOneTx`
- `CommunicationsFromOrganization(ctx, org_id int64)` → add `txn db.Ex`, switch to `ExecuteManyTx`
**`db/query/publicreport/report.go`:**
- `ReportFromID(ctx, report_id int64)` → add `txn db.Ex`, switch to `ExecuteOneTx`
- `ReportsFromIDs(ctx, report_ids []int64)` → add `txn db.Ex`, switch to `ExecuteManyTx`
**`db/query/arcgis/account.go`:**
- `AccountFromID(ctx, org_id string)` → add `txn db.Ex`, switch to `ExecuteOneTx`
**`db/query/arcgis/oauth.go` (all 10 functions use global pool):**
- `OAuthTokenInsert`, `OAuthTokenInvalidate`, `OAuthTokensValid`, `OAuthTokenFromID`, `OAuthTokenForUser`, `OAuthTokensForUser`, `OAuthTokenForUserExists`, `OAuthTokenUpdateAccessToken`, `OAuthTokenUpdateRefreshToken`, `OAuthTokenUpdateLicense` — add `txn db.Ex` to all, switch to `ExecuteOneTx`/`ExecuteManyTx`/`ExecuteNoneTx`
**`db/query/arcgis/service_feature.go`:**
- `ServiceFeatureFromID(ctx, id string)` → add `txn db.Ex`
- `ServiceFeatureFromURL(ctx, url string)` → add `txn db.Ex`
**`db/query/arcgis/service_map.go`:**
- `ServiceMapFromID(ctx, id string)` → add `txn db.Ex`
- `ServiceMapsFromAccountID(ctx, account_id string)` → add `txn db.Ex`
**Caller impact:** All callers will need to be updated to pass a `db.Ex`. Most platform callers either have a `db.Tx` from `db.BeginTxn()` already in scope (e.g., `communicationMark`, `SignalCreateFromPublicreport`) or can pass `db.PGInstance.PGXPool` (which implements `db.Ex`).
### 13b. Functions using `db.Tx` instead of `db.Ex`
These functions accept the concrete `db.Tx` type. While `db.Tx` implements `db.Ex`, using the concrete type prevents callers from passing other `db.Ex` implementors (e.g., `*pgxpool.Pool` or mock implementations). Change all to accept `db.Ex`:
**`db/query/public/communication.go`:**
- `CommunicationInsert(ctx, txn db.Tx, m)` → `txn db.Ex`
- `CommunicationSetStatus(ctx, txn db.Tx, ...)` → `txn db.Ex`
**`db/query/public/communication_log_entry.go`:**
- `CommunicationLogEntryInsert(ctx, txn db.Tx, m)` → `txn db.Ex`
**`db/query/publicreport/compliance.go`:**
- `ComplianceFromID(ctx, txn db.Tx, report_id)` → `txn db.Ex`
### 13c. Functions using `bob.Tx` instead of `db.Ex`
These are Bob-specific and need to be migrated to Jet's `db.Ex` pattern. This is part of the broader Bob→Jet migration (item 1):
**`db/query/arcgis/account.go`:**
- `AccountInsert(ctx, txn bob.Tx, m)` → `txn db.Ex`, switch from `ExecuteOneTxBob` to `ExecuteOneTx`
**`db/query/arcgis/service_feature.go`:**
- `ServiceFeatureInsert(ctx, txn bob.Tx, m)` → `txn db.Ex`, switch from `ExecuteOneTxBob` to `ExecuteOneTx`
**`db/query/arcgis/service_map.go`:**
- `ServiceMapInsert(ctx, txn bob.Tx, m)` → `txn db.Ex`, switch from `ExecuteOneTxBob` to `ExecuteOneTx`
**`db/query/arcgis/user.go`:**
- `UserInsert(ctx, txn bob.Tx, m)` → `txn db.Ex`, switch from `ExecuteOneTxBob` to `ExecuteOneTx`
**`db/query/arcgis/user_privileges.go`:**
- `UserPrivilegesDeleteByUserID(ctx, txn bob.Tx, id)` → `txn db.Ex`
- `UserPrivilegeInsert(ctx, txn bob.Tx, m)` → `txn db.Ex`
### 13d. Bug: `AddressFromID` ignores its transaction parameter
**`db/query/public/address.go` `AddressFromID`:** takes `txn db.Ex` but calls `db.ExecuteOne` (global pool) instead of `db.ExecuteOneTx`. This works when `txn` is the pool itself (callers pass `db.PGInstance.PGXPool`) but is a latent bug when called from within a transaction (callers at `platform/signal.go:85` and `platform/compliance.go:37`). Fix by switching to `db.ExecuteOneTx`.
Same bug in `AddressFromComplianceReportRequestID` (line 31 uses `ExecuteOne` instead of `ExecuteOneTx`).
### 13e. Migration strategy
1. Fix 13d first (mechanical one-line fixes — swap `ExecuteOne` → `ExecuteOneTx`)
2. Convert 13b next (signature-compatible change — `db.Tx` → `db.Ex` widens to the interface)
3. Convert 13a next (add `txn db.Ex` parameter, update all callers)
4. Convert 13c last (part of broader Bob→Jet migration, item 1)
After all conversions, every query function will have a consistent `(ctx context.Context, txn db.Ex, ...)` signature, enabling uniform transaction-based testing.
---
## Priority Summary
1. **High impact, low effort:**
- ~~Remove `tomtom/` (unused, no imports)~~
- ~~Remove `build.js` (dead, replaced by Vite)~~
- Remove commented-out routes in `sync/routes.go` and `rmo/routes.go`
- ~~Remove `query.go` commented-out code~~
- Remove `static/gen/main.js` stale artifact
- Remove `static/css/placeholder`
- **Fix `AddressFromID`/`AddressFromComplianceReportRequestID` — swap `ExecuteOne` → `ExecuteOneTx`** (item 13d)
- **Convert `db.Tx` → `db.Ex` in query functions** (item 13b)
2. **Medium impact, medium effort:**
- Remove unused Go HTML templates (confirm which are still active first)
- Remove unused `static/js/` files (verify against active templates)
- Remove `arcgis-go` submodule
- Clean up Nix devShell
- **Add `txn db.Ex` to query functions missing it** (item 13a)
3. **High impact, high effort:**
- Complete Bob → Jet migration across all schemas
- Remove Bob-generated models, helpers, scripts
- Remove Bob from go.mod
- Consolidate `api/` and `resource/` handler patterns
- Remove `html/` package (after all Go templates are gone)
- **Convert `bob.Tx` → `db.Ex` in arcgis query functions** (item 13c)


@@ -1,207 +0,0 @@
# nidus-sync — Project History
## Overview
nidus-sync is a dual-tenant mosquito abatement platform serving two domains:
- **RMO** (`report.mosquitoes.online`) — Public-facing mosquito/water/nuisance reporting
- **Sync** (`sync.nidus.cloud`) — Administrative dashboard for vector control districts
The project was started in November 2025 and has undergone several major architectural shifts across ~1655 commits spanning 6 months.
---
## Timeline
### Phase 1: Foundation (November 2025)
**Nov 3 – Nov 13: Project bootstrap**
- Initial Go project with Nix build system (`flake.nix`, `default.nix`)
- Basic `net/http` web serving with `gorilla/mux` routing
- Go `html/template` server-side rendering
- Bob ORM integration (`github.com/Gleipnir-Technology/bob`) for PostgreSQL — code-generated models via `bobgen`
- ArcGIS OAuth integration for user authentication
- ArcGIS Fieldseeker data synchronization (treatment areas, inspections, breeding sources, etc.)
- MapBox GL JS integration for heatmap visualization
- Dashboard with login, basic CRUD mocks
**Nov 13 – Nov 24: Logging & DB restructuring**
- Migration from standard `log` to `zerolog` for structured, colorized output
- Database logic moved into a separate `db/` subdirectory
- Clean shutdown logic, token refresh loops
**Key characteristics:** Monolithic Go server, HTML templates, Bob ORM, MapBox maps, ArcGIS OAuth
---
### Phase 2: Fieldseeker & Schema Evolution (December 2025)
**Dec 2 – Dec 24: Fieldseeker schema v2**
- Bob codegen updated to latest version
- Fieldseeker schema captured on OAuth connect and stored locally
- Dynamic SQL functions replacing hardcoded per-table sync logic
- Old Fieldseeker tables removed, v2 generated tables used
- Note/image audio support added
- MMS file downloads from SMS webhooks
**Key characteristics:** Bob-generated fieldseeker models, prepared SQL functions, SMS/MMS debugging
---
### Phase 3: Architecture Maturation (January 2026)
**Jan 2 – Jan 8: Domain split & template system**
- WIP pass-through models concept ("Checkpoint on initial idea for passing through models")
- Massive reorganization: templates split into `rmo/` (public) and `sync/` (admin) subdirectories
- `html/` package created with embedded template loading
- Bob submodule removed, `arcgis-go` became external dependency
- Public report domain support added
- Version bumped six times in rapid iteration (v0.0.4 → v0.0.10)
**Jan 8 – Jan 31: Platform Layer emergence**
- "Report platform layer" introduced (`a9b0a55f`) — initial abstraction between HTTP handlers and database
- Address suggestion and map-locator components via custom HTML elements
- SVG auto-transformation into Go templates
- Report submission forms wired up (nuisance, water)
- Email template system
**Key characteristics:** Two-domain architecture (RMO/Sync), `html/` template package, platform layer beginning, custom element web components
---
### Phase 4: Map Migration & Platform Expansion (February 2026)
**Feb 1 – Feb 28: Map provider transition**
- MapBox → MapLibre GL (open-source fork) via `maplibre-gl`
- Stadia Maps integration for tile serving and geocoding (Feb 12-14)
- TomTom routing integration added (Feb 17)
- Bulk geocoding via Stadia
- Parcel image generation debugging
**Platform layer expansion:**
- Emails moved to platform layer
- Phone/SMS support
- OAuth integration settings
- Upload platform functions
- QR code and image tile moved into platform
- Admin map components
**Key characteristics:** MapLibre/Stadia replacing MapBox, TomTom added, platform layer expanding, heavy template iteration
---
### Phase 5: VueJS Revolution (March 2026) — 448 commits
**Mar 5 – Mar 12: Pre-Vue cleanup**
- Stadia Maps client initialization
- Signal database schema added
- Review task/mailer schema rework
- Generated Bob files pruned
**Mar 12: Massive platform layer rework** (`44c4f17f`)
- User/organization handling restructured in platform layer
- Signal creation moved inside platform
**Mar 18 – Mar 22: VueJS Migration** (the biggest architectural shift)
- Mar 18: Auto-generated report IDs
- Mar 21: **VueJS introduced** — begins with TypeScript bundle, then Vue SFC components, vue-router, Bootstrap/SCSS integration
- Mar 21: Dashboard, Intelligence, sidebar all moved to Vue
- Mar 22: **esbuild replaced by Vite** (`47f900ab`) — `vite/` directory with separate configs for `sync` and `rmo` SPAs
- Mar 22: TypeScript checking clean across entire frontend
- Mar 23: Public report card component, auth checks off API client
- Mar 24-31: Communication page ripped into components, impersonation support, users page
**Key characteristics:** VueJS 3 + TypeScript + Vite frontend, Pinia stores, vue-router, SCSS, SPA architecture replacing server-rendered Go templates
---
### Phase 6: Compliance & Communication (April 2026) — 454 commits
**Apr 1 – Apr 9: RMO frontend & resources**
- Resource layer expanded (user, avatar, district, nuisance, water, compliance resources)
- RMO frontend checkpoint — Vue ports of public-facing pages
- TS types migrated into API module
- Old bundle paths removed, old SPA generation removed
**Apr 10 – Apr 17: Compliance workflow**
- Compliance report creation, mailer flow
- Site/pool review tasks
- Stadia Maps cache, direct tile access
- OAuth refresh in frontend
- Image upload components
**Apr 17 – Apr 25: Communication system**
- Background jobs reworked for shorter transactions
- Lob (physical mail) integration — direct API client, address creation, letter events
- QR code generation moved to API
- Compliance report evidence, mailer views
- Vue map system generalized (`cad01e68`)
**Apr 25 – Apr 30: Map & communication polish**
- VueJS reimplementation of address/report suggestion
- Communication workbench with map, list, detail views
- Text message log, email/phone display
- Compliance card detail display
- SSE event system with status vs resource message distinction
- Systemd socket activation for downtime-free deploys
- Sentry error tracking for Vue frontend
**Key characteristics:** Compliance/mailer operational, communication system born, Lob integration, Sentry, generalized Vue map system
---
### Phase 7: Jet Migration & Cleanup (May 2026) — 46 commits so far
**May 1 – May 9: SQL generation transition**
- **Jet (go-jet/jet) introduced** — type-safe SQL builder replacing Bob's query building
- Custom Jet generator created with geometry/Box2D type support (`db/jet/main.go`)
- `publicreport` schema ported to Jet
- `arcgis` schema ported to Jet (compiles, not fully tested per commit message)
- New `communication` table added
- Communication marking workflow (invalid, pending-response, possible-issue, possible-resolved)
- Linting: `golangci-lint` added to lefthook, per-file linting
- Cleanup of legacy generated columns (latitude/longitude), string-based queries
- Centralized error handler for Vue sync app
**Key characteristics:** Bob→Jet transition in progress, communication workflow, code quality improvements
---
## Architectural Patterns (by layer)
### Current architecture stack
```
┌─────────────────────────────────────────────────┐
│ Vue 3 SPA (TypeScript) │
│ ts/ — shared components, composables, stores │
│ vite/sync/ — admin SPA entry │
│ vite/rmo/ — public SPA entry │
├─────────────────────────────────────────────────┤
│ Go HTTP Server (gorilla/mux) │
│ api/routes.go — central route registration │
│ resource/ — resource handlers (REST patterns) │
│ sync/ — remaining Go template routes │
│ rmo/ — remaining Go template routes │
├─────────────────────────────────────────────────┤
│ platform/ — business logic layer │
│ (address, compliance, communication, district, │
│ email, fieldseeker, mailer, publicreport, │
│ review, signal, text, user, upload, etc.) │
├─────────────────────────────────────────────────┤
│ db/ — database access │
│ db/models/ — Bob-generated models (103 files) │
│ db/query/ — Jet-based query functions │
│ db/prepared.go — prepared SQL functions │
├─────────────────────────────────────────────────┤
│ PostgreSQL │
└─────────────────────────────────────────────────┘
```
### Pattern: Platform Layer
Introduced in January 2026, the `platform/` package encapsulates business logic between HTTP handlers and the database. It grew from initial report handling to encompass users, organizations, emails, texts, compliance, communications, signals, geocoding, tiles, uploads, and more.
### Pattern: Resource Layer
Added in March–April 2026, `resource/` provides typed REST resource handlers with URI generation (via mux route naming). Resources are instantiated with `resource.NewRouter()` and expose methods like `List`, `Get`, `Create`, `Update`, `Delete` that return domain types. This replaced ad-hoc handler functions in `api/`.
### Pattern: Dual SPA + API
Since late March 2026, both domains serve Vue SPAs for most routes, with the Go server acting as an API backend. The `static.SinglePageApp()` handler serves the Vite-built output and falls back to `index.html` for client-side routing. Some Go template routes remain for mailer PDF generation, OAuth flows, and previews.

@ -2,25 +2,6 @@
This is the software that powers [Nidus Cloud Sync](https://sync.nidus.cloud).
## Administration
### Password resets
If you need to manually reset a password you can do so with:
```
$ nix-shell -p genpass
$ genpass 12
abc123abc123
# this is from nidus, installed on deployment servers at the system layer
$ passwordgen
Please enter your password: abc123abc123
Password: abc123abc123
Hash: $2a$14$hdtoAtP7joczutY3bxaFqemBApH8xc5NbXLvDQqBfdzWV3jGSy4zi
$ psql -d nidus-sync
nidus-sync=> update "user" set password_hash='$2a$14$hdtoAtP7joczutY3bxaFqemBApH8xc5NbXLvDQqBfdzWV3jGSy4zi' where id=<something>;
```
## Building from source
First, you'll need [Nix](https://nix.dev).
@ -32,45 +13,12 @@ nix develop
go build .
```
## Building Custom Theme
We're using a customized Bootstrap theme for this site. You'll need to build the SCSS into CSS:
```
nix develop
sass --style=compressed --trace "$SASS_SRC_DIR/custom.scss":"$CSS_OUTPUT_DIR/bootstrap.css"
```
## Running
You'll need a number of environment variables for configuring things:
* ARCGIS_CLIENT_ID - The client ID for ArcGIS oauth, configured with esri.
* ARCGIS_CLIENT_SECRET - The client secret for ArcGIS oauth, configured with esri.
* BASE_URL - The URL the site is hosted at, used for forming callback URLs. Should be complete like 'https://foo.bar'
* BIND - The address and port to bind to. Use ':9001' for 'any address, port 9001'
* ENVIRONMENT - either 'PRODUCTION' or 'DEVELOPMENT'. It's used to set things like oauth token length.
* MAPBOX_TOKEN - The token to use with mapbox which is important for rendering maps.
* POSTGRES_DSN - The DSN for connecting to the postgres database.
* FIELDSEEKER_SCHEMA_DIRECTORY - The directory to write fieldseeker schema files for debugging.
* USER_FILES_DIRECTORY - The directory for writing uploaded user data files (audio, images).
```sh
> BASE_URL=https://sync.nidus.cloud ARCGIS_CLIENT_ID=foo ARCGIS_CLIENT_SECRET=bar POSTGRES_DSN='postgresql://?host=/var/run/postgresql&dbname=nidus-sync' ./nidus-sync
```
### Districts
There's a table containing district information in the database, `import.district`. It was created with:
```
$ psql -d nidus-sync -c 'CREATE SCHEMA import;'
$ shp2pgsql -s 3857 -c -D -I CA_districts.shp import.district | psql -d nidus-sync
$ psql -d nidus-sync -c 'ALTER TABLE import.district ADD COLUMN geom_4326 geometry(MultiPolygon,4326) GENERATED ALWAYS AS (ST_Transform(geom, 4326)) STORED;'
```
## Hacking
### air
@ -102,39 +50,3 @@ This uses [goose](https://github.com/pressly/goose). You can use the goose comma
> GOOSE_DRIVER=postgres GOOSE_DBSTRING="dbname=nidus-sync sslmode=disable" goose down
> GOOSE_DRIVER=postgres GOOSE_DBSTRING="dbname=nidus-sync sslmode=disable" goose up
```
### svg icons
These icons are generated as part of the build system. You can generate them manually with:
```
pnpm generate-icons
```
This will produce an SCSS file at `ts/gen/custom-icons.scss`.
### typescript
In order to work on the TypeScript code you'll need to install the dependencies locally in your dev environment:
```
nix develop
pnpm install
```
You can then generate the TypeScript with:
```
pnpm watch
```
The only page that works right now is `https://sync.nidus.cloud/template-test`
### watchexec
For iterating on styles
```
watchexec -e scss sass scss/custom.scss:static/gen/css/bootstrap.css
```

@ -1,656 +0,0 @@
# TEST-PLAN.md — Nidus Sync Database Query Layer Testing
## Status
The project currently has **zero tests**. `testify` is already in `go.mod` as an indirect dependency and needs to be promoted to direct.
## Overview
This plan covers **Tier 2 testing**: integration tests for the `db/query/` packages that execute real SQL against a throwaway Postgres database using the project's own migration system. All tests operate inside transactions that are rolled back, so they leave no trace.
The query layer is the ideal starting point because:
1. Every function is small and focused — a single INSERT, SELECT, or UPDATE.
2. After the signature normalization (CLEANUP.md §13), **every** function will accept a `db.Ex` interface, making them all uniformly testable from a test transaction.
3. These are the foundation that all platform-layer business logic calls. Bugs here cascade upward.
### Prerequisite: Normalize Query Function Signatures
Before writing tests, all query functions must be converted to accept `(ctx context.Context, txn db.Ex, ...)`. This is documented in detail at **[CLEANUP.md §13](CLEANUP.md#13-normalize-query-function-signatures-to-dbex)**. Summary of changes needed:
| Category | Count | What | Test-blocking? |
|----------|-------|------|---------------|
| 13d — Bugfix: txn ignored | 2 funcs | `AddressFromID`, `AddressFromComplianceReportRequestID` call `ExecuteOne` instead of `ExecuteOneTx` | Yes — data isolation broken |
| 13b — `db.Tx``db.Ex` | 4 funcs | `CommunicationInsert`, `CommunicationSetStatus`, `CommunicationLogEntryInsert`, `ComplianceFromID` | Partial — works but can't pass mock |
| 13a — Add `txn db.Ex` param | 19 funcs | Functions missing transaction parameter entirely | Yes — can't test in transactions |
| 13c — `bob.Tx``db.Ex` | 6 funcs | ArcGIS package functions using Bob transactions | Yes — can't test without Bob |
**Order of operations:** Fix 13d → convert 13b → convert 13a → convert 13c. After all conversions, every function is testable with `dbtest.Txn()`.
---
## Architecture of the Query Layer
### Package structure
```
db/query/
├── public/ ← main "public" schema queries (Jet ORM)
│ ├── address.go
│ ├── communication.go
│ ├── communication_log_entry.go
│ ├── compliance_report_request.go
│ ├── feature.go
│ ├── feature_pool.go
│ ├── job.go
│ ├── lead.go
│ ├── signal.go
│ └── site.go
├── publicreport/ ← "publicreport" schema queries (Jet ORM)
│ ├── compliance.go
│ ├── image.go
│ ├── image_exif.go
│ ├── nuisance.go
│ ├── report.go
│ ├── report_image.go
│ ├── report_log.go
│ └── water.go
└── arcgis/ ← "arcgis" schema queries (Jet ORM)
├── account.go
└── ...
```
### Query function patterns
The query layer's functions fall into a few recurring patterns:
| Pattern | Signature | Example |
|---------|-----------|---------|
| **Insert (txn)** | `func XxxInsert(ctx, txn db.Ex, model) (model, error)` | `CommunicationInsert`, `LeadInsert`, `ReportInsert` |
| **Insert (global)** | `func XxxInsert(ctx, model) (model, error)` | (would use `db.PGInstance` directly) |
| **Select (txn)** | `func XxxFromYyy(ctx, txn db.Ex, ...) (model, error)` | `SiteFromAddressIDForOrg`, `FeaturesFromSiteID` |
| **Select (global)** | `func XxxFromYyy(ctx, ...) (model, error)` | `CommunicationFromID`, `AddressFromID` |
| **Update (txn)** | `func XxxSetYyy(ctx, txn db.Ex, ...) error` | `CommunicationSetStatus` |
| **Bulk insert (txn)** | `func XxxInserts(ctx, txn db.Ex, []model) ([]model, error)` | `AddressInserts`, `ReportImagesInsert` |
| **Bulk select (txn)** | `func XxxsFromYyys(ctx, txn db.Ex, []int64) ([]model, error)` | `AddressesFromIDs`, `FeaturePoolsFromFeatures` |
After the signature normalization in CLEANUP.md §13, **every** function accepts `txn db.Ex`. All tests use the same transaction-based pattern: begin → insert → query → verify → rollback.
### The `db.Ex` interface (from `db/tx.go`)
```go
type Ex interface {
Exec(ctx context.Context, sql string, arguments ...any) (commandTag pgconn.CommandTag, err error)
Query(ctx context.Context, sql string, args ...any) (pgx.Rows, error)
}
```
`db.BeginTxn()` returns `db.Tx` which implements `Ex`. `*pgxpool.Pool` does NOT implement `Ex` directly (it has different method signatures), which is why `db.ExecuteOne` (global pool) and `db.ExecuteOneTx` (transaction) are separate functions.
### Data flow
```
Query function constructs a Jet statement →
calls db.ExecuteOneTx[T]() or db.ExecuteManyTx[T]() →
statement.Sql() produces (query string, args) →
txn.Query(ctx, query, args...) →
pgx collects rows into typed struct
```
---
## Test Infrastructure
### Prerequisites
A running Postgres instance accessible via a connection string. The test framework will:
1. Connect using `TEST_POSTGRES_DSN` env var (default: skip tests if unset, so `go test ./...` works without DB)
2. Run all migrations via goose (embedded in `db/migrations/*.sql`)
3. Each test begins a transaction, runs the test, rolls back
4. No test data persists
### Test helper: `db/dbtest/dbtest.go`
Create a `db/dbtest/` package providing:
```go
package dbtest
import (
"context"
"os"
"sync"
"testing"
"github.com/Gleipnir-Technology/nidus-sync/db"
"github.com/jackc/pgx/v5/pgxpool"
)
var (
pool *pgxpool.Pool
once sync.Once
)
// Setup ensures the test database is initialized (migrations run).
// Called once per test binary via TestMain or per-package init.
func Setup(t *testing.T) {
t.Helper()
dsn := os.Getenv("TEST_POSTGRES_DSN")
if dsn == "" {
t.Skip("TEST_POSTGRES_DSN not set, skipping DB tests")
}
once.Do(func() {
ctx := context.Background()
if err := db.InitializeDatabase(ctx, dsn); err != nil {
t.Fatalf("initialize test database: %v", err)
}
pool = db.PGInstance.PGXPool
})
}
// Txn begins a new transaction on the test pool and returns
// it along with a rollback cleanup function.
func Txn(t *testing.T) (context.Context, db.Ex, func()) {
t.Helper()
ctx := context.Background()
tx, err := pool.Begin(ctx)
if err != nil {
t.Fatalf("begin txn: %v", err)
}
return ctx, tx, func() {
		_ = tx.Rollback(ctx)
}
}
// Pool returns the raw pgxpool for tests that need it.
func Pool() *pgxpool.Pool {
return pool
}
```
### Test file naming
All test files follow the standard Go convention: `<name>_test.go`, placed alongside the code under test. Since the query functions are all exported, use an external test package:
```go
package public_test // external test package
```
This avoids a circular dependency on `db/dbtest` and ensures tests exercise only the public API.
### Dependencies to add to `go.mod`
Promote to direct (already indirect):
```
github.com/stretchr/testify v1.11.1
```
The `assert` and `require` packages are subpackages of the testify module, so no further `go.mod` entries are needed; test files import them directly:
```go
import (
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)
```
---
## Phase 1: INSERT Functions (lowest risk, highest clarity)
These are the simplest: construct a model, insert, verify the returned model has an auto-generated ID.
### 1.1 `db/query/public/` insert functions
| File | Function | Model Dependencies | Notes |
|------|----------|-------------------|-------|
| `communication.go` | `CommunicationInsert` | `Communication` | Requires `organization_id` FK. Create an org in test setup. |
| `communication_log_entry.go` | `CommunicationLogEntryInsert` | `CommunicationLogEntry` | Requires `communication_id` FK. Insert a communication first. |
| `compliance_report_request.go` | `ComplianceReportRequestInsert` | `ComplianceReportRequest` | Requires `lead_id` FK (nullable). Test with nil. |
| `lead.go` | `LeadInsert` | `Lead` | Requires `organization_id` and `site_id` FK. |
| `signal.go` | `SignalInsert` | `Signal` | Requires `organization_id`, `location` (geom), FK to `site_id` (nullable). |
| `job.go` | `JobInsert` | `Job` | Verify that FK constraints are documented. |
### 1.2 `db/query/publicreport/` insert functions
| File | Function | Model Dependencies |
|------|----------|-------------------|
| `compliance.go` | `ComplianceInsert` | `Compliance` |
| `image.go` | `ImageInsert` | `Image` |
| `image_exif.go` | `ImageExifInserts` | `ImageExif` (bulk) |
| `nuisance.go` | `NuisanceInsert` | `Nuisance` |
| `report.go` | `ReportInsert` | `Report` |
| `report_image.go` | `ReportImageInsert`, `ReportImagesInsert` | `ReportImage` (single + bulk) |
| `report_log.go` | `ReportLogInsert` | `ReportLog` |
| `water.go` | `WaterInsert` | `Water` |
### 1.3 `db/query/arcgis/` insert functions
| File | Function | Model Dependencies |
|------|----------|-------------------|
| `account.go` | `AccountInsert` | `Account` |
### Example test: `db/query/public/communication_test.go`
```go
package public_test
import (
"testing"
"time"
"github.com/Gleipnir-Technology/nidus-sync/db/dbtest"
"github.com/Gleipnir-Technology/nidus-sync/db/gen/nidus-sync/public/model"
query "github.com/Gleipnir-Technology/nidus-sync/db/query/public"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestCommunicationInsert(t *testing.T) {
dbtest.Setup(t)
ctx, txn, cleanup := dbtest.Txn(t)
defer cleanup()
comm := model.Communication{
Created: time.Now(),
OrganizationID: 1,
Status: model.Communicationstatus_New,
}
result, err := query.CommunicationInsert(ctx, txn, comm)
require.NoError(t, err)
assert.NotZero(t, result.ID)
assert.Equal(t, model.Communicationstatus_New, result.Status)
assert.Equal(t, int32(1), result.OrganizationID)
}
```
### Test structure pattern
Every INSERT test follows this template:
1. **Arrange**: Create a model struct with required fields populated.
2. **Act**: Call the Insert function with a test transaction.
3. **Assert**:
- No error returned.
- `result.ID` is non-zero (auto-generated primary key).
- Inserted values match input values.
- Timestamp fields are set (where applicable).
---
## Phase 2: SELECT Functions
These require data to already exist in the table. Each SELECT test inserts a row in the same transaction, then queries it back. After the signature normalization (CLEANUP.md §13), **all** SELECT functions accept `txn db.Ex` and use `ExecuteOneTx`/`ExecuteManyTx`.
### 2.1 `db/query/public/` select functions
| File | Function | Strategy |
|------|----------|----------|
| `address.go` | `AddressFromComplianceReportRequestID` | Insert address → query by report request ID |
| `address.go` | `AddressFromGID` | Insert address → query by GID |
| `address.go` | `AddressFromID` | Insert address → query by ID |
| `address.go` | `AddressesFromIDs` | Insert 2 addresses → query by IDs |
| `communication.go` | `CommunicationFromID` | Insert communication → query by ID |
| `communication.go` | `CommunicationsFromOrganization` | Insert 2 communications → query by org |
| `feature.go` | `FeaturesFromSiteID` | Insert site → feature → query |
| `feature.go` | `FeaturesFromSiteIDs` | Insert 2 sites + features → query |
| `feature_pool.go` | `FeaturePoolsFromFeatures` | Insert feature → pool → query |
| `site.go` | `SiteFromAddressIDForOrg` | Insert address + site → query |
| `site.go` | `SiteFromIDForOrg` | Insert site → query |
### 2.2 `db/query/publicreport/` select functions
| File | Function | Strategy |
|------|----------|----------|
| `compliance.go` | `ComplianceFromID` | Insert compliance → query by ID |
| `report.go` | `ReportFromPublicID` | Insert report → query by public ID |
| `report.go` | `ReportFromPublicIDForOrg` | Insert report → query by public ID + org |
| `report.go` | `ReportFromID` | Insert report → query by ID |
| `report.go` | `ReportsFromIDs` | Insert 2 reports → query by IDs |
| `report.go` | `ReportsFromIDsForOrg` | Insert 2 reports → query by IDs + org |
| `report.go` | `ReportsUnreviewedForOrganization` | Insert reviewed + unreviewed → query unreviewed |
### 2.3 `db/query/arcgis/` select functions
| File | Function | Strategy |
|------|----------|----------|
| `account.go` | `AccountFromID` | Insert account → query by ID |
| `oauth.go` | `OAuthTokenFromID` | Insert token → query by ID |
| `oauth.go` | `OAuthTokenForUser` | Insert token → query by user |
| `oauth.go` | `OAuthTokensForUser` | Insert tokens → query by user |
| `oauth.go` | `OAuthTokensValid` | Insert valid + invalid → query valid |
| `oauth.go` | `OAuthTokenForUserExists` | Insert token → verify exists |
| `service_feature.go` | `ServiceFeatureFromID` | Insert → query by ID |
| `service_feature.go` | `ServiceFeatureFromURL` | Insert → query by URL |
| `service_map.go` | `ServiceMapFromID` | Insert → query by ID |
| `service_map.go` | `ServiceMapsFromAccountID` | Insert maps → query by account |
### Example test: `db/query/public/address_test.go`
```go
package public_test
import (
"testing"
"time"
"github.com/Gleipnir-Technology/nidus-sync/db/dbtest"
"github.com/Gleipnir-Technology/nidus-sync/db/gen/nidus-sync/public/model"
query "github.com/Gleipnir-Technology/nidus-sync/db/query/public"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/twpayne/go-geom"
)
func TestAddressFromGID(t *testing.T) {
dbtest.Setup(t)
ctx, txn, cleanup := dbtest.Txn(t)
defer cleanup()
// Insert test data
addr := model.Address{
Country: "US",
Created: time.Now(),
Location: geom.NewPoint(geom.XY).MustSetCoords(geom.Coord{-122.4, 37.8}),
H3cell: "test",
Locality: "San Francisco",
PostalCode: "94102",
Street: "Market St",
Unit: "",
Region: "CA",
Number: "1234",
Gid: "test-gid-001",
}
inserted, err := query.AddressInsert(ctx, txn, addr)
require.NoError(t, err)
// Query by GID
result, err := query.AddressFromGID(ctx, txn, "test-gid-001")
require.NoError(t, err)
require.NotNil(t, result)
assert.Equal(t, inserted.ID, result.ID)
assert.Equal(t, "US", result.Country)
assert.Equal(t, "San Francisco", result.Locality)
}
func TestAddressesFromIDs(t *testing.T) {
dbtest.Setup(t)
ctx, txn, cleanup := dbtest.Txn(t)
defer cleanup()
// Insert two addresses
a1, _ := query.AddressInsert(ctx, txn, model.Address{
Created: time.Now(), Location: geom.NewPoint(geom.XY).MustSetCoords(geom.Coord{0, 0}),
H3cell: "a", Gid: "gid-a",
})
a2, _ := query.AddressInsert(ctx, txn, model.Address{
Created: time.Now(), Location: geom.NewPoint(geom.XY).MustSetCoords(geom.Coord{0, 1}),
H3cell: "b", Gid: "gid-b",
})
// Query by IDs
results, err := query.AddressesFromIDs(ctx, txn, []int64{int64(a1.ID), int64(a2.ID)})
require.NoError(t, err)
assert.Len(t, results, 2)
// Empty input returns empty output
empty, err := query.AddressesFromIDs(ctx, txn, []int64{})
require.NoError(t, err)
assert.Empty(t, empty)
}
```
---
## Phase 3: UPDATE Functions
Verify that updates modify rows correctly and respect predicates.
### 3.1 Update functions
| File | Function | Test Strategy |
|------|----------|---------------|
| `communication.go` | `CommunicationSetStatus` | Insert communication → update status → verify via SELECT |
| `report.go` | `ReportUpdater.Execute` | Insert report → apply updater → verify |
### Example test: `db/query/public/communication_test.go`
```go
func TestCommunicationSetStatus(t *testing.T) {
dbtest.Setup(t)
ctx, txn, cleanup := dbtest.Txn(t)
defer cleanup()
// Insert
comm, err := query.CommunicationInsert(ctx, txn, model.Communication{
Created: time.Now(),
OrganizationID: 1,
Status: model.Communicationstatus_New,
})
require.NoError(t, err)
// Update status
err = query.CommunicationSetStatus(ctx, txn,
int64(comm.OrganizationID), int64(comm.ID),
model.Communicationstatus_Closed)
require.NoError(t, err)
// Verify the update via a SELECT in the same transaction
// (CommunicationFromID accepts db.Ex after CLEANUP.md §13a conversion)
updated, err := query.CommunicationFromID(ctx, txn, int64(comm.ID))
require.NoError(t, err)
assert.Equal(t, model.Communicationstatus_Closed, updated.Status)
}
```
---
## Phase 4: ArcGIS Query Package
After the `bob.Tx``db.Ex` conversion (CLEANUP.md §13c), the arcgis query functions use the same transaction pattern as the other packages.
### 4.1 INSERT functions
| File | Function | Notes |
|------|----------|-------|
| `account.go` | `AccountInsert` | After 13c: uses `ExecuteOneTx` |
| `oauth.go` | `OAuthTokenInsert` | After 13a: accepts `txn db.Ex` |
| `service_feature.go` | `ServiceFeatureInsert` | After 13c: uses `ExecuteOneTx` |
| `service_map.go` | `ServiceMapInsert` | After 13c: uses `ExecuteOneTx` |
| `user.go` | `UserInsert` | After 13c: uses `ExecuteOneTx` |
| `user_privileges.go` | `UserPrivilegeInsert` | After 13c: uses `ExecuteOneTx` |
### 4.2 SELECT functions
| File | Function | Notes |
|------|----------|-------|
| `account.go` | `AccountFromID` | After 13a: accepts `txn db.Ex` |
| `oauth.go` | `OAuthTokenFromID`, `OAuthTokenForUser`, `OAuthTokensForUser`, `OAuthTokensValid`, `OAuthTokenForUserExists` | After 13a |
| `service_feature.go` | `ServiceFeatureFromID`, `ServiceFeatureFromURL` | After 13a |
| `service_map.go` | `ServiceMapFromID`, `ServiceMapsFromAccountID` | After 13a |
### 4.3 UPDATE/DELETE functions
| File | Function | Notes |
|------|----------|-------|
| `oauth.go` | `OAuthTokenUpdateAccessToken`, `OAuthTokenUpdateRefreshToken`, `OAuthTokenUpdateLicense`, `OAuthTokenInvalidate` | After 13a |
| `user_privileges.go` | `UserPrivilegesDeleteByUserID` | After 13c |
---
## Phase 5: Edge Cases and Error Handling
### 5.1 Empty bulk operations
Functions like `AddressesFromIDs` and `ReportImagesInsert` already handle empty input slices gracefully. Write tests confirming:
- Empty input → non-nil empty slice, no error.
- Nil input → handled gracefully (or skipped with `t.Skip` if the function panics).
### 5.2 Unique constraint violations
Insert two rows with the same unique key; verify the error message is readable.
### 5.3 Foreign key violations
Insert a row referencing a non-existent parent; verify the error. This validates that FK constraints are correctly defined in the schema.
### 5.4 Not found
Functions returning `(*model.Xxx, error)` should return `nil, nil` on not-found (pattern already used by `ReportFromPublicID` and `AddressFromGID`). Test this behavior.
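A sketch of such a test, in the style of the earlier phases; it assumes the post-§13 signatures and that `AddressFromGID` follows the nil-on-not-found convention:

```go
func TestAddressFromGIDNotFound(t *testing.T) {
	dbtest.Setup(t)
	ctx, txn, cleanup := dbtest.Txn(t)
	defer cleanup()

	// Nothing inserted: not-found must yield a nil result and a nil error,
	// never a pgx "no rows" error leaking to the caller.
	result, err := query.AddressFromGID(ctx, txn, "no-such-gid")
	require.NoError(t, err)
	assert.Nil(t, result)
}
```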
### 5.5 NULL handling
Models with nullable fields (`*int32`, `*string`, `*time.Time`, `*geom.T`): test with nil and non-nil values to verify round-trip fidelity.
---
## Test Execution
### Local development
```bash
# Set up a test database (one time)
createdb nidus-sync-test
# Run the query-layer tests
TEST_POSTGRES_DSN="postgresql://?host=/var/run/postgresql&dbname=nidus-sync-test" \
go test ./db/query/... -v -count=1
# Run all tests (skips DB tests if no DSN set)
go test ./... -v -count=1
```
### CI (GitHub Actions example)
```yaml
services:
postgres:
image: postgres:16
env:
POSTGRES_DB: nidus-sync-test
POSTGRES_PASSWORD: password
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 5432:5432
steps:
- name: Test query layer
run: |
go test ./db/query/... -v -count=1
env:
TEST_POSTGRES_DSN: "postgresql://postgres:password@localhost:5432/nidus-sync-test"
```
### Using test databases in sequence vs parallel
- All Phase 1 INSERT tests can run in parallel (they use separate transactions on separate tables).
- All SELECT tests for the same table should run sequentially to avoid ID conflicts.
- Per-package `TestMain` can handle `db.InitializeDatabase` once, then run all tests.
**Recommended approach**: Run all tests sequentially within each package (Go's default), using `-count=1` to disable caching. Each test starts its own transaction, so there's no data leakage even running sequentially.
---
## File-by-File Implementation Order
### Step 1: Infrastructure (1 file)
| File | Purpose |
|------|---------|
| `db/dbtest/dbtest.go` | Test helper: pool setup, migration runner, transaction factory |
### Step 2: `go.mod` change (1 line)
Promote `github.com/stretchr/testify` to direct dependency.
### Step 3: INSERT tests (8 test files)
| Test File | Query File Tested | Functions Covered |
|-----------|------------------|-------------------|
| `db/query/public/communication_test.go` | `communication.go` + `communication_log_entry.go` | `CommunicationInsert`, `CommunicationLogEntryInsert`, `CommunicationSetStatus`, `CommunicationFromID`, `CommunicationsFromOrganization` |
| `db/query/public/address_test.go` | `address.go` | `AddressInsert`, `AddressesFromIDs`, `AddressFromGID`, `AddressFromID`, `AddressFromComplianceReportRequestID` |
| `db/query/public/site_test.go` | `site.go` | `SiteFromAddressIDForOrg`, `SiteFromIDForOrg` |
| `db/query/public/lead_test.go` | `lead.go` | `LeadInsert` |
| `db/query/public/signal_test.go` | `signal.go` | `SignalInsert` |
| `db/query/public/compliance_report_request_test.go` | `compliance_report_request.go` | `ComplianceReportRequestInsert` |
| `db/query/public/feature_test.go` | `feature.go` + `feature_pool.go` | `FeaturesFromSiteID`, `FeaturePoolsFromFeatures`, `FeaturesFromSiteIDs` |
| `db/query/publicreport/report_test.go` | `report.go` | `ReportInsert`, `ReportFromPublicID`, `ReportFromPublicIDForOrg`, `ReportFromID`, `ReportsFromIDs`, `ReportsFromIDsForOrg`, `ReportsUnreviewedForOrganization` |
### Step 4: Remaining query packages (4 test files)
| Test File | Query File Tested | Functions Covered |
|-----------|------------------|-------------------|
| `db/query/publicreport/compliance_test.go` | `compliance.go` | `ComplianceInsert`, `ComplianceFromID` |
| `db/query/publicreport/image_test.go` | `image.go` + `image_exif.go` + `report_image.go` | All image insert functions |
| `db/query/publicreport/nuisance_water_test.go` | `nuisance.go` + `water.go` + `report_log.go` | `NuisanceInsert`, `WaterInsert`, `ReportLogInsert` |
| `db/query/arcgis/arcgis_test.go` | `account.go` + `oauth.go` + `service_feature.go` + `service_map.go` + `user.go` | All arcgis query functions (after 13a + 13c conversions) |
---
## Model Foreign Key Dependency Graph
Understanding which inserts require which parent rows helps plan test setup:
```
organization ─────────────────────────────────────────────┐
│ │
├── communication ── communication_log_entry │
├── site ── feature ── feature_pool │
│ │ │
│ ├── signal (site_id, location) │
│ └── lead (site_id) ── compliance_report_request │
│ │
└── publicreport.report ── report_log │
├── report_image │
├── compliance (report_id) │
├── nuisance (report_id) │
└── water (report_id) │
```
For initial INSERT tests, we need at minimum a test `organization` row. The `dbtest.Setup` function can optionally seed this.
### Seeding approach
Option A — Seed in `dbtest.Setup()`: insert a minimal org row (id=1) during migration/setup so all tests have a valid FK target.
Option B — Each test creates its own dependency rows within the transaction.
**Recommendation**: Option B for now (each test is self-contained). The overhead is low and tests remain independent. If the organization schema evolves and gains more columns, we can add a helper:
```go
func SeedOrganization(ctx context.Context, txn db.Ex) (int32, error) {
// Insert a minimal org row
}
```
---
## What Is NOT Covered (yet)
| Area | Reason |
|------|--------|
| `db/prepared.go` param builders | Scheduled for removal (per project owner) |
| Platform layer (`platform/*.go`) | Separate plan — these call query functions; test them after query layer is solid |
| HTTP handlers (`api/`, `resource/`) | Need HTTP test infrastructure (httptest) |
| Bob ORM-generated models (`db/models/`) | Legacy ORM; query tests cover the Jet layer which is the migration target |
| `db/fieldseeker.go` | Entirely commented out |
| `db/connection.go` `Execute*` helpers | Covered transitively by query tests; direct tests would be lower priority |
| Vue/TypeScript frontend | Separate test effort (Vitest) |
---
## Success Criteria
After all phases complete:
1. **Signature normalization (CLEANUP.md §13)**: every query function has `(ctx context.Context, txn db.Ex, ...)` signature. No function uses the global pool internally.
2. **Every exported function in `db/query/public/`**, `db/query/publicreport/`, and `db/query/arcgis/` has at least one transaction-based test.
3. **INSERT functions**: verify returned model has auto-generated ID and correct typed fields.
4. **SELECT functions**: verify round-trip (insert → query → match) within the same transaction.
5. **UPDATE functions**: verify modification takes effect, verifiable via SELECT in same transaction.
6. **Edge cases**: empty input slices, not-found returns `nil`/error, FK/unique violations produce errors, NULL round-trips.
7. **CI green**: tests pass in CI with a Postgres service container.

@ -1,335 +0,0 @@
package api
import (
"context"
"fmt"
"io"
"net/http"
"os"
"strconv"
"time"
"github.com/Gleipnir-Technology/nidus-sync/config"
"github.com/Gleipnir-Technology/nidus-sync/db"
nhttp "github.com/Gleipnir-Technology/nidus-sync/http"
"github.com/Gleipnir-Technology/nidus-sync/lint"
"github.com/Gleipnir-Technology/nidus-sync/platform"
"github.com/Gleipnir-Technology/nidus-sync/resource"
"github.com/Gleipnir-Technology/nidus-sync/version"
//"github.com/gorilla/mux"
"github.com/rs/zerolog/log"
)
/*
type renderer struct {
}
func (ren *renderer) Render(w http.ResponseWriter, r *http.Request) error {
return nil
}
*/
// In the best case scenario, the excellent github.com/pkg/errors package
// helps reveal information on the error, setting it on Err, and in the Render()
// method, using it to set the application-specific error code in AppCode.
type ResponseErr struct {
Error error `json:"-"` // low-level runtime error
HTTPStatusCode int `json:"-"` // http response status code
StatusText string `json:"status"` // user-level status message
AppCode int64 `json:"code,omitempty"` // application-specific error code
ErrorText string `json:"error,omitempty"` // application-level error message, for debugging
}
func (e *ResponseErr) Render(w http.ResponseWriter, r *http.Request) error {
http.Error(w, e.StatusText, e.HTTPStatusCode)
return nil
}
func errRender(err error) *ResponseErr {
log.Error().Err(err).Msg("Rendering error")
return &ResponseErr{
Error: err,
HTTPStatusCode: 500,
StatusText: "Error rendering response",
ErrorText: err.Error(),
}
}
type Renderable interface {
Render(http.ResponseWriter, *http.Request) error
}
func renderShim(w http.ResponseWriter, r *http.Request, renderer Renderable) error {
return renderer.Render(w, r)
}
func renderList(w http.ResponseWriter, r *http.Request, data []Renderable) error {
return nil
}
func handleClientIos(w http.ResponseWriter, r *http.Request, u platform.User) {
var sinceStr string
err := r.ParseForm()
if err != nil {
err = renderShim(w, r, errRender(fmt.Errorf("Failed to parse GET form: %w", err)))
if err != nil {
http.Error(w, fmt.Sprintf("render shim: %v", err), http.StatusInternalServerError)
}
return
} else {
sinceStr = r.FormValue("since")
}
var since *time.Time
if sinceStr == "" {
since = nil
} else {
since, err = parseTime(sinceStr)
if err != nil {
err = renderShim(w, r, errRender(fmt.Errorf("Failed to parse 'since' value: %w", err)))
if err != nil {
http.Error(w, fmt.Sprintf("render shim: %v", err), http.StatusInternalServerError)
}
return
}
}
csync, err := platform.ContentClientIos(r.Context(), u, since)
if err != nil {
err = renderShim(w, r, errRender(err))
if err != nil {
http.Error(w, fmt.Sprintf("render shim: %v", err), http.StatusInternalServerError)
}
return
}
var since_used time.Time
if since == nil {
since_used = time.Unix(0, 0)
} else {
since_used = *since
}
response := ResponseClientIos{
Fieldseeker: toResponseFieldseeker(csync.Fieldseeker),
Since: since_used,
}
if err := renderShim(w, r, response); err != nil {
err = renderShim(w, r, errRender(err))
if err != nil {
http.Error(w, fmt.Sprintf("render shim: %v", err), http.StatusInternalServerError)
}
return
}
}
func apiMosquitoSource(w http.ResponseWriter, r *http.Request, u platform.User) {
bounds, err := parseBounds(r)
if err != nil {
err = renderShim(w, r, errRender(err))
if err != nil {
http.Error(w, fmt.Sprintf("render shim: %v", err), http.StatusInternalServerError)
}
return
}
query := db.NewGeoQuery()
query.Bounds = *bounds
query.Limit = 100
sources, err := platform.MosquitoSourceQuery()
if err != nil {
err = renderShim(w, r, errRender(err))
if err != nil {
http.Error(w, fmt.Sprintf("render shim: %v", err), http.StatusInternalServerError)
}
return
}
data := []Renderable{}
for _, s := range sources {
data = append(data, NewResponseMosquitoSource(s))
}
if err := renderList(w, r, data); err != nil {
err = renderShim(w, r, errRender(err))
if err != nil {
http.Error(w, fmt.Sprintf("render shim: %v", err), http.StatusInternalServerError)
}
}
}
func apiTrapData(w http.ResponseWriter, r *http.Request, u platform.User) {
bounds, err := parseBounds(r)
if err != nil {
err = renderShim(w, r, errRender(err))
if err != nil {
http.Error(w, fmt.Sprintf("render shim: %v", err), http.StatusInternalServerError)
}
return
}
query := db.NewGeoQuery()
query.Bounds = *bounds
query.Limit = 100
trap_data, err := platform.TrapDataQuery()
if err != nil {
err = renderShim(w, r, errRender(err))
if err != nil {
http.Error(w, fmt.Sprintf("render shim: %v", err), http.StatusInternalServerError)
}
return
}
data := []Renderable{}
for _, td := range trap_data {
data = append(data, NewResponseTrapDatum(td))
}
if err := renderList(w, r, data); err != nil {
err = renderShim(w, r, errRender(err))
if err != nil {
http.Error(w, fmt.Sprintf("render shim: %v", err), http.StatusInternalServerError)
}
}
}
func parseBounds(r *http.Request) (*db.GeoBounds, error) {
err := r.ParseForm()
if err != nil {
return nil, err
}
east := r.FormValue("east")
north := r.FormValue("north")
south := r.FormValue("south")
west := r.FormValue("west")
bounds := db.GeoBounds{}
var temp float64
temp, err = strconv.ParseFloat(east, 64)
if err != nil {
return nil, err
}
bounds.East = temp
temp, err = strconv.ParseFloat(north, 64)
if err != nil {
return nil, err
}
bounds.North = temp
temp, err = strconv.ParseFloat(south, 64)
if err != nil {
return nil, err
}
bounds.South = temp
temp, err = strconv.ParseFloat(west, 64)
if err != nil {
return nil, err
}
bounds.West = temp
return &bounds, nil
}
func webhookFieldseeker(w http.ResponseWriter, r *http.Request) {
// Create or open the log file
file, err := os.OpenFile("webhook/request.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)
if err != nil {
log.Printf("Error opening log file: %v", err)
http.Error(w, "Internal Server Error", http.StatusInternalServerError)
return
}
defer lint.LogOnErr(file.Close, "close request log")
// Write timestamp
timestamp := time.Now().Format("2006-01-02 15:04:05")
_, err = fmt.Fprintf(file, "\n=== Request logged at %s ===\n", timestamp)
if err != nil {
log.Error().Err(err).Msg("writing response")
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
// Write request line
_, err = fmt.Fprintf(file, "%s %s %s\n", r.Method, r.RequestURI, r.Proto)
if err != nil {
log.Error().Err(err).Msg("writing response")
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
// Write all headers
_, err = fmt.Fprintf(file, "\nHeaders:\n")
if err != nil {
log.Error().Err(err).Msg("writing response")
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
for name, values := range r.Header {
for _, value := range values {
lint.Fprintf(file, "%s: %s\n", name, value)
}
}
// Write body
_, err = fmt.Fprintf(file, "\nBody:\n")
if err != nil {
log.Error().Err(err).Msg("writing response")
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
body, err := io.ReadAll(r.Body)
if err != nil {
log.Printf("Error reading request body: %v", err)
_, err = fmt.Fprintf(file, "Error reading body: %v\n", err)
if err != nil {
log.Error().Err(err).Msg("writing response")
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
} else {
_, err = file.Write(body)
if err != nil {
log.Error().Err(err).Msg("writing response")
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
if len(body) == 0 {
lint.Fprintf(file, "(empty body)")
}
}
lint.Fprintf(file, "\n=== End of request ===\n\n")
// Extract the crc_token value for the signature portion
// Respond with 204 No Content
w.WriteHeader(http.StatusNoContent)
}
func parseTime(x string) (*time.Time, error) {
created_epoch, err := strconv.ParseInt(x, 10, 64)
if err != nil {
return nil, fmt.Errorf("Failed to parse time '%s': %w", x, err)
}
created := time.UnixMilli(created_epoch)
return &created, nil
}
type about struct {
Environment string `json:"environment"`
SentryDSN string `json:"sentry_dsn"`
Tegola tegolaURLs `json:"tegola"`
Version version.VersionInfo `json:"version"`
}
type tegolaURLs struct {
Nidus string `json:"nidus"`
RMO string `json:"rmo"`
}
func getRoot(ctx context.Context, r *http.Request, q resource.QueryParams) (*about, *nhttp.ErrorWithStatus) {
v := version.Get()
return &about{
Environment: config.Environment,
SentryDSN: config.SentryDSNFrontend,
Tegola: tegolaURLs{
Nidus: config.MakeURLTegola("/maps/nidus/{z}/{x}/{y}?id={organization_id}"),
RMO: config.MakeURLTegola("/maps/rmo/{z}/{x}/{y}"),
},
Version: v,
}, nil
}


@@ -1,96 +0,0 @@
package api
import (
"encoding/json"
"fmt"
"io"
"net/http"
"github.com/Gleipnir-Technology/nidus-sync/db"
"github.com/Gleipnir-Technology/nidus-sync/db/models"
"github.com/Gleipnir-Technology/nidus-sync/platform"
"github.com/Gleipnir-Technology/nidus-sync/platform/background"
"github.com/Gleipnir-Technology/nidus-sync/platform/file"
"github.com/aarondl/opt/omit"
"github.com/aarondl/opt/omitnull"
"github.com/google/uuid"
"github.com/gorilla/mux"
"github.com/rs/zerolog/log"
)
func apiAudioPost(w http.ResponseWriter, r *http.Request, u platform.User) {
vars := mux.Vars(r)
id := vars["uuid"]
noteUUID, err := uuid.Parse(id)
if err != nil {
http.Error(w, "Failed to decode the uuid", http.StatusBadRequest)
return
}
var payload NoteAudioPayload
body, err := io.ReadAll(r.Body)
if err != nil {
http.Error(w, "Failed to read the payload", http.StatusBadRequest)
return
}
if err := json.Unmarshal(body, &payload); err != nil {
//debugSaveRequest(body, err, "Audio note POST JSON decode error")
http.Error(w, "Failed to decode the payload", http.StatusBadRequest)
return
}
ctx := r.Context()
setter := models.NoteAudioSetter{
Created: omit.From(payload.Created),
CreatorID: omit.From(int32(u.ID)),
Deleted: omitnull.FromPtr(payload.Deleted),
DeletorID: omitnull.FromPtr(payload.DeletorID),
Duration: omit.From(payload.Duration),
OrganizationID: omit.From(u.Organization.ID),
Transcription: omitnull.FromPtr(payload.Transcription),
TranscriptionUserEdited: omit.From(payload.TranscriptionUserEdited),
Version: omit.From(payload.Version),
UUID: omit.From(noteUUID),
}
if err := platform.NoteAudioCreate(ctx, u, setter); err != nil {
if err := renderShim(w, r, errRender(err)); err != nil {
http.Error(w, fmt.Sprintf("render shim: %v", err), http.StatusInternalServerError)
}
return
}
w.WriteHeader(http.StatusAccepted)
}
func apiAudioContentPost(w http.ResponseWriter, r *http.Request, user platform.User) {
vars := mux.Vars(r)
u_str := vars["uuid"]
u, err := uuid.Parse(u_str)
if err != nil {
http.Error(w, "Failed to parse image UUID", http.StatusBadRequest)
return
}
err = file.FileContentWrite(r.Body, file.CollectionAudioRaw, u)
if err != nil {
log.Printf("Failed to write content file: %v", err)
http.Error(w, "failed to write content file", http.StatusInternalServerError)
return
}
ctx := r.Context()
a, err := models.NoteAudios.Query(
models.SelectWhere.NoteAudios.UUID.EQ(u),
models.SelectWhere.NoteAudios.OrganizationID.EQ(user.Organization.ID),
).One(ctx, db.PGInstance.BobDB)
if err != nil {
log.Printf("Failed to get note audio %s for org %d: %v", u_str, user.Organization.ID, err)
http.Error(w, "failed to update database", http.StatusBadRequest)
return
}
err = background.NewAudioTranscode(ctx, db.PGInstance.BobDB, a.ID)
if err != nil {
log.Printf("Failed to transcode audio %s for org %d: %v", u_str, user.Organization.ID, err)
http.Error(w, "failed to transcode audio", http.StatusBadRequest)
return
}
w.WriteHeader(http.StatusOK)
}


@@ -1 +0,0 @@
package api


@@ -1 +0,0 @@
package api


@@ -1,34 +0,0 @@
package api
import (
"context"
"net/http"
"github.com/Gleipnir-Technology/bob/dialect/psql"
"github.com/Gleipnir-Technology/bob/dialect/psql/um"
"github.com/Gleipnir-Technology/nidus-sync/db"
nhttp "github.com/Gleipnir-Technology/nidus-sync/http"
"github.com/Gleipnir-Technology/nidus-sync/platform"
"github.com/rs/zerolog/log"
)
type formArcgisConfiguration struct {
MapService *string `schema:"map-service"`
}
func postConfigurationIntegrationArcgis(ctx context.Context, r *http.Request, u platform.User, f formArcgisConfiguration) (string, *nhttp.ErrorWithStatus) {
if f.MapService != nil {
_, err := psql.Update(
um.Table("organization"),
um.SetCol("arcgis_map_service_id").ToArg(f.MapService),
um.Where(psql.Quote("id").EQ(psql.Arg(u.Organization.ID))),
).Exec(ctx, db.PGInstance.BobDB)
if err != nil {
return "", nhttp.NewError("Failed to update map service config: %w", err)
}
log.Info().Str("map-service", *f.MapService).Int32("org-id", u.Organization.ID).Msg("changed map service")
} else {
log.Info().Msg("no map service")
}
return "/configuration/integration/arcgis", nil
}


@@ -1,39 +0,0 @@
package api
import (
"net/http"
"github.com/Gleipnir-Technology/nidus-sync/db"
"github.com/Gleipnir-Technology/nidus-sync/db/models"
"github.com/Gleipnir-Technology/nidus-sync/platform/file"
"github.com/gorilla/mux"
)
func apiGetDistrictLogo(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
slug := vars["slug"]
ctx := r.Context()
rows, err := models.Organizations.Query(
models.SelectWhere.Organizations.Slug.EQ(slug),
).All(ctx, db.PGInstance.BobDB)
if err != nil {
http.Error(w, "Failed to query", http.StatusInternalServerError)
return
}
switch len(rows) {
case 0:
http.Error(w, "Organization not found", http.StatusNotFound)
return
case 1:
org := rows[0]
if org.LogoUUID.IsNull() {
http.Error(w, "Logo not found", http.StatusNotFound)
return
}
file.ImageFileToWriter(file.CollectionLogo, org.LogoUUID.MustGet(), w)
return
default:
http.Error(w, "Too many organizations, this is a programmer error", http.StatusInternalServerError)
return
}
}


@@ -1,169 +0,0 @@
package api
import (
"encoding/json"
"fmt"
"net/http"
"time"
"github.com/Gleipnir-Technology/nidus-sync/lint"
"github.com/Gleipnir-Technology/nidus-sync/platform"
"github.com/Gleipnir-Technology/nidus-sync/platform/event"
"github.com/Gleipnir-Technology/nidus-sync/version"
"github.com/google/uuid"
"github.com/rs/zerolog/log"
)
var connectionsSSE = make(map[*ConnectionSSE]bool)
var TYPE_STATUS string = "status"
type ConnectionSSE struct {
chanEvent chan platform.Event
id uuid.UUID
organizationID int32
userID int32
}
type Message struct {
Resource string `json:"resource"`
Time time.Time `json:"time"`
Type string `json:"type"`
URI string `json:"uri"`
}
type Status struct {
BuildTime time.Time `json:"build_time"`
IsModified bool `json:"is_modified"`
Revision string `json:"revision"`
Status string `json:"status"`
Type string `json:"type"`
}
func (c *ConnectionSSE) SendEvent(w http.ResponseWriter, m platform.Event) error {
if m.Type == event.EventTypeShutdown {
v := version.Get()
return send(w, Status{
BuildTime: v.BuildTime,
IsModified: v.IsModified,
Revision: v.Revision,
Status: m.Type.String(),
Type: TYPE_STATUS,
})
}
return send(w, Message{
Resource: m.Resource,
Time: m.Time,
Type: m.Type.String(),
URI: m.URI,
})
}
func (c *ConnectionSSE) SendHeartbeat(w http.ResponseWriter, t time.Time) error {
return send(w, platform.Event{
Resource: "clock",
Time: t,
Type: platform.EventTypeHeartbeat,
URI: "",
})
}
func SetEventChannel(chan_envelopes <-chan platform.Envelope) {
go func() {
for envelope := range chan_envelopes {
for conn := range connectionsSSE {
if conn.organizationID == envelope.OrganizationID || envelope.OrganizationID == 0 {
log.Debug().Int("type", int(envelope.Event.Type)).Int32("env-org", envelope.OrganizationID).Msg("pushed event to client")
conn.chanEvent <- envelope.Event
} else if conn.userID == envelope.UserID {
log.Debug().Int("type", int(envelope.Event.Type)).Int32("env-user", envelope.UserID).Msg("pushed event to user")
conn.chanEvent <- envelope.Event
} else {
log.Debug().Int("type", int(envelope.Event.Type)).Int32("env-org", envelope.OrganizationID).Int32("conn-org", conn.organizationID).Msg("skipped event, bad org")
}
}
}
}()
}
func send[T any](w http.ResponseWriter, msg T) error {
jsonData, err := json.Marshal(msg)
if err != nil {
return fmt.Errorf("marshaling json: %w", err)
}
// Write in SSE format: "data: <json>\n\n"
_, err = fmt.Fprintf(w, "data: %s\n\n", jsonData)
if err != nil {
return fmt.Errorf("writing SSE message: %w", err)
}
w.(http.Flusher).Flush()
return nil
}
func streamEvents(w http.ResponseWriter, r *http.Request, u platform.User) {
// Set headers for SSE
w.Header().Set("Content-Type", "text/event-stream")
w.Header().Set("Cache-Control", "no-cache")
w.Header().Set("Connection", "keep-alive")
w.Header().Set("Access-Control-Allow-Origin", "*")
uid, err := uuid.NewUUID()
if err != nil {
log.Error().Err(err).Msg("failed to create uuid")
http.Error(w, "failed to create uuid", http.StatusInternalServerError)
return
}
connection := ConnectionSSE{
chanEvent: make(chan platform.Event),
id: uid,
organizationID: u.Organization.ID,
userID: int32(u.ID),
}
connectionsSSE[&connection] = true
log.Debug().Int32("org", u.Organization.ID).Int("user", u.ID).Str("id", uid.String()).Msg("connected SSE client")
// Send an initial connected event
v := version.Get()
status := Status{
BuildTime: v.BuildTime,
IsModified: v.IsModified,
Revision: v.Revision,
Status: "connected",
Type: TYPE_STATUS,
}
body, err := json.Marshal(status)
if err != nil {
log.Error().Err(err).Msg("failed to marshal connect status")
http.Error(w, "failed to marshal connect status", http.StatusInternalServerError)
return
}
lint.Fprintf(w, "data: %s\n\n", body)
w.(http.Flusher).Flush()
// Keep the connection open with a ticker sending periodic events
ticker := time.NewTicker(5 * time.Second)
defer ticker.Stop()
// Use a channel to detect when the client disconnects
done := r.Context().Done()
// Keep connection open until client disconnects
for {
select {
case <-done:
log.Debug().Int32("org", u.Organization.ID).Int("user", u.ID).Str("id", uid.String()).Msg("Client closed connection")
delete(connectionsSSE, &connection)
return
case t := <-ticker.C:
// Send a heartbeat message
err = connection.SendHeartbeat(w, t)
if err != nil {
log.Error().Err(err).Msg("Failed to send heartbeat")
}
case e := <-connection.chanEvent:
err = connection.SendEvent(w, e)
if err != nil {
log.Error().Err(err).Msg("Failed to send event")
}
}
}
}
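The `send` helper above writes each message in the standard SSE wire shape `data: <json>\n\n`. A minimal, dependency-free sketch of the client-side counterpart that splits such a stream back into payloads (the function name here is illustrative):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseSSEData extracts the payload of each "data: ..." frame from an
// event stream — the same wire shape the send helper emits.
func parseSSEData(stream string) []string {
	var payloads []string
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "data: ") {
			payloads = append(payloads, strings.TrimPrefix(line, "data: "))
		}
	}
	return payloads
}

func main() {
	stream := "data: {\"type\":\"status\"}\n\ndata: {\"type\":\"note\"}\n\n"
	fmt.Println(parseSSEData(stream))
}
```

A real browser client gets this parsing for free from `EventSource`; the sketch only illustrates why the double newline terminator and the `data: ` prefix both matter.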


@@ -1,413 +0,0 @@
package api
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"github.com/Gleipnir-Technology/nidus-sync/auth"
nhttp "github.com/Gleipnir-Technology/nidus-sync/http"
"github.com/Gleipnir-Technology/nidus-sync/lint"
"github.com/Gleipnir-Technology/nidus-sync/platform"
"github.com/Gleipnir-Technology/nidus-sync/platform/file"
"github.com/Gleipnir-Technology/nidus-sync/resource"
"github.com/google/uuid"
"github.com/gorilla/schema"
"github.com/rs/zerolog/log"
)
type ErrorAPI struct {
Message string `json:"message"`
}
var decoder = schema.NewDecoder()
type handlerBase func(context.Context, http.ResponseWriter, *http.Request) *nhttp.ErrorWithStatus
type handlerBaseAuthenticated func(context.Context, http.ResponseWriter, *http.Request, platform.User) *nhttp.ErrorWithStatus
type handlerFunctionDelete func(context.Context, *http.Request, platform.User) *nhttp.ErrorWithStatus
type handlerFunctionGet[T any] func(context.Context, *http.Request, resource.QueryParams) (*T, *nhttp.ErrorWithStatus)
type handlerFunctionGetAuthenticated[T any] func(context.Context, *http.Request, platform.User, resource.QueryParams) (T, *nhttp.ErrorWithStatus)
type handlerFunctionGetImage func(context.Context, *http.Request, platform.User) (file.Collection, uuid.UUID, *nhttp.ErrorWithStatus)
type handlerFunctionGetSlice[T any] func(context.Context, *http.Request, resource.QueryParams) ([]*T, *nhttp.ErrorWithStatus)
type handlerFunctionGetSliceAuthenticated[T any] func(context.Context, *http.Request, platform.User, resource.QueryParams) ([]T, *nhttp.ErrorWithStatus)
type handlerFunctionPost[RequestType any, ResponseType any] func(context.Context, *http.Request, RequestType) (ResponseType, *nhttp.ErrorWithStatus)
type handlerFunctionPostAuthenticated[RequestType any, ResponseType any] func(context.Context, *http.Request, platform.User, RequestType) (ResponseType, *nhttp.ErrorWithStatus)
type handlerFunctionPostFormMultipart[RequestType any, ResponseType any] func(context.Context, *http.Request, RequestType) (*ResponseType, *nhttp.ErrorWithStatus)
type handlerFunctionPutAuthenticated[RequestType any] func(context.Context, *http.Request, platform.User, RequestType) (string, *nhttp.ErrorWithStatus)
func authenticatedHandlerBasic(f handlerBaseAuthenticated) http.Handler {
return auth.NewEnsureAuth(func(w http.ResponseWriter, r *http.Request, u platform.User) {
ctx := r.Context()
e := f(ctx, w, r, u)
if e != nil {
respondErrorStatus(w, e)
return
}
return
})
}
func authenticatedHandlerDelete(f handlerFunctionDelete) http.Handler {
return auth.NewEnsureAuth(func(w http.ResponseWriter, r *http.Request, u platform.User) {
ctx := r.Context()
e := f(ctx, r, u)
if e != nil {
respondErrorStatus(w, e)
return
}
w.WriteHeader(http.StatusNoContent)
return
})
}
func authenticatedHandlerGetImage(f handlerFunctionGetImage) http.Handler {
return auth.NewEnsureAuth(func(w http.ResponseWriter, r *http.Request, u platform.User) {
ctx := r.Context()
collection, uid, e := f(ctx, r, u)
if e != nil {
respondErrorStatus(w, e)
return
}
file.ImageFileToWriter(collection, uid, w)
})
}
func authenticatedHandlerJSON[T any](f handlerFunctionGetAuthenticated[T]) http.Handler {
return auth.NewEnsureAuth(func(w http.ResponseWriter, r *http.Request, u platform.User) {
ctx := r.Context()
var body []byte
var params resource.QueryParams
err := decoder.Decode(&params, r.URL.Query())
if err != nil {
respondErrorStatus(w, nhttp.NewBadRequest("failed to decode query: %w", err))
return
}
resp, e := f(ctx, r, u, params)
w.Header().Set("Content-Type", "application/json")
//log.Info().Str("template", template).Err(e).Msg("handler done")
if e != nil {
respondErrorStatus(w, e)
return
}
body, err = json.Marshal(resp)
if err != nil {
respondErrorStatus(w, nhttp.NewError("failed to marshal json: %w", err))
return
}
_, err = w.Write(body)
if err != nil {
respondErrorStatus(w, nhttp.NewError("failed to write json: %w", err))
return
}
})
}
func authenticatedHandlerJSONSlice[T any](f handlerFunctionGetSliceAuthenticated[T]) http.Handler {
return auth.NewEnsureAuth(func(w http.ResponseWriter, r *http.Request, u platform.User) {
ctx := r.Context()
var body []byte
var params resource.QueryParams
err := decoder.Decode(&params, r.URL.Query())
if err != nil {
respondErrorStatus(w, nhttp.NewBadRequest("failed to decode query: %w", err))
return
}
resp, e := f(ctx, r, u, params)
w.Header().Set("Content-Type", "application/json")
//log.Info().Str("template", template).Err(e).Msg("handler done")
if e != nil {
respondErrorStatus(w, e)
return
}
if resp == nil {
body, err = json.Marshal([]struct{}{})
} else {
body, err = json.Marshal(resp)
}
if err != nil {
respondErrorStatus(w, nhttp.NewError("failed to marshal json: %w", err))
return
}
_, err = w.Write(body)
if err != nil {
respondErrorStatus(w, nhttp.NewError("failed to write json: %w", err))
return
}
})
}
func authenticatedHandlerJSONPost[RequestType any, ResponseType any](f handlerFunctionPostAuthenticated[RequestType, ResponseType]) http.Handler {
return auth.NewEnsureAuth(func(w http.ResponseWriter, r *http.Request, u platform.User) {
w.Header().Set("Content-Type", "application/json")
req, e := parseRequest[RequestType](r)
if e != nil {
serializeError(w, e)
return
}
ctx := r.Context()
resp, e := f(ctx, r, u, *req)
if e != nil {
serializeError(w, e)
return
}
body, err := json.Marshal(resp)
if err != nil {
respondErrorStatus(w, nhttp.NewError("failed to marshal json: %w", err))
return
}
_, err = w.Write(body)
if err != nil {
respondErrorStatus(w, nhttp.NewError("failed to write json: %w", err))
return
}
})
}
func authenticatedHandlerJSONPut[RequestType any](f handlerFunctionPutAuthenticated[RequestType]) http.Handler {
return auth.NewEnsureAuth(func(w http.ResponseWriter, r *http.Request, u platform.User) {
w.Header().Set("Content-Type", "application/json")
req, e := parseRequest[RequestType](r)
if e != nil {
serializeError(w, e)
return
}
ctx := r.Context()
path, e := f(ctx, r, u, *req)
if e != nil {
serializeError(w, e)
return
}
if path == "" {
w.WriteHeader(http.StatusNoContent)
return
}
w.Header().Set("Location", path)
http.Redirect(w, r, path, http.StatusCreated)
})
}
func authenticatedHandlerPostMultipart[ResponseType any](f handlerFunctionPostAuthenticated[[]file.Upload, ResponseType], collection file.Collection) http.Handler {
return auth.NewEnsureAuth(func(w http.ResponseWriter, r *http.Request, u platform.User) {
err := r.ParseMultipartForm(32 << 20) // 32 MB buffer
if err != nil {
respondError(w, http.StatusBadRequest, "Failed to parse form: %w ", err)
return
}
uploads, err := file.SaveFileUploads(r, collection)
if err != nil {
respondError(w, http.StatusInternalServerError, "failed to save uploads: %w", err)
return
}
/*
err = decoder.Decode(&content, r.PostForm)
if err != nil {
respondError(w, http.StatusBadRequest, "Failed to decode form: %w", err)
return
}
*/
ctx := r.Context()
resp, e := f(ctx, r, u, uploads)
if e != nil {
http.Error(w, e.Error(), e.Status)
return
}
body, err := json.Marshal(resp)
if err != nil {
log.Error().Err(err).Msg("failed to marshal json")
http.Error(w, "{\"message\": \"failed to marshal json\"}", http.StatusInternalServerError)
return
}
lint.Write(w, body)
})
}
func handlerBasic(f handlerBase) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
e := f(ctx, w, r)
if e != nil {
respondErrorStatus(w, e)
return
}
}
}
func handlerJSON[T any](f handlerFunctionGet[T]) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
var body []byte
var params resource.QueryParams
err := decoder.Decode(&params, r.URL.Query())
if err != nil {
respondErrorStatus(w, nhttp.NewBadRequest("failed to decode query: %w", err))
return
}
resp, e := f(ctx, r, params)
w.Header().Set("Content-Type", "application/json")
//log.Info().Str("template", template).Err(e).Msg("handler done")
if e != nil {
respondErrorStatus(w, e)
return
}
body, err = json.Marshal(resp)
if err != nil {
respondErrorStatus(w, nhttp.NewError("failed to marshal json: %w", err))
return
}
lint.Write(w, body)
}
}
func handlerJSONSlice[T any](f handlerFunctionGetSlice[T]) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
var body []byte
var params resource.QueryParams
err := decoder.Decode(&params, r.URL.Query())
if err != nil {
respondErrorStatus(w, nhttp.NewBadRequest("failed to decode query: %w", err))
return
}
resp, e := f(ctx, r, params)
w.Header().Set("Content-Type", "application/json")
//log.Info().Str("template", template).Err(e).Msg("handler done")
if e != nil {
respondErrorStatus(w, e)
return
}
body, err = json.Marshal(resp)
if err != nil {
respondErrorStatus(w, nhttp.NewError("failed to marshal json: %w", err))
return
}
lint.Write(w, body)
}
}
func handlerJSONPost[RequestType any, ResponseType any](f handlerFunctionPost[RequestType, ResponseType]) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
req, e := parseRequest[RequestType](r)
if e != nil {
serializeError(w, e)
return
}
ctx := r.Context()
resp, e := f(ctx, r, *req)
if e != nil {
serializeError(w, e)
return
}
body, err := json.Marshal(resp)
if err != nil {
respondErrorStatus(w, nhttp.NewError("failed to marshal json: %w", err))
return
}
lint.Write(w, body)
}
}
func handlerJSONPut[RequestType any, ResponseType any](f handlerFunctionPost[RequestType, ResponseType]) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
req, e := parseRequest[RequestType](r)
if e != nil {
serializeError(w, e)
return
}
ctx := r.Context()
resp, e := f(ctx, r, *req)
if e != nil {
serializeError(w, e)
return
}
body, err := json.Marshal(resp)
if err != nil {
respondErrorStatus(w, nhttp.NewError("failed to marshal json: %w", err))
return
}
lint.Write(w, body)
}
}
func handlerFormPost[RequestType any, ResponseType any](f handlerFunctionPostFormMultipart[RequestType, ResponseType]) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
err := r.ParseMultipartForm(128 << 20) // 128 MB buffer
if err != nil {
respondErrorStatus(w, nhttp.NewBadRequest("bad form: %w", err))
return
}
var req RequestType
err = decoder.Decode(&req, r.PostForm)
if err != nil {
respondErrorStatus(w, nhttp.NewBadRequest("decode form: %w", err))
return
}
ctx := r.Context()
resp, e := f(ctx, r, req)
if e != nil {
serializeError(w, e)
return
}
body, err := json.Marshal(resp)
if err != nil {
respondErrorStatus(w, nhttp.NewError("failed to marshal json: %w", err))
return
}
lint.Write(w, body)
}
}
func parseRequest[RequestType any](r *http.Request) (*RequestType, *nhttp.ErrorWithStatus) {
var err error
var req RequestType
content_type := r.Header.Get("Content-Type")
switch content_type {
case "application/json":
body, e := io.ReadAll(r.Body)
if e != nil {
return nil, nhttp.NewError("Failed to read body: %w", e)
}
err = json.Unmarshal(body, &req)
case "application/x-www-form-urlencoded":
e := r.ParseForm()
if e != nil {
return nil, nhttp.NewBadRequest("parsing form: %w", e)
}
err = decoder.Decode(&req, r.PostForm)
default:
return nil, nhttp.NewBadRequest("unrecognized content type '%s'", content_type)
}
if err != nil {
return nil, nhttp.NewErrorStatus(http.StatusBadRequest, "Failed to decode request: %w", err)
}
return &req, nil
}
func serializeError(w http.ResponseWriter, e *nhttp.ErrorWithStatus) {
log.Warn().Int("status", e.Status).Err(e).Str("user message", e.Message).Msg("Responding with an error from api")
body, err := json.Marshal(ErrorAPI{Message: e.Error()})
if err != nil {
log.Error().Err(err).Msg("failed to marshal error")
http.Error(w, "{\"message\": \"boom. I can't even tell you what went wrong\"}", http.StatusInternalServerError)
return
}
http.Error(w, string(body), e.Status)
}
func respondError(w http.ResponseWriter, status int, format string, args ...any) {
outer_err := fmt.Errorf(format, args...)
body, err := json.Marshal(ErrorAPI{
Message: outer_err.Error(),
})
if err != nil {
http.Error(w, "{\"message\": \"failed to marshal json\"}", http.StatusInternalServerError)
return
}
http.Error(w, string(body), status)
}
func respondErrorStatus(w http.ResponseWriter, e *nhttp.ErrorWithStatus) {
log.Warn().Int("status", e.Status).Err(e).Str("user message", e.Message).Msg("Responding with an error from api")
body, err := json.Marshal(ErrorAPI{Message: e.Error()})
if err != nil {
log.Error().Err(err).Msg("failed to marshal error")
http.Error(w, "{\"message\": \"boom. I can't even tell you what went wrong\"}", http.StatusInternalServerError)
return
}
http.Error(w, string(body), e.Status)
}


@@ -1,89 +0,0 @@
package api
import (
"encoding/json"
"fmt"
"io"
"net/http"
"github.com/Gleipnir-Technology/nidus-sync/db/models"
"github.com/Gleipnir-Technology/nidus-sync/lint"
"github.com/Gleipnir-Technology/nidus-sync/platform"
"github.com/Gleipnir-Technology/nidus-sync/platform/file"
"github.com/aarondl/opt/omit"
"github.com/aarondl/opt/omitnull"
"github.com/google/uuid"
"github.com/gorilla/mux"
"github.com/rs/zerolog/log"
)
func apiImagePost(w http.ResponseWriter, r *http.Request, u platform.User) {
vars := mux.Vars(r)
id := vars["uuid"]
noteUUID, err := uuid.Parse(id)
if err != nil {
http.Error(w, "Failed to decode the uuid", http.StatusBadRequest)
return
}
var payload NoteImagePayload
body, err := io.ReadAll(r.Body)
if err != nil {
http.Error(w, "Failed to read the payload", http.StatusBadRequest)
return
}
if err := json.Unmarshal(body, &payload); err != nil {
//debugSaveRequest(body, err, "Image note POST JSON decode error")
http.Error(w, "Failed to decode the payload", http.StatusBadRequest)
return
}
ctx := r.Context()
setter := models.NoteImageSetter{
Created: omit.From(payload.Created),
CreatorID: omit.From(int32(u.ID)),
Deleted: omitnull.FromPtr(payload.Deleted),
DeletorID: omitnull.FromPtr(payload.DeletorID),
OrganizationID: omit.From(u.Organization.ID),
Version: omit.From(payload.Version),
UUID: omit.From(noteUUID),
}
err = platform.NoteImageCreate(ctx, u, setter)
if err != nil {
if err := renderShim(w, r, errRender(err)); err != nil {
http.Error(w, fmt.Sprintf("render shim: %v", err), http.StatusInternalServerError)
}
return
}
w.WriteHeader(http.StatusAccepted)
}
func apiImageContentGet(w http.ResponseWriter, r *http.Request, u platform.User) {
vars := mux.Vars(r)
u_str := vars["uuid"]
imageUUID, err := uuid.Parse(u_str)
if err != nil {
log.Error().Err(err).Msg("Failed to parse image UUID")
http.Error(w, "Failed to parse image UUID", http.StatusBadRequest)
return
}
file.ImageFileToWriter(file.CollectionPublicImage, imageUUID, w)
}
func apiImageContentPost(w http.ResponseWriter, r *http.Request, u platform.User) {
vars := mux.Vars(r)
u_str := vars["uuid"]
imageUUID, err := uuid.Parse(u_str)
if err != nil {
log.Error().Err(err).Msg("Failed to parse image UUID")
http.Error(w, "Failed to parse image UUID", http.StatusBadRequest)
return
}
err = file.ImageFileFromReader(file.CollectionImageRaw, imageUUID, r.Body)
if err != nil {
if err := renderShim(w, r, errRender(err)); err != nil {
http.Error(w, fmt.Sprintf("render shim: %v", err), http.StatusInternalServerError)
}
return
}
w.WriteHeader(http.StatusOK)
log.Info().Str("uuid", imageUUID.String()).Msg("Saved image file")
lint.Fprintf(w, "PNG uploaded successfully")
}


@ -1 +0,0 @@
package api


@ -1,46 +0,0 @@
package api
import (
"context"
"fmt"
"net/http"
nhttp "github.com/Gleipnir-Technology/nidus-sync/http"
"github.com/Gleipnir-Technology/nidus-sync/platform"
)
type formPublicreportSignal struct {
ReportID string `json:"reportID"`
}
func postPublicreportSignal(ctx context.Context, r *http.Request, user platform.User, req formPublicreportSignal) (string, *nhttp.ErrorWithStatus) {
signal_id, err := platform.SignalCreateFromPublicreport(ctx, user, req.ReportID)
if err != nil {
return "", nhttp.NewError("create signal: %w", err)
}
return fmt.Sprintf("/signal/%d", *signal_id), nil
}
func postPublicreportInvalid(ctx context.Context, r *http.Request, user platform.User, req formPublicreportSignal) (string, *nhttp.ErrorWithStatus) {
err := platform.PublicReportInvalid(ctx, user, req.ReportID)
if err != nil {
return "", nhttp.NewError("mark report invalid: %w", err)
}
return fmt.Sprintf("/publicreport/%s", req.ReportID), nil
}
type formPublicreportMessage struct {
Message string `json:"message"`
ReportID string `json:"reportID"`
}
func postPublicreportMessage(ctx context.Context, r *http.Request, user platform.User, req formPublicreportMessage) (string, *nhttp.ErrorWithStatus) {
msg_id, err := platform.PublicReportMessageCreate(ctx, user, req.ReportID, req.Message)
if err != nil {
return "", nhttp.NewError("failed to create message: %w", err)
}
if msg_id == nil {
return "", nhttp.NewError("nil message id")
}
return fmt.Sprintf("/message/%d", *msg_id), nil
}


@ -1,29 +0,0 @@
package api
import (
"context"
"errors"
"fmt"
"net/http"
nhttp "github.com/Gleipnir-Technology/nidus-sync/http"
"github.com/Gleipnir-Technology/nidus-sync/platform"
)
type createReviewPool struct {
Status string `json:"status"`
TaskID int32 `json:"task_id"`
Updates *platform.PoolUpdate `json:"updates"`
}
func postReviewPool(ctx context.Context, r *http.Request, user platform.User, req createReviewPool) (string, *nhttp.ErrorWithStatus) {
id, err := platform.ReviewPoolCreate(ctx, user, req.TaskID, req.Status, req.Updates)
if err != nil {
if errors.As(err, &platform.ErrorNotFound{}) {
return "", nhttp.NewErrorStatus(http.StatusNotFound, "review task %d not found", req.TaskID)
}
return "", nhttp.NewError("failed to set review: %w", err)
}
return fmt.Sprintf("/review/%d", id), nil
}


@ -1,173 +0,0 @@
package api
import (
"github.com/Gleipnir-Technology/nidus-sync/auth"
"github.com/Gleipnir-Technology/nidus-sync/platform/file"
"github.com/Gleipnir-Technology/nidus-sync/resource"
"github.com/gorilla/mux"
)
func AddRoutesRMO(r *mux.Router) {
router := resource.NewRouter(r)
compliance_request := resource.ComplianceRequest(router)
district := resource.District(router)
geocode := resource.Geocode(router)
nuisance := resource.Nuisance(router)
pr_compliance := resource.PublicReportCompliance(router)
publicreport := resource.Publicreport(router)
publicreport_notification := resource.PublicreportNotification(router)
qrcode := resource.QRCode(router)
water := resource.Water(router)
r.HandleFunc("", handlerJSON(getRoot))
r.HandleFunc("/compliance-request/image/pool/{public_id}", compliance_request.ImagePoolGet).Methods("GET").Name("compliance-request.image.pool.ByIDGet")
r.Handle("/district", handlerJSONSlice(district.List)).Methods("GET")
r.Handle("/district/{id}", handlerJSON(district.GetByID)).Methods("GET").Name("district.ByIDGet")
r.HandleFunc("/district/{slug}/logo", apiGetDistrictLogo).Methods("GET").Name("district.logo.BySlug")
r.Handle("/geocode/by-gid/{id:.*}", handlerJSON(geocode.ByGID)).Methods("GET")
r.Handle("/geocode/reverse", handlerJSONPost(geocode.Reverse)).Methods("POST")
r.Handle("/geocode/reverse/closest", handlerJSONPost(geocode.ReverseClosest)).Methods("POST")
r.Handle("/geocode/suggestion", handlerJSONSlice(geocode.SuggestionList)).Methods("GET")
r.Handle("/publicreport-notification", handlerJSONPost(publicreport_notification.Create)).Methods("POST")
r.Handle("/qr-code/mailer/{code}", handlerBasic(qrcode.Mailer)).Methods("GET")
r.Handle("/qr-code/marketing", handlerBasic(qrcode.Marketing)).Methods("GET")
r.Handle("/qr-code/report/{code}", handlerBasic(qrcode.Report)).Methods("GET")
r.HandleFunc("/rmo/compliance", handlerJSONPost(pr_compliance.Create)).Methods("POST")
r.HandleFunc("/rmo/nuisance", handlerFormPost(nuisance.Create)).Methods("POST")
r.Handle("/rmo/publicreport/{id}", handlerBasic(publicreport.ByIDPublic)).Methods("GET").Name("publicreport.ByIDGetPublic")
r.Handle("/rmo/publicreport/compliance/{id}/image", handlerFormPost(publicreport.ImageCreate)).Methods("POST")
r.Handle("/rmo/publicreport/compliance/{id}", handlerJSON(pr_compliance.ByIDPublic)).Methods("GET").Name("publicreport.compliance.ByIDGetPublic")
r.Handle("/rmo/publicreport/compliance/{id}", handlerJSONPut(pr_compliance.Update)).Methods("PUT")
r.Handle("/rmo/publicreport/nuisance/{id}", handlerJSON(nuisance.ByIDPublic)).Methods("GET").Name("publicreport.nuisance.ByIDGetPublic")
r.Handle("/rmo/publicreport/water/{id}", handlerJSON(water.ByIDPublic)).Methods("GET").Name("publicreport.water.ByIDGetPublic")
r.Handle("/rmo/publicreport/{id}", handlerBasic(publicreport.ByIDPublic)).Methods("GET").Name("publicreport.ByIDGetPublicPublic")
r.HandleFunc("/rmo/water", handlerFormPost(water.Create)).Methods("POST")
}
func AddRoutesSync(r *mux.Router) {
router := resource.NewRouter(r)
compliance_request := resource.ComplianceRequest(router)
district := resource.District(router)
geocode := resource.Geocode(router)
lob_hook := resource.LobHook(router)
nuisance := resource.Nuisance(router)
pr_compliance := resource.PublicReportCompliance(router)
publicreport := resource.Publicreport(router)
publicreport_notification := resource.PublicreportNotification(router)
qrcode := resource.QRCode(router)
service_request := resource.ServiceRequest(router)
water := resource.Water(router)
//r.Use(render.SetContentType(render.ContentTypeJSON))
// Unauthenticated endpoints
r.HandleFunc("", handlerJSON(getRoot))
r.HandleFunc("/compliance-request/image/pool/{public_id}", compliance_request.ImagePoolGet).Methods("GET").Name("compliance-request.image.pool.ByIDGet")
r.Handle("/district", handlerJSONSlice(district.List)).Methods("GET")
r.Handle("/district/{id}", handlerJSON(district.GetByID)).Methods("GET").Name("district.ByIDGet")
r.HandleFunc("/district/{slug}/logo", apiGetDistrictLogo).Methods("GET").Name("district.logo.BySlug")
r.Handle("/geocode/by-gid/{id:.*}", handlerJSON(geocode.ByGID)).Methods("GET")
r.Handle("/geocode/reverse", handlerJSONPost(geocode.Reverse)).Methods("POST")
r.Handle("/geocode/reverse/closest", handlerJSONPost(geocode.ReverseClosest)).Methods("POST")
r.Handle("/geocode/suggestion", handlerJSONSlice(geocode.SuggestionList)).Methods("GET")
r.Handle("/lob/event", handlerBasic(lob_hook.Event)).Methods("POST")
r.Handle("/publicreport-notification", handlerJSONPost(publicreport_notification.Create)).Methods("POST")
r.Handle("/qr-code/mailer/{code}", handlerBasic(qrcode.Mailer)).Methods("GET")
r.Handle("/qr-code/marketing", handlerBasic(qrcode.Marketing)).Methods("GET")
r.Handle("/qr-code/report/{code}", handlerBasic(qrcode.Report)).Methods("GET")
r.HandleFunc("/signin", handlerJSONPost(postSignin))
r.Handle("/signout", authenticatedHandlerBasic(postSignout))
r.HandleFunc("/signup", handlerJSONPost(postSignup))
r.HandleFunc("/twilio/call", twilioCallPost).Methods("POST")
r.HandleFunc("/twilio/call/status", twilioCallStatusPost).Methods("POST")
r.HandleFunc("/twilio/message", twilioMessagePost).Methods("POST")
r.HandleFunc("/twilio/text", twilioTextPost).Methods("POST")
r.HandleFunc("/twilio/text/status", twilioTextStatusPost).Methods("POST")
r.HandleFunc("/voipms/text", voipmsTextGet).Methods("GET")
r.HandleFunc("/voipms/text", voipmsTextPost).Methods("POST")
r.HandleFunc("/webhook/fieldseeker", webhookFieldseeker).Methods("GET")
r.HandleFunc("/webhook/fieldseeker", webhookFieldseeker).Methods("POST")
// Authenticated endpoints
r.Handle("/audio/{uuid}", auth.NewEnsureAuth(apiAudioPost)).Methods("POST")
r.Handle("/audio/{uuid}/content", auth.NewEnsureAuth(apiAudioContentPost)).Methods("POST")
avatar := resource.Avatar(router)
r.Handle("/avatar/{uuid}", authenticatedHandlerGetImage(avatar.ByUUIDGet)).Methods("GET").Name("avatar.ByUUIDGet")
r.Handle("/avatar", authenticatedHandlerPostMultipart(avatar.Create, file.CollectionAvatar)).Methods("POST")
r.Handle("/client/ios", auth.NewEnsureAuth(handleClientIos)).Methods("GET")
communication := resource.Communication(router)
r.Handle("/communication", authenticatedHandlerJSONSlice(communication.List)).Methods("GET")
r.Handle("/communication/{id}", authenticatedHandlerJSON(communication.Get)).Methods("GET").Name("communication.ByIDGet")
r.Handle("/communication/{id}/mark/invalid", authenticatedHandlerJSONPost(communication.MarkInvalid)).Methods("POST").Name("communication.MarkInvalid")
r.Handle("/communication/{id}/mark/pending-response", authenticatedHandlerJSONPost(communication.MarkPendingResponse)).Methods("POST").Name("communication.MarkPendingResponse")
r.Handle("/communication/{id}/mark/possible-issue", authenticatedHandlerJSONPost(communication.MarkPossibleIssue)).Methods("POST").Name("communication.MarkPossibleIssue")
r.Handle("/communication/{id}/mark/possible-resolved", authenticatedHandlerJSONPost(communication.MarkPossibleResolved)).Methods("POST").Name("communication.MarkPossibleResolved")
r.Handle("/compliance-request/mailer", authenticatedHandlerJSONPost(compliance_request.CreateMailer)).Methods("POST")
//r.HandleFunc("/compliance-request/image/pool/{public_id}", getComplianceRequestImagePool).Methods("GET")
r.Handle("/configuration/integration/arcgis", authenticatedHandlerJSONPost(postConfigurationIntegrationArcgis)).Methods("POST")
email := resource.Email(router)
r.Handle("/email/{id}", authenticatedHandlerJSON(email.Get)).Methods("GET").Name("email.ByIDGet")
r.Handle("/events", auth.NewEnsureAuth(streamEvents)).Methods("GET")
r.Handle("/image/{uuid}", auth.NewEnsureAuth(apiImagePost)).Methods("POST")
r.Handle("/image/{uuid}/content", auth.NewEnsureAuth(apiImageContentGet)).Methods("GET")
r.Handle("/image/{uuid}/content", auth.NewEnsureAuth(apiImageContentPost)).Methods("POST")
impersonation := resource.Impersonation(router)
r.Handle("/impersonation", authenticatedHandlerJSONPost(impersonation.Create)).Methods("POST")
r.Handle("/impersonation", authenticatedHandlerDelete(impersonation.Delete)).Methods("DELETE")
lead := resource.Lead(r)
r.Handle("/leads", authenticatedHandlerJSON(lead.List)).Methods("GET")
r.Handle("/leads", authenticatedHandlerJSONPost(lead.Create)).Methods("POST")
mailer := resource.Mailer(router)
r.Handle("/mailer", authenticatedHandlerJSONSlice(mailer.List)).Methods("GET")
r.Handle("/mailer/{id}", authenticatedHandlerJSONPost(mailer.ByIDGet)).Methods("GET").Name("mailer.ByIDGet")
r.Handle("/mosquito-source", auth.NewEnsureAuth(apiMosquitoSource)).Methods("GET")
r.Handle("/publicreport/invalid", authenticatedHandlerJSONPost(postPublicreportInvalid)).Methods("POST")
r.Handle("/publicreport/signal", authenticatedHandlerJSONPost(postPublicreportSignal)).Methods("POST")
r.Handle("/publicreport/message", authenticatedHandlerJSONPost(postPublicreportMessage)).Methods("POST")
r.Handle("/publicreport/{id}", authenticatedHandlerBasic(publicreport.ByID)).Methods("GET").Name("publicreport.ByIDGet")
r.Handle("/publicreport/compliance/{id}", authenticatedHandlerJSON(pr_compliance.ByID)).Methods("GET").Name("publicreport.compliance.ByIDGet")
r.Handle("/publicreport/nuisance/{id}", authenticatedHandlerJSON(nuisance.ByID)).Methods("GET").Name("publicreport.nuisance.ByIDGet")
r.Handle("/publicreport/water/{id}", authenticatedHandlerJSON(water.ByID)).Methods("GET").Name("publicreport.water.ByIDGet")
r.Handle("/publicreport-notification", handlerJSONPost(publicreport_notification.Create)).Methods("POST")
r.Handle("/review/pool", authenticatedHandlerJSONPost(postReviewPool)).Methods("POST")
review_task := resource.ReviewTask(r)
r.Handle("/review-task", authenticatedHandlerJSON(review_task.List)).Methods("GET")
r.Handle("/service-request", authenticatedHandlerJSONSlice(service_request.List)).Methods("GET")
session := resource.Session(router)
r.Handle("/session", authenticatedHandlerJSON(session.Get)).Methods("GET").Name("session.get")
signal := resource.Signal(r)
r.Handle("/signal", authenticatedHandlerJSON(signal.List)).Methods("GET")
site := resource.Site(router)
r.Handle("/site", authenticatedHandlerJSONSlice(site.List)).Methods("GET")
r.Handle("/site/{id}", authenticatedHandlerJSON(site.ByIDGet)).Methods("GET").Name("site.ByIDGet")
sync := resource.Sync(r)
r.Handle("/sync", authenticatedHandlerJSONSlice(sync.List)).Methods("GET")
r.Handle("/sudo/email", authenticatedHandlerJSONPost(postSudoEmail)).Methods("POST")
r.Handle("/sudo/sms", authenticatedHandlerJSONPost(postSudoSMS)).Methods("POST")
r.Handle("/sudo/sse", authenticatedHandlerJSONPost(postSudoSSE)).Methods("POST")
text := resource.Text(router)
r.Handle("/text/{id}", authenticatedHandlerJSON(text.Get)).Methods("GET").Name("text.ByIDGet")
r.Handle("/trap-data", auth.NewEnsureAuth(apiTrapData)).Methods("GET")
r.Handle("/tile/{z}/{y}/{x}", auth.NewEnsureAuth(getTile)).Methods("GET")
upload := resource.Upload(r)
r.Handle("/upload/pool/custom", authenticatedHandlerPostMultipart(upload.PoolCustomCreate, file.CollectionCSV)).Methods("POST")
r.Handle("/upload/pool/flyover", authenticatedHandlerPostMultipart(upload.PoolFlyoverCreate, file.CollectionCSV)).Methods("POST")
r.Handle("/upload", authenticatedHandlerJSON(upload.List)).Methods("GET")
r.Handle("/upload/{id}", authenticatedHandlerJSON(upload.ByIDGet)).Methods("GET")
r.Handle("/upload/{id}/commit", authenticatedHandlerJSONPost(upload.Commit)).Methods("POST")
r.Handle("/upload/{id}/discard", authenticatedHandlerJSONPost(upload.Discard)).Methods("POST")
user := resource.User(router)
r.Handle("/user/self", authenticatedHandlerJSON(user.SelfGet)).Methods("GET")
r.Handle("/user/suggestion", authenticatedHandlerJSON(user.SuggestionGet)).Methods("GET")
r.Handle("/user", authenticatedHandlerJSONSlice(user.List)).Methods("GET")
r.Handle("/user/{id}", authenticatedHandlerJSON(user.ByIDGet)).Methods("GET").Name("user.ByIDGet")
r.Handle("/user/{id}", authenticatedHandlerJSONPut(user.ByIDPut)).Methods("PUT")
}


@ -1 +0,0 @@
package api


@ -1,46 +0,0 @@
package api
import (
"context"
"errors"
"net/http"
"github.com/Gleipnir-Technology/nidus-sync/auth"
nhttp "github.com/Gleipnir-Technology/nidus-sync/http"
"github.com/Gleipnir-Technology/nidus-sync/platform"
"github.com/rs/zerolog/log"
)
type reqSignin struct {
Password string `schema:"password"`
Username string `schema:"username"`
}
func postSignin(ctx context.Context, r *http.Request, req reqSignin) (string, *nhttp.ErrorWithStatus) {
if req.Password == "" {
return "", nhttp.NewBadRequest("Empty password")
}
if req.Username == "" {
return "", nhttp.NewBadRequest("Empty username")
}
log.Info().Str("username", req.Username).Msg("API Signin")
_, err := auth.SigninUser(r, req.Username, req.Password)
if err != nil {
if errors.Is(err, auth.InvalidCredentials{}) {
return "", nhttp.NewUnauthorized("invalid credentials")
}
if errors.Is(err, auth.InvalidUsername{}) {
return "", nhttp.NewUnauthorized("invalid credentials")
}
if errors.Is(err, platform.NoUserError{}) {
return "", nhttp.NewUnauthorized("invalid credentials")
}
log.Error().Err(err).Str("username", req.Username).Msg("Login server error")
return "", nhttp.NewError("login server error")
}
return "/", nil
}
func postSignout(ctx context.Context, w http.ResponseWriter, r *http.Request, u platform.User) *nhttp.ErrorWithStatus {
auth.SignoutUser(r, u)
return nil
}


@ -1,37 +0,0 @@
package api
import (
"context"
"net/http"
"strings"
"github.com/Gleipnir-Technology/nidus-sync/auth"
nhttp "github.com/Gleipnir-Technology/nidus-sync/http"
"github.com/rs/zerolog/log"
)
type reqSignup struct {
Username string `json:"username"`
Name string `json:"name"`
Password string `json:"password"`
Terms bool `json:"terms"`
}
func postSignup(ctx context.Context, r *http.Request, signup reqSignup) (string, *nhttp.ErrorWithStatus) {
log.Info().Str("username", signup.Username).Str("name", signup.Name).Str("password", strings.Repeat("*", len(signup.Password))).Msg("Signup")
if !signup.Terms {
log.Warn().Msg("Terms not agreed")
return "", nhttp.NewErrorStatus(http.StatusBadRequest, "You must agree to the terms to register")
}
user, err := auth.SignupUser(r.Context(), signup.Username, signup.Name, signup.Password)
if err != nil {
return "", nhttp.NewError("failed to signup user: %w", err)
}
auth.AddUserSession(ctx, user)
return "/", nil
}


@ -1,84 +0,0 @@
package api
import (
"context"
"fmt"
"net/http"
"github.com/Gleipnir-Technology/nidus-sync/comms/email"
"github.com/Gleipnir-Technology/nidus-sync/comms/text"
"github.com/Gleipnir-Technology/nidus-sync/config"
nhttp "github.com/Gleipnir-Technology/nidus-sync/http"
"github.com/Gleipnir-Technology/nidus-sync/platform"
"github.com/rs/zerolog/log"
)
type FormEmail struct {
Body string `schema:"emailBody"`
From string `schema:"emailFrom"`
Subject string `schema:"emailSubject"`
To string `schema:"emailTo"`
}
func postSudoEmail(ctx context.Context, r *http.Request, u platform.User, e FormEmail) (string, *nhttp.ErrorWithStatus) {
if !u.HasRoot() {
return "", &nhttp.ErrorWithStatus{
Message: "You must have sudo powers to do this",
Status: http.StatusForbidden,
}
}
request := email.Request{
From: e.From,
HTML: fmt.Sprintf("<html><p>%s</p></html>", e.Body),
Sender: e.From,
Subject: e.Subject,
To: e.To,
Text: e.Body,
}
resp, err := email.Send(ctx, request)
if err != nil {
log.Warn().Err(err).Msg("Failed to send email")
} else {
log.Info().Str("id", resp.ID).Str("to", e.To).Msg("Sent Email")
}
return "/sudo", nil
}
type FormSMS struct {
Message string `schema:"smsMessage"`
Phone string `schema:"smsPhone"`
}
func postSudoSMS(ctx context.Context, r *http.Request, u platform.User, sms FormSMS) (string, *nhttp.ErrorWithStatus) {
if !u.HasRoot() {
return "", &nhttp.ErrorWithStatus{
Message: "You must have sudo powers to do this",
Status: http.StatusForbidden,
}
}
id, err := text.SendText(ctx, config.VoipMSNumber, sms.Phone, sms.Message)
if err != nil {
log.Warn().Err(err).Msg("Failed to send SMS")
} else {
log.Info().Str("id", id).Msg("Sent SMS")
}
return "/sudo", nil
}
type FormSSE struct {
OrganizationID int32 `schema:"organizationID"`
Resource string `schema:"resource"`
Type string `schema:"type"`
URIPath string `schema:"uriPath"`
}
func postSudoSSE(ctx context.Context, r *http.Request, u platform.User, sse FormSSE) (string, *nhttp.ErrorWithStatus) {
if !u.HasRoot() {
return "", &nhttp.ErrorWithStatus{
Message: "You must have sudo powers to do this",
Status: http.StatusForbidden,
}
}
platform.SudoEvent(sse.OrganizationID, sse.Resource, sse.Type, sse.URIPath)
return "/sudo", nil
}


@ -1,38 +0,0 @@
package api
import (
"net/http"
"strconv"
"github.com/Gleipnir-Technology/nidus-sync/platform"
"github.com/gorilla/mux"
//"github.com/rs/zerolog/log"
)
func getTile(w http.ResponseWriter, r *http.Request, user platform.User) {
vars := mux.Vars(r)
x_str := vars["x"]
y_str := vars["y"]
z_str := vars["z"]
x, err := strconv.Atoi(x_str)
if err != nil {
http.Error(w, "can't parse x as an integer", http.StatusBadRequest)
return
}
y, err := strconv.Atoi(y_str)
if err != nil {
http.Error(w, "can't parse y as an integer", http.StatusBadRequest)
return
}
z, err := strconv.Atoi(z_str)
if err != nil {
http.Error(w, "can't parse z as an integer", http.StatusBadRequest)
return
}
err = platform.GetTile(r.Context(), w, user.Organization, true, uint(z), uint(y), uint(x))
if err != nil {
http.Error(w, "failed to do tile", http.StatusInternalServerError)
return
}
}
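The z/y/x parameters above follow the standard slippy-map tiling scheme (2^z tiles per axis at zoom z). The handler leaves range checking to platform.GetTile; as a standalone sketch, a bounds check would look like this (validTile is a hypothetical helper, not part of the package):

```go
package main

import "fmt"

// validTile reports whether x and y are in range for zoom level z
// under the standard slippy-map tiling scheme: 2^z tiles per axis.
func validTile(z, y, x uint) bool {
	n := uint(1) << z
	return x < n && y < n
}

func main() {
	fmt.Println(validTile(3, 5, 7)) // in range: 8 tiles per axis at z=3
	fmt.Println(validTile(3, 9, 0)) // y out of range
}
```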


@ -1,159 +0,0 @@
package api
import (
"context"
"fmt"
"net/http"
"strings"
"github.com/Gleipnir-Technology/nidus-sync/config"
"github.com/Gleipnir-Technology/nidus-sync/lint"
"github.com/Gleipnir-Technology/nidus-sync/platform/text"
"github.com/rs/zerolog/log"
"github.com/twilio/twilio-go/twiml"
)
// Translate from Twilio's representation of a RCS message sender to our concept of a phone number
// From: rcs:dev_report_mosquitoes_online_dosrvwxm_agent
// To: +16235525879
func getDst(to string) (string, error) {
if to == config.TwilioRCSSenderRMO {
return config.PhoneNumberReportStr, nil
}
/*
phone, err := models.FindCommsPhone(ctx, db.PGInstance.BobDB, to)
if err != nil {
return "", fmt.Errorf("Failed to search for dest phone %s: %w", to, err)
}
return phone.E164, nil
*/
return "", fmt.Errorf("cannot match phone number to '%s'", to)
}
func splitPhoneSource(s string) (string, string) {
parts := strings.Split(s, ":")
switch len(parts) {
case 0:
// unreachable: strings.Split always returns at least one element
return "", s
case 1:
return "", s
case 2:
return parts[0], parts[1]
default:
log.Warn().Str("s", s).Msg("Got an incomprehensible number of parts of a phone number")
return parts[0], parts[1]
}
}
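As a quick sanity check, the prefix handling above can be exercised in isolation. This standalone sketch mirrors splitPhoneSource using the example values from the getDst comment; note it uses SplitN, which keeps any additional colons inside the identifier instead of logging a warning:

```go
package main

import (
	"fmt"
	"strings"
)

// splitPhoneSource mirrors the helper above: it separates an optional
// channel prefix (e.g. "rcs") from the bare sender identifier.
func splitPhoneSource(s string) (string, string) {
	parts := strings.SplitN(s, ":", 2)
	if len(parts) == 1 {
		return "", s
	}
	return parts[0], parts[1]
}

func main() {
	fmt.Println(splitPhoneSource("rcs:dev_report_mosquitoes_online_dosrvwxm_agent"))
	fmt.Println(splitPhoneSource("+16235525879"))
}
```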
func twilioMessagePost(w http.ResponseWriter, r *http.Request) {
message_sid := r.PostFormValue("MessageSid")
log.Info().Str("sid", message_sid).Msg("Twilio Message POST")
lint.Fprintf(w, "")
}
func twilioCallPost(w http.ResponseWriter, r *http.Request) {
called := r.PostFormValue("Called")
tostate := r.PostFormValue("ToState")
callercountry := r.PostFormValue("CallerCountry")
direction := r.PostFormValue("Direction")
callerstate := r.PostFormValue("CallerState")
tozip := r.PostFormValue("ToZip")
callsid := r.PostFormValue("CallSid")
to := r.PostFormValue("To")
callerzip := r.PostFormValue("CallerZip")
tocountry := r.PostFormValue("ToCountry")
stirverstat := r.PostFormValue("StirVerstat")
//calltoken := r.PostFormValue("CallToken")
calledzip := r.PostFormValue("CalledZip")
apiversion := r.PostFormValue("ApiVersion")
calledcity := r.PostFormValue("CalledCity")
callstatus := r.PostFormValue("CallStatus")
from := r.PostFormValue("From")
accountsid := r.PostFormValue("AccountSid")
calledcountry := r.PostFormValue("CalledCountry")
callercity := r.PostFormValue("CallerCity")
tocity := r.PostFormValue("ToCity")
fromcountry := r.PostFormValue("FromCountry")
caller := r.PostFormValue("Caller")
fromcity := r.PostFormValue("FromCity")
calledstate := r.PostFormValue("CalledState")
fromzip := r.PostFormValue("FromZip")
fromstate := r.PostFormValue("FromState")
log.Info().Str("called", called).Str("tostate", tostate).Str("callercountry", callercountry).Str("direction", direction).Str("callerstate", callerstate).Str("tozip", tozip).Str("callsid", callsid).Str("to", to).Str("callerzip", callerzip).Str("tocountry", tocountry).Str("stirverstat", stirverstat).Str("calledzip", calledzip).Str("apiversion", apiversion).Str("calledcity", calledcity).Str("callstatus", callstatus).Str("from", from).Str("accountsid", accountsid).Str("calledcountry", calledcountry).Str("callercity", callercity).Str("tocity", tocity).Str("fromcountry", fromcountry).Str("caller", caller).Str("fromcity", fromcity).Str("calledstate", calledstate).Str("fromzip", fromzip).Str("fromstate", fromstate).Msg("Incoming phone call")
say := &twiml.VoiceSay{
Message: "Thanks for calling Report Mosquitoes Online. I'll forward you to our tech support lead, Eli",
}
call := &twiml.VoiceDial{
Number: config.PhoneNumberSupportStr,
}
twimlResult, err := twiml.Voice([]twiml.Element{say, call})
if err != nil {
log.Error().Err(err).Msg("Failed to produce TWIML")
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "text/xml")
lint.Fprintf(w, "%s", twimlResult)
}
func twilioCallStatusPost(w http.ResponseWriter, r *http.Request) {
call_sid := r.PostFormValue("CallSid")
account_sid := r.PostFormValue("AccountSid")
from := r.PostFormValue("From")
to := r.PostFormValue("To")
call_status := r.PostFormValue("CallStatus")
api_version := r.PostFormValue("ApiVersion")
direction := r.PostFormValue("Direction")
forwarded_from := r.PostFormValue("ForwardedFrom")
caller_name := r.PostFormValue("CallerName")
parent_call_sid := r.PostFormValue("ParentCallSid")
log.Info().Str("call_sid", call_sid).Str("account_sid", account_sid).Str("from", from).Str("to", to).Str("call_status", call_status).Str("api_version", api_version).Str("direction", direction).Str("forwarded_from", forwarded_from).Str("caller_name", caller_name).Str("parent_call_sid", parent_call_sid).Msg("Call status update")
lint.Fprintf(w, "")
}
func twilioTextPost(w http.ResponseWriter, r *http.Request) {
message_sid := r.PostFormValue("MessageSid")
account_sid := r.PostFormValue("AccountSid")
messaging_service_sid := r.PostFormValue("MessagingServiceSid")
from := r.PostFormValue("From")
to_ := r.PostFormValue("To")
body := r.PostFormValue("Body")
num_media := r.PostFormValue("NumMedia")
num_segments := r.PostFormValue("NumSegments")
media_content_type0 := r.PostFormValue("MediaContentType0")
media_url0 := r.PostFormValue("MediaUrl0")
from_city := r.PostFormValue("FromCity")
from_state := r.PostFormValue("FromState")
from_zip := r.PostFormValue("FromZip")
from_country := r.PostFormValue("FromCountry")
to_city := r.PostFormValue("ToCity")
to_state := r.PostFormValue("ToState")
to_zip := r.PostFormValue("ToZip")
to_country := r.PostFormValue("ToCountry")
type_, src := splitPhoneSource(from)
log.Info().Str("message_sid", message_sid).Str("account_sid", account_sid).Str("messaging_service_sid", messaging_service_sid).Str("from", from).Str("to_", to_).Str("body", body).Str("num_media", num_media).Str("num_segments", num_segments).Str("media_content_type0", media_content_type0).Str("media_url0", media_url0).Str("from_city", from_city).Str("from_state", from_state).Str("from_zip", from_zip).Str("from_country", from_country).Str("to_city", to_city).Str("to_state", to_state).Str("to_zip", to_zip).Str("to_country", to_country).Str("type_", type_).Msg("got text")
twimlResult, err := twiml.Messages([]twiml.Element{})
if err != nil {
log.Error().Err(err).Msg("Failed to produce TWIML")
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
dst, err := getDst(to_)
if err != nil {
log.Error().Err(err).Str("to", to_).Msg("Failed to get dst")
return
}
go func() {
err := text.HandleTextMessage(context.Background(), src, dst, body)
if err != nil {
log.Error().Err(err).Msg("failed to handle Twilio incoming text")
}
}()
w.Header().Set("Content-Type", "text/xml")
lint.Fprintf(w, "%s", twimlResult)
}
func twilioTextStatusPost(w http.ResponseWriter, r *http.Request) {
message_sid := r.PostFormValue("MessageSid")
message_status := r.PostFormValue("MessageStatus")
log.Info().Str("sid", message_sid).Str("status", message_status).Msg("Updated message status")
text.UpdateMessageStatus(message_sid, message_status)
lint.Fprintf(w, "")
}


@ -1,299 +0,0 @@
package api
import (
"net/http"
"time"
"github.com/Gleipnir-Technology/nidus-sync/db/models"
"github.com/Gleipnir-Technology/nidus-sync/h3utils"
"github.com/Gleipnir-Technology/nidus-sync/platform"
"github.com/Gleipnir-Technology/nidus-sync/platform/types"
"github.com/aarondl/opt/null"
//"github.com/gorilla/mux"
"github.com/rs/zerolog/log"
)
type H3Cell uint64
type Bounds struct {
East float64
North float64
South float64
West float64
}
func NewBounds() Bounds {
return Bounds{
East: 180,
North: 90,
South: -90,
West: -180,
}
}
type NoteImagePayload struct {
UUID string `json:"uuid"`
Cell H3Cell `json:"cell"`
Created time.Time `json:"created"`
CreatorID int `json:"creator_id"`
Deleted *time.Time `json:"deleted"`
DeletorID *int32 `json:"deletor_id"`
Version int32 `json:"version"`
}
type NoteAudioPayload struct {
UUID string `json:"uuid"`
Breadcrumbs []NoteAudioBreadcrumbPayload `json:"breadcrumbs"`
Created time.Time `json:"created"`
CreatorID int `json:"creator_id"`
Deleted *time.Time `json:"deleted"`
DeletorID *int32 `json:"deletor_id"`
Duration float32 `json:"duration"`
Transcription *string `json:"transcription"`
TranscriptionUserEdited bool `json:"transcriptionUserEdited"`
Version int32 `json:"version"`
}
type ResponseDistrict struct {
Agency string `json:"agency"`
Manager string `json:"manager"`
Phone string `json:"phone"`
Website string `json:"website"`
}
type ResponseMosquitoSource struct {
Access string `json:"access"`
Active *bool `json:"active"`
Comments string `json:"comments"`
Created string `json:"created"`
Description string `json:"description"`
H3Cell int64 `json:"h3cell"`
ID string `json:"id"`
LastInspectionDate string `json:"last_inspection_date"`
Habitat string `json:"habitat"`
Inspections []ResponseMosquitoInspection `json:"inspections"`
Name string `json:"name"`
NextActionDateScheduled string `json:"next_action_date_scheduled"`
Treatments []ResponseMosquitoTreatment `json:"treatments"`
UseType string `json:"use_type"`
WaterOrigin string `json:"water_origin"`
Zone string `json:"zone"`
}
type NoteAudioBreadcrumbPayload struct {
Cell H3Cell `json:"cell"`
Created time.Time `json:"created"`
ManuallySelected bool `json:"manuallySelected"`
}
type ResponseFieldseeker struct {
MosquitoSources []ResponseMosquitoSource `json:"sources"`
ServiceRequests []types.ServiceRequest `json:"requests"`
TrapData []ResponseTrapData `json:"traps"`
}
type ResponseClientIos struct {
Fieldseeker ResponseFieldseeker `json:"fieldseeker"`
Since time.Time `json:"since"`
}
func (i ResponseClientIos) Render(w http.ResponseWriter, r *http.Request) error {
return nil
}
type ResponseMosquitoInspection struct {
ActionTaken string `json:"action_taken"`
Comments string `json:"comments"`
Condition string `json:"condition"`
Created string `json:"created"`
EndDateTime string `json:"end_date_time"`
FieldTechnician string `json:"field_technician"`
ID string `json:"id"`
LocationName string `json:"location_name"`
SiteCondition string `json:"site_condition"`
}
func (rtd ResponseMosquitoInspection) Render(w http.ResponseWriter, r *http.Request) error {
return nil
}
func NewResponseMosquitoInspection(i *models.FieldseekerMosquitoinspection) ResponseMosquitoInspection {
return ResponseMosquitoInspection{
ActionTaken: i.Actiontaken.GetOr(""),
Comments: i.Comments.GetOr(""),
Condition: i.Sitecond.GetOr(""),
Created: formatTime(i.Creationdate),
ID: i.Globalid.String(),
LocationName: i.Locationname.GetOr(""),
SiteCondition: i.Sitecond.GetOr(""),
}
}
func NewResponseMosquitoInspections(inspections models.FieldseekerMosquitoinspectionSlice) []ResponseMosquitoInspection {
results := make([]ResponseMosquitoInspection, 0)
for _, i := range inspections {
results = append(results, NewResponseMosquitoInspection(i))
}
return results
}
func (rd ResponseDistrict) Render(w http.ResponseWriter, r *http.Request) error {
return nil
}
func (rtd ResponseMosquitoSource) Render(w http.ResponseWriter, r *http.Request) error {
return nil
}
func NewResponseMosquitoSource(ms platform.MosquitoSource) ResponseMosquitoSource {
pl := ms.PointLocation
h3cell, err := h3utils.ToCell(pl.H3cell.GetOr("0"))
if err != nil {
log.Warn().Err(err).Msg("Failed to convert h3 cell")
h3cell = 0
}
return ResponseMosquitoSource{
Active: toBool16(pl.Active),
Access: pl.Accessdesc.GetOr(""),
Comments: pl.Comments.GetOr(""),
Created: formatTime(pl.Creationdate),
Description: pl.Description.GetOr(""),
H3Cell: int64(h3cell),
ID: pl.Globalid.String(),
LastInspectionDate: formatTime(pl.Lastinspectdate),
Habitat: pl.Habitat.GetOr(""),
Inspections: NewResponseMosquitoInspections(ms.Inspections),
Name: pl.Name.GetOr(""),
NextActionDateScheduled: formatTime(pl.Nextactiondatescheduled),
Treatments: NewResponseMosquitoTreatments(ms.Treatments),
UseType: pl.Usetype.GetOr(""),
WaterOrigin: pl.Waterorigin.GetOr(""),
Zone: pl.Zone.GetOr(""),
}
}
func NewResponseMosquitoSources(sources []platform.MosquitoSource) []ResponseMosquitoSource {
results := make([]ResponseMosquitoSource, 0)
for _, i := range sources {
results = append(results, NewResponseMosquitoSource(i))
}
return results
}
type ResponseMosquitoTreatment struct {
Comments string `json:"comments"`
Created string `json:"created"`
EndDateTime string `json:"end_date_time"`
FieldTechnician string `json:"field_technician"`
Habitat string `json:"habitat"`
ID string `json:"id"`
Product string `json:"product"`
Quantity float64 `json:"quantity"`
QuantityUnit string `json:"quantity_unit"`
SiteCondition string `json:"site_condition"`
TreatAcres float64 `json:"treat_acres"`
TreatHectares float64 `json:"treat_hectares"`
}
func (rtd ResponseMosquitoTreatment) Render(w http.ResponseWriter, r *http.Request) error {
return nil
}
func NewResponseMosquitoTreatment(i *models.FieldseekerTreatment) ResponseMosquitoTreatment {
return ResponseMosquitoTreatment{
Comments: i.Comments.GetOr(""),
Created: formatTime(i.Creationdate),
FieldTechnician: i.Fieldtech.GetOr(""),
Habitat: i.Habitat.GetOr(""),
ID: i.Globalid.String(),
Product: i.Product.GetOr(""),
Quantity: i.Qty.GetOr(0),
QuantityUnit: i.Qtyunit.GetOr(""),
SiteCondition: i.Sitecond.GetOr(""),
TreatAcres: i.Treatacres.GetOr(0.0),
TreatHectares: i.Treathectares.GetOr(0.0),
}
}
func NewResponseMosquitoTreatments(treatments models.FieldseekerTreatmentSlice) []ResponseMosquitoTreatment {
results := make([]ResponseMosquitoTreatment, 0)
for _, i := range treatments {
results = append(results, NewResponseMosquitoTreatment(i))
}
return results
}
type ResponseNote struct {
CategoryName string `json:"categoryName"`
Content string `json:"content"`
H3Cell int64 `json:"h3cell"`
ID string `json:"id"`
Timestamp string `json:"timestamp"`
}
func (rtd ResponseNote) Render(w http.ResponseWriter, r *http.Request) error {
return nil
}
func NewResponseServiceRequests(requests models.FieldseekerServicerequestSlice) []types.ServiceRequest {
results := make([]types.ServiceRequest, 0)
for _, i := range requests {
results = append(results, types.ServiceRequestFromModel(i))
}
return results
}
type ResponseTrapData struct {
Created string `json:"created"`
Description string `json:"description"`
H3Cell int64 `json:"h3cell"`
ID string `json:"id"`
Name string `json:"name"`
}
func (rtd ResponseTrapData) Render(w http.ResponseWriter, r *http.Request) error {
return nil
}
func NewResponseTrapDatum(td *models.FieldseekerTraplocation) ResponseTrapData {
return ResponseTrapData{
Created: formatTime(td.Creationdate),
Description: td.Description.GetOr(""),
ID: td.Globalid.String(),
//H3Cell: td.H3Cell,
Name: td.Name.GetOr(""),
}
}
func NewResponseTrapData(data models.FieldseekerTraplocationSlice) []ResponseTrapData {
results := make([]ResponseTrapData, 0)
for _, i := range data {
results = append(results, NewResponseTrapDatum(i))
}
return results
}
func toResponseFieldseeker(sync platform.FieldseekerRecordsSync) ResponseFieldseeker {
return ResponseFieldseeker{
MosquitoSources: NewResponseMosquitoSources(sync.MosquitoSources),
ServiceRequests: NewResponseServiceRequests(sync.ServiceRequests),
TrapData: NewResponseTrapData(sync.TrapData),
}
}
func formatTime(t null.Val[time.Time]) string {
if t.IsNull() {
return ""
}
v := t.MustGet()
return v.Format("2006-01-02T15:04:05.000Z")
}
func toBool16(t null.Val[int16]) *bool {
if t.IsNull() {
return nil
}
val := t.MustGet()
b := val != 0
return &b
}

View file

@ -1 +0,0 @@
package api

View file

@ -1 +0,0 @@
package api

View file

@ -1,105 +0,0 @@
package api
import (
"context"
"encoding/json"
"io/ioutil"
"net/http"
//"github.com/Gleipnir-Technology/nidus-sync/config"
"github.com/Gleipnir-Technology/nidus-sync/lint"
"github.com/Gleipnir-Technology/nidus-sync/platform/text"
"github.com/rs/zerolog/log"
)
/*
{
"data": {
"id": 101252305,
"event_type": "message.received",
"record_type": "event",
"payload": {
"id": 101252305,
"record_type": "message",
"from": {
"phone_number": "+18016984649"
},
"to": [
{
"phone_number": "+15593720139",
"status": "webhook_delivered"
}
],
"text": "test3",
"received_at": "2026-01-29T20:16:23.000000+00:00",
"type": "SMS",
"media": []
}
}
}
*/
type VoipMSStatusPhoneFrom struct {
PhoneNumber string `json:"phone_number"`
}
type VoipMSStatusPhoneTo struct {
PhoneNumber string `json:"phone_number"`
Status string `json:"status"`
}
type VoipMSStatusPayload struct {
ID int `json:"id"`
RecordType string `json:"record_type"`
From VoipMSStatusPhoneFrom `json:"from"`
To []VoipMSStatusPhoneTo `json:"to"`
Text string `json:"text"`
ReceivedAt string `json:"received_at"`
Type string `json:"type"`
//Media []something
}
type VoipMSStatusUpdate struct {
ID int `json:"id"`
EventType string `json:"event_type"`
RecordType string `json:"record_type"`
Payload VoipMSStatusPayload `json:"payload"`
}
type VoipMSTextPostBody struct {
Data VoipMSStatusUpdate `json:"data"`
}
func voipmsTextGet(w http.ResponseWriter, r *http.Request) {
	query := r.URL.Query()
	to := query.Get("to")
	from := query.Get("from")
	message := query.Get("message")
	files := query.Get("files")
	id := query.Get("id")
	date := query.Get("date")
	log.Info().Str("to", to).Str("from", from).Str("message", message).Str("files", files).Str("id", id).Str("date", date).Msg("Incoming text message")
}
func voipmsTextPost(w http.ResponseWriter, r *http.Request) {
body, err := ioutil.ReadAll(r.Body)
if err != nil {
http.Error(w, "failed to read", http.StatusInternalServerError)
return
}
//debugSaveRequest(r)
var b VoipMSTextPostBody
err = json.Unmarshal(body, &b)
if err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
to := "unknown"
if len(b.Data.Payload.To) > 0 {
to = b.Data.Payload.To[0].PhoneNumber
}
log.Info().Int("ID", b.Data.ID).Str("event_type", b.Data.EventType).Str("record_type", b.Data.RecordType).Str("from", b.Data.Payload.From.PhoneNumber).Str("to", to).Str("content", b.Data.Payload.Text).Msg("Text status")
	// TODO: convert phone numbers from Voip.ms into E164 format for consistency
go func() {
err := text.HandleTextMessage(context.Background(), b.Data.Payload.From.PhoneNumber, to, b.Data.Payload.Text)
if err != nil {
log.Error().Err(err).Msg("failed to handle VoIP.ms incoming text")
}
}()
lint.Fprintf(w, "ok")
}

@ -1 +1 @@
Subproject commit 63cc8b573739294ea98f7e39d2baec3cd70dfd7f
Subproject commit 6344702fe14c9f64a1ea856e9629248bbaabe894

1021
arcgis.go Normal file

File diff suppressed because it is too large Load diff

212
auth.go Normal file
View file

@ -0,0 +1,212 @@
package main
import (
"context"
"errors"
"fmt"
"log/slog"
"net/http"
"strconv"
"github.com/Gleipnir-Technology/nidus-sync/enums"
"github.com/Gleipnir-Technology/nidus-sync/models"
"github.com/Gleipnir-Technology/nidus-sync/sql"
"github.com/aarondl/opt/omit"
"golang.org/x/crypto/bcrypt"
)
type NoCredentialsError struct{}
func (e NoCredentialsError) Error() string { return "No credentials were present in the request" }
type NoUserError struct{}
func (e NoUserError) Error() string { return "That user does not exist" }
type InvalidCredentials struct{}
func (e InvalidCredentials) Error() string { return "No username with that password exists" }
type InvalidUsername struct{}
func (e InvalidUsername) Error() string { return "That username doesn't exist" }
type AuthenticatedHandler func(http.ResponseWriter, *http.Request, *models.User)
type EnsureAuth struct {
handler AuthenticatedHandler
}
func NewEnsureAuth(handlerToWrap AuthenticatedHandler) *EnsureAuth {
return &EnsureAuth{handlerToWrap}
}
func (ea *EnsureAuth) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// If this is an API request respond with a more machine-readable error state
accept := r.Header.Values("Accept")
offers := []string{"application/json", "text/html"}
content_type := NegotiateContent(accept, offers)
user, err := getAuthenticatedUser(r)
if err != nil {
var msg []byte
// Separate return codes for different authentication failures
if _, ok := err.(*NoCredentialsError); ok {
			slog.Info("No credentials present and no session")
w.Header().Set("WWW-Authenticate-Error", "no-credentials")
msg = []byte("Please provide credentials.\n")
} else if _, ok := err.(*NoUserError); ok {
w.Header().Set("WWW-Authenticate-Error", "invalid-credentials")
msg = []byte("Invalid credentials provided.\n")
} else if _, ok := err.(*InvalidCredentials); ok {
w.Header().Set("WWW-Authenticate-Error", "invalid-credentials")
msg = []byte("Invalid credentials provided.\n")
}
if content_type == "text/html" {
http.Redirect(w, r, "/signin?next="+r.URL.Path, http.StatusSeeOther)
return
}
w.Header().Set("WWW-Authenticate", `Basic realm="Nidus Sync"`)
w.WriteHeader(401)
w.Write(msg)
return
}
ea.handler(w, r, user)
}
func addUserSession(r *http.Request, user *models.User) {
id := strconv.Itoa(int(user.ID))
sessionManager.Put(r.Context(), "user_id", id)
sessionManager.Put(r.Context(), "username", user.Username)
slog.Info("Created new user session",
slog.String("username", user.Username),
slog.String("user_id", id))
}
// Helper function to translate strings into solid error types for operating on
func findUser(ctx context.Context, user_id int) (*models.User, error) {
user, err := models.FindUser(ctx, PGInstance.BobDB, int32(user_id))
if err != nil {
if err.Error() == "No such user" {
return nil, &NoUserError{}
} else {
LogErrorTypeInfo(err)
slog.Error("Unrecognized error. This should be updated in the findUser code", slog.String("err", err.Error()))
return nil, err
}
}
return user, err
}
func getAuthenticatedUser(r *http.Request) (*models.User, error) {
//user_id := sessionManager.GetInt(r.Context(), "user_id")
user_id_str := sessionManager.GetString(r.Context(), "user_id")
if user_id_str != "" {
user_id, err := strconv.Atoi(user_id_str)
if err != nil {
return nil, fmt.Errorf("Failed to convert user_id to int: %w", err)
}
username := sessionManager.GetString(r.Context(), "username")
slog.Info("Current session info",
slog.Int("user_id", user_id),
slog.String("username", username))
if user_id > 0 && username != "" {
return findUser(r.Context(), user_id)
}
}
// If we can't get the user from the session try to get from auth headers
username, password, ok := r.BasicAuth()
if !ok {
return nil, &NoCredentialsError{}
}
user, err := validateUser(r.Context(), username, password)
if err != nil {
return nil, err
}
addUserSession(r, user)
return user, nil
}
func hashPassword(password string) (string, error) {
bytes, err := bcrypt.GenerateFromPassword([]byte(password), 14)
return string(bytes), err
}
func signinUser(r *http.Request, username string, password string) (*models.User, error) {
user, err := validateUser(r.Context(), username, password)
if err != nil {
return nil, err
}
if user == nil {
return nil, errors.New("No matching user")
}
addUserSession(r, user)
return user, nil
}
func signupUser(username string, name string, password string) (*models.User, error) {
passwordHash, err := hashPassword(password)
if err != nil {
return nil, fmt.Errorf("Cannot signup user: %w", err)
}
setter := models.UserSetter{
DisplayName: omit.From(name),
PasswordHash: omit.From(passwordHash),
PasswordHashType: omit.From(enums.HashtypeBcrypt14),
Username: omit.From(username),
}
u, err := models.Users.Insert(&setter).One(context.TODO(), PGInstance.BobDB)
if err != nil {
return nil, fmt.Errorf("Failed to create user: %w", err)
}
slog.Info("Created user",
slog.Int("ID", int(u.ID)),
slog.String("username", u.Username))
return u, nil
}
func validatePassword(password, hash string) bool {
err := bcrypt.CompareHashAndPassword([]byte(hash), []byte(password))
return err == nil
}
func validateUser(ctx context.Context, username string, password string) (*models.User, error) {
	// Never log the plaintext password or a hash derived from it.
	slog.Info("Validating user",
		slog.String("username", username))
	result, err := sql.UserByUsername(username).All(ctx, PGInstance.BobDB)
if err != nil {
return nil, fmt.Errorf("Failed to query for user: %w", err)
}
switch len(result) {
	case 0:
		return nil, &InvalidUsername{}
	case 1:
		row := result[0]
		if !validatePassword(password, row.PasswordHash) {
			return nil, &InvalidCredentials{}
		}
user := models.User{
ID: row.ID,
ArcgisAccessToken: row.ArcgisAccessToken,
ArcgisLicense: row.ArcgisLicense,
ArcgisRefreshToken: row.ArcgisRefreshToken,
ArcgisRefreshTokenExpires: row.ArcgisRefreshTokenExpires,
ArcgisRole: row.ArcgisRole,
DisplayName: row.DisplayName,
Email: row.Email,
OrganizationID: row.OrganizationID,
Username: row.Username,
}
return &user, nil
default:
return nil, errors.New("More than one matching row, this should be impossible.")
}
}

View file

@ -1,243 +0,0 @@
package auth
import (
"context"
"errors"
"fmt"
"net/http"
"strconv"
"strings"
"time"
"github.com/Gleipnir-Technology/nidus-sync/platform"
"github.com/rs/zerolog/log"
"golang.org/x/crypto/bcrypt"
)
type InactiveUser struct{}
func (e InactiveUser) Error() string { return "That user is not active" }
type InvalidCredentials struct{}
func (e InvalidCredentials) Error() string { return "No username with that password exists" }
type InvalidUsername struct{}
func (e InvalidUsername) Error() string { return "That username doesn't exist" }
type NoCredentialsError struct{}
func (e NoCredentialsError) Error() string { return "No credentials were present in the request" }
type AuthenticatedHandler func(http.ResponseWriter, *http.Request, platform.User)
type EnsureAuth struct {
handler AuthenticatedHandler
}
func AddUserSession(ctx context.Context, user *platform.User) {
id_str := strconv.Itoa(int(user.ID))
sessionManager.Put(ctx, "user_id", id_str)
sessionManager.Put(ctx, "username", user.Username)
log.Debug().Str("id", id_str).Str("username", user.Username).Msg("added user session")
}
func ImpersonateEnd(ctx context.Context) {
sessionManager.Put(ctx, "impersonated_user_id", "")
}
func ImpersonateUser(ctx context.Context, target_user_id int) {
target_user_id_str := strconv.Itoa(int(target_user_id))
sessionManager.Put(ctx, "impersonated_user_id", target_user_id_str)
}
func ImpersonatedUser(ctx context.Context) *int32 {
i_str := sessionManager.GetString(ctx, "impersonated_user_id")
if i_str == "" {
return nil
}
i, err := strconv.Atoi(i_str)
if err != nil {
log.Error().Err(err).Str("impersonated_user_id", i_str).Msg("failed to parse impersonated_user_id")
return nil
}
result := int32(i)
return &result
}
func ImpersonatorID(ctx context.Context) *int32 {
user_id_str := sessionManager.GetString(ctx, "user_id")
user_id, err := strconv.Atoi(user_id_str)
if err != nil {
log.Error().Err(err).Str("user_id", user_id_str).Msg("failed to parse user_id")
return nil
}
result := int32(user_id)
return &result
}
func GetAuthenticatedUser(r *http.Request) (*platform.User, error) {
ctx := r.Context()
user_id_str := sessionManager.GetString(ctx, "user_id")
impersonated_user_id_str := sessionManager.GetString(ctx, "impersonated_user_id")
if impersonated_user_id_str != "" {
user_id_str = impersonated_user_id_str
}
if user_id_str != "" {
user_id, err := strconv.Atoi(user_id_str)
if err != nil {
return nil, fmt.Errorf("Failed to convert user_id to int: %w", err)
}
username := sessionManager.GetString(ctx, "username")
if user_id > 0 && username != "" {
user, err := platform.UserByID(ctx, int32(user_id))
if err != nil {
return nil, fmt.Errorf("user by ID: %w", err)
}
if !user.IsActive {
return nil, fmt.Errorf("user is inactive")
}
return user, nil
}
}
// If we can't get the user from the session try to get from auth headers
username, password, ok := r.BasicAuth()
if !ok {
return nil, &NoCredentialsError{}
}
user, err := validateUser(ctx, username, password)
if err != nil {
return nil, err
}
AddUserSession(ctx, user)
return user, nil
}
func NewEnsureAuth(handlerToWrap AuthenticatedHandler) *EnsureAuth {
return &EnsureAuth{handlerToWrap}
}
func (ea *EnsureAuth) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// If this is an API request respond with a more machine-readable error state
accept := r.Header.Get("Accept")
/*
offers := []string{"application/json", "text/html"}
content_type := NegotiateContent(accept, offers)
*/
user, err := GetAuthenticatedUser(r)
if err != nil || user == nil {
var msg []byte
// Don't send authentication headers for browsers because it forces the authentication popup
requested_with := r.Header.Get("X-Requested-With")
//log.Debug().Str("x-requested-with", requested_with).Send()
if !strings.HasPrefix(requested_with, "nidus-web") && accept != "text/event-stream" {
w.Header().Set("WWW-Authenticate", `Basic realm="Nidus Sync"`)
// Separate return codes for different authentication failures
if _, ok := err.(*NoCredentialsError); ok {
log.Info().Msg("No credentials present and no session")
w.Header().Set("WWW-Authenticate-Error", "no-credentials")
msg = []byte("Please provide credentials.\n")
} else if _, ok := err.(*platform.NoUserError); ok {
w.Header().Set("WWW-Authenticate-Error", "invalid-credentials")
msg = []byte("Invalid credentials provided.\n")
} else if _, ok := err.(*InvalidCredentials); ok {
w.Header().Set("WWW-Authenticate-Error", "invalid-credentials")
msg = []byte("Invalid credentials provided.\n")
}
}
w.WriteHeader(401)
_, err = w.Write(msg)
if err != nil {
log.Error().Err(err).Msg("failed to write response")
}
return
}
ea.handler(w, r, *user)
}
func SigninUser(r *http.Request, username string, password string) (*platform.User, error) {
user, err := validateUser(r.Context(), username, password)
if err != nil {
return nil, err
}
if user == nil {
return nil, errors.New("No matching user")
}
AddUserSession(r.Context(), user)
return user, nil
}
func SignoutUser(r *http.Request, user platform.User) {
sessionManager.Put(r.Context(), "user_id", "")
sessionManager.Put(r.Context(), "username", "")
err := sessionManager.Destroy(r.Context())
if err != nil {
log.Error().Err(err).Msg("failed to destroy session for user on signout")
}
log.Info().Str("username", user.Username).Int("user_id", (user.ID)).Msg("Ended user session")
}
func SignupUser(ctx context.Context, username string, name string, password string) (*platform.User, error) {
password_hash, err := HashPassword(password)
if err != nil {
return nil, fmt.Errorf("Cannot signup user, failed to create hashed password: %w", err)
}
u, err := platform.CreateUser(ctx, username, name, password_hash)
if err != nil {
return nil, fmt.Errorf("create user: %s", err)
}
return u, nil
}
func HashPassword(password string) (string, error) {
bytes, err := bcrypt.GenerateFromPassword([]byte(password), 14)
return string(bytes), err
}
func redact(s string) string {
if len(s) <= 4 {
return s
}
first_two := s[:2]
last_two := s[len(s)-2:]
middle_length := len(s) - 4
return first_two + strings.Repeat("*", middle_length) + last_two
}
func validatePassword(password, hash string) bool {
	err := bcrypt.CompareHashAndPassword([]byte(hash), []byte(password))
	if err != nil {
		log.Debug().Err(err).Str("password", redact(password)).Msg("!validate password")
	}
	return err == nil
}
func validateUser(ctx context.Context, username string, password string) (*platform.User, error) {
log.Info().Str("username", username).Msg("begin validateUser. Hashing...")
start := time.Now()
passwordHash, err := HashPassword(password)
if err != nil {
return nil, fmt.Errorf("Failed to hash password: %w", err)
}
t1 := time.Now()
elapsed := t1.Sub(start)
log.Info().Int64("elapsed ms", elapsed.Milliseconds()).Msg("calculated hash")
t2 := time.Now()
user, err := platform.UserByUsername(ctx, username)
if err != nil {
return nil, fmt.Errorf("Failed to query for user: %w", err)
}
elapsed = t2.Sub(t1)
log.Info().Int64("elapsed ms", elapsed.Milliseconds()).Str("username", username).Msg("queried user")
	if user == nil {
		log.Info().Str("username", username).Str("password", redact(password)).Msg("Invalid username")
		return nil, &InvalidUsername{}
	}
	if !user.IsActive {
		return nil, &InactiveUser{}
	}
	if !validatePassword(password, user.PasswordHash) {
		log.Info().Str("username", username).Str("password", redact(password)).Str("hash", passwordHash).Msg("Invalid password for user")
		return nil, &InvalidCredentials{}
	}
	return user, nil
}

View file

@ -1,18 +0,0 @@
package auth
import (
"time"
"github.com/Gleipnir-Technology/nidus-sync/db"
"github.com/alexedwards/scs/pgxstore"
"github.com/alexedwards/scs/v2"
)
var sessionManager *scs.SessionManager
func NewSessionManager() *scs.SessionManager {
sessionManager = scs.New()
sessionManager.Store = pgxstore.New(db.PGInstance.PGXPool)
sessionManager.Lifetime = 24 * time.Hour
return sessionManager
}

1
bob Submodule

@ -0,0 +1 @@
Subproject commit 96da65fd88a50ae532079e8ea69746183f4af3a1

10
bobgen.yaml Normal file
View file

@ -0,0 +1,10 @@
aliases:
user_:
up_plural: "Users"
up_singular: "User"
down_plural: "users"
down_singular: "user"
psql:
queries:
- ./sql
plugins_preset: "all"

View file

@ -1,35 +0,0 @@
package main
import (
"bufio"
"errors"
"fmt"
"log"
"os"
"github.com/Gleipnir-Technology/nidus-sync/auth"
)
func main() {
var password string
scanValue("Please enter your password : ", &password)
hash, err := auth.HashPassword(password)
if err != nil {
fmt.Printf("Failed to hash password: %v\n", err)
os.Exit(1)
}
fmt.Println("Password:", password)
fmt.Println("Hash: ", hash)
}
func scanValue(message string, result *string) {
	fmt.Print(message)
scanner := bufio.NewScanner(os.Stdin)
if ok := scanner.Scan(); !ok {
log.Fatal(errors.New("Failed to scan input"))
}
*result = scanner.Text()
}

View file

@ -1,54 +0,0 @@
package main
import (
"context"
"log"
"os"
"github.com/Gleipnir-Technology/nidus-sync/config"
"github.com/Gleipnir-Technology/nidus-sync/db"
"github.com/Gleipnir-Technology/nidus-sync/db/query/public"
"github.com/Gleipnir-Technology/nidus-sync/lint"
)
func main() {
err := config.Parse()
if err != nil {
log.Printf("failed on config: %v", err)
os.Exit(1)
}
ctx := context.TODO()
err = db.InitializeDatabase(ctx, config.PGDSN)
if err != nil {
log.Printf("failed on db: %v", err)
os.Exit(2)
}
txn, err := db.BeginTxn(ctx)
if err != nil {
log.Printf("failed on txn: %v", err)
os.Exit(3)
}
defer lint.LogOnErrRollback(txn.Rollback, ctx, "rollback")
log.Printf("doing address")
gid := "openaddresses:address:us/ca/tulare-addresses-county:0dc28458fd03e3fa"
address, err := public.AddressFromGID(ctx, txn, gid)
if err != nil {
log.Printf("failed on query: %v", err)
os.Exit(4)
}
//log.Printf("address %d lat %f lng %f", address.ID, *address.LocationLatitude, *address.LocationLongitude)
log.Printf("Address id %d location %s", address.ID, address.Location)
lint.LogOnErrCtx(txn.Commit, ctx, "commit")
/*
log.Printf("doing comm")
id := int64(1)
comm, err := public.CommunicationFromID(ctx, id)
if err != nil {
log.Printf("failed on query: %v", err)
os.Exit(4)
}
log.Printf("communication %d", comm.ID)
*/
}

View file

@ -1,102 +0,0 @@
package email
import (
"context"
"fmt"
"github.com/Gleipnir-Technology/nidus-sync/config"
"github.com/rs/zerolog/log"
"resty.dev/v3"
)
type attachmentRequest struct {
Filename string `json:"filename"`
Content string `json:"content"`
}
type Request struct {
From string `json:"from"`
To string `json:"to"`
CC []string `json:"cc,omitempty"`
BCC []string `json:"bcc,omitempty"`
Subject string `json:"subject"`
Text string `json:"text"`
HTML string `json:"html,omitempty"`
Attachments []attachmentRequest `json:"attachments,omitempty"`
Sender string `json:"sender"`
ReplyTo string `json:"replyTo,omitempty"`
InReplyTo string `json:"inReplyTo,omitempty"`
References []string `json:"references,omitempty"`
}
type emailEnvelope struct {
From string `json:"from"`
To []string `json:"to"`
}
type emailResponseError struct {
StatusCode int `json:"statusCode"`
Error string `json:"error"`
Message string `json:"message"`
}
type emailResponse struct {
IsRedacted bool `json:"is_redacted"`
CreatedAt string `json:"created_at"`
HardBounces []string `json:"hard_bounces"`
SoftBounces []string `json:"soft_bounces"`
IsBounce bool `json:"is_bounce"`
Alias string `json:"alias"`
Domain string `json:"domain"`
User string `json:"user"`
Status string `json:"status"`
IsLocked bool `json:"is_locked"`
Envelope emailEnvelope `json:"envelope"`
RequireTLS bool `json:"requireTLS"`
MessageID string `json:"messageId"`
Headers map[string]string `json:"headers"`
Date string `json:"date"`
Subject string `json:"subject"`
Accepted []string `json:"accepted"`
Deliveries []string `json:"deliveries"`
RejectedErrors []string `json:"rejectedErrors"`
ID string `json:"id"`
Object string `json:"object"`
UpdatedAt string `json:"updated_at"`
Link string `json:"link"`
Message string `json:"message"`
}
var FORWARDEMAIL_EMAIL_POST_API = "https://api.forwardemail.net/v1/emails"
func Send(ctx context.Context, email Request) (result emailResponse, err error) {
client := resty.New()
var err_resp emailResponseError
r, err := client.R().
SetBasicAuth(config.ForwardEmailAPIToken, "").
SetBody(email).
SetContext(ctx).
SetError(&err_resp).
SetHeader("Content-Type", "application/json").
SetResult(&result).
Post(FORWARDEMAIL_EMAIL_POST_API)
	if err != nil {
		return result, fmt.Errorf("Failed to send email request: %w", err)
	}
if r.IsError() {
log.Error().
Int("status", err_resp.StatusCode).
Str("error", err_resp.Error).
Str("msg", err_resp.Message).
Str("email.from", email.From).
Str("email.sender", email.Sender).
Str("email.subject", email.Subject).
Str("email.to", email.To).
Str("email.text", email.Text).
Msg("Email send error")
return result, fmt.Errorf("Error response %d from email service: %s (%s)", err_resp.StatusCode, err_resp.Message, err_resp.Error)
}
return result, nil
}

View file

@ -1,62 +0,0 @@
package email
import (
"context"
"errors"
"fmt"
"time"
"github.com/gorilla/websocket"
"github.com/rs/zerolog/log"
)
var FORWARDEMAIL_WS_API = "wss://api.forwardemail.net/v1/ws"
func StartWebsocket(ctx context.Context, api_token string) {
	var conn *websocket.Conn
	var err error
	for {
		conn, err = ensureConnected(conn, api_token)
		if err != nil {
			log.Error().Err(err).Msg("Bailing on email websocket")
			return
		}
		select {
		case <-ctx.Done():
			return
		default:
			// Read message
			message_type, message, err := conn.ReadMessage()
			if err != nil {
				if !websocket.IsCloseError(err, websocket.CloseNormalClosure) {
					// Drop the connection so the next iteration redials
					conn = nil
				}
				log.Error().Err(err).Msg("Error reading message")
				continue
			}
			// Process and log the message
			log.Info().Int("message_type", message_type).Bytes("message", message).Msg("Got email notification")
		}
	}
}
func ensureConnected(conn *websocket.Conn, api_token string) (*websocket.Conn, error) {
	if conn != nil {
		return conn, nil
	}
	url := FORWARDEMAIL_WS_API + "?token=" + api_token
	for {
		new_conn, _, err := websocket.DefaultDialer.Dial(url, nil)
		if err == nil {
			log.Info().Msg("Connected to mail websocket")
			return new_conn, nil
		}
		if errors.Is(err, websocket.ErrBadHandshake) {
			return nil, fmt.Errorf("Bad handshake connecting to email websocket, bailing.")
		}
		log.Error().Err(err).Str("url", url).Msg("Error connecting to WebSocket")
		time.Sleep(3 * time.Second)
	}
}

View file

@ -1,18 +0,0 @@
package text
import (
"context"
"fmt"
"github.com/Gleipnir-Technology/nidus-sync/config"
)
func SendText(ctx context.Context, source string, destination string, message string) (string, error) {
switch config.TextProvider {
case "voipms":
return sendTextVoipms(ctx, destination, message)
case "twilio":
return sendTextTwilio(ctx, source, destination, message)
}
return "", fmt.Errorf("Unsupported provider '%s'", config.TextProvider)
}

View file

@ -1,33 +0,0 @@
package text
import (
"context"
"fmt"
"github.com/Gleipnir-Technology/nidus-sync/config"
"github.com/rs/zerolog/log"
"github.com/twilio/twilio-go"
twilioApi "github.com/twilio/twilio-go/rest/api/v2010"
)
func sendTextTwilio(ctx context.Context, source string, destination string, message string) (string, error) {
client := twilio.NewRestClient()
params := &twilioApi.CreateMessageParams{}
params.SetMessagingServiceSid(config.TwilioMessagingServiceSID)
params.SetBody(message)
params.SetTo(destination)
resp, err := client.Api.CreateMessage(params)
if err != nil {
return "", fmt.Errorf("Failed to create message to %s: %w", destination, err)
}
if resp.Sid == nil {
log.Warn().Str("src", source).Str("dst", destination).Msg("Text message sid is nil")
return "", nil
}
log.Info().Str("src", source).Str("dst", destination).Str("message", message).Str("sid", *resp.Sid).Msg("Created text message")
return *resp.Sid, nil
}

View file

@ -1,88 +0,0 @@
package text
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"strconv"
"github.com/Gleipnir-Technology/nidus-sync/config"
"github.com/Gleipnir-Technology/nidus-sync/lint"
"github.com/rs/zerolog/log"
)
var VOIP_MS_API = "https://voip.ms/api/v1/rest.php"
type VoipMSResponse struct {
MMS int `json:"mms"`
Message string `json:"message"`
Status string `json:"status"`
SMS int `json:"sms"`
}
func sendTextVoipms(ctx context.Context, to string, content string, media ...string) (string, error) {
	if len(content) > 2048 {
		return "", errors.New("Message content is more than 2048 characters")
	}
params := url.Values{}
params.Add("api_password", config.VoipMSPassword)
params.Add("api_username", config.VoipMSUsername)
params.Add("method", "sendMMS")
params.Add("did", config.VoipMSNumber)
params.Add("dst", to)
params.Add("message", content)
/*
for i, med := range media {
// These should be one of:
	// 1. A full URL that the service can GET
// 2. A base64-encoded image starting with "data:image/png;base64,iVBORw0KGgoAAAANSUh..."
params.Add(fmt.Sprintf("media%d", i+1), med)
}
params.Add(fmt.Sprintf("media%d", len(media)+1), "")
*/
response, err := makeVoipMSRequest(params)
if err != nil {
return "", fmt.Errorf("Failed to send MMS: %w", err)
}
	if response.Status == "ip_not_enabled" {
		return "", fmt.Errorf("Failed to send MMS: the IP address of this server is not enabled with voip.ms. You'll need to enable this server's IP with them.")
	}
log.Info().Str("status", response.Status).Int("mms", response.MMS).Msg("Sent MMS message")
return strconv.Itoa(response.MMS), nil
}
func makeVoipMSRequest(params url.Values) (VoipMSResponse, error) {
	result := VoipMSResponse{}
	// Construct the URL with query parameters
	full_url := VOIP_MS_API + "?" + params.Encode()
	// Make the HTTP request. Log only the endpoint and method: the full
	// URL contains the API credentials.
	log.Debug().Str("url", VOIP_MS_API).Str("method", params.Get("method")).Msg("Sending command to VoIP.ms")
	resp, err := http.Get(full_url)
	if err != nil {
		log.Warn().Err(err).Str("url", VOIP_MS_API).Msg("Failed to make request to Voip.MS")
		return result, fmt.Errorf("Error making request: %w", err)
	}
	defer lint.LogOnErr(resp.Body.Close, "failed closing response body")
	// Read the response body
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Warn().Err(err).Msg("Failed to read Voip.MS response body")
		return result, fmt.Errorf("Failed to read response: %w", err)
	}
	log.Info().Str("response", string(body)).Msg("Response from Voip.MS")
	// Parse the JSON response
	err = json.Unmarshal(body, &result)
	if err != nil {
		return result, fmt.Errorf("Failed to unmarshal JSON response: %w", err)
	}
	return result, nil
}

View file

@ -1,223 +0,0 @@
package config
import (
"fmt"
"net/url"
"os"
"github.com/nyaruka/phonenumbers"
//"github.com/rs/zerolog/log"
)
var (
Bind string
ClientID string
ClientSecret string
DomainRMO string
DomainNidus string
DomainTegola string
Environment string
FilesDirectory string
FieldseekerSchemaDirectory string
ForwardEmailAPIToken string
ForwardEmailRMOAddress string
ForwardEmailRMOPassword string
ForwardEmailRMOUsername string
ForwardEmailNidusAddress string
ForwardEmailNidusPassword string
ForwardEmailNidusUsername string
LobAPIKey string
PGDSN string
PhoneNumberReport phonenumbers.PhoneNumber
PhoneNumberReportStr string
PhoneNumberSupport phonenumbers.PhoneNumber
PhoneNumberSupportStr string
SentryDSN string
SentryDSNFrontend string
StadiaMapsAPIKey string
TextProvider string
TwilioAuthToken string
TwilioAccountSID string
TwilioMessagingServiceSID string
TwilioRCSSenderRMO string
VoipMSNumber string
VoipMSPassword string
VoipMSUsername string
)
func IsProductionEnvironment() bool {
return Environment == "PRODUCTION"
}
func makeURL(domain, path string, args ...string) string {
to_add := make([]any, 0)
for _, a := range args {
to_add = append(to_add, url.QueryEscape(a))
}
pattern := "https://" + domain + path
return fmt.Sprintf(pattern, to_add...)
}
func MakeURLNidus(path string, args ...string) string {
return makeURL(DomainNidus, path, args...)
}
func MakeURLReport(path string, args ...string) string {
return makeURL(DomainRMO, path, args...)
}
func MakeURLTegola(path string, args ...string) string {
//log.Debug().Str("path", path).Strs("args", args).Str("domain", DomainTegola).Msg("Making tegola url")
return makeURL(DomainTegola, path, args...)
}
func Parse() (err error) {
Bind = os.Getenv("BIND")
if Bind == "" {
Bind = ":9001"
}
ClientID = os.Getenv("ARCGIS_CLIENT_ID")
if ClientID == "" {
return fmt.Errorf("You must specify a non-empty ARCGIS_CLIENT_ID")
}
ClientSecret = os.Getenv("ARCGIS_CLIENT_SECRET")
if ClientSecret == "" {
return fmt.Errorf("You must specify a non-empty ARCGIS_CLIENT_SECRET")
}
DomainNidus = os.Getenv("DOMAIN_NIDUS")
if DomainNidus == "" {
return fmt.Errorf("You must specify a non-empty DOMAIN_NIDUS")
}
DomainRMO = os.Getenv("DOMAIN_RMO")
if DomainRMO == "" {
return fmt.Errorf("You must specify a non-empty DOMAIN_RMO")
}
DomainTegola = os.Getenv("DOMAIN_TEGOLA")
if DomainTegola == "" {
return fmt.Errorf("You must specify a non-empty DOMAIN_TEGOLA")
}
Environment = os.Getenv("ENVIRONMENT")
if Environment == "" {
return fmt.Errorf("You must specify a non-empty ENVIRONMENT")
}
if Environment != "PRODUCTION" && Environment != "DEVELOPMENT" {
return fmt.Errorf("ENVIRONMENT should be either DEVELOPMENT or PRODUCTION")
}
FieldseekerSchemaDirectory = os.Getenv("FIELDSEEKER_SCHEMA_DIRECTORY")
if FieldseekerSchemaDirectory == "" {
return fmt.Errorf("You must specify a non-empty FIELDSEEKER_SCHEMA_DIRECTORY")
}
FilesDirectory = os.Getenv("FILES_DIRECTORY")
if FilesDirectory == "" {
return fmt.Errorf("You must specify a non-empty FILES_DIRECTORY")
}
ForwardEmailAPIToken = os.Getenv("FORWARDEMAIL_API_TOKEN")
if ForwardEmailAPIToken == "" {
return fmt.Errorf("You must specify a non-empty FORWARDEMAIL_API_TOKEN")
}
ForwardEmailRMOAddress = os.Getenv("FORWARDEMAIL_RMO_ADDRESS")
if ForwardEmailRMOAddress == "" {
return fmt.Errorf("You must specify a non-empty FORWARDEMAIL_RMO_ADDRESS")
}
ForwardEmailRMOUsername = os.Getenv("FORWARDEMAIL_RMO_USERNAME")
if ForwardEmailRMOUsername == "" {
return fmt.Errorf("You must specify a non-empty FORWARDEMAIL_RMO_USERNAME")
}
ForwardEmailRMOPassword = os.Getenv("FORWARDEMAIL_RMO_PASSWORD")
if ForwardEmailRMOPassword == "" {
return fmt.Errorf("You must specify a non-empty FORWARDEMAIL_RMO_PASSWORD")
}
ForwardEmailNidusAddress = os.Getenv("FORWARDEMAIL_NIDUS_ADDRESS")
if ForwardEmailNidusAddress == "" {
return fmt.Errorf("You must specify a non-empty FORWARDEMAIL_NIDUS_ADDRESS")
}
ForwardEmailNidusUsername = os.Getenv("FORWARDEMAIL_NIDUS_USERNAME")
if ForwardEmailNidusUsername == "" {
return fmt.Errorf("You must specify a non-empty FORWARDEMAIL_NIDUS_USERNAME")
}
ForwardEmailNidusPassword = os.Getenv("FORWARDEMAIL_NIDUS_PASSWORD")
if ForwardEmailNidusPassword == "" {
return fmt.Errorf("You must specify a non-empty FORWARDEMAIL_NIDUS_PASSWORD")
}
LobAPIKey = os.Getenv("LOB_API_KEY")
if LobAPIKey == "" {
return fmt.Errorf("You must specify a non-empty LOB_API_KEY")
}
PGDSN = os.Getenv("POSTGRES_DSN")
if PGDSN == "" {
return fmt.Errorf("You must specify a non-empty POSTGRES_DSN")
}
PhoneNumberReportStr = os.Getenv("PHONE_NUMBER_RMO")
if PhoneNumberReportStr == "" {
return fmt.Errorf("You must specify a non-empty PHONE_NUMBER_RMO")
}
p, err := phonenumbers.Parse(PhoneNumberReportStr, "US")
if err != nil {
return fmt.Errorf("Failed to parse '%s' as a valid phone number: %w", PhoneNumberReportStr, err)
}
PhoneNumberReport = *p
PhoneNumberSupportStr = os.Getenv("PHONE_NUMBER_SUPPORT")
if PhoneNumberSupportStr == "" {
return fmt.Errorf("You must specify a non-empty PHONE_NUMBER_SUPPORT")
}
p, err = phonenumbers.Parse(PhoneNumberSupportStr, "US")
if err != nil {
return fmt.Errorf("Failed to parse '%s' as a valid phone number: %w", PhoneNumberSupportStr, err)
}
PhoneNumberSupport = *p
SentryDSN = os.Getenv("SENTRY_DSN")
if SentryDSN == "" {
return fmt.Errorf("You must specify a non-empty SENTRY_DSN")
}
SentryDSNFrontend = os.Getenv("SENTRY_DSN_FRONTEND")
if SentryDSNFrontend == "" {
return fmt.Errorf("You must specify a non-empty SENTRY_DSN_FRONTEND")
}
StadiaMapsAPIKey = os.Getenv("STADIA_MAPS_API_KEY")
if StadiaMapsAPIKey == "" {
return fmt.Errorf("You must specify a non-empty STADIA_MAPS_API_KEY")
}
TextProvider = os.Getenv("TEXT_PROVIDER")
switch TextProvider {
case "":
return fmt.Errorf("You must specify a non-empty TEXT_PROVIDER")
case "twilio", "voipms":
// recognized text providers
default:
return fmt.Errorf("Unrecognized text provider '%s'", TextProvider)
}
TwilioAccountSID = os.Getenv("TWILIO_ACCOUNT_SID")
if TwilioAccountSID == "" {
return fmt.Errorf("You must specify a non-empty TWILIO_ACCOUNT_SID")
}
TwilioAuthToken = os.Getenv("TWILIO_AUTH_TOKEN")
if TwilioAuthToken == "" {
return fmt.Errorf("You must specify a non-empty TWILIO_AUTH_TOKEN")
}
TwilioMessagingServiceSID = os.Getenv("TWILIO_MESSAGING_SERVICE_SID")
if TwilioMessagingServiceSID == "" {
return fmt.Errorf("You must specify a non-empty TWILIO_MESSAGING_SERVICE_SID")
}
TwilioRCSSenderRMO = os.Getenv("TWILIO_RCS_SENDER_RMO")
if TwilioRCSSenderRMO == "" {
return fmt.Errorf("You must specify a non-empty TWILIO_RCS_SENDER_RMO")
}
VoipMSNumber = os.Getenv("VOIPMS_NUMBER")
if VoipMSNumber == "" {
return fmt.Errorf("You must specify a non-empty VOIPMS_NUMBER")
}
VoipMSPassword = os.Getenv("VOIPMS_PASSWORD")
if VoipMSPassword == "" {
return fmt.Errorf("You must specify a non-empty VOIPMS_PASSWORD")
}
VoipMSUsername = os.Getenv("VOIPMS_USERNAME")
if VoipMSUsername == "" {
return fmt.Errorf("You must specify a non-empty VOIPMS_USERNAME")
}
return nil
}
func ArcGISOauthRedirectURL() string {
return MakeURLNidus("/oauth/arcgis/callback")
}
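The removed `makeURL` helper query-escapes each positional argument before substituting it into a printf-style path. A standalone sketch (hypothetical domain, mirroring the helper above) shows the escaping behavior:

```go
package main

import (
	"fmt"
	"net/url"
)

// makeURL mirrors the removed config helper: every argument is
// query-escaped, then substituted into the printf-style pattern.
func makeURL(domain, path string, args ...string) string {
	toAdd := make([]any, 0, len(args))
	for _, a := range args {
		toAdd = append(toAdd, url.QueryEscape(a))
	}
	return fmt.Sprintf("https://"+domain+path, toAdd...)
}

func main() {
	// A space in the argument becomes "+" after QueryEscape.
	fmt.Println(makeURL("nidus.example", "/oauth/%s/callback", "arc gis"))
}
```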


@ -1,4 +1,4 @@
package auth
package main
import (
"sort"

database.go Normal file

@ -0,0 +1,198 @@
package main
import (
"context"
"database/sql"
"embed"
"errors"
"fmt"
"io/fs"
"sync"
//"github.com/georgysavva/scany/v2/pgxscan"
//"github.com/jackc/pgx/v5"
"github.com/Gleipnir-Technology/nidus-sync/enums"
"github.com/Gleipnir-Technology/nidus-sync/models"
"github.com/jackc/pgx/v5/pgxpool"
"github.com/jackc/pgx/v5/stdlib"
_ "github.com/jackc/pgx/v5/stdlib"
"github.com/pressly/goose/v3"
"github.com/rs/zerolog/log"
"github.com/stephenafamo/bob"
"github.com/stephenafamo/bob/dialect/psql"
"github.com/stephenafamo/bob/dialect/psql/dialect"
"github.com/stephenafamo/bob/dialect/psql/im"
"github.com/uber/h3-go/v4"
)
//go:embed migrations/*.sql
var embedMigrations embed.FS
type postgres struct {
BobDB bob.DB
PGXPool *pgxpool.Pool
}
var (
PGInstance *postgres
pgOnce sync.Once
)
func doMigrations(connection_string string) error {
log.Info().Str("dsn", connection_string).Msg("Connecting to database")
db, err := sql.Open("pgx", connection_string)
if err != nil {
return fmt.Errorf("Failed to open database connection: %w", err)
}
defer db.Close()
row := db.QueryRowContext(context.Background(), "SELECT version()")
var val string
if err := row.Scan(&val); err != nil {
return fmt.Errorf("Failed to get database version query result: %w", err)
}
log.Info().Str("version", val).Msg("Connected to database")
fsys, err := fs.Sub(embedMigrations, "migrations")
if err != nil {
return fmt.Errorf("Failed to get migrations embedded directory: %w", err)
}
provider, err := goose.NewProvider(goose.DialectPostgres, db, fsys)
if err != nil {
return fmt.Errorf("Failed to create goose provider: %w", err)
}
//goose.SetBaseFS(embedMigrations)
current, target, err := provider.GetVersions(context.Background())
if err != nil {
return fmt.Errorf("Failed to get goose versions: %w", err)
}
log.Info().Int("current", int(current)).Int("target", int(target)).Msg("Migration status")
results, err := provider.Up(context.Background())
if err != nil {
return fmt.Errorf("Failed to run migrations: %w", err)
}
if len(results) > 0 {
for _, r := range results {
log.Info().Int("version", int(r.Source.Version)).Str("direction", r.Direction).Msg("Migration done")
}
} else {
log.Info().Msg("No migrations necessary.")
}
return nil
}
func initializeDatabase(ctx context.Context, uri string) error {
needs, err := needsMigrations(uri)
if err != nil {
return fmt.Errorf("Failed to determine if migrations are needed: %w", err)
}
if needs == nil {
return errors.New("Can't read variable 'needs' - it's nil")
}
if *needs {
//return errors.New(fmt.Sprintf("Must migrate database before connecting: %t", *needs))
log.Info().Msg("Handling database migrations")
err = doMigrations(uri)
if err != nil {
return fmt.Errorf("Failed to handle migrations: %w", err)
}
} else {
log.Info().Msg("No database migrations necessary")
}
pgOnce.Do(func() {
db, e := pgxpool.New(ctx, uri)
bobDB := bob.NewDB(stdlib.OpenDBFromPool(db))
PGInstance = &postgres{bobDB, db}
err = e
})
if err != nil {
return fmt.Errorf("unable to create connection pool: %w", err)
}
var current string
query := `SELECT current_database()`
err = PGInstance.BobDB.QueryRow(query).Scan(&current)
if err != nil {
return fmt.Errorf("Failed to get database current: %w", err)
}
log.Info().Str("database", current).Msg("Connected to database")
return nil
}
func needsMigrations(connection_string string) (*bool, error) {
log.Info().Str("dsn", connection_string).Msg("Connecting to database")
db, err := sql.Open("pgx", connection_string)
if err != nil {
return nil, fmt.Errorf("Failed to open database connection: %w", err)
}
defer db.Close()
row := db.QueryRowContext(context.Background(), "SELECT version()")
var val string
if err := row.Scan(&val); err != nil {
return nil, fmt.Errorf("Failed to get database version query result: %w", err)
}
log.Info().Str("version", val).Msg("Connected to database")
fsys, err := fs.Sub(embedMigrations, "migrations")
if err != nil {
return nil, fmt.Errorf("Failed to get migrations embedded directory: %w", err)
}
provider, err := goose.NewProvider(goose.DialectPostgres, db, fsys)
if err != nil {
return nil, fmt.Errorf("Failed to create goose provider: %w", err)
}
hasPending, err := provider.HasPending(context.Background())
if err != nil {
return nil, err
}
return &hasPending, nil
}
func updateSummaryTables(ctx context.Context, org *models.Organization) {
/*org, err := models.FindOrganization(ctx, PGInstance.BobDB, org_id)
if err != nil {
log.Error().Err(err).Msg("Failed to get organization")
}*/
log.Info().Int("org_id", int(org.ID)).Msg("Getting point locations")
point_locations, err := org.FSPointlocations().All(ctx, PGInstance.BobDB)
if err != nil {
log.Error().Err(err).Msg("Failed to get organization")
return
}
log.Info().Int("count", len(point_locations)).Msg("Summarizing point locations")
for i := range 16 {
log.Info().Int("resolution", i).Msg("Working summary layer")
cellToCount := make(map[h3.Cell]int, 0)
for _, p := range point_locations {
cell, err := getCell(p.GeometryX, p.GeometryY, i)
if err != nil {
log.Error().Err(err).Msg("Failed to get cell")
continue
}
//log.Info().Float64("X", p.GeometryX).Float64("Y", p.GeometryY).Str("cell", cell.String()).Msg("Converted lat/lng")
cellToCount[cell] = cellToCount[cell] + 1
}
var to_insert []bob.Mod[*dialect.InsertQuery] = make([]bob.Mod[*dialect.InsertQuery], 0)
to_insert = append(to_insert, im.Into("h3_aggregation", "cell", "resolution", "count_", "type_", "organization_id", "geometry"))
for cell, count := range cellToCount {
polygon, err := cellToPostgisGeometry(cell)
if err != nil {
log.Error().Err(err).Msg("Failed to get PostGIS geometry")
continue
}
// log.Info().Str("polygon", polygon).Msg("Going to insert")
to_insert = append(to_insert, im.Values(psql.Arg(cell.String(), i, count, enums.H3aggregationtypeServicerequest, org.ID), psql.F("st_geomfromtext", psql.S(polygon), 4326)))
}
to_insert = append(to_insert, im.OnConflict("cell, organization_id, type_").DoUpdate(
im.SetCol("count_").To(psql.Raw("EXCLUDED.count_")),
))
//log.Info().Str("sql", insertQueryToString(psql.Insert(to_insert...))).Msg("Updating...")
_, err := psql.Insert(to_insert...).Exec(ctx, PGInstance.BobDB)
if err != nil {
log.Error().Err(err).Msg("Failed to add h3 aggregation")
}
}
}
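`updateSummaryTables` buckets point locations by H3 cell at each resolution and counts them before upserting into `h3_aggregation`. The tally step can be sketched without the h3 dependency, using plain strings as stand-ins for `h3.Cell` values:

```go
package main

import "fmt"

// tally mirrors the aggregation step in updateSummaryTables:
// each point maps to a cell ID, and points are counted per cell.
func tally(cells []string) map[string]int {
	counts := make(map[string]int)
	for _, c := range cells {
		counts[c]++
	}
	return counts
}

func main() {
	// Two points land in the same cell, one in another.
	fmt.Println(tally([]string{"8a2a1072b59ffff", "8a2a1072b59ffff", "8a2a1072b597fff"}))
}
```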


@ -1,3 +0,0 @@
#!/run/current-system/sw/bin/bash
PSQL_DSN="postgresql://?host=/var/run/postgresql&sslmode=disable&dbname=nidus-sync" /tmp/bobgen-psql
#PSQL_DSN="postgresql://?host=/var/run/postgresql&sslmode=disable&dbname=nidus-sync" bob/gen/bobgen-psql/bobgen-psql


@ -1,56 +0,0 @@
aliases:
arcgis.user_:
up_plural: "ArcgisUsers"
up_singular: "ArcgisUser"
down_plural: "arcgisusers"
down_singular: "arcgisuser"
organization:
relationships:
publicreport.pool.pool_organization_id_fkey: "PublicreportPool"
fieldseeker.pool.pool_organization_id_fkey: "FieldseekerPool"
user_:
relationships:
fileupload.pool.pool_creator_id_fkey: "FileuploadPool"
up_plural: "Users"
up_singular: "User"
down_plural: "users"
down_singular: "user"
no_tests: true
psql:
schemas:
- "comms"
- "fieldseeker"
- "fileupload"
- "lob"
- "public"
- "publicreport"
- "tile"
shared_schema: "public"
queries:
- ./sql
uuid_pkg: google
plugins:
counts:
disabled: true
dbinfo:
destination: "dbinfo"
disabled: false
pkgname: "dbinfo"
enums:
destination: "enums"
disabled: false
pkgname: "enums"
factory:
disabled: true
pkgname: "factory"
destination: "factory"
joins:
disabled: true
loaders:
disabled: false
models:
destination: "models"
disabled: false
pkgname: "models"
where:
disabled: false


@ -1,268 +0,0 @@
package db
import (
"context"
"database/sql"
"embed"
"errors"
"fmt"
"io/fs"
//"github.com/georgysavva/scany/v2/pgxscan"
//"github.com/jackc/pgx/v5"
"github.com/Gleipnir-Technology/bob"
"github.com/Gleipnir-Technology/jet/postgres"
"github.com/Gleipnir-Technology/nidus-sync/lint"
"github.com/jackc/pgx/v5"
"github.com/jackc/pgx/v5/pgxpool"
"github.com/jackc/pgx/v5/stdlib"
"github.com/pressly/goose/v3"
"github.com/rs/zerolog/log"
"github.com/stephenafamo/scan"
pgxgeom "github.com/twpayne/pgx-geom"
)
var ErrNoRows = pgx.ErrNoRows
//go:embed migrations/*.sql
var embedMigrations embed.FS
type pginstance struct {
BobDB bob.DB
PGXPool *pgxpool.Pool
}
var (
PGInstance *pginstance
)
func ExecuteNone(ctx context.Context, stmt postgres.Statement) error {
query, args := stmt.Sql()
// Use Exec so no pgx.Rows is left open when the result set is discarded.
_, err := PGInstance.PGXPool.Exec(ctx, query, args...)
return err
}
func ExecuteNoneTx(ctx context.Context, txn Ex, stmt postgres.Statement) error {
query, args := stmt.Sql()
r, err := txn.Query(ctx, query, args...)
if err != nil {
return fmt.Errorf("query: %w", err)
}
r.Close()
return nil
}
func ExecuteNoneTxBob(ctx context.Context, txn bob.Tx, stmt postgres.Statement) error {
query, args := stmt.Sql()
r, err := txn.QueryContext(ctx, query, args...)
if err != nil {
return fmt.Errorf("query: %w", err)
}
defer lint.LogOnErr(r.Close, "close rows")
return nil
}
func ExecuteOne[T any](ctx context.Context, stmt postgres.Statement) (T, error) {
query, args := stmt.Sql()
var result T
row, err := PGInstance.PGXPool.Query(ctx, query, args...)
if err != nil {
return result, fmt.Errorf("execute query: %w", err)
}
var collected *T
collected, err = pgx.CollectOneRow(row, pgx.RowToAddrOfStructByPos[T])
if err != nil {
return result, fmt.Errorf("collect row: %w", err)
}
if collected == nil {
return result, errors.New("collect row: nil result")
}
return *collected, nil
}
func ExecuteOneTx[T any](ctx context.Context, txn Ex, stmt postgres.Statement) (T, error) {
query, args := stmt.Sql()
//result, err := scan.One(ctx, txn, scan.StructMapper[T](), query, args...)
row, err := txn.Query(ctx, query, args...)
var result T
if err != nil {
return result, fmt.Errorf("txn query: %w", err)
}
var collected *T
collected, err = pgx.CollectOneRow(row, pgx.RowToAddrOfStructByPos[T])
if err != nil {
return result, fmt.Errorf("collect row: %w", err)
}
if collected == nil {
return result, errors.New("collect row: nil result")
}
return *collected, nil
}
func ExecuteOneTxBob[T any](ctx context.Context, txn bob.Tx, stmt postgres.Statement) (T, error) {
query, args := stmt.Sql()
return scan.One(ctx, txn, scan.StructMapper[T](), query, args...)
}
func ExecuteMany[T any](ctx context.Context, stmt postgres.Statement) ([]T, error) {
query, args := stmt.Sql()
rows, err := PGInstance.PGXPool.Query(ctx, query, args...)
if err != nil {
return nil, fmt.Errorf("execute query: %w", err)
}
collected, err := pgx.CollectRows(rows, pgx.RowToAddrOfStructByPos[T])
if err != nil {
return []T{}, fmt.Errorf("collect rows: %w", err)
}
results := make([]T, len(collected))
for i, c := range collected {
if c == nil {
return results, fmt.Errorf("null collected")
}
results[i] = *c
}
return results, nil
}
func ExecuteManyTx[T any](ctx context.Context, txn Ex, stmt postgres.Statement) ([]T, error) {
query, args := stmt.Sql()
rows, err := txn.Query(ctx, query, args...)
if err != nil {
return nil, fmt.Errorf("execute query: %w", err)
}
collected, err := pgx.CollectRows(rows, pgx.RowToAddrOfStructByPos[T])
if err != nil {
return []T{}, fmt.Errorf("collect rows: %w", err)
}
results := make([]T, len(collected))
for i, c := range collected {
if c == nil {
return results, fmt.Errorf("null collected")
}
results[i] = *c
}
return results, nil
}
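`ExecuteMany` and `ExecuteManyTx` both collect rows as `[]*T` and then flatten to `[]T` with a nil guard. That flattening step can be isolated as a small generic helper (a sketch, not part of the package):

```go
package main

import "fmt"

// deref mirrors the collect step in ExecuteMany: a slice of
// pointers is flattened to values, failing on any nil entry.
func deref[T any](collected []*T) ([]T, error) {
	results := make([]T, len(collected))
	for i, c := range collected {
		if c == nil {
			return nil, fmt.Errorf("null collected at index %d", i)
		}
		results[i] = *c
	}
	return results, nil
}

func main() {
	a, b := 1, 2
	vals, err := deref([]*int{&a, &b})
	fmt.Println(vals, err)
}
```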
func doMigrations(connection_string string) error {
log.Debug().Str("dsn", connection_string).Msg("Connecting to database")
db, err := sql.Open("pgx", connection_string)
if err != nil {
return fmt.Errorf("Failed to open database connection: %w", err)
}
defer func() {
err := db.Close()
if err != nil {
log.Error().Err(err).Msg("failed to close database connection")
}
}()
row := db.QueryRowContext(context.Background(), "SELECT version()")
var val string
if err := row.Scan(&val); err != nil {
return fmt.Errorf("Failed to get database version query result: %w", err)
}
log.Info().Str("version", val).Msg("Connected to database")
fsys, err := fs.Sub(embedMigrations, "migrations")
if err != nil {
return fmt.Errorf("Failed to get migrations embedded directory: %w", err)
}
provider, err := goose.NewProvider(goose.DialectPostgres, db, fsys)
if err != nil {
return fmt.Errorf("Failed to create goose provider: %w", err)
}
//goose.SetBaseFS(embedMigrations)
current, target, err := provider.GetVersions(context.Background())
if err != nil {
return fmt.Errorf("Failed to get goose versions: %w", err)
}
log.Info().Int("current", int(current)).Int("target", int(target)).Msg("Migration status")
results, err := provider.Up(context.Background())
if err != nil {
return fmt.Errorf("Failed to run migrations: %w", err)
}
if len(results) > 0 {
for _, r := range results {
log.Info().Int("version", int(r.Source.Version)).Str("direction", r.Direction).Msg("Migration done")
}
} else {
log.Info().Msg("No migrations necessary.")
}
return nil
}
func InitializeDatabase(ctx context.Context, uri string) error {
log.Debug().Str("dsn", uri).Msg("Initializing database")
needs, err := needsMigrations(uri)
if err != nil {
return fmt.Errorf("Failed to determine if migrations are needed: %w", err)
}
if needs == nil {
return errors.New("Can't read variable 'needs' - it's nil")
}
if *needs {
//return errors.New(fmt.Sprintf("Must migrate database before connecting: %t", *needs))
log.Info().Msg("Handling database migrations")
err = doMigrations(uri)
if err != nil {
return fmt.Errorf("Failed to handle migrations: %w", err)
}
} else {
log.Debug().Msg("No database migrations necessary")
}
config, err := pgxpool.ParseConfig(uri)
if err != nil {
return fmt.Errorf("parse config: %w", err)
}
config.AfterConnect = func(ctx2 context.Context, conn *pgx.Conn) error {
err2 := pgxgeom.Register(ctx, conn)
if err2 != nil {
return fmt.Errorf("pgxgeom register: %w", err2)
}
return nil
}
db, err := pgxpool.NewWithConfig(ctx, config)
if err != nil {
return fmt.Errorf("new pool: %w", err)
}
bobDB := bob.NewDB(stdlib.OpenDBFromPool(db))
PGInstance = &pginstance{bobDB, db}
var current string
query := `SELECT current_database()`
err = PGInstance.BobDB.QueryRow(query).Scan(&current)
if err != nil {
return fmt.Errorf("Failed to get database current: %w", err)
}
return nil
}
func needsMigrations(connection_string string) (*bool, error) {
db, err := sql.Open("pgx", connection_string)
if err != nil {
return nil, fmt.Errorf("Failed to open database connection: %w", err)
}
defer func() {
err := db.Close()
if err != nil {
log.Error().Err(err).Msg("failed to close database connection")
}
}()
row := db.QueryRowContext(context.Background(), "SELECT version()")
var val string
if err := row.Scan(&val); err != nil {
return nil, fmt.Errorf("Failed to get database version query result: %w", err)
}
log.Info().Str("version", val).Msg("Connected to database")
fsys, err := fs.Sub(embedMigrations, "migrations")
if err != nil {
return nil, fmt.Errorf("Failed to get migrations embedded directory: %w", err)
}
provider, err := goose.NewProvider(goose.DialectPostgres, db, fsys)
if err != nil {
return nil, fmt.Errorf("Failed to create goose provider: %w", err)
}
hasPending, err := provider.HasPending(context.Background())
if err != nil {
return nil, err
}
return &hasPending, nil
}


@ -1,26 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var AddressErrors = &addressErrors{
ErrUniqueAddressPkey: &UniqueConstraintError{
schema: "",
table: "address",
columns: []string{"id"},
s: "address_pkey",
},
ErrUniqueAddressGidUnique: &UniqueConstraintError{
schema: "",
table: "address",
columns: []string{"gid"},
s: "address_gid_unique",
},
}
type addressErrors struct {
ErrUniqueAddressPkey *UniqueConstraintError
ErrUniqueAddressGidUnique *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var CommsEmailContactErrors = &commsEmailContactErrors{
ErrUniqueEmailPkey: &UniqueConstraintError{
schema: "comms",
table: "email_contact",
columns: []string{"address"},
s: "email_pkey",
},
}
type commsEmailContactErrors struct {
ErrUniqueEmailPkey *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var CommsEmailLogErrors = &commsEmailLogErrors{
ErrUniqueEmailLogPkey: &UniqueConstraintError{
schema: "comms",
table: "email_log",
columns: []string{"id"},
s: "email_log_pkey",
},
}
type commsEmailLogErrors struct {
ErrUniqueEmailLogPkey *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var CommsEmailTemplateErrors = &commsEmailTemplateErrors{
ErrUniqueEmailTemplatePkey: &UniqueConstraintError{
schema: "comms",
table: "email_template",
columns: []string{"id"},
s: "email_template_pkey",
},
}
type commsEmailTemplateErrors struct {
ErrUniqueEmailTemplatePkey *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var CommsMailerErrors = &commsMailerErrors{
ErrUniqueMailerPkey: &UniqueConstraintError{
schema: "comms",
table: "mailer",
columns: []string{"id"},
s: "mailer_pkey",
},
}
type commsMailerErrors struct {
ErrUniqueMailerPkey *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var CommsPhoneErrors = &commsPhoneErrors{
ErrUniquePhonePkey: &UniqueConstraintError{
schema: "comms",
table: "phone",
columns: []string{"e164"},
s: "phone_pkey",
},
}
type commsPhoneErrors struct {
ErrUniquePhonePkey *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var CommsTextJobErrors = &commsTextJobErrors{
ErrUniqueTextJobPkey: &UniqueConstraintError{
schema: "comms",
table: "text_job",
columns: []string{"id"},
s: "text_job_pkey",
},
}
type commsTextJobErrors struct {
ErrUniqueTextJobPkey *UniqueConstraintError
}


@ -1,26 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var CommsTextLogErrors = &commsTextLogErrors{
ErrUniqueTextLogPkey: &UniqueConstraintError{
schema: "comms",
table: "text_log",
columns: []string{"id"},
s: "text_log_pkey",
},
ErrUniqueTextLogTwilioSidKey: &UniqueConstraintError{
schema: "comms",
table: "text_log",
columns: []string{"twilio_sid"},
s: "text_log_twilio_sid_key",
},
}
type commsTextLogErrors struct {
ErrUniqueTextLogPkey *UniqueConstraintError
ErrUniqueTextLogTwilioSidKey *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var CommunicationErrors = &communicationErrors{
ErrUniqueCommunicationPkey: &UniqueConstraintError{
schema: "",
table: "communication",
columns: []string{"id"},
s: "communication_pkey",
},
}
type communicationErrors struct {
ErrUniqueCommunicationPkey *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var CommunicationLogEntryErrors = &communicationLogEntryErrors{
ErrUniqueCommunicationLogEntryPkey: &UniqueConstraintError{
schema: "",
table: "communication_log_entry",
columns: []string{"id"},
s: "communication_log_entry_pkey",
},
}
type communicationLogEntryErrors struct {
ErrUniqueCommunicationLogEntryPkey *UniqueConstraintError
}


@ -1,26 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var ComplianceReportRequestErrors = &complianceReportRequestErrors{
ErrUniqueComplianceReportRequestPkey: &UniqueConstraintError{
schema: "",
table: "compliance_report_request",
columns: []string{"id"},
s: "compliance_report_request_pkey",
},
ErrUniqueComplianceReportRequestPublicIdKey: &UniqueConstraintError{
schema: "",
table: "compliance_report_request",
columns: []string{"public_id"},
s: "compliance_report_request_public_id_key",
},
}
type complianceReportRequestErrors struct {
ErrUniqueComplianceReportRequestPkey *UniqueConstraintError
ErrUniqueComplianceReportRequestPublicIdKey *UniqueConstraintError
}


@ -1,26 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var ComplianceReportRequestMailerErrors = &complianceReportRequestMailerErrors{
ErrUniqueComplianceReportRequestMailerPkey: &UniqueConstraintError{
schema: "",
table: "compliance_report_request_mailer",
columns: []string{"id"},
s: "compliance_report_request_mailer_pkey",
},
ErrUniqueComplianceReportRequestMaiComplianceReportRequestId_Key: &UniqueConstraintError{
schema: "",
table: "compliance_report_request_mailer",
columns: []string{"compliance_report_request_id", "mailer_id"},
s: "compliance_report_request_mai_compliance_report_request_id__key",
},
}
type complianceReportRequestMailerErrors struct {
ErrUniqueComplianceReportRequestMailerPkey *UniqueConstraintError
ErrUniqueComplianceReportRequestMaiComplianceReportRequestId_Key *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var DistrictSubscriptionEmailErrors = &districtSubscriptionEmailErrors{
ErrUniqueDistrictSubscriptionEmailPkey: &UniqueConstraintError{
schema: "",
table: "district_subscription_email",
columns: []string{"organization_id", "email_contact_address"},
s: "district_subscription_email_pkey",
},
}
type districtSubscriptionEmailErrors struct {
ErrUniqueDistrictSubscriptionEmailPkey *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var DistrictSubscriptionPhoneErrors = &districtSubscriptionPhoneErrors{
ErrUniqueDistrictSubscriptionPhonePkey: &UniqueConstraintError{
schema: "",
table: "district_subscription_phone",
columns: []string{"organization_id", "phone_e164"},
s: "district_subscription_phone_pkey",
},
}
type districtSubscriptionPhoneErrors struct {
ErrUniqueDistrictSubscriptionPhonePkey *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FeatureErrors = &featureErrors{
ErrUniqueFeaturePkey: &UniqueConstraintError{
schema: "",
table: "feature",
columns: []string{"id"},
s: "feature_pkey",
},
}
type featureErrors struct {
ErrUniqueFeaturePkey *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FeaturePoolErrors = &featurePoolErrors{
ErrUniqueFeaturePoolPkey: &UniqueConstraintError{
schema: "",
table: "feature_pool",
columns: []string{"feature_id"},
s: "feature_pool_pkey",
},
}
type featurePoolErrors struct {
ErrUniqueFeaturePoolPkey *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerContainerrelateErrors = &fieldseekerContainerrelateErrors{
ErrUniqueContainerrelatePkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "containerrelate",
columns: []string{"globalid", "version"},
s: "containerrelate_pkey",
},
}
type fieldseekerContainerrelateErrors struct {
ErrUniqueContainerrelatePkey *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerFieldscoutinglogErrors = &fieldseekerFieldscoutinglogErrors{
ErrUniqueFieldscoutinglogPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "fieldscoutinglog",
columns: []string{"globalid", "version"},
s: "fieldscoutinglog_pkey",
},
}
type fieldseekerFieldscoutinglogErrors struct {
ErrUniqueFieldscoutinglogPkey *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerHabitatrelateErrors = &fieldseekerHabitatrelateErrors{
ErrUniqueHabitatrelatePkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "habitatrelate",
columns: []string{"globalid", "version"},
s: "habitatrelate_pkey",
},
}
type fieldseekerHabitatrelateErrors struct {
ErrUniqueHabitatrelatePkey *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerInspectionsampleErrors = &fieldseekerInspectionsampleErrors{
ErrUniqueInspectionsamplePkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "inspectionsample",
columns: []string{"globalid", "version"},
s: "inspectionsample_pkey",
},
}
type fieldseekerInspectionsampleErrors struct {
ErrUniqueInspectionsamplePkey *UniqueConstraintError
}


@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerInspectionsampledetailErrors = &fieldseekerInspectionsampledetailErrors{
ErrUniqueInspectionsampledetailPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "inspectionsampledetail",
columns: []string{"globalid", "version"},
s: "inspectionsampledetail_pkey",
},
}
type fieldseekerInspectionsampledetailErrors struct {
ErrUniqueInspectionsampledetailPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerLinelocationErrors = &fieldseekerLinelocationErrors{
ErrUniqueLinelocationPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "linelocation",
columns: []string{"globalid", "version"},
s: "linelocation_pkey",
},
}
type fieldseekerLinelocationErrors struct {
ErrUniqueLinelocationPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerLocationtrackingErrors = &fieldseekerLocationtrackingErrors{
ErrUniqueLocationtrackingPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "locationtracking",
columns: []string{"globalid", "version"},
s: "locationtracking_pkey",
},
}
type fieldseekerLocationtrackingErrors struct {
ErrUniqueLocationtrackingPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerMosquitoinspectionErrors = &fieldseekerMosquitoinspectionErrors{
ErrUniqueMosquitoinspectionPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "mosquitoinspection",
columns: []string{"globalid", "version"},
s: "mosquitoinspection_pkey",
},
}
type fieldseekerMosquitoinspectionErrors struct {
ErrUniqueMosquitoinspectionPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerPointlocationErrors = &fieldseekerPointlocationErrors{
ErrUniquePointlocationPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "pointlocation",
columns: []string{"globalid", "version"},
s: "pointlocation_pkey",
},
}
type fieldseekerPointlocationErrors struct {
ErrUniquePointlocationPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerPolygonlocationErrors = &fieldseekerPolygonlocationErrors{
ErrUniquePolygonlocationPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "polygonlocation",
columns: []string{"globalid", "version"},
s: "polygonlocation_pkey",
},
}
type fieldseekerPolygonlocationErrors struct {
ErrUniquePolygonlocationPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerPoolErrors = &fieldseekerPoolErrors{
ErrUniquePoolPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "pool",
columns: []string{"globalid", "version"},
s: "pool_pkey",
},
}
type fieldseekerPoolErrors struct {
ErrUniquePoolPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerPooldetailErrors = &fieldseekerPooldetailErrors{
ErrUniquePooldetailPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "pooldetail",
columns: []string{"globalid", "version"},
s: "pooldetail_pkey",
},
}
type fieldseekerPooldetailErrors struct {
ErrUniquePooldetailPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerProposedtreatmentareaErrors = &fieldseekerProposedtreatmentareaErrors{
ErrUniqueProposedtreatmentareaPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "proposedtreatmentarea",
columns: []string{"globalid", "version"},
s: "proposedtreatmentarea_pkey",
},
}
type fieldseekerProposedtreatmentareaErrors struct {
ErrUniqueProposedtreatmentareaPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerQamosquitoinspectionErrors = &fieldseekerQamosquitoinspectionErrors{
ErrUniqueQamosquitoinspectionPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "qamosquitoinspection",
columns: []string{"globalid", "version"},
s: "qamosquitoinspection_pkey",
},
}
type fieldseekerQamosquitoinspectionErrors struct {
ErrUniqueQamosquitoinspectionPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerRodentlocationErrors = &fieldseekerRodentlocationErrors{
ErrUniqueRodentlocationPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "rodentlocation",
columns: []string{"globalid", "version"},
s: "rodentlocation_pkey",
},
}
type fieldseekerRodentlocationErrors struct {
ErrUniqueRodentlocationPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerSamplecollectionErrors = &fieldseekerSamplecollectionErrors{
ErrUniqueSamplecollectionPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "samplecollection",
columns: []string{"globalid", "version"},
s: "samplecollection_pkey",
},
}
type fieldseekerSamplecollectionErrors struct {
ErrUniqueSamplecollectionPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerSamplelocationErrors = &fieldseekerSamplelocationErrors{
ErrUniqueSamplelocationPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "samplelocation",
columns: []string{"globalid", "version"},
s: "samplelocation_pkey",
},
}
type fieldseekerSamplelocationErrors struct {
ErrUniqueSamplelocationPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerServicerequestErrors = &fieldseekerServicerequestErrors{
ErrUniqueServicerequestPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "servicerequest",
columns: []string{"globalid", "version"},
s: "servicerequest_pkey",
},
}
type fieldseekerServicerequestErrors struct {
ErrUniqueServicerequestPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerSpeciesabundanceErrors = &fieldseekerSpeciesabundanceErrors{
ErrUniqueSpeciesabundancePkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "speciesabundance",
columns: []string{"globalid", "version"},
s: "speciesabundance_pkey",
},
}
type fieldseekerSpeciesabundanceErrors struct {
ErrUniqueSpeciesabundancePkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerStormdrainErrors = &fieldseekerStormdrainErrors{
ErrUniqueStormdrainPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "stormdrain",
columns: []string{"globalid", "version"},
s: "stormdrain_pkey",
},
}
type fieldseekerStormdrainErrors struct {
ErrUniqueStormdrainPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerTimecardErrors = &fieldseekerTimecardErrors{
ErrUniqueTimecardPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "timecard",
columns: []string{"globalid", "version"},
s: "timecard_pkey",
},
}
type fieldseekerTimecardErrors struct {
ErrUniqueTimecardPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerTrapdatumErrors = &fieldseekerTrapdatumErrors{
ErrUniqueTrapdataPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "trapdata",
columns: []string{"globalid", "version"},
s: "trapdata_pkey",
},
}
type fieldseekerTrapdatumErrors struct {
ErrUniqueTrapdataPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerTraplocationErrors = &fieldseekerTraplocationErrors{
ErrUniqueTraplocationPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "traplocation",
columns: []string{"globalid", "version"},
s: "traplocation_pkey",
},
}
type fieldseekerTraplocationErrors struct {
ErrUniqueTraplocationPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerTreatmentErrors = &fieldseekerTreatmentErrors{
ErrUniqueTreatmentPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "treatment",
columns: []string{"globalid", "version"},
s: "treatment_pkey",
},
}
type fieldseekerTreatmentErrors struct {
ErrUniqueTreatmentPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerTreatmentareaErrors = &fieldseekerTreatmentareaErrors{
ErrUniqueTreatmentareaPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "treatmentarea",
columns: []string{"globalid", "version"},
s: "treatmentarea_pkey",
},
}
type fieldseekerTreatmentareaErrors struct {
ErrUniqueTreatmentareaPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerZoneErrors = &fieldseekerZoneErrors{
ErrUniqueZonesPkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "zones",
columns: []string{"globalid", "version"},
s: "zones_pkey",
},
}
type fieldseekerZoneErrors struct {
ErrUniqueZonesPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FieldseekerZones2Errors = &fieldseekerZones2Errors{
ErrUniqueZones2Pkey: &UniqueConstraintError{
schema: "fieldseeker",
table: "zones2",
columns: []string{"globalid", "version"},
s: "zones2_pkey",
},
}
type fieldseekerZones2Errors struct {
ErrUniqueZones2Pkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FileuploadCSVErrors = &fileuploadCSVErrors{
ErrUniqueCsvPkey: &UniqueConstraintError{
schema: "fileupload",
table: "csv",
columns: []string{"file_id"},
s: "csv_pkey",
},
}
type fileuploadCSVErrors struct {
ErrUniqueCsvPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FileuploadErrorCSVErrors = &fileuploadErrorCSVErrors{
ErrUniqueErrorCsvPkey: &UniqueConstraintError{
schema: "fileupload",
table: "error_csv",
columns: []string{"id"},
s: "error_csv_pkey",
},
}
type fileuploadErrorCSVErrors struct {
ErrUniqueErrorCsvPkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FileuploadErrorFileErrors = &fileuploadErrorFileErrors{
ErrUniqueErrorFilePkey: &UniqueConstraintError{
schema: "fileupload",
table: "error_file",
columns: []string{"id"},
s: "error_file_pkey",
},
}
type fileuploadErrorFileErrors struct {
ErrUniqueErrorFilePkey *UniqueConstraintError
}

@@ -1,17 +0,0 @@
// Code generated by BobGen psql v0.42.5. DO NOT EDIT.
// This file is meant to be re-generated in place and/or deleted at any time.
package dberrors
var FileuploadFileErrors = &fileuploadFileErrors{
ErrUniqueFilePkey: &UniqueConstraintError{
schema: "fileupload",
table: "file",
columns: []string{"id"},
s: "file_pkey",
},
}
type fileuploadFileErrors struct {
ErrUniqueFilePkey *UniqueConstraintError
}

Some files were not shown because too many files have changed in this diff.