<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en_US"><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://zackdesign.biz/feed.xml" rel="self" type="application/atom+xml" /><link href="https://zackdesign.biz/" rel="alternate" type="text/html" hreflang="en_US" /><updated>2026-04-17T02:19:43+00:00</updated><id>https://zackdesign.biz/feed.xml</id><title type="html">Zack Design</title><subtitle>Software engineering, web development, and digital solutions by industry experts</subtitle><author><name>Isaac Rowntree</name><email>isaac@zackdesign.biz</email></author><entry><title type="html">Introducing SessionHQ — our flagship SaaS</title><link href="https://zackdesign.biz/sessionhq-launch/" rel="alternate" type="text/html" title="Introducing SessionHQ — our flagship SaaS" /><published>2026-04-17T00:00:00+00:00</published><updated>2026-04-17T00:00:00+00:00</updated><id>https://zackdesign.biz/sessionhq-launch</id><content type="html" xml:base="https://zackdesign.biz/sessionhq-launch/"><![CDATA[<p>After months of design, engineering, and iteration with real studio operators, Zack Design is proud to launch <strong><a href="https://sessionhq.org">SessionHQ</a></strong> — the modern check-in platform for class-based studios. It is the most ambitious product we have ever shipped, and it now runs nightly check-ins at our founding partner <a href="https://www.havanahastingsdance.com.au/">Havana on the Hastings</a> in Port Macquarie.</p>

<!-- more -->

<h2 id="what-sessionhq-does">What SessionHQ does</h2>

<p>SessionHQ replaces the spreadsheets, paper sign-in sheets, and duct-taped Mindbody workarounds that most small studios tolerate because the alternatives are too expensive, too clunky, or too generic. We built it by sitting at the front desk on a Tuesday night and asking, <em>“what actually needs to happen here?”</em></p>

<p>The answer, it turns out, is:</p>

<ul>
  <li><strong>Members walk in and check in fast.</strong> PIN pad, NFC wristband tap, or QR scan from their phone. No app install required. No “where’s my card.”</li>
  <li><strong>Passes just work.</strong> Class packs, casual rates, unlimited passes. Credits deduct automatically on check-in. Cards-on-file auto-renew the moment a pack runs out.</li>
  <li><strong>Payments happen where the student is.</strong> Square integration handles card payments inline. PCI-compliant. No raw card numbers ever touch our servers.</li>
  <li><strong>Admins see the truth.</strong> Tonight’s attendance, revenue, unpaid check-ins, LTV, retention cohorts — all updating in real time.</li>
</ul>
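<p>The pass behaviour above can be sketched in a few lines. This is illustrative Python only; the class, field, and event names are our assumptions, not SessionHQ's actual implementation.</p>

```python
from dataclasses import dataclass

@dataclass
class Pass:
    credits_remaining: int
    auto_renew: bool
    pack_size: int

def check_in(p: Pass) -> list:
    """Deduct one credit; trigger a renewal the moment the pack empties.

    Sketch only: event names and the unpaid-check-in path are invented
    to mirror the bullets above.
    """
    events = []
    if p.credits_remaining <= 0:
        events.append("unpaid-check-in")   # surfaces on the admin dashboard
        return events
    p.credits_remaining -= 1
    events.append("credit-deducted")
    if p.credits_remaining == 0 and p.auto_renew:
        p.credits_remaining = p.pack_size  # charge card-on-file, refill pack
        events.append("pack-renewed")
    return events
```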

<p>No per-member fees. No transaction surcharges on top of Square. One flat monthly subscription.</p>

<h2 id="the-technology-behind-it">The technology behind it</h2>

<p>SessionHQ is a serious piece of software infrastructure. A quick tour of the stack:</p>

<ul>
  <li><strong>Next.js 16 &amp; React 19</strong> on the frontend, with Tailwind 4 and a custom 19-primitive design system (not shadcn — we wanted the ownership).</li>
  <li><strong>Cloudflare Workers</strong> via OpenNext for the runtime. Global edge deployment, sub-100ms cold starts, one Worker cron handling pass-lifecycle jobs, database backups, pruning, and retention sweeps.</li>
  <li><strong>Supabase</strong> for auth, Postgres, realtime, and row-level security. Every tenant-owned table enforces <code class="language-plaintext highlighter-rouge">auth_tenant_id()</code> at the database layer — a studio <em>cannot</em> see another studio’s data, period.</li>
  <li><strong>Square</strong> for payments, with Supabase Vault for token storage and PCI-safe tokenisation.</li>
  <li><strong>Resend</strong> for lifecycle email, <strong>Sentry</strong> for observability, <strong>R2</strong> for storage, <strong>Playwright</strong> and <strong>Vitest</strong> for 800+ tests across unit, integration, and E2E.</li>
</ul>

<p>Multi-tenancy, GDPR-readiness (consent capture, data export, right-to-erasure, full audit trail), idempotency, rate limiting, feature flags — all in from day one, not bolted on later.</p>

<h2 id="why-we-built-it">Why we built it</h2>

<p>We have spent 20+ years building software for other people. SessionHQ is different: <strong>it is our product.</strong> We own the roadmap, the pricing, the customer relationship. We decide which features matter. We eat the bug reports.</p>

<p>It is also a proof point. We believe small businesses deserve software that is as thoughtfully engineered as anything the enterprise market gets — without the enterprise price tag, the 12-month implementation, or the 400-page MSA. SessionHQ is our demonstration that a small, focused team can ship serious SaaS.</p>

<h2 id="founding-partner-havana-on-the-hastings">Founding partner: Havana on the Hastings</h2>

<p>SessionHQ did not launch in a vacuum. It launched with a customer.</p>

<p><a href="https://www.havanahastingsdance.com.au/">Havana on the Hastings</a> is Port Macquarie’s Latin dance community — Cuban salsa, bachata, urban kiz, and rueda (the dance that brought founders Mike and Kellie together). They run on passes, practicas, and real connection, with the warmth of a studio where “everyone starts somewhere” is not just a slogan but a weekly reality.</p>

<p>They were already operating on the pass system that SessionHQ is built around. Partnering with them meant we did not have to guess what studio operators needed — we had one telling us, in real time, what worked and what did not. Every feature in SessionHQ has been stress-tested at their front desk on a Tuesday night.</p>

<p>If you are in Port Macquarie and want to dance, <a href="https://www.havanahastingsdance.com.au/classes">drop in</a>. Absolute beginners are welcome every week.</p>

<h2 id="whats-next">What’s next</h2>

<p>SessionHQ is onboarding new studios now. If you run a dance studio, gym, yoga or pilates studio, martial arts school, or climbing gym — or if you know someone who does — we would love to talk.</p>

<ul>
  <li><strong>Visit</strong> <a href="https://sessionhq.org">sessionhq.org</a> to see the product.</li>
  <li><strong>Request access</strong> on the site, or <strong>book a 15-minute demo</strong> via <code class="language-plaintext highlighter-rouge">info@sessionhq.org</code>.</li>
  <li><strong>Founding-studio pricing is locked in</strong> for the studios who sign on before general availability.</li>
</ul>

<p>This is the start of something we are going to spend years building on. Thanks for being here for the beginning.</p>]]></content><author><name>Isaac Rowntree</name></author><category term="product" /><category term="sessionhq" /><category term="saas" /><category term="nextjs" /><category term="supabase" /><category term="cloudflare" /><category term="square" /><category term="product-launch" /><summary type="html"><![CDATA[SessionHQ — our modern multi-tenant check-in platform for dance studios, gyms, and martial arts schools — is live, with founding partner Havana on the Hastings.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://zackdesign.biz/images/blog/sessionhq-launch.jpg" /><media:content medium="image" url="https://zackdesign.biz/images/blog/sessionhq-launch.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Stamp Scanner — iPhone + Mac + SAM 3 for cataloguing stamp collections</title><link href="https://zackdesign.biz/stamp-scanner/" rel="alternate" type="text/html" title="Stamp Scanner — iPhone + Mac + SAM 3 for cataloguing stamp collections" /><published>2026-04-16T00:00:00+00:00</published><updated>2026-04-16T00:00:00+00:00</updated><id>https://zackdesign.biz/stamp-scanner</id><content type="html" xml:base="https://zackdesign.biz/stamp-scanner/"><![CDATA[<p>Zack Design has published <a href="https://github.com/isaacrowntree/stamp-scanner"><code class="language-plaintext highlighter-rouge">stamp-scanner</code></a> — a two-device workflow for cataloguing stamp collections. The iPhone acts as a tethered macro scanner. The Mac runs SAM 3 segmentation, perceptual-hash deduplication, rotation correction, and a local Qwen3-VL for identification. Everything lives in a queryable SQLite library you can point external tools at.</p>

<!-- more -->

<h2 id="the-architecture-in-ascii">The architecture, in ASCII</h2>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>iPhone (ios-app/)                    Mac (mac-app/)                 Python (tools/)
┌───────────────────┐   HTTP over    ┌───────────────────┐   file   ┌─────────────────┐
│ Capture (HEIC)    ├───LAN+Bonjour─▶│ PhoneIngestServer ├──drop───▶│ sam_worker.py   │
│ MotionGate        │                │ (accepts uploads) │          │ SAM 3 + dedup   │
│ Lens picker       │                └───────────────────┘          │ + white balance │
└───────────────────┘                         │                     └────────┬────────┘
                                              │                              │ writes
                                              ▼                              ▼
                                     ┌────────────────────┐         ┌─────────────────────┐
                                     │ SwiftUI library UI │◀──GRDB──│ library.sqlite      │
                                     │ grid · detail      │         │ (~/Library/App Sup) │
                                     │ rotate · identify  │         └──────────▲──────────┘
                                     │ colnect lookup     │                    │ writes
                                     └────────────────────┘                    │
                                                │ spawns                       │
                                                ▼                              │
                                     ┌────────────────────┐                    │
                                     │ orientation_worker │───── Ollama ───────┤
                                     │   (Qwen3-VL)       │                    │
                                     │ colnect_lookup.py  │───── HTTP ─────────┘
                                     └────────────────────┘
</code></pre></div></div>

<h2 id="the-data-flow">The data flow</h2>

<ol>
  <li><strong>iPhone captures HEIC.</strong> <code class="language-plaintext highlighter-rouge">MotionGate</code> waits for the phone to be steady (accelerometer settled) before taking the shot, the lens picker selects the macro-capable camera, and the captured HEIC is uploaded over Bonjour/LAN to the paired Mac.</li>
  <li><strong>Mac receives it.</strong> <code class="language-plaintext highlighter-rouge">PhoneIngestServer</code> — a SwiftUI app wrapping a tiny HTTP listener — drops the file into <code class="language-plaintext highlighter-rouge">.run/sam_inbox/</code>.</li>
  <li><strong>SAM 3 segments the stamp.</strong> <code class="language-plaintext highlighter-rouge">sam_worker.py</code> runs the Segment Anything 3 model to cut the stamp out of the page, perceptual-hashes it to detect duplicates already in the library, warps it square, and white-balances against the untouched corners of the page.</li>
  <li><strong>SQLite writes.</strong> The segmented, deduplicated, white-balanced stamp lands in <code class="language-plaintext highlighter-rouge">library.sqlite</code> via a GRDB schema.</li>
  <li><strong>SwiftUI UI renders.</strong> The Mac app exposes a grid, a detail view, rotation tools, and “identify” / “Colnect lookup” buttons.</li>
  <li><strong>Identification is VLM-driven.</strong> Hitting “identify” spawns <code class="language-plaintext highlighter-rouge">orientation_worker</code> against a local Ollama-hosted Qwen3-VL instance. Hitting “Colnect lookup” queries the Colnect catalogue API for an official ID match.</li>
</ol>
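<p>The perceptual-hash step is a standard difference-hash comparison. The sketch below is a generic dHash in plain Python to show the idea; <code>sam_worker.py</code> will use a proper imaging library rather than this hand-rolled sampler.</p>

```python
def dhash(pixels, width, height, hash_w=8, hash_h=8):
    """Difference hash: sample a (hash_w+1) x hash_h grid of greys, then
    record whether each cell is darker than its right-hand neighbour.

    `pixels` is a row-major list of greyscale values. This is a generic
    dHash sketch of the dedup idea, not the repo's exact implementation.
    """
    def sample(gx, gy):
        # nearest-neighbour sample from the source image
        sx = min(width - 1, gx * width // (hash_w + 1))
        sy = min(height - 1, gy * height // hash_h)
        return pixels[sy * width + sx]

    return [1 if sample(x, y) < sample(x + 1, y) else 0
            for y in range(hash_h) for x in range(hash_w)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def is_duplicate(hash_a, hash_b, threshold=10):
    # a small Hamming distance means this stamp is already in the library
    return hamming(hash_a, hash_b) <= threshold
```

<p>Because dHash encodes local gradients rather than exact pixels, small differences in crop, exposure, or white balance between two scans of the same stamp still land within the threshold.</p>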

<h2 id="why-two-devices">Why two devices</h2>

<p>Because an iPhone’s macro camera + image signal processor is genuinely excellent at stamp-sized subjects — better than a flatbed scanner at 1200 dpi for small dense subjects, and much faster. A Mac, meanwhile, is the right place for the heavy lifting: SAM 3 wants a GPU, the local VLM wants 20 GB of unified memory, and GRDB + SwiftUI want a real filesystem and a large screen. Splitting capture from processing plays to each device’s strengths.</p>

<h2 id="why-local">Why local</h2>

<p>A stamp collection is personal. You do not want to upload it to a third-party cataloguing service that might vanish in two years or quietly start charging a subscription. Local models, local SQLite, local UI. The only optional outbound call is the Colnect catalogue API, and that is a lookup against their public IDs — no collection data leaves your Mac.</p>

<h2 id="status">Status</h2>

<p>Working end-to-end for single-subject captures, deduplication, rotation, and VLM-based identification. Full architecture and build instructions in the <a href="https://github.com/isaacrowntree/stamp-scanner">README</a>. If you have a collection that deserves better than a spreadsheet, this is a solid starting point.</p>]]></content><author><name>Isaac Rowntree</name></author><category term="open-source" /><category term="ai" /><category term="swift" /><category term="python" /><category term="ios" /><category term="mac" /><category term="sam" /><category term="vlm" /><category term="philately" /><category term="local-ai" /><category term="open-source" /><summary type="html"><![CDATA[A two-device workflow that turns an iPhone into a macro scanner and a Mac into a SAM-3 segmentation, deduplication, and VLM identification pipeline for philately.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://zackdesign.biz/images/blog/stamp-scanner.jpg" /><media:content medium="image" url="https://zackdesign.biz/images/blog/stamp-scanner.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">bike-shock-planner — test-driven MTB shock fitment modelling</title><link href="https://zackdesign.biz/bike-shock-planner/" rel="alternate" type="text/html" title="bike-shock-planner — test-driven MTB shock fitment modelling" /><published>2026-04-12T00:00:00+00:00</published><updated>2026-04-12T00:00:00+00:00</updated><id>https://zackdesign.biz/bike-shock-planner</id><content type="html" xml:base="https://zackdesign.biz/bike-shock-planner/"><![CDATA[<p>Zack Design has published <a href="https://github.com/isaacrowntree/bike-shock-planner"><code class="language-plaintext highlighter-rouge">bike-shock-planner</code></a> — a <strong>test-driven, code-as-data</strong> planner for mountain bike rear shock replacements, coil conversions, and ebike suspension builds. 
It began as “can I fit a coil shock to a 2013 Trek Fuel EX 5 ebike conversion?” and grew into a reusable framework that models rear suspension geometry, shock fitment, spring rates, frame clearance, conversion hardware, and global sourcing paths for <em>any</em> bike.</p>

<!-- more -->

<h2 id="it-is-not-a-bike-specific-script">It is not a bike-specific script</h2>

<p>The 2013 Fuel EX 5 is the first “recipe” — a self-contained config describing one bike, one rider, and a set of candidate parts. Everything is written so you can drop in a new recipe for your own frame and the same fit-check and spring-rate logic runs against it. That is the whole point of the project: a single, testable model of rear-shock dimensions and fitment rules, with as many recipes layered on top as people are willing to contribute.</p>

<h2 id="who-it-is-for">Who it is for</h2>

<ul>
  <li><strong>DIY mechanics</strong> restoring an old MTB frame and trying to work out whether a modern shock will bolt up.</li>
  <li><strong>Ebike converters</strong> putting a mid-drive motor on a non-ebike frame and needing to recalculate spring rates for the extra mass and torque.</li>
  <li><strong>Frame hunters</strong> cross-checking a secondhand frame’s shock spec against catalog reality before buying.</li>
  <li><strong>Bike shops</strong> who want a reusable, forkable model of rear-shock dimensions — the catalog is just TypeScript, extend it for whatever you stock and rerun the tests to lint your inventory against real frames.</li>
  <li><strong>Anyone</strong> who has spent hours in a Trek fitment PDF trying to work out whether a shock advertised as “7.25×2.0 imperial” fits their old DRCV mount. (Spoiler: only via a conversion kit.)</li>
</ul>

<h2 id="what-it-does">What it does</h2>

<ul>
  <li><strong>Bike model.</strong> Eye-to-eye, stroke, mount styles, eyelet widths, bolt sizes, leverage ratio, progression, and the frame clearance envelope — all captured in code.</li>
  <li><strong>Shock catalog.</strong> Aftermarket shocks modelled as code, with body dimensions, piggyback status, coil spring rate range, Australian sourcing notes, and verified product URLs.</li>
  <li><strong>Fit check.</strong> Frame slot × candidate shock returns each dimensional mismatch separately — eye-to-eye, stroke, upper/lower eyelet width, bolt sizes, mount styles, body length, body diameter, reservoir clearance. No yes/no black boxes.</li>
  <li><strong>Conversion kits.</strong> Kits that rewrite a shock’s mounting hardware are modelled as functions that transform a candidate. So you can ask “does this imperial shock fit if I use the Shockcraft Deaktiv kit?” and get a real answer.</li>
  <li><strong>Spring-rate calculator.</strong> A <em>practical</em> formula that accounts for rear weight distribution — not the theoretical Fox “quick formula” that overshoots real-world spring picks by 40%.</li>
  <li><strong>Ebike load correction.</strong> Weights 40% of battery + motor mass onto the rear shock and adds a high-torque correction for ≥100 Nm motors.</li>
  <li><strong>Progression flag.</strong> Warns when a frame’s linkage does not really want a coil — e.g. Trek’s Full Floater is only ~13% progressive and is tuned for a DRCV air spring, so a linear coil will bottom harshly.</li>
  <li><strong>Documented-build flag.</strong> If no published build exists for the exact frame generation, every candidate gets an <em>experimental</em> warning.</li>
  <li><strong>Research library.</strong> Verified references to conversion kits, manufacturer product pages, global retailers, used-market venues, forum threads, and vendor email contacts — with tests enforcing that every link is HTTPS and every group is populated.</li>
  <li><strong>Pivot hardware model.</strong> OEM bearing/bolt spec plus a four-step health check so you can decide whether a full frame rebuild is required alongside the shock swap.</li>
</ul>
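<p>To make the spring-rate and ebike-correction bullets concrete, here is the shape of the calculation, written in Python for brevity (the repo itself is TypeScript). The formula is the classic force-over-sag coil sizing, not the repo's exact code; the 60% rear-bias default and the +5% high-torque bump are placeholder values invented for illustration.</p>

```python
def coil_spring_rate_lb_in(rider_kg, leverage_ratio, stroke_mm,
                           sag_fraction=0.30, rear_bias=0.60,
                           ebike_extra_kg=0.0, motor_torque_nm=0):
    """Classic coil sizing: rate = shock force / shock compression at sag.

    Shock force = rear wheel load x leverage ratio. Per the bullets above,
    40% of battery+motor mass is added to the rear load, and motors of
    >=100 Nm get a torque correction (+5% here is an invented placeholder).
    """
    KG_TO_LB = 2.20462
    MM_TO_IN = 1 / 25.4
    rear_load_lb = (rider_kg * rear_bias + 0.4 * ebike_extra_kg) * KG_TO_LB
    shock_sag_in = stroke_mm * MM_TO_IN * sag_fraction
    rate = rear_load_lb * leverage_ratio / shock_sag_in
    if motor_torque_nm >= 100:
        rate *= 1.05
    return rate
```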

<h2 id="status-today">Status today</h2>

<p>Primarily a <strong>2013 Trek Fuel EX 5</strong> model. The coil catalog includes Push ElevenSix (the only currently buildable imperial 7.25×2.0 coil in April 2026), plus Marzocchi Bomber CR, Fox DHX2, DVO Jade X, MRP Hazzard Coil, and Cane Creek DB Coil IL entries marked used-market-only. The air catalog includes Fox Float X2, RockShox Super Deluxe Ultimate, and Marzocchi Bomber Air. Real VALT Progressive sizes are captured with the 45 mm stroke that fits inside a 50 mm shock; Sprindex 55 mm is flagged as not-fitting. The conversion kit catalog covers Offset Bushings, Shockcraft Deaktiv, an unpublished custom-machine path for Huber Bushings, plus a speculative metric-to-Trek kit flagged <code class="language-plaintext highlighter-rouge">publishedSku: false</code> so the test suite warns on it.</p>

<h2 id="why-code-as-data">Why code-as-data</h2>

<p>Because every existing shock “compatibility chart” is a PDF, and PDFs cannot be run against a test suite. If you model the data in TypeScript, the test suite can assert things like “no reservoir clash on any frame in the catalog”, “every link in the research library is reachable”, and “every catalog entry has a spring rate range if it is a coil”. That turns a messy research task into something a contributor can submit a pull request against. Source on <a href="https://github.com/isaacrowntree/bike-shock-planner">GitHub</a>.</p>]]></content><author><name>Isaac Rowntree</name></author><category term="open-source" /><category term="typescript" /><category term="bikes" /><category term="mtb" /><category term="suspension" /><category term="testing" /><category term="open-source" /><summary type="html"><![CDATA[A TypeScript framework for modelling rear-shock fitment, coil conversions, ebike spring rates, and global parts sourcing for any mountain bike — starting with a 2013 Trek Fuel EX 5.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://zackdesign.biz/images/blog/bike-shock-planner.jpg" /><media:content medium="image" url="https://zackdesign.biz/images/blog/bike-shock-planner.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Clean Backdrop — free, GPU-accelerated studio backdrop cleanup</title><link href="https://zackdesign.biz/clean-backdrop/" rel="alternate" type="text/html" title="Clean Backdrop — free, GPU-accelerated studio backdrop cleanup" /><published>2026-03-25T00:00:00+00:00</published><updated>2026-03-25T00:00:00+00:00</updated><id>https://zackdesign.biz/clean-backdrop</id><content type="html" xml:base="https://zackdesign.biz/clean-backdrop/"><![CDATA[<p>Zack Design has published <a href="https://github.com/isaacrowntree/clean-backdrop"><code class="language-plaintext highlighter-rouge">clean-backdrop</code></a> — a free, open-source tool for cleaning up studio portrait backdrops. 
Shadow lift plus frequency separation on a high-quality portrait segmentation mask, running on a CUDA GPU, no AI inpainting artifacts anywhere. It is a clean-math alternative to paid tools like Retouch4me Clean Backdrop.</p>

<!-- more -->

<h2 id="the-problem">The problem</h2>

<p>Studio paper backdrops are never as clean as they look before the shoot. A six-hour session leaves scuff marks, footprints, seam shadows where the paper meets the floor, uneven lighting where the key light rolled off, and the occasional crease from the roll dispenser. Fixing those by hand in Photoshop — with the healing brush, dodge/burn layers, and a feathered mask around the subject — is a real job. For a shoot with two hundred keepers, it is unreasonable.</p>

<p>The commercial tools that automate this are excellent and expensive. <code class="language-plaintext highlighter-rouge">clean-backdrop</code> is the free alternative.</p>

<h2 id="how-it-works">How it works</h2>

<p>Two complementary techniques, run on GPU:</p>

<ol>
  <li><strong>Shadow Lift.</strong> Samples a patch of clean wall, then blends cast shadows toward that reference. Preserves the natural wall gradient (studios are not lit perfectly flat and should not be rendered that way). Adjustable 0–100%.</li>
  <li><strong>Texture Smoothing (Frequency Separation).</strong> Splits the image into a low-frequency lighting gradient and a high-frequency detail layer. Smooths the detail layer — where marks, scuffs, and paper texture live — while leaving the gradient untouched. No smudging, no false positives on the subject’s hair.</li>
</ol>
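<p>The split is easiest to see in one dimension. Below is a minimal stdlib-Python sketch, with a box filter standing in for the real tool's GPU Gaussian and invented radius/strength values:</p>

```python
def moving_average(signal, radius):
    """Box low-pass: the slow 'lighting gradient' layer."""
    out = []
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def frequency_separation(signal, radius=4, texture_strength=0.5):
    """1-D sketch of the two-layer split described above.

    low  = lighting gradient, kept untouched
    high = signal - low; scuffs and texture live here, so it is
           attenuated by `texture_strength` before recombining.
    """
    low = moving_average(signal, radius)
    high = [s - l for s, l in zip(signal, low)]
    smoothed = [h * (1 - texture_strength) for h in high]
    return [l + h for l, h in zip(low, smoothed)]
```

<p>At full strength a scuff (a spike in the high layer) is pulled back toward the local gradient; at zero strength the image passes through unchanged, which is exactly the behaviour the slider exposes.</p>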

<p>Both passes run on a <strong><a href="https://github.com/ZhengPeng7/BiRefNet">BiRefNet-Portrait</a></strong> subject segmentation mask with distance-based feathering, so the boundary between subject and cleaned background has no visible “bar” artifact at any crop size.</p>

<h2 id="smart-edge-handling">Smart edge handling</h2>

<ul>
  <li><strong>Smooth subject masking.</strong> Feathering scales with image size, so a 24 MP portrait has the same clean transition as a 45 MP headshot.</li>
  <li><strong>Automatic floor detection.</strong> Real floors (wood, tile, concrete) are distinguished from wall shadow/vignetting by a colour-analysis heuristic. Real floors stay; darkening on walls gets cleaned.</li>
  <li><strong>Vertical floor transition.</strong> The wall-to-floor boundary uses a row-based ramp so the floor texture and contact shadows around the subject’s feet are never disturbed.</li>
</ul>
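<p>The row-based ramp amounts to a per-row blend weight between the cleaned wall and the untouched floor. Parameter names and the band width below are illustrative; the real implementation works on GPU tensors, not nested lists.</p>

```python
def vertical_ramp_blend(cleaned, original, transition_row, band=40):
    """Blend cleaned wall into untouched floor with a row-based ramp.

    Rows well above `transition_row` take the cleaned wall; rows below
    keep the original floor (and the contact shadows in it).
    """
    out = []
    for r, (crow, orow) in enumerate(zip(cleaned, original)):
        # w = 1 -> fully cleaned (wall); w = 0 -> fully original (floor)
        w = min(1.0, max(0.0, (transition_row + band / 2 - r) / band))
        out.append([w * c + (1 - w) * o for c, o in zip(crow, orow)])
    return out
```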

<h2 id="running-it">Running it</h2>

<p>There are two ways to use it:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Web UI — drag-and-drop with live sliders</span>
pip <span class="nb">install</span> <span class="nt">-r</span> requirements.txt
python app.py
<span class="c"># open http://localhost:5000</span>
</code></pre></div></div>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Batch — directory in, directory out</span>
python batch.py <span class="s2">"D:</span><span class="se">\P</span><span class="s2">hotos</span><span class="se">\E</span><span class="s2">xport</span><span class="se">\M</span><span class="s2">y Shoot"</span> <span class="nt">--lift</span> 70 <span class="nt">--texture</span> 50
</code></pre></div></div>

<p>The web UI shows four tabs — Original, Shadows, Texture, Preview — so you can dial shadow lift and texture smoothing independently and watch the separation happen. Outputs are saved next to the original with a <code class="language-plaintext highlighter-rouge">_clean</code> suffix, ICC colour profiles and EXIF metadata preserved.</p>

<h2 id="why-open-source">Why open-source</h2>

<p>Because the underlying math is not exotic — shadow lift and frequency separation have been in the photo-retouching toolkit for twenty years — and because a high-quality portrait segmentation model exists under a permissive licence. The commercial offerings are polished, but the core workflow does not need to be proprietary. If you shoot regularly against studio paper, <a href="https://github.com/isaacrowntree/clean-backdrop">clone the repo</a> and stop paying a per-seat fee for a batch operation.</p>]]></content><author><name>Isaac Rowntree</name></author><category term="open-source" /><category term="python" /><category term="photography" /><category term="cuda" /><category term="birefnet" /><category term="image-processing" /><category term="open-source" /><summary type="html"><![CDATA[An open-source alternative to Retouch4me Clean Backdrop — shadow lift plus frequency separation on a BiRefNet-Portrait mask, run on CUDA. No AI inpainting artifacts.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://zackdesign.biz/images/blog/clean-backdrop.jpg" /><media:content medium="image" url="https://zackdesign.biz/images/blog/clean-backdrop.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Ledger — Australian personal finance ETL and ATO tax dashboard</title><link href="https://zackdesign.biz/ledger/" rel="alternate" type="text/html" title="Ledger — Australian personal finance ETL and ATO tax dashboard" /><published>2026-03-20T00:00:00+00:00</published><updated>2026-03-20T00:00:00+00:00</updated><id>https://zackdesign.biz/ledger</id><content type="html" xml:base="https://zackdesign.biz/ledger/"><![CDATA[<p>Zack Design has published <a href="https://github.com/isaacrowntree/ledger"><code class="language-plaintext highlighter-rouge">ledger</code></a> — a terminal-first personal finance tool that ingests bank statements from multiple Australian banks, categorises transactions with regex-based rules, and 
renders an ATO-ready tax return view plus a net worth dashboard. Local-first, SQLite under the hood, no cloud dependency.</p>

<!-- more -->

<h2 id="the-itch">The itch</h2>

<p>Every end of financial year I rebuild the same Excel spreadsheet: paste in ING transactions, paste in PayPal, paste in the Bankwest credit card, then hand-categorise everything, then try to remember which expense was for which business, then double-count a $200 transaction that appeared on both the credit card and the bank account it was paid from. Then I hand it to my accountant and we do it all over again. Ledger is the version I should have built five years ago.</p>

<h2 id="sources-supported-today">Sources supported today</h2>

<table>
  <thead>
    <tr>
      <th>Source</th>
      <th>Formats</th>
      <th>Parser</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>ING Australia</td>
      <td>PDF statements, CSV export</td>
      <td><code class="language-plaintext highlighter-rouge">etl/parsers/ing_pdf.py</code>, <code class="language-plaintext highlighter-rouge">ing_csv.py</code></td>
    </tr>
    <tr>
      <td>PayPal</td>
      <td>CSV activity download</td>
      <td><code class="language-plaintext highlighter-rouge">etl/parsers/paypal_csv.py</code></td>
    </tr>
    <tr>
      <td>Bankwest</td>
      <td>PDF eStatements, CSV</td>
      <td><code class="language-plaintext highlighter-rouge">etl/parsers/bankwest_pdf.py</code>, <code class="language-plaintext highlighter-rouge">bankwest_csv.py</code></td>
    </tr>
    <tr>
      <td>HSBC</td>
      <td>PDF statements</td>
      <td><code class="language-plaintext highlighter-rouge">etl/parsers/hsbc_pdf.py</code></td>
    </tr>
    <tr>
      <td>Coles Mastercard</td>
      <td>PDF statements</td>
      <td><code class="language-plaintext highlighter-rouge">etl/parsers/coles_pdf.py</code></td>
    </tr>
    <tr>
      <td>Amex</td>
      <td>CSV download</td>
      <td><code class="language-plaintext highlighter-rouge">etl/parsers/amex_csv.py</code></td>
    </tr>
  </tbody>
</table>

<p>Drop a statement into <code class="language-plaintext highlighter-rouge">staging/&lt;source&gt;/</code>, run <code class="language-plaintext highlighter-rouge">ledger ingest</code>, and the right parser picks it up. PDF parsing is per-bank because every Australian bank has a different statement layout and none of them offer a clean machine-readable export.</p>
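<p>Dispatch by staging folder might look like the sketch below. The parser module names come from the table above; the routing logic itself is our assumption about how <code class="language-plaintext highlighter-rouge">ledger ingest</code> could work, not a copy of it.</p>

```python
from pathlib import Path

# Staging sub-folder name + file extension select the parser module.
# Module names mirror the table above.
PARSERS = {
    ("ing", ".pdf"): "ing_pdf",
    ("ing", ".csv"): "ing_csv",
    ("paypal", ".csv"): "paypal_csv",
    ("bankwest", ".pdf"): "bankwest_pdf",
    ("bankwest", ".csv"): "bankwest_csv",
    ("hsbc", ".pdf"): "hsbc_pdf",
    ("coles", ".pdf"): "coles_pdf",
    ("amex", ".csv"): "amex_csv",
}

def pick_parser(statement_path):
    """Return the parser module for a file dropped in staging/<source>/."""
    p = Path(statement_path)
    key = (p.parent.name.lower(), p.suffix.lower())
    if key not in PARSERS:
        raise ValueError(f"no parser registered for {key}")
    return PARSERS[key]
```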

<h2 id="what-it-does">What it does</h2>

<ul>
  <li><strong>Multi-source ingestion.</strong> The above parsers, with dedup rules to prevent double-counting when a transaction appears on both a bank account and a credit card.</li>
  <li><strong>Auto-categorisation.</strong> Regex-based merchant rules assign categories automatically and learn from manual overrides.</li>
  <li><strong>Business splits.</strong> A percentage of any expense can be allocated to a business — essential for anyone running a sole-trader side or a company with home-office overlap.</li>
  <li><strong>ATO tax return view.</strong> Output structured to match the sections of an Australian individual tax return: salary, rental schedule, business schedule, deductions.</li>
  <li><strong>Financial year view.</strong> Outgoing / incoming / rental / work-trip sub-tabs replacing the Excel sheet I had been rebuilding by hand every year.</li>
  <li><strong>Net worth dashboard.</strong> Accounts, credit cards, property, vehicles — balances pulled from the same statements.</li>
  <li><strong>Tags.</strong> Orthogonal to categories. A transaction can be in category “Travel” and tagged <code class="language-plaintext highlighter-rouge">flight</code>, <code class="language-plaintext highlighter-rouge">biz-hosting</code>, <code class="language-plaintext highlighter-rouge">rental-income</code> for finer reporting without having to invent a deeper category tree.</li>
</ul>
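<p>A minimal version of the rule engine described above, with invented merchant patterns; the repo's actual rule format may differ:</p>

```python
import re

# Ordered rule list: first matching merchant pattern wins.
RULES = [
    (re.compile(r"WOOLWORTHS|ALDI", re.I), "Groceries"),
    (re.compile(r"QANTAS|JETSTAR", re.I), "Travel"),
    (re.compile(r"AWS|CLOUDFLARE|HETZNER", re.I), "Business: Hosting"),
]

def categorise(description, overrides=None):
    """Manual overrides beat the regex rules; recording an override is
    how the tool 'learns' from corrections."""
    overrides = overrides or {}
    if description in overrides:
        return overrides[description]
    for pattern, category in RULES:
        if pattern.search(description):
            return category
    return "Uncategorised"
```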

<h2 id="why-local-first">Why local-first</h2>

<p>Because my financial data is mine. No cloud dependency, no third-party aggregator pulling read-only access to my bank accounts, no “we are deprecating the Xero integration” email six months from now. SQLite sits in a folder, the dashboard runs on <code class="language-plaintext highlighter-rouge">localhost</code>, and if I want to back it all up I copy a single file.</p>

<h2 id="quick-start">Quick start</h2>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://github.com/isaacrowntree/ledger.git
<span class="nb">cd </span>ledger
python3 <span class="nt">-m</span> venv .venv <span class="o">&amp;&amp;</span> <span class="nb">source</span> .venv/bin/activate
pip <span class="nb">install</span> <span class="nt">-e</span> <span class="nb">.</span>

<span class="nb">cp </span>config/accounts.yaml.example config/accounts.yaml
<span class="nb">cp </span>config/categories.yaml.example config/categories.yaml
<span class="nb">cp </span>config/tax.yaml.example config/tax.yaml

ledger init
<span class="nb">mkdir</span> <span class="nt">-p</span> staging/ing staging/paypal
<span class="c"># Drop PDFs/CSVs into those folders</span>
ledger ingest
python <span class="nt">-m</span> api
<span class="c"># Open http://localhost:5050</span>
</code></pre></div></div>

<h2 id="who-it-is-for">Who it is for</h2>

<p>Anyone in Australia with more than one bank account, a side business or two, and an accountant who currently gets a hand-assembled spreadsheet every July. Source on <a href="https://github.com/isaacrowntree/ledger">GitHub</a>.</p>]]></content><author><name>Isaac Rowntree</name></author><category term="open-source" /><category term="python" /><category term="etl" /><category term="finance" /><category term="tax" /><category term="ato" /><category term="sqlite" /><category term="open-source" /><category term="local-first" /><summary type="html"><![CDATA[A terminal-first personal finance tool that ingests ING, Bankwest, HSBC, PayPal, Amex, and Coles statements, categorises transactions, and renders an ATO-ready tax view.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://zackdesign.biz/images/blog/ledger.jpg" /><media:content medium="image" url="https://zackdesign.biz/images/blog/ledger.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">local-llm-coding-guide — Qwen, Gemma, and llama.cpp as a coding assistant</title><link href="https://zackdesign.biz/local-llm-coding-guide/" rel="alternate" type="text/html" title="local-llm-coding-guide — Qwen, Gemma, and llama.cpp as a coding assistant" /><published>2026-03-14T00:00:00+00:00</published><updated>2026-03-14T00:00:00+00:00</updated><id>https://zackdesign.biz/local-llm-coding-guide</id><content type="html" xml:base="https://zackdesign.biz/local-llm-coding-guide/"><![CDATA[<p>Zack Design has published <a href="https://github.com/isaacrowntree/local-llm-coding-guide"><code class="language-plaintext highlighter-rouge">local-llm-coding-guide</code></a> — a no-fluff, benchmark-driven guide to running a genuinely useful local LLM as a coding assistant on consumer hardware. It covers Qwen3.5 and Gemma 4 across llama.cpp, Ollama (with MLX), and vllm-mlx, with real tokens-per-second numbers from three real machines.</p>

<!-- more -->

<h2 id="why-local">Why local</h2>

<p>Cloud LLMs are wonderful until you are on a flight, behind a client VPN, editing code with sensitive data, or burning through a monthly token budget faster than is reasonable. The quality gap between the best frontier models and the best <em>local-runnable</em> models has narrowed dramatically — a quantised 9B Qwen model on a modest NVIDIA card is now perfectly capable of the “reformat this function, add a docstring, write a test” type of work that makes up most of a coding assistant’s day.</p>

<h2 id="the-benchmarks">The benchmarks</h2>

<p>Measured on release builds, real completions, real contexts:</p>

<table>
  <thead>
    <tr>
      <th>GPU</th>
      <th>Model</th>
      <th>Tok/s</th>
      <th>Context</th>
      <th>Memory</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>RTX 4070 Ti 12GB</td>
      <td>Nemotron 3 Nano 4B Q4_K_M</td>
      <td>TBD</td>
      <td>262K</td>
      <td>~5GB</td>
    </tr>
    <tr>
      <td>RTX 4070 Ti 12GB</td>
      <td>Qwen3.5-9B Q4_K_M</td>
      <td>~65</td>
      <td>131K</td>
      <td>7.8GB</td>
    </tr>
    <tr>
      <td>RTX 3060 12GB</td>
      <td>Qwen3.5-9B Q4_K_M</td>
      <td>~43</td>
      <td>128K</td>
      <td>~7.8GB</td>
    </tr>
    <tr>
      <td>RTX 3090 24GB</td>
      <td>Qwen3.5-27B Q4_K_M</td>
      <td>~30</td>
      <td>262K</td>
      <td>~18GB</td>
    </tr>
    <tr>
      <td>M3 Pro 36GB</td>
      <td><strong>Qwen3.5-35B-A3B Q4_K_M</strong></td>
      <td><strong>~29</strong></td>
      <td>131K</td>
      <td><strong>~22GB</strong></td>
    </tr>
    <tr>
      <td>M3 Pro 36GB</td>
      <td>Qwen3.5-9B Q4_K_M</td>
      <td>~20</td>
      <td>131K</td>
      <td>~7GB</td>
    </tr>
    <tr>
      <td>M3 Pro 36GB</td>
      <td>Qwen3.5-27B Q4_K_M</td>
      <td>~9*</td>
      <td>131K</td>
      <td>~18GB</td>
    </tr>
    <tr>
      <td>M3 Pro 36GB</td>
      <td><strong>Gemma 4 26B-A4B Q4_K_M (Ollama MLX)</strong></td>
      <td><strong>~31</strong></td>
      <td>256K</td>
      <td><strong>~17GB</strong></td>
    </tr>
  </tbody>
</table>

<p>*The dense 27B is slower than the 35B-A3B MoE on 36 GB machines — see “Why MoE?” in the repo for the full story.</p>

<h2 id="why-moe-wins-on-apple-silicon">Why MoE wins on Apple Silicon</h2>

<p>Apple’s unified memory is generous but its memory <em>bandwidth</em> is not as high as a discrete NVIDIA card’s. A dense 27B model saturates that bandwidth on every token. A mixture-of-experts model like Qwen3.5-35B-A3B only activates 3B parameters per token, which means each token reads a fraction of the weights — and the model runs faster <em>and</em> smarter than the dense option it replaces. The guide walks through the tradeoff properly.</p>
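<p>Back-of-envelope arithmetic makes the bandwidth argument concrete. The figures below are assumptions (roughly 150 GB/s of usable unified-memory bandwidth on an M3 Pro, about 0.55 bytes per parameter at Q4), so treat the results as ceilings, not predictions:</p>

```python
# Assumed figures: ~150 GB/s usable bandwidth (M3 Pro), ~0.55 bytes per
# parameter for a Q4_K_M quantisation. Both are rough.
BW_GBS = 150.0
BYTES_PER_PARAM = 0.55

def tok_s_ceiling(active_params_billions):
    """Bandwidth-bound ceiling: each token must stream every active weight."""
    weights_gb = active_params_billions * BYTES_PER_PARAM
    return BW_GBS / weights_gb

dense_27b = tok_s_ceiling(27)  # all 27B parameters read per token
moe_a3b = tok_s_ceiling(3)     # only the ~3B active parameters read per token
```

<p>The dense ceiling (around 10 tok/s) sits right on top of the measured ~9 tok/s in the table above, which is what bandwidth-bound looks like; the MoE ceiling is far above its measured ~29 tok/s, so compute and other overheads dominate there instead.</p>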

<h2 id="test-machines">Test machines</h2>

<ul>
  <li><strong>Windows/WSL2:</strong> RTX 4070 Ti (12 GB), Intel Core Ultra 9 285K, 48 GB DDR5</li>
  <li><strong>macOS:</strong> M3 MacBook Pro, 36 GB unified memory</li>
</ul>

<h2 id="quick-start">Quick start</h2>

<p>The guide walks through llama.cpp from source (with <code class="language-plaintext highlighter-rouge">-DGGML_CUDA=ON</code> or <code class="language-plaintext highlighter-rouge">-DGGML_METAL=ON</code>), the <code class="language-plaintext highlighter-rouge">llama-server</code> binary, wiring it into VS Code via the Continue extension, and wiring it into Claude Code as a local endpoint. Ollama + MLX is covered as the one-command alternative for Apple Silicon.</p>

<h2 id="who-it-is-for">Who it is for</h2>

<p>Developers who want a serious coding assistant that runs on their own hardware, without a subscription, without a round-trip to a cloud inference endpoint, and without hand-tuning flags for six hours. Read it on <a href="https://github.com/isaacrowntree/local-llm-coding-guide">GitHub</a>.</p>]]></content><author><name>Isaac Rowntree</name></author><category term="ai" /><category term="guides" /><category term="llm" /><category term="llama-cpp" /><category term="ollama" /><category term="qwen" /><category term="gemma" /><category term="local-ai" /><category term="coding-assistant" /><category term="guides" /><summary type="html"><![CDATA[A benchmarks-first guide to running Qwen3.5 and Gemma 4 locally as a coding assistant — on a 4070 Ti, a 3090, and an M3 Pro MacBook — with real tok/s numbers.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://zackdesign.biz/images/blog/local-llm-coding-guide.jpg" /><media:content medium="image" url="https://zackdesign.biz/images/blog/local-llm-coding-guide.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">audio-analysis-and-recut — reconstructing a live set from the studio master</title><link href="https://zackdesign.biz/audio-analysis-and-recut/" rel="alternate" type="text/html" title="audio-analysis-and-recut — reconstructing a live set from the studio master" /><published>2026-03-13T00:00:00+00:00</published><updated>2026-03-13T00:00:00+00:00</updated><id>https://zackdesign.biz/audio-analysis-and-recut</id><content type="html" xml:base="https://zackdesign.biz/audio-analysis-and-recut/"><![CDATA[<p>Zack Design has published <a href="https://github.com/isaacrowntree/audio-analysis-and-recut"><code class="language-plaintext highlighter-rouge">audio-analysis-and-recut</code></a> — a small Python + FFmpeg pipeline that takes a noisy live performance recording, works out exactly which sections of the original studio track the band played, and generates a 
high-fidelity recut that follows the live arrangement.</p>

<!-- more -->

<h2 id="the-problem">The problem</h2>

<p>You have two audio files:</p>

<ul>
  <li><strong>Original</strong> — the studio recording. High fidelity, the version on Spotify, the one you actually want to listen to.</li>
  <li><strong>Performance</strong> — a live recording of the same song. Great arrangement, maybe some improvisation, but also crowd noise, ambient PA colouration, and a phone mic’s idea of bass response.</li>
</ul>

<p>The live arrangement is <em>better</em> — it is the one the band actually performed — but the audio fidelity is <em>worse</em>. What you want is: the live arrangement, with studio fidelity. That is what this tool does.</p>

<h2 id="how-it-works">How it works</h2>

<ol>
  <li><strong>Band-pass filter</strong> to 200–4000 Hz on both tracks. Vocals live in that range, crowd noise and room rumble largely do not. This is what makes matching robust to a noisy live environment.</li>
  <li><strong>Sliding-window cross-correlation</strong> between chunks of the performance and the full studio track. Each chunk’s best match pins down where in the original it came from.</li>
  <li><strong>Segment detection</strong> by grouping matches with consistent time offsets. A 30-second verse played live will produce 30 seconds of chunks that all agree on the same studio offset.</li>
  <li><strong>FFmpeg concatenation</strong> of the identified original segments, in the order the live performance used them, into a clean output file.</li>
</ol>
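<p>Steps 2 and 3 can be sketched with NumPy alone. This toy version skips the band-pass stage and uses seeded noise with a known embedding, so the recovered offset is checkable; the real tool works on filtered audio samples:</p>

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for band-passed mono audio: noise with a known embedding.
original = rng.standard_normal(48_000)              # the "studio" track
true_offset = 12_000
chunk = original[true_offset:true_offset + 4_000]   # one live-performance chunk

def best_offset(studio, live_chunk):
    """Sliding cross-correlation; the peak index is the sample offset in
    the studio track where this live chunk matches best."""
    corr = np.correlate(studio, live_chunk, mode="valid")
    return int(np.argmax(corr))

offset = best_offset(original, chunk)
```

<p>In the full pipeline each chunk's offset feeds the segment detector: runs of consecutive chunks that agree on the same studio offset become one segment.</p>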

<h2 id="example-output">Example output</h2>

<p>For “Ya Te Olvide” by Los 4 ft Laritza Bacallao (4:40 studio original → 1:56 live performance):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>SEGMENT MAP:
  1  0:00-0:19  |  Orig 0:12-0:30  |  18.5s  (intro/verse start)
  2  0:19-1:04  |  Orig 0:50-1:35  |  44.5s  (verse/chorus)
  3  1:04-1:36  |  Orig 2:44-3:15  |  31.5s  (montuno section)
  4  1:36-1:46  |  Orig 4:13-4:23  |   9.5s  (ending)
  5  1:46-1:48  |  Orig 4:33-4:35  |   2.0s  (final tag)

Skipped from original:
  0:00-0:12  (11.8s) - pre-intro
  0:30-0:50  (20.1s) - transition/repeat
  1:35-2:44  (69.1s) - repeated verse section
  3:15-4:13  (57.9s) - extended montuno/breakdown
  4:23-4:33  (10.3s) - outro padding
</code></pre></div></div>

<p>The segment map reads like a director’s cut list: the band skipped the pre-intro, compressed the long breakdown, and landed on a different ending. Feeding that back into FFmpeg reconstructs the performance from clean studio audio.</p>

<h2 id="why-bother">Why bother</h2>

<p>Latin dance classes like <a href="https://www.havanahastingsdance.com.au/">Havana on the Hastings</a> often rehearse to studio recordings but perform to the band’s own arrangement — which means choreographies that work in rehearsal do not always align with the live recording they are showcased against. A recut reconciles the two: the same arrangement the dancers know, but clean enough to cue on.</p>

<h2 id="usage">Usage</h2>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cp </span>original_song.mp3 staging/original.mp3
<span class="nb">cp </span>performance_recording.mp3 staging/performance.mp3
python3 analyze.py
<span class="c"># Output: output/ya_te_olvide_recut.mp3</span>
</code></pre></div></div>

<p>Dependencies are Python 3 with NumPy, plus a working FFmpeg on <code class="language-plaintext highlighter-rouge">$PATH</code>. Source on <a href="https://github.com/isaacrowntree/audio-analysis-and-recut">GitHub</a>.</p>]]></content><author><name>Isaac Rowntree</name></author><category term="open-source" /><category term="audio" /><category term="dsp" /><category term="python" /><category term="ffmpeg" /><category term="cross-correlation" /><category term="open-source" /><summary type="html"><![CDATA[A Python tool that cross-correlates a noisy live performance recording against the original studio track and rebuilds a high-fidelity recut following the live arrangement.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://zackdesign.biz/images/blog/audio-analysis-and-recut.jpg" /><media:content medium="image" url="https://zackdesign.biz/images/blog/audio-analysis-and-recut.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">react-native-nitro-unzip — a fast, Nitro-powered unzip module</title><link href="https://zackdesign.biz/react-native-nitro-unzip/" rel="alternate" type="text/html" title="react-native-nitro-unzip — a fast, Nitro-powered unzip module" /><published>2026-02-27T00:00:00+00:00</published><updated>2026-02-27T00:00:00+00:00</updated><id>https://zackdesign.biz/react-native-nitro-unzip</id><content type="html" xml:base="https://zackdesign.biz/react-native-nitro-unzip/"><![CDATA[<p>Zack Design has published <a href="https://github.com/isaacrowntree/react-native-nitro-unzip"><code class="language-plaintext highlighter-rouge">react-native-nitro-unzip</code></a> — a high-performance ZIP module for React Native, built on <a href="https://nitro.margelo.com/">Nitro Modules</a>. It extracts and creates password-protected archives on iOS and Android faster than the existing community options, with real progress callbacks and cancellation delivered directly over JSI rather than the old bridge.</p>

<!-- more -->

<h2 id="benchmarks-up-front">Benchmarks, up front</h2>

<p>On a 350 MB archive containing 10,000 files:</p>

<ul>
  <li><strong>iOS:</strong> ~500 files/sec</li>
  <li><strong>Android:</strong> ~474 files/sec</li>
</ul>

<p>Those numbers are measured with release builds, not debug — the bridge-free JSI path is a large part of why the difference shows up at all.</p>

<h2 id="feature-surface">Feature surface</h2>

<ul>
  <li><strong>Extraction</strong> with per-file progress (bytes extracted, files remaining, percentage complete) delivered synchronously via JSI — no bridge serialisation, no frame drops.</li>
  <li><strong>Synchronous cancellation.</strong> A cancel is honoured on the next file boundary, not at the next JS tick.</li>
  <li><strong>Password-protected archives.</strong> AES-256 encryption supported on both platforms for extraction <em>and</em> creation.</li>
  <li><strong>Zip creation.</strong> Compress a directory into an archive, optionally with a password.</li>
  <li><strong>Concurrent operations.</strong> Multiple tasks run independently without locking each other.</li>
  <li><strong>Background execution on iOS</strong> via proper <code class="language-plaintext highlighter-rouge">UIApplication</code> background task management, so a 500 MB archive can finish extracting even if the user backgrounds the app.</li>
</ul>

<h2 id="quick-example">Quick example</h2>

<div class="language-typescript highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">import</span> <span class="p">{</span> <span class="nx">getUnzip</span> <span class="p">}</span> <span class="k">from</span> <span class="dl">'</span><span class="s1">react-native-nitro-unzip</span><span class="dl">'</span><span class="p">;</span>

<span class="kd">const</span> <span class="nx">unzip</span> <span class="o">=</span> <span class="nx">getUnzip</span><span class="p">();</span>
<span class="kd">const</span> <span class="nx">task</span> <span class="o">=</span> <span class="nx">unzip</span><span class="p">.</span><span class="nx">extract</span><span class="p">(</span><span class="dl">'</span><span class="s1">/path/to/archive.zip</span><span class="dl">'</span><span class="p">,</span> <span class="dl">'</span><span class="s1">/path/to/output</span><span class="dl">'</span><span class="p">);</span>

<span class="nx">task</span><span class="p">.</span><span class="nx">onProgress</span><span class="p">((</span><span class="nx">p</span><span class="p">)</span> <span class="o">=&gt;</span> <span class="p">{</span>
  <span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="s2">`</span><span class="p">${(</span><span class="nx">p</span><span class="p">.</span><span class="nx">progress</span> <span class="o">*</span> <span class="mi">100</span><span class="p">).</span><span class="nx">toFixed</span><span class="p">(</span><span class="mi">0</span><span class="p">)}</span><span class="s2">% — </span><span class="p">${</span><span class="nx">p</span><span class="p">.</span><span class="nx">extractedFiles</span><span class="p">}</span><span class="s2">/</span><span class="p">${</span><span class="nx">p</span><span class="p">.</span><span class="nx">totalFiles</span><span class="p">}</span><span class="s2"> files`</span><span class="p">);</span>
<span class="p">});</span>

<span class="kd">const</span> <span class="nx">result</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">task</span><span class="p">.</span><span class="k">await</span><span class="p">();</span>
<span class="nx">console</span><span class="p">.</span><span class="nx">log</span><span class="p">(</span><span class="s2">`Extracted </span><span class="p">${</span><span class="nx">result</span><span class="p">.</span><span class="nx">extractedFiles</span><span class="p">}</span><span class="s2"> files in </span><span class="p">${</span><span class="nx">result</span><span class="p">.</span><span class="nx">duration</span><span class="p">}</span><span class="s2">ms`</span><span class="p">);</span>
</code></pre></div></div>

<p>Progress callbacks fire on every file, and because they ride JSI they never queue up behind the bridge.</p>

<h2 id="why-it-exists">Why it exists</h2>

<p>React Native has had ZIP libraries for years. Most of them predate Nitro and therefore predate modern JSI — which means every progress tick had to serialise across the bridge, every cancellation had to round-trip through an async message queue, and every archive operation paid the bridge tax proportional to the number of files. For the kind of app that extracts a single 2 MB archive at install time, none of that matters. For the kind of app that handles large user-uploaded archives, downloads payload bundles from a server, or packages up content for offline use, it matters a lot.</p>

<p>Internally the native side leans on battle-tested libraries — <code class="language-plaintext highlighter-rouge">SSZipArchive</code> on iOS, an optimised <code class="language-plaintext highlighter-rouge">ZipInputStream</code> path on Android — rather than reinventing the compression format. The contribution is the JSI layer, the cancellation machinery, and the progress plumbing that sits on top.</p>

<h2 id="installation-and-docs">Installation and docs</h2>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>npm <span class="nb">install </span>react-native-nitro-unzip react-native-nitro-modules
<span class="nb">cd </span>ios <span class="o">&amp;&amp;</span> pod <span class="nb">install</span>
</code></pre></div></div>

<p>Requires React Native 0.75+, Nitro Modules 0.34+, iOS 15.5+, and Java 17 on Android. Full docs — including extraction, compression, password handling, cancellation semantics, and the API reference auto-generated from TypeScript — live at <a href="https://isaacrowntree.github.io/react-native-nitro-unzip/">isaacrowntree.github.io/react-native-nitro-unzip</a>.</p>

<p>Source on <a href="https://github.com/isaacrowntree/react-native-nitro-unzip">GitHub</a>.</p>]]></content><author><name>Isaac Rowntree</name></author><category term="open-source" /><category term="react-native" /><category term="nitro" /><category term="ios" /><category term="android" /><category term="typescript" /><category term="performance" /><category term="open-source" /><summary type="html"><![CDATA[A React Native ZIP library built on Nitro Modules — ~500 files/sec on iOS, ~474 on Android, AES-256 support, cancellation, and zero bridge serialisation.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://zackdesign.biz/images/blog/react-native-nitro-unzip.jpg" /><media:content medium="image" url="https://zackdesign.biz/images/blog/react-native-nitro-unzip.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">claude-social-skills — Claude Code plugins for social, email, and eBay</title><link href="https://zackdesign.biz/claude-social-skills/" rel="alternate" type="text/html" title="claude-social-skills — Claude Code plugins for social, email, and eBay" /><published>2026-02-15T00:00:00+00:00</published><updated>2026-02-15T00:00:00+00:00</updated><id>https://zackdesign.biz/claude-social-skills</id><content type="html" xml:base="https://zackdesign.biz/claude-social-skills/"><![CDATA[<p>Zack Design has published <a href="https://github.com/isaacrowntree/claude-social-skills"><code class="language-plaintext highlighter-rouge">claude-social-skills</code></a> — a Claude Code plugin marketplace bundling three genuinely useful capabilities into a single install. No MCP servers, no background daemons — just Python scripts plus <code class="language-plaintext highlighter-rouge">SKILL.md</code> files that teach Claude how to use them.</p>

<!-- more -->

<h2 id="what-is-in-the-marketplace">What is in the marketplace</h2>

<table>
  <thead>
    <tr>
      <th>Plugin</th>
      <th>What it does</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>social-post</strong></td>
      <td>Post to Twitter/X, Reddit, Facebook Pages, and Instagram Business/Creator accounts</td>
    </tr>
    <tr>
      <td><strong>ebay-listing</strong></td>
      <td>List items for sale on eBay — title, description, photos, pricing, shipping</td>
    </tr>
    <tr>
      <td><strong>himalaya-email</strong></td>
      <td>Read, send, and manage email using the excellent <a href="https://pimalaya.org/himalaya/">Himalaya CLI</a></td>
    </tr>
  </tbody>
</table>

<p>Each plugin is a self-contained directory with its own skill definition, scripts, and dependency list. Install only what you need.</p>

<h2 id="install">Install</h2>

<p>Inside Claude Code:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/plugin marketplace add isaacrowntree/claude-social-skills
/plugin install social-post@social-skills
/plugin install ebay-listing@social-skills
/plugin install himalaya-email@social-skills
</code></pre></div></div>

<h2 id="social-post--one-command-four-platforms">social-post — one command, four platforms</h2>

<p><code class="language-plaintext highlighter-rouge">social-post</code> wraps four very different APIs behind a uniform skill surface. Credentials are exported as environment variables (Twitter API keys, Reddit client ID/secret, Facebook Page token, Instagram Business token), and the skill tells Claude which ones are required for which platform.</p>

<p>You can ask Claude things like:</p>

<blockquote>
  <p>Post this to Reddit r/programming: “Check out this tool…”</p>
</blockquote>

<blockquote>
  <p>Share on Twitter and Reddit: “Big announcement…”</p>
</blockquote>

<p>And it figures out the right endpoints, handles the rate limits, and reports back with the posted URLs. Under the hood, each platform is a ~100-line Python script, so nothing is magic — you can run <code class="language-plaintext highlighter-rouge">python3 scripts/tweet.py "Hello world"</code> directly if you want.</p>

<h2 id="ebay-listing--structured-not-chatty">ebay-listing — structured, not chatty</h2>

<p>Listing on eBay through their API is finicky: item specifics, shipping policies, photos, tax categories, return policies, payment profiles. The skill defines a structured JSON shape for “a listing” and hands Claude enough context to map a rough human description (“I’ve got a 2022 Shimano derailleur, barely used, want $80”) into a valid eBay payload.</p>
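<p>As a sketch, the structured shape might look like the dictionary below. Every field name here is hypothetical, chosen to illustrate the idea rather than to mirror the skill's actual schema or eBay's API:</p>

```python
# Hypothetical listing shape -- field names are illustrative only.
listing = {
    "title": "Shimano derailleur (2022), barely used",
    "description": "Removed from a lightly ridden bike.",
    "price": {"value": 80.00, "currency": "AUD"},
    "condition": "USED_EXCELLENT",
    "photos": ["photos/derailleur-1.jpg"],
    "shipping": {"service": "standard", "cost": 12.50},
    "returns": {"accepted": True, "days": 30},
}
```

<p>The value of the structured shape is that Claude fills it from a rough human description, and the script only has to validate and submit it.</p>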

<h2 id="himalaya-email--email-but-agent-shaped">himalaya-email — email, but agent-shaped</h2>

<p>Himalaya is a terminal-native email client that speaks IMAP, SMTP, Maildir, and Notmuch. Wrapping it in a skill means Claude can triage, summarise, reply, and draft email the same way a human would — without Yet Another OAuth dance or a bespoke Gmail integration that breaks on IMAP-only accounts.</p>

<h2 id="why-this-shape">Why this shape</h2>

<p>I experimented with full-blown MCP servers for these capabilities and ended up preferring the scripts-plus-skill approach: zero extra processes, zero ports to manage, and the underlying scripts stay runnable by a human on the CLI. Every credential is whatever the underlying service hands you — Twitter developer tokens, Reddit app credentials, Facebook Graph tokens, IMAP passwords in your keychain — stored however you already store them.</p>

<p>If you run Claude Code and you want to turn it into a genuinely useful operator for your daily online life, <a href="https://github.com/isaacrowntree/claude-social-skills">install the marketplace</a> and pick the skills you actually use.</p>]]></content><author><name>Isaac Rowntree</name></author><category term="open-source" /><category term="ai" /><category term="claude-code" /><category term="plugins" /><category term="social-media" /><category term="email" /><category term="ebay" /><category term="python" /><category term="open-source" /><summary type="html"><![CDATA[A Claude Code plugin marketplace bundling social-post, ebay-listing, and himalaya-email — scripts + skills, no MCP servers required.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://zackdesign.biz/images/blog/claude-social-skills.jpg" /><media:content medium="image" url="https://zackdesign.biz/images/blog/claude-social-skills.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">deflicker-runner — killing LED PWM flicker in 4K video, in 400MB of RAM</title><link href="https://zackdesign.biz/deflicker-runner/" rel="alternate" type="text/html" title="deflicker-runner — killing LED PWM flicker in 4K video, in 400MB of RAM" /><published>2026-02-08T00:00:00+00:00</published><updated>2026-02-08T00:00:00+00:00</updated><id>https://zackdesign.biz/deflicker-runner</id><content type="html" xml:base="https://zackdesign.biz/deflicker-runner/"><![CDATA[<p>Zack Design has published <a href="https://github.com/isaacrowntree/deflicker-runner"><code class="language-plaintext highlighter-rouge">deflicker-runner</code></a> — a Python pipeline for removing the subtle but maddening LED flicker that rolling-shutter cameras capture when shooting under overhead LED panels. It runs on arbitrarily long 4K footage in about 400 MB of RAM, which is the interesting part.</p>

<!-- more -->

<h2 id="the-problem">The problem</h2>

<p>LED panels do not emit light continuously. On 50 Hz mains, full-wave rectification makes them pulse at 100 Hz. A rolling-shutter sensor at 59.94 fps captures a slightly different slice of that pulse train each frame — and the beat frequency between the two lands right around 20 Hz. The result is a ~2% whole-frame brightness oscillation, small enough to miss on set and loud enough to ruin the footage in post.</p>
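<p>The ~20 Hz figure falls out of simple aliasing arithmetic. The exact value, which is also where the <code class="language-plaintext highlighter-rouge">fft-notch</code> mode places its notch, is:</p>

```python
# A 100 Hz pulse train sampled at 59.94 fps aliases down to its distance
# from the nearest multiple of the frame rate.
fps = 59.94
led = 100.0                                 # 50 Hz mains, full-wave rectified
alias = abs(led - round(led / fps) * fps)   # |100 - 2 * 59.94| = 19.88 Hz
```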

<p>Two flavours show up in real shoots:</p>

<ul>
  <li><strong>Whole-frame pulsing</strong> on the broad surfaces that reflect the overhead LEDs — walls, ceilings, dark backgrounds.</li>
  <li><strong>Spot flicker</strong> on small sources: filament bulbs, fairy lights, practicals that are themselves PWM-driven.</li>
</ul>

<h2 id="the-pipeline">The pipeline</h2>

<p><code class="language-plaintext highlighter-rouge">deflicker-runner</code> ships a handful of modes, each suited to a different failure:</p>

<table>
  <thead>
    <tr>
      <th>Approach</th>
      <th>Best for</th>
      <th>How it works</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">temporal-median</code></td>
      <td>Whole-frame LED flicker</td>
      <td>Per-pixel temporal median at 540p, upscaled back to 4K. Eliminates the 3-frame cycle entirely.</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">spot-replace</code></td>
      <td>Small filament bulbs</td>
      <td>Full-resolution detection of bright flickering spots, then per-box temporal median replacement.</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">running-mean</code></td>
      <td>General purpose</td>
      <td>Per-row running temporal mean baseline. Good balance of speed and quality.</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">pixel-smooth</code></td>
      <td>Motion-preserving</td>
      <td>Per-pixel BCC-style weighted average that respects motion.</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">fft-notch</code></td>
      <td>Precise frequency removal</td>
      <td>FFT notch at the exact LED beat frequency (~19.88 Hz). Subtle ~2% correction.</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">physical-model</code></td>
      <td>Physics-based</td>
      <td>Fits the LED waveform and rolling-shutter timing model. Most theoretically correct.</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">global-row</code></td>
      <td>Global brightness variation</td>
      <td>Two-stage: global normalisation plus per-row residual.</td>
    </tr>
  </tbody>
</table>

<p>For most footage, <code class="language-plaintext highlighter-rouge">auto</code> picks the right mode for you. <code class="language-plaintext highlighter-rouge">temporal-median</code> is the default recommendation.</p>

<h2 id="the-memory-trick">The memory trick</h2>

<p>A 4K frame is about 8.3 megapixels — roughly 25 MB raw at 8-bit RGB. Naively running temporal median on a full-length clip means holding hundreds of frames in RAM at once. The runner sidesteps that by:</p>

<ol>
  <li>Downscaling to 540p before the median stage (4× fewer pixels in each dimension, 16× less memory).</li>
  <li>Running median over a small rolling window — 5 to 11 frames depending on mode.</li>
  <li>Upscaling the <em>correction delta</em> (not the frame itself) back to 4K and applying it to the original-resolution pixel stream.</li>
</ol>

<p>The result is a deflicker pass that runs happily on a laptop, not a workstation, and handles a full talk recording without ever buffering the whole thing into RAM.</p>
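<p>The streaming idea can be sketched in a few lines of NumPy. The function names here are hypothetical and the real tool's I/O, colour handling, and mode selection are omitted; the point is that the median runs at low resolution and only the correction delta is upscaled:</p>

```python
import numpy as np

def downscale(frame, k):
    """Block-mean downscale by factor k in each dimension."""
    h, w = frame.shape
    return frame.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def deflicker(frames, k=4, win=5):
    """Rolling temporal-median deflicker: the median is taken over a small
    window of downscaled frames, and only the low-res correction delta is
    upscaled and added back to the full-resolution frame."""
    small = [downscale(f, k) for f in frames]
    half = win // 2
    out = []
    for i, frame in enumerate(frames):
        lo, hi = max(0, i - half), min(len(frames), i + half + 1)
        med = np.median(np.stack(small[lo:hi]), axis=0)
        delta = med - small[i]                 # low-res correction only
        out.append(frame + np.kron(delta, np.ones((k, k))))
    return out
```

<p>Only <code class="language-plaintext highlighter-rouge">win</code> downscaled frames are ever needed at once, which is where the flat memory budget comes from.</p>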

<h2 id="why-ship-it">Why ship it</h2>

<p>LED flicker is one of those problems that is unsolvable in a single <code class="language-plaintext highlighter-rouge">ffmpeg</code> filter — none of the stock filters know about the beat frequency, and none of them can route different correction strategies at different scales. A small, focused Python tool makes the tradeoffs explicit and gives a colourist or editor an honest knob to turn. Source on <a href="https://github.com/isaacrowntree/deflicker-runner">GitHub</a>.</p>]]></content><author><name>Isaac Rowntree</name></author><category term="open-source" /><category term="video" /><category term="ffmpeg" /><category term="python" /><category term="deflicker" /><category term="rolling-shutter" /><category term="led-flicker" /><category term="open-source" /><summary type="html"><![CDATA[A streaming temporal-median deflicker pipeline for 4K footage shot under LED lights — full-length video, tiny memory budget, multiple correction modes.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://zackdesign.biz/images/blog/deflicker-runner.jpg" /><media:content medium="image" url="https://zackdesign.biz/images/blog/deflicker-runner.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>